I have tables like these: ``` Fields (ID int, Name varchar(20)) ID Name =============== 1 FieldName 2 FeildDesc Container (ID int, Name varchar(100)) ID Name ==================================== 1 C1 2 C2 3 C3 ContainerField (ContainerId int, FieldId int, FieldValue varchar(100)) ContainerId FieldId FieldValue ==================================== 1 1 Container1 1 2 Container1 Desc 2 1 Container2 2 2 Container2 Desc 3 1 Container3 ``` I'd like to have my result look like: ``` ContainerId Name Desc =================================== 1 Container1 Container1 Desc 2 Container2 Container2 Desc 3 Container3 NULL ``` I've tried to do a LEFT OUTER JOIN between ContainerField and Field but that didn't seem to work. I'd appreciate it if anyone could help.
Join the table twice, once for the names and once for the descriptions. ``` select c.Id as ContainerId, cfn.FieldValue as Name, cfd.FieldValue as Desc from Container c left join ContainerField cfn on cfn.ContainerId = c.ID and cfn.FieldId = 1 left join ContainerField cfd on cfd.ContainerId = c.ID and cfd.FieldId = 2 ``` As you can see, no groups or aggregations. Also, you don't really need the Fields table itself in the query, although it's good to have it in your database, because it enforces referential integrity (if you have the proper keys) and it provides a kind of documentation. But you might use an enum for that as well. Alternatively, if you have not just two fields, but an unknown number of them, you might want to consider `pivot tables`. I must admit I'm not really good with those, especially in TSQL, so I'll provide you with just a link for you to explore: <http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx> Or hopefully someone else can give you a specific example.
No need for an outer join, an inner join will do it. You're also not using anything from `Container` (if you want to display the container name instead of ID, you can do an inner join with it to get that). ``` SELECT c.id AS ContainerID, MAX(CASE WHEN f.name = 'FieldName' THEN c.FieldValue END) AS Name, MAX(CASE WHEN f.name = 'FieldDesc' THEN c.FieldValue END) AS `Desc` FROM ContainerField AS c JOIN Fields AS f ON c.FieldID = f.id GROUP BY ContainerID ``` [DEMO](http://www.sqlfiddle.com/#!2/fdcf03/1) Note that you have a misspelling in your `Fields` table: `FeildDesc` should be `FieldDesc`.
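The conditional-aggregation pivot from the second answer can be checked end to end. The sketch below is not from either answer verbatim: it rebuilds the question's tables in an in-memory SQLite database (table and column names taken from the question; the description value for container 2 follows the desired output) and pivots with one `MAX(CASE …)` per field.

```python
import sqlite3

# Hypothetical in-memory copy of the question's schema and data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Fields (ID int, Name varchar(20));
INSERT INTO Fields VALUES (1, 'FieldName'), (2, 'FeildDesc');
CREATE TABLE Container (ID int, Name varchar(100));
INSERT INTO Container VALUES (1, 'C1'), (2, 'C2'), (3, 'C3');
CREATE TABLE ContainerField (ContainerId int, FieldId int, FieldValue varchar(100));
INSERT INTO ContainerField VALUES
  (1, 1, 'Container1'), (1, 2, 'Container1 Desc'),
  (2, 1, 'Container2'), (2, 2, 'Container2 Desc'),
  (3, 1, 'Container3');
""")

# Pivot the attribute rows into columns: one conditional aggregate per field.
# Container 3 has no FieldId = 2 row, so its Descr column comes back NULL.
rows = conn.execute("""
    SELECT c.ID AS ContainerId,
           MAX(CASE WHEN cf.FieldId = 1 THEN cf.FieldValue END) AS Name,
           MAX(CASE WHEN cf.FieldId = 2 THEN cf.FieldValue END) AS Descr
    FROM Container c
    LEFT JOIN ContainerField cf ON cf.ContainerId = c.ID
    GROUP BY c.ID
    ORDER BY c.ID
""").fetchall()
```

The same query shape works in T-SQL and MySQL; only the quoting of the `Desc` alias differs between dialects.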
SQL Outer Join?
[ "sql", "t-sql" ]
My table has 300,000 records and a few duplicate records. They don't have a primary key, so I came to the conclusion that I need to copy all the distinct rows into a new table. I'm using this line of code ``` select * into newtable from (select distinct tag from Tags) ``` but I keep getting a syntax error: 'incorrect syntax at 'end of file', expecting as, id, quoteID'.
You have to name a subquery: ``` from (...) as SubQueryAlias ``` In your case (the `as` is usually optional, except in Oracle, where it is not allowed): ``` select * into newtable from (select distinct tag from Tags) tags ```
In MySQL, you would use `create table as`: ``` create table newtable as select distinct tag from Tags; ```
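The `CREATE TABLE … AS SELECT DISTINCT` pattern can be demonstrated in a runnable sketch; SQLite happens to accept the same MySQL-style syntax, so this uses an in-memory SQLite database with made-up tag values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags (tag TEXT)")
# Deliberately insert duplicates, mirroring the question's situation.
conn.executemany("INSERT INTO Tags VALUES (?)",
                 [("a",), ("b",), ("a",), ("c",), ("b",)])

# MySQL-style CREATE TABLE ... AS SELECT; SQLite accepts the same statement.
conn.execute("CREATE TABLE newtable AS SELECT DISTINCT tag FROM Tags")
distinct_tags = sorted(r[0] for r in conn.execute("SELECT tag FROM newtable"))
```

In SQL Server you would keep the `SELECT … INTO newtable FROM (…) alias` form from the accepted answer instead; `CREATE TABLE AS` is the MySQL/SQLite spelling.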
Syntax error when selecting distinct rows in a SQL table
[ "mysql", "sql", "duplicates" ]
Is there a way to truncate an nvarchar using DATALENGTH? I am trying to create an index on a column, but an index only accepts a maximum of 900 bytes. I have rows that consist of 1000+ bytes. I would like to truncate these rows and only accept the first n characters <= 900 bytes.
This SQL may be useful; just update the table for that column: ``` Update Table Set Column = Left(Ltrim(Column),900) ```
Create a COMPUTED COLUMN that represents the data you want to index, then create an index on it. ``` ALTER TABLE MyTable ADD ComputedColumn AS LEFT(LargeNVarcharColumn,900); CREATE NONCLUSTERED INDEX MyIndex ON MyTable ( ComputedColumn ASC ); ``` --- **Reference:** * [computed\_column\_definition](http://msdn.microsoft.com/en-us/library/ms186241.aspx) * [Indexes on Computed Columns](http://msdn.microsoft.com/en-us/library/ms189292.aspx)
SQL Server - Truncate Using DATALENGTH
[ "sql", "sql-server", "t-sql", "datalength" ]
This seems like something that should be fairly common, so this could be as simple as giving me the right vocabulary for what I'm describing. I want to return the results of a query and in a column I want to return a repeating number sequence of 1, 2, 3. Desired Result Set: ``` Num Result --------------- 1 result1 2 result2 3 result3 1 result4 2 result5 ``` Now I can return a basic list of three numbers using the basic : ``` SELECT @ROW := @ROW + 1 AS ROW FROM 'use any table with data in it' t join (SELECT @ROW := 0) t2 LIMIT 3 ``` I just can't find a way of blending this with my query to return the result I want. The best I've come up with returns a repeating 1,2,3 but repeats each item in the query for each number (as you'd expect). ``` Num Result --------------- 1 result1 2 result1 3 result1 1 result2 2 result2 ``` Any ideas on how to achieve this? Edit: ``` CREATE TABLE Results (Result varchar(10)); INSERT INTO Results (`Result`) VALUES ('result1'); INSERT INTO Results (`Result`) VALUES ('result2'); INSERT INTO Results (`Result`) VALUES ('result3'); INSERT INTO Results (`Result`) VALUES ('result4'); INSERT INTO Results (`Result`) VALUES ('result5'); CREATE TABLE Randoms (Random varchar(10)); INSERT INTO Randoms (`Random`) VALUES ('whatever1'); INSERT INTO Randoms (`Random`) VALUES ('whatever2'); INSERT INTO Randoms (`Random`) VALUES ('whatever3'); --Initial Query SELECT Result FROM Results ORDER BY Result ASC; --Closest I've come (not close enough) SELECT b.Num, a.Result FROM Results a, (SELECT @ROW := @ROW + 1 AS Num FROM Randoms t join (SELECT @ROW := 0) t2 LIMIT 3) b ORDER BY a.Result ASC; ```
I think you can do this... ``` SELECT MOD(i,3)+1 , result FROM ( SELECT @i:=@i+1 i , result FROM results , ( SELECT @i:=2 ) vals ORDER BY result ) x; ```
Although you can do this with `mod()`, I think an `if()` is simpler: ``` SELECT @row := if(@row < 3, @row + 1, 1) as Num, a.Result FROM Results a CROSS JOIN (SELECT @row := 0) vars ORDER BY a.Result ASC; ``` I'm not sure what the table `randoms` is for. It doesn't seem necessary to the problem. [Here](http://www.sqlfiddle.com/#!2/2feeed/7) is a SQL Fiddle.
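Both answers rely on MySQL session variables, which not every engine has. The same repeating 1, 2, 3 column also falls out of `ROW_NUMBER()` (available in MySQL 8.0+ and SQLite 3.25+). This sketch, an alternative not shown in either answer, runs the window-function version against the question's `Results` table in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Results (Result TEXT)")
conn.executemany("INSERT INTO Results VALUES (?)",
                 [(f"result{i}",) for i in range(1, 6)])

# (row_number - 1) % 3 + 1 produces the repeating 1,2,3 sequence without
# any session variables. Requires SQLite 3.25+ for window functions.
rows = conn.execute("""
    SELECT (ROW_NUMBER() OVER (ORDER BY Result) - 1) % 3 + 1 AS Num, Result
    FROM Results
    ORDER BY Result
""").fetchall()
```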
Repeating Running Numbers along side MySQL Results
[ "mysql", "sql" ]
I have a scenario where I have to select multiple rows from a table. I have multiple rows for one record but with different statuses; at times I have two identical rows with identical data for a status. For that case I wanted to select non-zero for the first occurrence and set 0 for the remaining occurrences. Below is the image to show this; I have struck out and marked 0 the remaining occurrences. Can anybody suggest a better SQL query? ![Image](https://i.stack.imgur.com/QribQ.png) Here is the query. I am getting a zero value for status 1 for ID = 1, but I need to show the first as regular and then 0 if that status repeats again. ``` CREATE TABLE #Temp (ID INT, ItemName varchar(10), Price Money, [Status] INT, [Date] Datetime) INSERT INTO #Temp VALUES(1,'ABC',10,1,'2014-08-27') INSERT INTO #Temp VALUES(1,'ABC',10,2,'2014-08-27') INSERT INTO #Temp VALUES(1,'ABC',10,1,'2014-08-28') INSERT INTO #Temp VALUES(2,'DEF',25,1,'2014-08-26') INSERT INTO #Temp VALUES(2,'DEF',25,3,'2014-08-27') INSERT INTO #Temp VALUES(2,'DEF',25,1,'2014-08-28') INSERT INTO #Temp VALUES(3,'GHI',30,1,'2014-08-27') SELECT CASE WHEN Temp.Status = 1 THEN 0 ELSE Temp.Price END AS Price, * FROM (SELECT * FROM #Temp) Temp DROP TABLE #Temp ``` Here is the result: ![result](https://i.stack.imgur.com/T2c3b.png)
You might modify your inner select using Row\_Number() and set price to Zero for RowNumber > 1. ``` SELECT CASE WHEN Temp.RowNumber > 1 THEN 0 ELSE Temp.Price END AS Price, * FROM ( SELECT *,Row_Number() over (PARTITION by ID,Status ORDER BY ID,Date) AS 'RowNumber' FROM #Temp ) Temp Order by ID,Date ```
You can try this: ``` ;WITH DataSource AS ( SELECT RANK() OVER (PARTITION BY [ID], [ItemName], [Price], [Status] ORDER BY Date) AS [RankID] ,* FROM #Temp ) SELECT [ID] ,[ItemName] ,IIF([RankID] = 1, [Price], 0) ,[Status] ,[Date] FROM DataSource ORDER BY [ID] ,[Date] ``` Here is the output: ![enter image description here](https://i.stack.imgur.com/bSy8E.png)
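Both answers hinge on numbering the duplicates within a `(ID, Status)` partition and keeping the price only on the first occurrence. A hypothetical reconstruction using the question's sample data in SQLite (which has no `#temp` syntax, so the table is plainly named `Temp`; needs SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Temp (ID int, ItemName text, Price real, Status int, Date text)")
conn.executemany("INSERT INTO Temp VALUES (?,?,?,?,?)", [
    (1, 'ABC', 10, 1, '2014-08-27'), (1, 'ABC', 10, 2, '2014-08-27'),
    (1, 'ABC', 10, 1, '2014-08-28'), (2, 'DEF', 25, 1, '2014-08-26'),
    (2, 'DEF', 25, 3, '2014-08-27'), (2, 'DEF', 25, 1, '2014-08-28'),
    (3, 'GHI', 30, 1, '2014-08-27'),
])

# Number duplicates inside each (ID, Status) group by date; zero out the
# price on every occurrence after the first.
rows = conn.execute("""
    SELECT ID, ItemName,
           CASE WHEN rn = 1 THEN Price ELSE 0 END AS Price,
           Status, Date
    FROM (SELECT *, ROW_NUMBER() OVER
                    (PARTITION BY ID, Status ORDER BY Date) AS rn
          FROM Temp)
    ORDER BY ID, Date, Status
""").fetchall()
```

Only the repeated `(1, Status 1)` and `(2, Status 1)` rows come back with a zero price, matching the strike-outs in the question's image.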
Selecting One Entry Only As Non-Zero In SQL Select
[ "sql", "sql-server", "t-sql", "select" ]
I have these two tables: **popular\_song** ``` song_name | rate | country_id ------------------------------ Tic Tac | 10 | 1 Titanic | 2 | 1 Love Boat | 8 | 2 Battery | 9 | 2 ``` **country** ``` conutry_id | country -------------------------- 1 | United States 2 | Germany ``` What I'd like to achieve is to get the most popular song in each country, e.g.: ``` song_name | rate | country -------------------------- Tic Tac | 10 | United States Battery | 9 | Germany ``` I've tried this query: ``` SELECT MAX(rate), song_name, country FROM popular_song ps JOIN country cnt ON ps.country_id = cnt.country_id GROUP BY country ``` But this doesn't work. I've tried looking at questions like "Order by before group by" but didn't find an answer. **Which MySQL query could achieve this result?**
There is a trick that you can use with `substring_index()` and `group_concat()`: ``` SELECT MAX(rate), substring_index(group_concat(song_name order by rate desc separator '|'), '|', 1) as song, country FROM popular_song ps JOIN country cnt ON ps.country_id = cnt.country_id GROUP BY country; ``` EDIT: If you have big tables and lots of songs per country, I would suggest the `not exists` approach: ``` select song_name, rate, country from popular_song ps join country cnt on ps.country_id = cnt.country_id where not exists (select 1 from popular_song ps2 where ps2.country_id = ps.country_id and ps2.rate > ps.rate ); ``` Along with an index on `popular_song(country_id, rate)`. I recommended the `group_concat()` approach because the OP already had a query with a `group by`, so the trick is the easiest to plug into such a query.
You can use another self join to popular songs table with the max rating ``` SELECT ps.*,cnt.country FROM popular_song ps JOIN (SELECT MAX(rate) rate, country_id FROM popular_song GROUP BY country_id) t1 ON(ps.country_id = t1.country_id and ps.rate= t1.rate) JOIN country cnt ON ps.country_id = cnt.conutry_id ``` [**See Demo**](http://sqlfiddle.com/#!2/2d4be/6)
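The `NOT EXISTS` variant from the first answer is portable across engines, so it can be verified with the question's exact sample data. A runnable SQLite sketch (column name normalized to `country_id`, since the question's table sketch misspells it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE popular_song (song_name TEXT, rate INT, country_id INT);
INSERT INTO popular_song VALUES
  ('Tic Tac', 10, 1), ('Titanic', 2, 1),
  ('Love Boat', 8, 2), ('Battery', 9, 2);
CREATE TABLE country (country_id INT, country TEXT);
INSERT INTO country VALUES (1, 'United States'), (2, 'Germany');
""")

# NOT EXISTS keeps only the rows that no other row from the same country
# outranks -- the classic greatest-n-per-group predicate.
rows = conn.execute("""
    SELECT ps.song_name, ps.rate, cnt.country
    FROM popular_song ps
    JOIN country cnt ON ps.country_id = cnt.country_id
    WHERE NOT EXISTS (SELECT 1 FROM popular_song ps2
                      WHERE ps2.country_id = ps.country_id
                        AND ps2.rate > ps.rate)
    ORDER BY ps.rate DESC
""").fetchall()
```

Note that if two songs tie for the top rate in a country, this returns both; add a tiebreaker if you need exactly one row per country.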
Get the max value in a specific group of rows
[ "mysql", "sql", "database", "greatest-n-per-group" ]
I have a scalar SQL function that returns a decimal value, and this function is used in many stored procedures across my database. Now in some procedures, I need to set a value based on some criteria inside the function. To make it clearer: depending on some variables used in calculating the result of the function, I want to set another variable inside the stored procedure and return it to the client. I don't want to change how the result is returned or the return type of the function. I am thinking of doing it by inserting the new value I want into a SQL table and then reading it from the procedure, but is there another or better way to do it? Thanks
You have a couple of options 1) Change it from a function to a stored procedure, and add an output parameter. 2) Change it from a scalar function to a table valued function returning a single row, with the additional value as an additional column. If you need to preserve the existing function signature then just create a new table valued function that does the work (As per option 2 above), and modify your existing function to select from the new table valued function. Here is some example code demonstrating this: ``` -- the original scalar function CREATE FUNCTION dbo.t1(@param1 INT) RETURNS INT AS BEGIN RETURN @param1 + 1 END GO -- a new table valued function, that returns 2 values in a single row CREATE FUNCTION dbo.t2(@param1 INT) RETURNS TABLE AS RETURN (SELECT @param1 + 1 AS [r1], @param1 + 2 AS [r2]) GO -- the modified original function, now selecting from the new table valued function CREATE FUNCTION dbo.t3(@param1 INT) RETURNS INT AS BEGIN RETURN (SELECT r1 FROM dbo.t2(@param1)) END GO -- example usage SELECT dbo.t1(1) SELECT * FROM dbo.t2(1) SELECT dbo.t3(1) ```
No, you cannot. Functions are severely limited in SQL Server and do not allow any side effects. What you can do, however, is convert your scalar function into a table function. In it, you can return a table with as many columns as you need, so returning more than one value is not a problem.
Return two values from a scalar SQL Function
[ "sql", "sql-server", "sql-server-2008", "t-sql" ]
I'm new to coding stored procedures. Basically I would just like to transfer the top 10 records of table 1 to table 2 (which I created). My code can't insert the records I selected from table 1. I'm not sure if I'm missing something or I'm doing it wrong. Please help. Thanks in advance! Note: 1.) Table 1 and Table 2 have the same datatypes. 2.) I need parameters for each column for me to be able to call them in my c#.net graph. Here's my SP: > ``` > USE [GDW] > GO > SET ANSI_NULLS ON > GO > SET QUOTED_IDENTIFIER ON > GO > ALTER PROCEDURE [dbo].[sp_BI_Get_AsiaTopTen] > > AS BEGIN > DECLARE > @fyname nvarchar(120), > @fmonth nvarchar(120), > @fmonth_num int = null, > @world_area nvarchar(120) = null > > SELECT TOP 10 > @fyname= [FYName], > @fmonth= [FMONTH], > @fmonth_num= [FMONTH_NUM], > @world_area= [World_AREA] > FROM [dbo].[tbl_BI_FastReport1_Temp] > WHERE [World_AREA] = 'Asia' > AND [FMONTH_NUM] = '201401' > > INSERT INTO [dbo].[tbl_BI_AsiaTopTen] > ( > [FYName], > [FMONTH], > [FMONTH_NUM], > [World_AREA] > ) > VALUES > ( > @fyname, > @fmonth, > @fmonth_num, > @world_area > ) > > END > ```
``` ALTER PROCEDURE [dbo].[sp_BI_Get_AsiaTopTen] AS BEGIN SET NOCOUNT ON; INSERT INTO [dbo].[tbl_BI_AsiaTopTen] ([FYName],[FMONTH],[FMONTH_NUM],[World_AREA]) SELECT TOP 10 [FYName] , [FMONTH] , [FMONTH_NUM] , [World_AREA] FROM [dbo].[tbl_BI_FastReport1_Temp] WHERE [World_AREA] = 'Asia' AND [FMONTH_NUM] = '201401' -- ORDER BY SomeCOlumn END ```
Try something like this instead: ``` INSERT INTO [dbo].[tbl_BI_AsiaTopTen] ( [FYName], [FMONTH], [FMONTH_NUM], [World_AREA] ) SELECT TOP 10 [FYName], [FMONTH], [FMONTH_NUM], [World_AREA] FROM [dbo].[tbl_BI_FastReport1_Temp] WHERE [World_AREA] = 'Asia' AND [FMONTH_NUM] = '201401' ```
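Both answers make the same point: feed the `SELECT` straight into the `INSERT` instead of bouncing through variables (which capture only the last of the ten rows). A runnable sketch with made-up data; SQLite spells `TOP 10` as `LIMIT 10`, and an `ORDER BY` makes "top" deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (val INT)")
conn.execute("CREATE TABLE dst (val INT)")
conn.executemany("INSERT INTO src VALUES (?)", [(i,) for i in range(20)])

# The SELECT feeds the INSERT directly -- no intermediate variables.
conn.execute("INSERT INTO dst SELECT val FROM src ORDER BY val LIMIT 10")
copied = [r[0] for r in conn.execute("SELECT val FROM dst ORDER BY val")]
```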
STORED PROC - select then insert
[ "sql", "sql-server", "stored-procedures" ]
I have a table named `tbl_Employees` ``` id Name Designation 1 Rajeev Developer 2 Deepak Developer 3 Pankaj Sales 4 Siksha Sales 5 Parul Sales 6 Nikita HR 7 Dinesh Account 8 Mahiman Travel 9 Mukesh Advertising 10 Pulkit Marketing 11 Diksha Database 12 Gurinder Database ``` I am using the select query ``` select * from tbl_Employees ``` Now I am trying to show 5 rows on my `.aspx` page, and I want the 5 rows of data to change randomly on every `PageLoad` event.
You can use `NEWID()` to select records randomly as below: ``` SELECT TOP(5) * FROM tbl_Employees ORDER BY NEWID() ```
``` select top 5 * from [yourtable] order by newid() ```
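The "order by a fresh random value, then take the first n rows" idea is portable even though the random function is named differently per engine (`NEWID()` in SQL Server, `RAND()` in MySQL, `RANDOM()` in SQLite). A runnable sketch against the question's employee table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_Employees (id INT, Name TEXT)")
conn.executemany("INSERT INTO tbl_Employees VALUES (?, ?)",
                 [(i, f"emp{i}") for i in range(1, 13)])

# SQL Server's ORDER BY NEWID() becomes ORDER BY RANDOM() in SQLite
# (ORDER BY RAND() in MySQL). Each execution returns a fresh 5-row sample.
sample = conn.execute(
    "SELECT id, Name FROM tbl_Employees ORDER BY RANDOM() LIMIT 5").fetchall()
```

Note this approach sorts the whole table on every call, which is fine at 12 rows but worth reconsidering on very large tables.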
Random records Select in mysql
[ "sql", "sql-server" ]
I have tried this query, but it returns an error : ``` Select DATA_TYPE From INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'Customers' AND COLUMN_NAME IN (Select * From INFORMATION_SCHEMA.COLUMNS Where TABLE_NAME = 'Customers'); ``` Error : > Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
Just use ``` Select COLUMN_NAME , DATA_TYPE From INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'Customers' ``` BTW if you use the `in` clause, then select only a single column in the subquery and not all ``` where COLUMN_NAME IN (Select col from table) ``` and not ``` where COLUMN_NAME IN (Select * from table) ```
I would be careful with collation and use: ``` Select DATA_TYPE From INFORMATION_SCHEMA.COLUMNS WHERE lower(TABLE_NAME) = 'customers'; ```
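The answers query `INFORMATION_SCHEMA.COLUMNS`, which SQL Server and MySQL both expose. As a runnable aside, SQLite has no `INFORMATION_SCHEMA`; its equivalent catalog call is `PRAGMA table_info`, sketched here against a hypothetical `Customers` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (id INTEGER, name VARCHAR(40), balance REAL)")

# PRAGMA table_info returns one row per column:
# (cid, name, declared type, notnull, dflt_value, pk).
types = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(Customers)")}
```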
How to display data type of all columns in a sql query
[ "sql", "types", "sqldatatypes" ]
I am using the following query. In this query I want to apply the where clause based on a passed parameter. But the issue is that the where clause is like **'value = if parameterVal = 'I' then NULL else NOT NULL'** I've built a query like this ``` SELECT * FROM MASTER WHERE Column1 IS (CASE WHEN :Filter = 'I' THEN 'NULL' ELSE 'NOT NULL' END) ``` but it's not working. Help me solve this. **UPDATE** Updating the question to elaborate more clearly. I've one table MASTER. Now I am passing one parameter in the query, that is Filter (indicated by :Filter in the query). Now when the Filter parameter's value is 'I' then it should return the following result. ``` SELECT * FROM MASTER WHERE Column1 IS NULL ``` but if the passed argument is not equal to 'I' then, ``` SELECT * FROM MASTER WHERE Column1 IS NOT NULL ```
``` SELECT * FROM MASTER WHERE (Filter = 'I' AND Column1 IS NULL) OR (Filter <> 'I' AND Column1 IS NOT NULL) ```
If you really insist on using a CASE the `SELECT` could be rewritten as: ``` SELECT * FROM MASTER WHERE CASE WHEN COLUMN1 IS NULL AND FILTER = 'I' THEN 1 WHEN COLUMN1 IS NOT NULL AND FILTER <> 'I' THEN 1 ELSE 0 END = 1 ``` [SQLFiddle here](http://sqlfiddle.com/#!4/685b0/1) Frankly, though, I think that this is very difficult to interpret, and I suggest that @MAli's version is better.
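The accepted OR-of-two-branches predicate is easy to verify because the two branches are mutually exclusive on the parameter. A runnable sketch with made-up data, parameterising `:Filter` the way the question does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MASTER (Column1 TEXT)")
# Three non-NULL rows and two NULL rows, so the two modes give different counts.
conn.executemany("INSERT INTO MASTER VALUES (?)",
                 [("x",), (None,), ("y",), (None,), ("z",)])

def filtered_count(flt):
    # The OR of two mutually exclusive AND branches, driven by :f.
    return conn.execute("""
        SELECT COUNT(*) FROM MASTER
        WHERE (:f = 'I' AND Column1 IS NULL)
           OR (:f <> 'I' AND Column1 IS NOT NULL)
    """, {"f": flt}).fetchone()[0]
```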
Case when statement in SQL
[ "sql", "oracle" ]
Does someone know how to speed up the following query? ``` select count (*) from table where column1 between DATEADD(day, -90, convert(date, getdate())) AND getdate() and column2 is not null ``` This query takes 20 sec for 8,000 rows. I think/know the bottleneck is the conversion, but it is necessary...
It is unlikely to be an index issue. There is a [bug](https://connect.microsoft.com/SQLServer/feedback/details/630583/incorrect-estimate-with-condition-that-includes-datediff) in sqlserver-2008. It should be fixed in newer versions of sqlserver Try this instead: ``` declare @from datetime = DATEADD(day, -90, convert(date, getdate())) declare @to datetime = getdate() select count (*) from table where column1 between @from and @to and column2 is not null ``` You can read about a similar problem [here](https://stackoverflow.com/questions/18241977/query-runs-slow-with-date-expression-but-fast-with-string-literal/18242413#18242413)
Your query is fine. The conversions are on constants, rather than on the column. Instead, you need an index. This will probably help: ``` create index idx_table_column1_column2 on table(column1, column2); ``` This is a covering index, so only the index will be used to satisfy the query.
Speed up query containing convert
[ "sql", "sql-server" ]
I have a `Group` table with the columns `GroupId` and `GroupName`. There is also another table called `Group_Student` which keeps track of which group a student belongs to. It contains the columns `GroupId` (foreign key to the `Group` table) and `StudentId`. I would like to know how I can write a SQL query which lists the `GroupName`, `GroupId` and the number of students in each group. For example, if the `Group_Student` table contains the following entries ``` GroupId StudentId ------------------- 1 2 1 3 2 4 ``` then the SQL query should produce the following output ``` GroupName MemberCount ------------------------ ABC 1 DEF 2 ``` Kindly let me know how I can write the SQL for this. I'm using SQL Server 2005. Thanks in advance.
Here is a sql fiddle of this working <http://sqlfiddle.com/#!3/0f8a5/2/0> ``` select groupname, [group].groupid, count(*) as 'MemberCount' from [group] inner join group_student on [group].groupid = group_student.groupid group by groupname, [group].groupid ```
``` SELECT GroupName, [Group].GroupID, COUNT(StudentId) AS MemberCount FROM [Group] INNER JOIN Group_Student ON [Group].GroupID = Group_Student.GroupID GROUP BY [Group].GroupID, GroupName ```
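The join-then-aggregate shape in both answers can be run end to end. The question never shows the `Group` table's rows, so the name-to-id mapping below (DEF = group 1, ABC = group 2) is an assumption chosen so the counts reproduce the question's expected output; the table is named `Grp` here because `GROUP` is a reserved word (the answers bracket it as `[Group]` instead).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Grp (GroupId INT, GroupName TEXT);
INSERT INTO Grp VALUES (1, 'DEF'), (2, 'ABC');
CREATE TABLE Group_Student (GroupId INT, StudentId INT);
INSERT INTO Group_Student VALUES (1, 2), (1, 3), (2, 4);
""")

# Join the membership rows to the group names, then count per group.
# GroupName appears in GROUP BY because it appears unaggregated in SELECT.
rows = conn.execute("""
    SELECT g.GroupName, g.GroupId, COUNT(*) AS MemberCount
    FROM Grp g JOIN Group_Student gs ON g.GroupId = gs.GroupId
    GROUP BY g.GroupId, g.GroupName
    ORDER BY g.GroupId
""").fetchall()
```

An inner join drops groups with zero members; switch to `LEFT JOIN` plus `COUNT(gs.StudentId)` if empty groups should appear with a count of 0.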
SQL to get Count of Members in each group
[ "sql", "sql-server", "t-sql" ]
I am very new to Laravel, currently working with Laravel 4. I am trying to add multiple-column search functionality to my project. I can do a single-column Eloquent search query, but honestly I have no idea how to do a multiple-column Eloquent search query in Laravel. I have **two drop-down menus**: 1. Location 2. Blood group. I want to search for a **user** having a certain **blood group** in a certain **location**. That is, the user will select a **location** and a **blood group** from those two drop-down menus at the same time and hit the search button. In my database, I have two columns: one contains the **location** and the other contains the **blood group** of a user. Now, what should the Eloquent query be for such a search?
Simply chain `where` for each field you need to search through: ``` // AND $results = SomeModel::where('location', $location)->where('blood_group', $bloodGroup)->get(); // OR $results = SomeModel::where('location', $location)->orWhere('blood_group', $bloodGroup)->get(); ``` You can make it easier to work with thanks to the scopes: ``` // SomeModel class public function scopeSearchLocation($query, $location) { if ($location) $query->where('location', $location); } public function scopeSearchBloodGroup($query, $bloodGroup) { if ($bloodGroup) $query->where('blood_group', $bloodGroup); } // then SomeModel::searchBloodGroup($bloodGroup)->searchLocation($location)->get(); ``` Just a sensible example, adjust it to your needs.
This is a great way but I think there is an error somewhere as laravel would not allow you to access a non-static function so instead of using ``` SomeModel::searchBloodGroup($bloodGroup)->searchLocation($location)->get(); ``` you could simply use ``` SomeModel::BloodGroup($bloodGroup)->Location($location)->get(); ``` take note of the **searchBloodGroup** has been changed to **BloodGroup**, that's how you will use it for all others also.
Laravel multiple column eloquent search query
[ "mysql", "sql", "laravel-4" ]
I'm creating custom views that show totals for different things in a database, and I'd like to also show the differences. For example; ``` SELECT (SELECT COUNT(*) FROM `documents`) AS `doc_count`, (SELECT COUNT(*) FROM `contacts`) AS `user_count`, (`doc_count` - `user_count`) AS `difference`; ``` I get an error using the aliases this way. Is there a way to write this query without repeating `select count(*)` queries?
You could wrap both counts in a derived table, then the outer query can use the aliases freely: ``` SELECT doc_count, user_count, doc_count - user_count AS difference FROM (SELECT (SELECT COUNT(*) FROM `documents`) AS doc_count, (SELECT COUNT(*) FROM `contacts`) AS user_count) t ```
No, you can't use the aliases at the same level of the query; you have to repeat the whole expression or use a subselect: ``` SELECT (SELECT COUNT(*) FROM `documents`) AS `doc_count`, (SELECT COUNT(*) FROM `contacts`) AS `user_count`, ((SELECT COUNT(*) FROM `documents`) - (SELECT COUNT(*) FROM `contacts`)) AS `difference`; ```
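The derived-table approach from the first answer can be demonstrated end to end. A runnable sketch with made-up row counts (5 documents, 3 contacts), run against SQLite, where the outer query references the aliases computed once in the inner query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INT)")
conn.execute("CREATE TABLE contacts (id INT)")
conn.executemany("INSERT INTO documents VALUES (?)", [(i,) for i in range(5)])
conn.executemany("INSERT INTO contacts VALUES (?)", [(i,) for i in range(3)])

# Each count is computed exactly once in the derived table; the outer
# SELECT may then refer to the aliases, including in expressions.
row = conn.execute("""
    SELECT doc_count, user_count, doc_count - user_count AS difference
    FROM (SELECT (SELECT COUNT(*) FROM documents) AS doc_count,
                 (SELECT COUNT(*) FROM contacts)  AS user_count)
""").fetchone()
```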
How to show the difference between two counts using aliases?
[ "mysql", "sql", "select" ]
I am doing a SELECT from a MySQL table. The table looks like this ``` year tour starts scoring_avg 1990 EUR 5 71.56 1990 USA 6 0.0 1991 EUR 12 71.21 1991 USA 8 69.23 ``` I am doing a SELECT like so ``` SELECT year, SUM(starts), SUM(starts*scoring_avg) as scoring_avg FROM scores GROUP BY year ``` The goal is to get the combined scoring average for the year, combining the EUR and USA rows for each year. After the SELECT, I divide scoring\_avg by starts. Since I added a 6 to starts from the second line, with no scoring\_avg for that line, the result is not correct. It works fine for year 1991 but not for year 1990, since the scoring\_avg for the USA row is 0. Is there a way the SELECT can be modified to only use starts\*scoring\_avg in the SUM where scoring\_avg in that row is greater than 0? Thank you. -- Ed
If you just want to change `scoringavg`, use conditional aggregation: ``` SELECT year, SUM(starts), SUM(case when scoring_avg > 0 then starts*scoring_avg end) as scoring_avg FROM scores GROUP BY year; ``` However, I would suggest doing all the work in a single query and not doing any division afterwards. The following calculates the average that you want: ``` SELECT year, SUM(starts), (SUM(case when scoring_avg > 0 then starts*scoring_avg end) / SUM(scoring_avg > 0) ) as scoring_avg FROM scores GROUP BY year; ```
You may try this: ``` SELECT year, SUM(starts), SUM(starts*scoring_avg) as scoring_avg FROM scores WHERE scoring_avg > 0 GROUP BY year ```
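The conditional-aggregation idea can be carried one step further so the whole weighted average happens in SQL, with no division afterwards. This sketch is not either answer verbatim: it divides by the starts of only the qualifying rows, which is one reasonable definition of the combined average; adjust the denominator if yours differs. Run against the question's exact data in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (year INT, tour TEXT, starts INT, scoring_avg REAL)")
conn.executemany("INSERT INTO scores VALUES (?,?,?,?)", [
    (1990, 'EUR', 5, 71.56), (1990, 'USA', 6, 0.0),
    (1991, 'EUR', 12, 71.21), (1991, 'USA', 8, 69.23),
])

# Weight only the rows with a real average, and divide by the starts of
# those same rows, so the zero row cannot skew the result.
rows = conn.execute("""
    SELECT year,
           SUM(CASE WHEN scoring_avg > 0 THEN starts * scoring_avg END)
           / SUM(CASE WHEN scoring_avg > 0 THEN starts END) AS combined_avg
    FROM scores GROUP BY year ORDER BY year
""").fetchall()
```

For 1990 only the EUR row counts, giving 71.56; for 1991 both rows count, giving (12·71.21 + 8·69.23) / 20 = 70.418.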
MYSQL avoid SELECT computation if multiplying by ZERO
[ "mysql", "sql", "select" ]
I am in the process of migrating a SQL Server database to Oracle, where I have to convert a SQL Server procedure which uses the special tables called `INSERTED` and `DELETED` in SQL Server. As per my understanding these tables hold copies of the data of the last inserted/deleted records. (Find the MSDN article here: <http://msdn.microsoft.com/en-us/library/ms191300.aspx>) Are there any similar tables in Oracle to achieve this? Please advise. UPDATE: Thanks for your answers and comments. I think I need to explain the situation some more. Here is the full story to understand the real scenario: * The database contains tables and shadow tables (a shadow table has an additional column). * When a table is updated, the same changes should be recorded in the relevant shadow table with some additional data. * For this purpose there are triggers for each table (these triggers copy data to the relevant shadow table). * The above-mentioned procedure generates these triggers dynamically for each and every table. * Now the real problem is I don't have knowledge about the columns, as the triggers are generated dynamically for each table. * Basically I can't get values like :NEW.col\_1 or :OLD.col\_1 as APC mentioned. Can I? Or else I have to write all those triggers manually using the prefixes :NEW and :OLD rather than trying to generate them dynamically. I am using Oracle 11g
Thanks for the answers and comments. Here is the complete solution to my problem. If someone meets the exact same problem, this will help. ``` create or replace PROCEDURE CreateTrackingTriggers ( -- take the target table and shadow user as arguments v_TableName IN NVARCHAR2 DEFAULT NULL, v_ShadowUser IN NVARCHAR2 DEFAULT 'SHADOW_USER' ) AUTHID CURRENT_USER -- grant permission to create triggers AS v_TriggerName NVARCHAR2(500); v_ColList NVARCHAR2(2000); v_ColList_shadow NVARCHAR2(2000); v_SQLCommand VARCHAR2(4000); v_ColName NVARCHAR2(500); v_ColSize NUMBER(10,0); v_Prefix NVARCHAR2(500); v_count NUMBER(1,0); BEGIN DECLARE -- define a cursor to get the columns of the target table. order by COLUMN_ID is important CURSOR Cols IS SELECT COLUMN_NAME , CHAR_COL_DECL_LENGTH FROM USER_TAB_COLS WHERE TABLE_NAME = upper(v_TableName) order by COLUMN_ID; -- define a cursor to get the columns of the target shadow table. order by COLUMN_ID is important CURSOR Shadow_Cols IS SELECT COLUMN_NAME , CHAR_COL_DECL_LENGTH FROM ALL_TAB_COLS WHERE TABLE_NAME = upper(v_TableName) and upper(owner)=upper(v_ShadowUser) order by COLUMN_ID; BEGIN -- generate the trigger name for the target table v_TriggerName := 'TRG_' || upper(v_TableName) || '_Track' ; -- check v_count to determine whether the shadow table exists; if not, handle it select count(*) into v_count from all_tables where table_name = upper(v_TableName) and owner = upper(v_ShadowUser); -- iterate the cursor, generating column names prefixed with ':new.' OPEN Cols; FETCH Cols INTO v_ColName,v_ColSize; WHILE Cols%FOUND LOOP BEGIN IF v_ColList IS NULL THEN v_ColList := ':new.'||v_ColName ; ELSE v_ColList := v_ColList || ',' || ':new.'||v_ColName; END IF; FETCH Cols INTO v_ColName,v_ColSize; END; END LOOP; CLOSE Cols; -- iterate the cursor: get the shadow table columns OPEN Shadow_Cols; FETCH Shadow_Cols INTO v_ColName,v_ColSize; WHILE Shadow_Cols%FOUND LOOP BEGIN IF v_ColList_shadow IS NULL THEN v_ColList_shadow := v_ColName; ELSE v_ColList_shadow := v_ColList_shadow || ',' || v_ColName; END IF; FETCH Shadow_Cols INTO v_ColName,v_ColSize; END; END LOOP; CLOSE Shadow_Cols; -- create the trigger command. This generates the trigger that duplicates the target table's data into the shadow table v_SQLCommand := 'CREATE or REPLACE TRIGGER '||v_TriggerName||' AFTER INSERT OR UPDATE OR DELETE ON '||upper(v_TableName)||' REFERENCING OLD AS old NEW AS new FOR EACH ROW DECLARE ErrorCode NUMBER(19,0); BEGIN -- v_ColList_shadow : shadow table column list -- v_ColList : target table column list with :new prefixed INSERT INTO '|| v_ShadowUser ||'.'||upper(v_TableName)||'('||v_ColList_shadow||') values ('||v_ColList||'); EXCEPTION WHEN OTHERS THEN ErrorCode := SQLCODE; END;'; EXECUTE IMMEDIATE v_SQLCommand; END; END; ```
Oracle triggers use pseudo-records rather than special tables. That is, instead of tables we can access the values of individual columns. We distinguish pseudo-records in the affected table from records in (other) tables by using the prefixes `:NEW` and `:OLD` . Oracle allows us to declare our own names for these, but there is really no good reason for abandoning the standard. Which column values can we access? ``` Action :OLD :NEW ------ ---- ---- INSERTING n/a Inserted value UPDATING Superseded value Amended value DELETING Deleted value n/a ``` You will see that `:OLD` is the same as the MSSQL table `DELETED` and `:NEW` is the same as table `INSERTED` So, to trigger a business rule check when a certain column is updated: ``` create or replace trigger t23_bus_check_trg before update on t23 for each row begin if :NEW.col_1 != :OLD.col_1 then check_this(:NEW.col_1 , :OLD.col_1); end if; end t23_bus_check_trg; ``` There's a whole chapter on records in the PL/SQL Reference. [Find out more](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#LNPLS99955).
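The `:OLD`/`:NEW` pseudo-record idea isn't unique to Oracle, which makes it easy to demonstrate in a runnable form: SQLite triggers expose the same `OLD` and `NEW` rows (without Oracle's colon prefix). This sketch mirrors the answer's business-rule example by auditing changes to `col_1`; the table and column names follow the answer's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t23 (col_1 INT);
CREATE TABLE audit (old_val INT, new_val INT);
-- OLD/NEW play the same role as SQL Server's DELETED/INSERTED tables,
-- but per-row rather than per-statement.
CREATE TRIGGER t23_audit AFTER UPDATE ON t23
WHEN NEW.col_1 <> OLD.col_1
BEGIN
  INSERT INTO audit VALUES (OLD.col_1, NEW.col_1);
END;
INSERT INTO t23 VALUES (1);
UPDATE t23 SET col_1 = 2;
UPDATE t23 SET col_1 = 2;  -- no actual change: the WHEN clause skips the body
""")
audit_rows = conn.execute("SELECT * FROM audit").fetchall()
```

One important difference from SQL Server: `INSERTED`/`DELETED` are statement-level tables holding all affected rows, while Oracle's `:OLD`/`:NEW` (and SQLite's `OLD`/`NEW`) are row-level pseudo-records seen once per affected row.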
Oracle equivalent for SQL Server INSERTED and DELETED tables
[ "sql", "sql-server", "oracle", "triggers" ]
I'm having trouble running/configuring a query for Microsoft SQL Server; the query is as follows: ``` SELECT ps.WeekIncluded AS PaidWeeks, PayD.PayDate AS PayDates, ps.PayYear AS PYear, months.PMonth, (SELECT sum(DayPay) FROM Shifts GROUP BY WeekNumber) FROM dbo.PayStructure ps JOIN dbo.Months Months ON ps.MonthID = months.ID JOIN dbo.PayDates PayD ON ps.MonthID = PayD.MonthID Group BY ps.MonthID ``` What this is trying to do is create a view (not included in the snippet) using three tables, selecting the sum of DayPay in Shifts grouped by week number, to be later joined by week number to the specified month. Unfortunately I'm getting: > Column 'dbo.PayStructure.WeekIncluded' is invalid in the select list > because it is not contained in either an aggregate function or the > GROUP BY clause. And using: ``` SELECT ps.WeekIncluded AS PaidWeeks, PayD.PayDate AS PayDates, ps.PayYear AS PYear, months.PMonth, (SELECT sum(DayPay) FROM Shifts) FROM dbo.PayStructure ps JOIN dbo.Months Months ON ps.MonthID = months.ID JOIN dbo.PayDates PayD ON ps.MonthID = PayD.MonthID ``` Returns: ![Successful query, wrong results](https://i.stack.imgur.com/IHwwi.png) Every selected value is equal to *60.36*, where what I'm trying to get is: > January - Null > > February - Null > > March - Null > > April - Null > > May - Null > > June - Null > > July - Null > > August - Null > > September - 60.36 > > October - Null > > December - Null Null being for 11/12 months due to having no data input for those week numbers/months. --- As asked for: **Dbo.Shifts** ![Dbo.shifts](https://i.stack.imgur.com/tqmVz.png) **Dbo.PayStructure** ![dbo.PayStructure](https://i.stack.imgur.com/ZVPCX.png) **Dbo.Months** ![Dbo.Months](https://i.stack.imgur.com/l9DhY.png) --- Attempted: ``` SELECT ps.WeekIncluded AS PaidWeeks, PayD.PayDate AS PayDates, ps.PayYear AS PYear, SUM(Sh.DayPay) FROM dbo.PayStructure ps LEFT JOIN dbo.Months Months ON ps.MonthID = months.ID LEFT JOIN dbo.PayDates PayD ON ps.MonthID = PayD.MonthID LEFT JOIN dbo.Shifts Sh ON Sh.WeekNumber = ps.WeekIncluded GROUP BY Sh.WeekNumber ```
``` SELECT ps.WeekIncluded AS PaidWeeks, PayD.PayDate AS PayDates, ps.PayYear AS PYear, months.PMonth, SumPay FROM dbo.PayStructure ps JOIN dbo.Months Months ON ps.MonthID = months.ID JOIN dbo.PayDates PayD ON ps.MonthID = PayD.MonthID LEFT JOIN (SELECT S.WeekNumber, PSI.MonthId, PSI.PayYear, sum(DayPay) AS SumPay FROM Shifts S JOIN dbo.PayStructure PSI ON S.WeekNumber = PSI.WeekIncluded AND YEAR(S.ShiftDate) = PSI.PayYear JOIN dbo.Months M ON PSI.MonthId = M.ID GROUP BY S.WeekNumber, PSI.MonthId, PSI.PayYear ) T ON PS.MonthId = T.MonthId AND PS.PayYear = T.PayYear AND PS.WeekIncluded = T.WeekNumber ```
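The pattern used above — pre-aggregating the shifts in a derived table and `LEFT JOIN`ing it back, so months with no matching week come back as NULL — can be sanity-checked with a minimal in-memory SQLite sketch. The table and column names below are simplified stand-ins for the question's schema, and the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE months (id INTEGER, pmonth TEXT)")
cur.execute("CREATE TABLE paystructure (monthid INTEGER, weekincluded INTEGER)")
cur.execute("CREATE TABLE shifts (weeknumber INTEGER, daypay REAL)")
cur.executemany("INSERT INTO months VALUES (?, ?)",
                [(1, "January"), (2, "February"), (9, "September")])
cur.executemany("INSERT INTO paystructure VALUES (?, ?)", [(1, 1), (2, 5), (9, 36)])
# only week 36 has any shifts recorded
cur.executemany("INSERT INTO shifts VALUES (?, ?)", [(36, 30.18), (36, 30.18)])

rows = cur.execute("""
    SELECT m.pmonth, t.sumpay
    FROM paystructure ps
    JOIN months m ON ps.monthid = m.id
    LEFT JOIN (SELECT weeknumber, SUM(daypay) AS sumpay
               FROM shifts GROUP BY weeknumber) t
      ON t.weeknumber = ps.weekincluded
    ORDER BY m.id
""").fetchall()
print(rows)  # [('January', None), ('February', None), ('September', 60.36)]
```

Months without shift data come back as `None` (SQL NULL), which is exactly the per-month NULL behaviour the question asks for.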
The error message is fairly clear; SQL Server cannot know from your query which possible value of `WeekIncluded` it should include in the result set for your query. Think of it yourself -- there will be one row per month in your output and one "slot" for a WeekIncluded value. Which one should be shown? Try running the query like this: ``` SELECT MIN(ps.WeekIncluded) AS FromWeek, MAX(ps.WeekIncluded) AS ToWeek, . . . ``` and you should get a better idea of what's going on. A couple of other engines include an extra aggregate function called `group_concat` to handle this kind of query, but SQL Server makes it pretty hard to roll up a comma-separated list of values in an aggregate like this.
Unable to run query in Microsoft SQL Server
[ "", "sql", "sql-server", "" ]
Modeling a PostgreSQL database, is it a bad practice to create a table to serve other tables? For example, create a table with default address fields. ``` CREATE TABLE address ( zipcode char(8) NOT NULL, street varchar(80) NOT NULL, number varchar(10) NOT NULL, city varchar(60) NOT NULL, state char(2) NOT NULL ); ``` And then inherit it in every table which uses these fields. Example: ``` CREATE TABLE customer ( id integer PRIMARY KEY, name varchar(80) ) INHERITS (address); CREATE TABLE company ( id integer PRIMARY KEY, name varchar(80) ) INHERITS (address); CREATE TABLE building ( id integer PRIMARY KEY, name varchar(80) ) INHERITS (address); ``` The idea is to apply something like the [Single responsibility principle](http://en.wikipedia.org/wiki/Single_responsibility_principle) in the table creation process. One of my main questions is whether the table `address` can cause a significant loss of performance because it will keep getting bigger.
**If** the intention is to copy the table layout, the basic idea is fine, but **[inheritance](http://www.postgresql.org/docs/current/interactive/ddl-inherit.html)** is a misunderstanding. The idea to ***copy*** the table schema from another table is sound (if you really *need* a copy). But use ``` CREATE TABLE new_table (LIKE template_table); ``` Or ``` CREATE TABLE new_table (LIKE template_table INCLUDING ALL); ``` That's a *different*, built-in feature of the basic [`CREATE TABLE`](http://www.postgresql.org/docs/current/interactive/sql-createtable.html) statement. Copies the schema (with or without selected details), but the result is an *independent* table. More details in this recent related answer: * [Merging two tables into one with the same column names](https://stackoverflow.com/questions/25591824/merging-two-tables-into-one-with-the-same-column-names/25597446#25597446) You can also *add columns* like in your examples with inheritance: ``` CREATE TABLE new_table (LIKE template_table INCLUDING ALL, name text); ```
That isn't a good idea. Querying the `address` table will show you a bunch of jumbled addresses from all sources, but nothing to connect them to anything else usefully. This would be better done with a conventional relational model via a foreign key reference to an address ID. Another option is to define the address as a composite type and embed that in each table. You'll find that lots of client drivers don't like composite types very much, though, so it'll be clumsy to work with. Really, stick with the simple relational approach here.
Is it a bad practice to use a table only for multiple inheritance?
[ "", "sql", "postgresql", "inheritance", "postgresql-9.2", "postgresql-8.4", "" ]
I am struggling with an SQL query. The situation is as follows: I have 2 tables, Users and Results: **Users** * user\_id (PK) **Results** * result\_id * fk module\_id I am trying to select all users from the Results table who don't have an entry for module 1. I can select all users with no corresponding record using the following: ``` SELECT * FROM user u LEFT JOIN results r ON u.user_id = r.user_id WHERE isnull(r.result_id); ``` However, I can't for the life of me figure out how to ignore all rows in the Results table that have a module id that is not 1. Do I need a subquery? Any help would be much appreciated.
Is that what you want: ``` SELECT * FROM user U LEFT OUTER JOIN results R ON R.user_id = U.user_id WHERE R.result_id IS NULL OR(R.result_id IS NOT NULL AND R.module_id <> 1) ``` Hope this will help you.
If your requirement is to JOIN all users to their results unless the result happens to be module #1, you can use this: ``` SELECT * FROM user u LEFT JOIN results r ON u.user_id = r.user_id AND r.module_id != 1; ``` I'm not completely clear on whether you want all users and associated non-module 1 results, or if you're just looking for those users who have no entry for module 1 whatsoever. If that's what you're looking for, then using NOT IN (or NOT EXISTS, which would be faster - can't remember if MySQL supports it) would be the solution, e.g., ``` SELECT * FROM user u WHERE u.user_id NOT IN ( SELECT user_id FROM results WHERE module_id = 1 ); ```
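The `NOT IN` variant above can be sanity-checked with a small in-memory SQLite run. The sample data below is invented for illustration (user 1 has a module-1 result, user 2 only module 2, user 3 has no results at all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE user (user_id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE results (result_id INTEGER, user_id INTEGER, module_id INTEGER)")
cur.executemany("INSERT INTO user VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO results VALUES (?, ?, ?)", [(10, 1, 1), (11, 2, 2)])

# users with no entry whatsoever for module 1
no_module_1 = cur.execute("""
    SELECT u.user_id FROM user u
    WHERE u.user_id NOT IN (SELECT user_id FROM results WHERE module_id = 1)
    ORDER BY u.user_id
""").fetchall()
print(no_module_1)  # [(2,), (3,)]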
SQL include results where id does not exist in related table
[ "", "mysql", "sql", "join", "" ]
I am trying to create a view as follows : ``` CREATE VIEW TEMP_VIEW AS SELECT a.actor_id, a.first_name, a.last_name, LISTAGG(DISTINCT(c.name ) || ': ' || (SELECT LISTAGG(f.title,', ') WITHIN GROUP (ORDER BY f.title) FROM film f INNER JOIN film_category fc ON f.film_id = fc.film_id INNER JOIN film_actor fa ON f.film_id = fa.film_id WHERE fc.category_id = c.category_id AND fa.actor_id = a.actor_id ) ,'; ') WITHIN GROUP (ORDER BY c.name) AS film_info FROM actor a LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id LEFT JOIN film_category fc ON fa.film_id = fc.film_id LEFT JOIN category c ON fc.category_id = c.category_id GROUP BY a.actor_id, a.first_name, a.last_name; ``` The error I am getting is : > LISTAGG(DISTINCT(c.name ) || ': ' || > \* > ERROR at line 5: > ORA-30482: DISTINCT option not allowed for this function The query output of the view should be like : ``` actor_id | 1 first_name | PENELOPE last_name | GUINESS film_info | Animation: ANACONDA CONFESSIONS; Children: LANGUAGE COWBOY; Classics: COLOR PHILADELPHIA, WESTWARD SEABISCUIT; Comedy: VERTIGO NORTHWEST; Documentary: ACADEMY DINOSAUR; Family: KING EVOLUTION, SPLASH GUMP; Foreign: MULHOLLAND BEAST; Games: BULWORTH COMMANDMENTS, HUMAN GRAFFITI; Horror: ELEPHANT TROJAN, LADY STAGE, RULES HUMAN; Music: WIZARD COLDBLOODED; New: ANGELS LIFE, OKLAHOMA JUMANJI; Sci-Fi: CHEAPER CLYDE; Sports: GLEAMING JAWBREAKER ```
No.1 - The error message is quite clear: you cannot use the DISTINCT option within LISTAGG. No.2 - In a normalized data structure, a category name will always be unique. Try and check it yourself: ``` SELECT COUNT(DISTINCT(name)) FROM category; SELECT COUNT(*) FROM category; ``` If your schema is properly normalized the results will be the same for both queries; if so, take out the word DISTINCT from your view definition. The rest looks good.
I would do this with nested selects. I think the following does what you want: ``` SELECT actor_id, first_name, last_name, LISTAGG(category || ': ' || films, '; ') WITHIN GROUP (ORDER BY category) FROM (SELECT a.actor_id, a.first_name, a.last_name, c.name as category, LISTAGG(f.title, ', ') WITHIN GROUP (ORDER BY f.title) as films FROM actor a LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id LEFT JOIN film f on fa.film_id = f.film_id LEFT JOIN film_category fc ON fa.film_id = fc.film_id LEFT JOIN category c ON fc.category_id = c.category_id GROUP BY a.actor_id, a.first_name, a.last_name, c.name ) t GROUP BY actor_id, first_name, last_name; ```
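`LISTAGG` is Oracle-specific, but the same two-level aggregation idea can be sketched with SQLite's `group_concat` (a rough analogue that does not offer `WITHIN GROUP` ordering guarantees). The joins are collapsed into one flat invented table for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE actor_films (actor TEXT, category TEXT, title TEXT)")
cur.executemany("INSERT INTO actor_films VALUES (?, ?, ?)", [
    ("PENELOPE", "Animation", "ANACONDA CONFESSIONS"),
    ("PENELOPE", "Classics", "COLOR PHILADELPHIA"),
    ("PENELOPE", "Classics", "WESTWARD SEABISCUIT"),
])

# inner aggregate: films per (actor, category); outer: categories per actor
rows = cur.execute("""
    SELECT actor, group_concat(category || ': ' || films, '; ') AS film_info
    FROM (SELECT actor, category, group_concat(title, ', ') AS films
          FROM actor_films GROUP BY actor, category)
    GROUP BY actor
""").fetchall()
# note: group_concat order is not guaranteed the way LISTAGG's WITHIN GROUP is
print(rows[0][1])
```

This reproduces the "Category: film, film; Category: film" shape from the question, one row per actor.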
Need some assistance in creating a view definition
[ "", "sql", "oracle", "" ]
I want to add a new column to an already existing table, but I want to give it a default value dependent on already existing data: e.g. Each record has a `start_date`. Now I want to add an `open_until` column, and I want to fill it with the value of `start_date` for each existing record. (The upcoming records will be able to pick a different value.) Is there a friendly way to do this?
You can also do it within South. The only caveat is that you need two steps for that: 1. A **schema migration** that adds the open\_until column ``` from django.db import models import datetime class MyModel(models.Model): start_date = models.DateField() open_until = models.DateField(default=datetime.date.today) ``` `$ python manage.py schemamigration --auto appname` 2. A **data migration** that fills existing rows with the value of that other column `$ python manage.py datamigration appname populate_open_until` ``` import datetime class Migration(DataMigration): def forwards(self, orm): "Set open_until value to that of start_date for existing rows" for t in orm.MyModel.objects.all(): t.open_until = t.start_date t.save() def backwards(self, orm): "Revert back to default" for t in orm.MyModel.objects.all(): t.open_until = datetime.date.today() t.save() ``` (*optional*) In step 1 you can either provide a temporary default value or make it optional and add a 3rd step 3. A **schema migration** that makes the open\_until column mandatory.
In Python 3.8 I first add the fields to the `MyApp` models file so it looks like: ``` from django.db import models import datetime class MyModel(models.Model): start_date = models.DateField() open_until = models.DateField(default=datetime.date.today) ``` Then, after running `manage.py makemigrations`, add these lines to the newly created migrations file: ``` def forward(apps, schema_editor): my_model_objects = apps.get_model('MyApp', 'MyModel').objects.all() for t in my_model_objects: t.open_until = t.start_date t.save() def reverse(apps, schema_editor): pass class Migration(migrations.Migration): operations = [ " the base operations go here ", migrations.RunPython(forward, reverse), ] ```
Django migrations/South: New column takes default value from another column of the same record
[ "", "sql", "django", "migration", "django-south", "relational", "" ]
I am working on some kind of chat app. I store every message in a SQLite table such as the following ``` | id | fromId | toId | date | content | --------------------------------------------- | 0 | 1423 | 90 | ... | ... | | 1 | 324 | 90 | ... | ... | | 2 | 90 | 324 | ... | ... | | 3 | 43 | 1423 | ... | ... | | 4 | 439 | 324 | ... | ... | | 5 | 90 | 324 | ... | ... | | 6 | 324 | 43 | ... | ... | ``` I am trying to write a SQL request that shows chat dialog previews. In other words, I need to get the most recent message for each pair of two subscribers. I guess that this is a frequent question, but I could not find any working solution (or was not able to use the solutions correctly).
Try this: ``` SELECT * FROM chat c WHERE NOT EXISTS (SELECT * FROM chat WHERE c.fromId IN (fromId, toId) AND c.toID IN (fromId, toID) AND c.id < id); ``` <http://sqlfiddle.com/#!7/d57f3/2/0>
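Since the question is about SQLite, the query above can be reproduced verbatim against the question's data with Python's built-in sqlite3 module (an `ORDER BY` and explicit column list are added here only so the result is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE chat (id INTEGER, fromId INTEGER, toId INTEGER)")
cur.executemany("INSERT INTO chat VALUES (?, ?, ?)", [
    (0, 1423, 90), (1, 324, 90), (2, 90, 324), (3, 43, 1423),
    (4, 439, 324), (5, 90, 324), (6, 324, 43),
])

# keep a row only if no later row involves the same pair of subscribers
latest = cur.execute("""
    SELECT id, fromId, toId FROM chat c
    WHERE NOT EXISTS (SELECT 1 FROM chat
                      WHERE c.fromId IN (fromId, toId)
                        AND c.toId   IN (fromId, toId)
                        AND c.id < id)
    ORDER BY id
""").fetchall()
print(latest)  # one row per conversation pair: ids 0, 3, 4, 5, 6
```

Rows 1 and 2 are superseded by row 5 for the 90/324 pair, while 439/324 keeps its own row, which shows the direction-insensitive pairing works.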
If you have indexes on `chat(fromid, toid)` and `chat(toid, fromid)`, then the most efficient way may be: ``` select c.* from chat c where not exists (select 1 from chat c2 where c2.fromId = c.fromId and c2.toId = c.toId and c2.id > c.id ) and not exists (select 1 from chat c2 where c2.fromId = c.toId and c2.toId = c.fromId and c2.id > c.id ); ``` SQLite should be able to use the indexes for each of the subqueries.
Get the latest message for each pair of clients using SQL
[ "", "android", "sql", "sqlite", "" ]
I want to import a csv file into a database **Header** id,name,surname,date1,date2 **Data** 10001,Bob,Roberts,03/06/2007 15:18:25.10,03/06/2007 15:18:29.19 This file has millions of rows, and in order to import it I used the following command: ``` mysqlimport --ignore-lines=1 --fields-terminated-by=, --columns='id,name,surname,date1,date2' --local -u root -p Database /home/server/Desktop/data.csv ``` My problem is that when I try to import the file, the dates are not stored properly and they look like this: ``` '0000-00-00 00:00:00' ``` I tried many things but nothing works. I suppose the problem is caused by the fact that the time has milliseconds and there is a dot rather than a colon at the end of the string. My dates are stored in a timestamp column. Can you help me please? Thanks
I played with this issue for a bit and solved it by converting my dates with an awk script and upgrading MySQL to version 5.6, which supports milliseconds. Thanks anyway.
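The awk script itself isn't shown; a hypothetical Python equivalent of that pre-processing step could look like the helper below. It uses the exact timestamp format from the question and assumes the dates are MM/DD (the question's "03/06/2007" is ambiguous between MM/DD and DD/MM):

```python
from datetime import datetime

def to_mysql_ts(value: str) -> str:
    """Convert e.g. '03/06/2007 15:18:25.10' to MySQL's YYYY-MM-DD form."""
    # %f accepts 1-6 fractional digits and right-pads to microseconds
    dt = datetime.strptime(value, "%m/%d/%Y %H:%M:%S.%f")
    return dt.strftime("%Y-%m-%d %H:%M:%S.%f")

print(to_mysql_ts("03/06/2007 15:18:25.10"))  # 2007-03-06 15:18:25.100000
```

Rewriting the two date columns of each CSV row through this function before running mysqlimport would give MySQL 5.6 values it can parse directly.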
Your dates aren't in the MySQL standard format, which is "YYYY-mm-dd HH:MM:SS". Mysqlimport should be able to accept some date conversion function among the command line parameters, like you would do if you were using the "LOAD DATA" command (which is used by the mysqlimport utility), but at this time it isn't: <http://bugs.mysql.com/bug.php?id=55666> Here's a way to work around this: <http://blog.dgaspar.com/2010/08/01/the-set-clause/> If you want to do this conversion in one step and don't want to use that workaround, you will need to directly use the "LOAD DATA" command: ``` load data infile '/tmp/xxx.dat' into table xxx fields terminated by '|' lines terminated by '\n' (col1, col2, @col3, @col4, col5) set col3 = str_to_date(@col3, '%m/%d/%Y'), col4 = str_to_date(@col4, '%d/%m/%Y') ``` I got this code from: <http://dev.mysql.com/doc/refman/5.1/en/load-data.html#c8828>
Import dates in MySQL
[ "", "mysql", "sql", "mysqlimport", "" ]
I have the following MySQL query: ``` select * from u, s, c, ut, m ``` which works fine, displaying all columns of all tables. I want to add record counts to only the `u` table. I tried this: ``` select (select count(*) as totalcount from u), * from u, s, c, ut, m ``` but it is throwing an error.
This query is working for me : ``` select * , (select count(*) from u) as totalcount from u , s , c , ut , m ; ```
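The shape of that query — an uncorrelated scalar subquery attached to every row of a comma join — is easy to verify with SQLite; two throwaway tables stand in for the five in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE u (a INTEGER)")
cur.execute("CREATE TABLE s (b INTEGER)")
cur.executemany("INSERT INTO u VALUES (?)", [(1,), (2,), (3,)])
cur.execute("INSERT INTO s VALUES (10)")

# the scalar subquery is evaluated once and repeated on every joined row
rows = cur.execute(
    "SELECT *, (SELECT COUNT(*) FROM u) AS totalcount FROM u, s"
).fetchall()
print(rows)  # every joined row carries totalcount = 3
```

Each of the three joined rows carries the same `totalcount`, which is the record count of `u` only, independent of the join.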
``` select (select count(*) as totalcount from u), * from u, s, c, ut, m ``` I tried your query and it works fine. Just try this; I hope it works. ``` select (select count(*) from u) as totalcount, * from u, s, c, ut, m ```
Adding 'count' in MySQL query with multiple tables in 'from' clause
[ "", "mysql", "sql", "" ]
Very simple basic SQL question here. I have this table: ``` Row Id __________Hour__Minute__City_Search 1___1409346767__23____24_____Balears (Illes) 2___1409346767__23____13_____Albacete 3___1409345729__23____7______Balears (Illes) 4___1409345729__23____3______Balears (Illes) 5___1409345729__22____56_____Balears (Illes) ``` What I want to get is only one distinct row by ID and select the last City\_Search made by the same Id. So, in this case, the result would be: ``` Row Id __________Hour__Minute__City_Search 1___1409346767__23____24_____Balears (Illes) 3___1409345729__23____7______Balears (Illes) ``` What's the easier way to do it? Obviously I don't want to delete any data just query it. Thanks for your time.
``` SELECT Row, Id, Hour, Minute, City_Search FROM Table T JOIN ( SELECT MIN(Row) AS Row, ID FROM Table GROUP BY ID ) AS M ON M.Row = T.Row AND M.ID = T.ID ```
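Running that `MIN(Row)`-per-id pattern against the question's data is straightforward in SQLite (with `Row` renamed to `rownum` here, since the exact identifier is incidental):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE searches
               (rownum INTEGER, id INTEGER, hour INTEGER,
                minute INTEGER, city_search TEXT)""")
cur.executemany("INSERT INTO searches VALUES (?, ?, ?, ?, ?)", [
    (1, 1409346767, 23, 24, "Balears (Illes)"),
    (2, 1409346767, 23, 13, "Albacete"),
    (3, 1409345729, 23, 7, "Balears (Illes)"),
    (4, 1409345729, 23, 3, "Balears (Illes)"),
    (5, 1409345729, 22, 56, "Balears (Illes)"),
])

# one row per id: the one whose rownum is smallest for that id
rows = cur.execute("""
    SELECT t.rownum, t.id, t.city_search
    FROM searches t
    JOIN (SELECT MIN(rownum) AS rownum, id FROM searches GROUP BY id) m
      ON m.rownum = t.rownum AND m.id = t.id
    ORDER BY t.rownum
""").fetchall()
print(rows)  # [(1, 1409346767, 'Balears (Illes)'), (3, 1409345729, 'Balears (Illes)')]
```

Rows 1 and 3 come back, matching the expected result in the question.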
Can you change hour/minute to a timestamp? What you want in this case is to first select what uniquely identifies your row: ``` Select id, max(time) from [table] group by id ``` Then use that query to add the data to it. ``` SELECT id, city_search, time FROM (SELECT id, max(time) as lasttime FROM [table] GROUP BY id) as Tkey INNER JOIN [table] as tdata ON tkey.id = tdata.id AND tkey.lasttime = tdata.time ``` That should do it.
Filter SQL data by repetition on a column
[ "", "sql", "google-bigquery", "" ]
I have the following tables: Basically I have a many-to-many relation between students and courses using the junction table students\_courses ![enter image description here](https://i.stack.imgur.com/PRrvR.png) Here is some data populated into the tables: students: ![enter image description here](https://i.stack.imgur.com/5wNoM.png) courses ![enter image description here](https://i.stack.imgur.com/Dsyan.png) students\_courses: ![enter image description here](https://i.stack.imgur.com/VNj83.png) So basically I would like to select the full\_name and c\_id for a given student. So for example for the student with id=3 I would have Aurica 5 and Aurica 6. My first approach was to write: ``` select s.full_name,sc.c_id from students s, students_courses sc where sc.s_id=3 ``` But I obtain this: ``` Aurica 5 Aurica 6 Aurica 5 Aurica 6 Aurica 5 Aurica 6 ``` So it is duplicated by the number of rows of the students\_courses table. Now I'm not sure why this happens. If I were an SQL parser, I would parse it like this: "take the c\_id from students\_courses, full\_name from students, and display them if the students\_courses row respects the where filter" Now it works using join, but I don't really understand why the inner join is necessary. ``` select s.full_name, sc.c_id from students s inner join students_courses sc on sc.s_id=s.id and s.id=3; ``` Please explain a bit how the first SQL is interpreted by the SQL parser and why it works with join. Thanks,
When you select information from two tables, what it does is a cross product of all the records, and then it keeps all of the records that satisfy the where clause. You have 3 records in the Students table ``` id | full_name ---+---------- 3 | Aurica 4 | Aurica 5 | Aurica ``` And 6 records in the students\_courses table. ``` s_id | c_id -----+----- 3 | 5 3 | 6 4 | 7 4 | 8 5 | 9 5 | 10 ``` So before your where statement it creates 18 different records. To make it easy to see, I will include all of the columns. ``` s.id | s.full_name | sc.s_id | sc.c_id -----+-------------+---------+-------- 3 | Aurica | 3 | 5 3 | Aurica | 3 | 6 3 | Aurica | 4 | 7 3 | Aurica | 4 | 8 3 | Aurica | 5 | 9 3 | Aurica | 5 | 10 4 | Aurica | 3 | 5 4 | Aurica | 3 | 6 4 | Aurica | 4 | 7 4 | Aurica | 4 | 8 4 | Aurica | 5 | 9 4 | Aurica | 5 | 10 5 | Aurica | 3 | 5 5 | Aurica | 3 | 6 5 | Aurica | 4 | 7 5 | Aurica | 4 | 8 5 | Aurica | 5 | 9 5 | Aurica | 5 | 10 ``` From there it only displays the ones where sc.s_id=3 ``` s.full_name | sc.c_id ------------+-------- Aurica | 5 Aurica | 6 Aurica | 5 Aurica | 6 Aurica | 5 Aurica | 6 ``` The second query you had compares the values sc.s_id=s.id and only displays the ones where those values are the same, as well as s.id=3
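Those row counts can be verified directly: the comma join really does produce 3 × 6 = 18 intermediate rows, while an explicit join condition trims the result to the two expected ones. In-memory SQLite, with the data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (id INTEGER, full_name TEXT)")
cur.execute("CREATE TABLE students_courses (s_id INTEGER, c_id INTEGER)")
cur.executemany("INSERT INTO students VALUES (?, ?)",
                [(3, "Aurica"), (4, "Aurica"), (5, "Aurica")])
cur.executemany("INSERT INTO students_courses VALUES (?, ?)",
                [(3, 5), (3, 6), (4, 7), (4, 8), (5, 9), (5, 10)])

# no join condition at all: the full cartesian product
cross = cur.execute("SELECT * FROM students s, students_courses sc").fetchall()
print(len(cross))  # 18

# with the join condition the rows are paired up correctly
joined = cur.execute("""
    SELECT s.full_name, sc.c_id
    FROM students s
    JOIN students_courses sc ON sc.s_id = s.id
    WHERE s.id = 3
    ORDER BY sc.c_id
""").fetchall()
print(joined)  # [('Aurica', 5), ('Aurica', 6)]
```

The `WHERE sc.s_id=3` filter alone keeps 3 × 2 = 6 of the 18 rows (one copy per student), which is exactly the triplicated output the question shows.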
The SQL parser doesn't try to guess how your two tables are related. It would seem like the database engine has enough information to figure this out itself by following constraints, but SQL intentionally doesn't use the FK relationships to decide how to join your tables; you might want to remove constraints at a future date for some reason (such as in order to improve performance), and you wouldn't want dropping a constraint to alter how joins were made. The DBA needs freedom to change indexes and constraints without having to worry about having changed what results are returned by queries. Since it can't count on having complete information to go on, the SQL engine is not in the business of deducing/guessing relationships. It's up to the person writing the SQL to specify what they are joining on. If you don't give it any instructions telling it how to hook up the tables (using a JOIN ON clause or WHERE clause) then it creates a cross join, which gives you the duplicated results.
Don't understand why inner join is necessary for filtering in sql
[ "", "sql", "join", "" ]
I am facing a `to_char()` currency formatting problem here. The below code is working for me: ``` SELECT TO_CHAR(10000,'L99G999D99MI', 'NLS_NUMERIC_CHARACTERS = ''.,'' NLS_CURRENCY = $') "Amount" FROM DUAL; ``` which will provide me with the output: `$10,000.00`. Now, I want to convert the currency into a French format, where the desired output is `10 000,00`, or a Swiss format with output `10'000.00`. So, I modified the code as shown below for both of the cases above: ``` SELECT TO_CHAR(10000,'L99G999D99MI', 'NLS_NUMERIC_CHARACTERS = ''"", ""'' NLS_CURRENCY = ''$'' ') "Amount" FROM DUAL; SELECT TO_CHAR(10000,'L99G999D99MI', 'NLS_NUMERIC_CHARACTERS = ''". "'' NLS_CURRENCY = ''$'' ') "Amount" FROM DUAL; ``` But this code does not work and shows an error of `ORA-12702`. Is there any problem with the code?
If you want to do it in the query: ``` SELECT TO_CHAR(10000,'L99G999D99MI', 'NLS_NUMERIC_CHARACTERS = ''.'''''' NLS_CURRENCY = ''$'' ') "Amount" FROM DUAL; ``` Gives `$10'000.00` (as this string is getting pre-processed there are pairs of quotes around the characters (becoming single) and then to get a single-quote in the string you need four quotes to become one!) ``` SELECT TO_CHAR(10000,'L99G999D99MI', 'NLS_NUMERIC_CHARACTERS = '', '' NLS_CURRENCY = ''$'' ') "Amount" FROM DUAL; ``` Gives `$10 000,00`
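Outside Oracle, the same separator swap can be approximated in plain Python. This hypothetical helper mimics what `NLS_NUMERIC_CHARACTERS` does, taking the decimal separator first and the group separator second, in the same order Oracle expects them:

```python
def nls_format(amount: float, decimal_sep: str, group_sep: str) -> str:
    """Mimic TO_CHAR with NLS_NUMERIC_CHARACTERS = '<decimal><group>'."""
    s = f"{amount:,.2f}"                      # e.g. '10,000.00'
    return (s.replace(",", "\x00")            # protect group separators first
             .replace(".", decimal_sep)
             .replace("\x00", group_sep))

print(nls_format(10000, ",", " "))   # 10 000,00  (French style)
print(nls_format(10000, ".", "'"))   # 10'000.00  (Swiss style)
```

The placeholder character avoids the classic pitfall of swapping "," and "." in two passes, where the second replace would clobber the first.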
This can be used as well, since the decimal notation is already known for French locales: ``` SELECT TO_CHAR(1000000000,'999,999,999,999.99') "Amount" FROM DUAL; ```
SQL to_char Currency Formatting
[ "", "sql", "oracle", "currency", "to-char", "" ]
I am trying to copy data from one column to another column in a different table using MySQL, but the table that I am importing into has a foreign key constraint that is preventing me from doing this; Here is the table that I would like to import from (the product\_code column) Table1 ``` +----+--------------+-------------+-------+--------------+-----------+---------+-------+-------+ | id | product_code | distributor | brand | productname | wheelsize | pcd_1 | pcd_2 | pcd_3 | +----+--------------+-------------+-------+--------------+-----------+---------+-------+-------+ | 1 | F7050MHS20A2 | ******* | MAK | MOHAVE | 7 x 15 | 5x139.7 | | | | 2 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 3 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 4 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 5 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 6 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 7 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 8 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 9 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | | 10 | 3480 | ******* | KFZ | Winter Steel | 4.5 x 13 | 3x98 | | | +----+--------------+-------------+-------+--------------+-----------+---------+-------+-------+ ``` I would like to copy the `product_code` column into the `sku` column Table2 ``` +----------+----------+-------+--------------+ | id | value_id | pid | sku | +----------+----------+-------+--------------+ | 20315857 | 369781 | 41257 | 001 | | 20315858 | 369782 | 41256 | Config - ST5 | +----------+----------+-------+--------------+ ``` The problem is that the `value_id` column in Table2 is referencing `value_id` in Table3, so I either get a foreign key `constraint error` or a `lock wait timeout` ``` a foreign key constraint fails (`gravytra_topgear`.`am_finder_map`, CONSTRAINT `FK_MAP_VALUE` FOREIGN KEY (`value_id`) REFERENCES `am_finder_value` (`value_id`) ON D
``` Table 3 ``` +----------+-----------+-------------+----------------+ | value_id | parent_id | dropdown_id | name | +----------+-----------+-------------+----------------+ | 6771 | 0 | 4 | AC | | 6749 | 0 | 4 | Acura USA | | 6895 | 0 | 4 | Aixam | | 6872 | 0 | 4 | Alfa Romeo | | 6853 | 0 | 4 | Alfa Romeo USA | | 6772 | 0 | 4 | Alpina | | 6815 | 0 | 4 | AMC USA | | 6854 | 0 | 4 | Anhui Anchi | | 6928 | 0 | 4 | Ariel | | 6783 | 0 | 4 | ARO | +----------+-----------+-------------+----------------+ ``` Here is my query: `INSERT INTO table2 (sku) SELECT product_code FROM table1;` The table1 `product_code` column has over 2 million records in it and has caused my server to crash during the query. I know that there must be a better way to do this, but I can't figure out how, so I would like some assistance if possible, please.
The fix was simple: just wrap the queries inside ``` SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS = 0; SET UNIQUE_CHECKS = 0; # YOUR QUERIES HERE... SET AUTOCOMMIT = 1; SET FOREIGN_KEY_CHECKS = 1; SET UNIQUE_CHECKS = 1; ``` Then the data goes in; the only difficulty is the large size of the dataset, but breaking it into chunks helped a lot.
To me your query looks OK. Try breaking the "insert - select" into parts, either using some id in the where clause or using limit in the select. Insert 100 records and see how it goes. Check your time\_out variables and increase them accordingly if needed.
Copy Data from one column to another column in a different table using MySQL
[ "", "mysql", "sql", "" ]
I have a table like below ``` id name year -------------- 1 A 2000 2 B 2000 2 B 2000 2 B 2000 5 C 2000 1 D 2001 3 E 2001 ``` As you can see, in the year 2000 we missed id '3' and id '4', and in the year 2001 we missed id '2'. I want to generate my second table, which includes the missed items. 2nd table: ``` From-id to-id name year -------------------------------- 3 4 null 2000 2 null null 2001 ``` Which method in a SQL query can solve my problem?
This problem is known as "Gaps and Islands in Sequences". You can read this [article](https://www.simple-talk.com/sql/t-sql-programming/the-sql-of-gaps-and-islands-in-sequences/).
Here's something to get you started: ``` WITH cte AS ( SELECT * FROM (VALUES (1),(2),(3),(4),(5) ) Tally(number) ), cte2 as ( SELECT DISTINCT [year] FROM (VALUES (2000),(2000),(2001) )tbl([year]) ), cte3 as ( SELECT * FROM cte CROSS JOIN cte2 ) SELECT * FROM cte3 LEFT OUTER JOIN YourTable ON cte3.number = YourTable.id AND cte3.[year] = YourTable.[year] ``` A few notes: please avoid using reserved keywords as column names (such as year). Furthermore, since I didn't know how you'd handle multiple missing ranges, I did not format the output to reflect a range. For example: what would be your expected output if only one row with id=3 were in your table?
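The tally-table idea ports to other engines too. Here is a self-contained SQLite sketch that additionally bounds the number range by each year's maximum id, reproducing the question's missing ids (3 and 4 for 2000, 2 for 2001); the `year` column is renamed `yr` to sidestep the keyword issue noted above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT, yr INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "A", 2000), (2, "B", 2000), (2, "B", 2000), (2, "B", 2000),
    (5, "C", 2000), (1, "D", 2001), (3, "E", 2001),
])

missing = cur.execute("""
    WITH RECURSIVE
      bounds AS (SELECT yr, MAX(id) AS mx FROM t GROUP BY yr),
      nums(n) AS (SELECT 1
                  UNION ALL
                  SELECT n + 1 FROM nums WHERE n < (SELECT MAX(id) FROM t))
    SELECT nums.n AS missing_id, bounds.yr
    FROM bounds JOIN nums ON nums.n <= bounds.mx
    LEFT JOIN t ON t.id = nums.n AND t.yr = bounds.yr
    WHERE t.id IS NULL
    ORDER BY bounds.yr, nums.n
""").fetchall()
print(missing)  # [(3, 2000), (4, 2000), (2, 2001)]
```

The recursive `nums` CTE plays the role of the `VALUES` tally table, and the `LEFT JOIN ... WHERE t.id IS NULL` step picks out the gaps.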
SQL Server find the missing number
[ "", "sql", "sql-server", "t-sql", "" ]
Still working on a query for a project, and my partner has managed to come up with a nifty SQL statement that works wonders when run but won't seem to work in VBA, and it has got me questioning how much of SQL is supported in VBA. This is the original query which my work partner whipped up, and it works great when running the query in SQL: ``` SELECT crm_clients.`id`, crm_clients.`national_insurance`, crm_clients.`total_hmrc`, (SELECT crm_crmuseractions.title FROM dev_pfands.`crm_crmuseractions` WHERE crm_crmuseractions.`id` = crm_clients.`status`) AS `status` FROM dev_pfands.`crm_clients` INNER JOIN crm_client_cheques ON crm_clients.id = crm_client_cheques.`client_id` INNER JOIN dev_pfands.`crm_payments` ON crm_clients.id = crm_payments.`client_id` INNER JOIN dev_pfands.`crm_self_assesments` ON crm_clients.id = crm_self_assesments.`client_id` WHERE crm_clients.`status` = 9 OR crm_clients.`status` = 8 OR crm_clients.`status` = 7 OR crm_clients.`national_insurance` != '' OR crm_clients.`id` != '' ``` I know VBA likes the SQL structured a little differently, so I adapted it to this, which may be wrong, so if it is feel free to burn me on it because I need to learn.
``` sql = "SELECT crm_clients.id, crm_clients.national_insurance, crm_clients.total_hmrc _ (SELECT _ crm_crmuseractions.title _ FROM _ crm_crmuseractions _ WHERE crm_crmuseractions.id = crm_clients.status ) AS 'status _ FROM _ crm_clients _ INNER JOIN crm_client_cheques _ ON crm_clients.id = crm_client_cheques.client_id _ INNER JOIN crm_payments _ ON crm_clients.id = crm_payments.client_id _ INNER JOIN crm_self_assesments.client_id _ WHERE crm_clients.status = 9 _ OR crm_clients.status = 8 _ OR crm_clients.status = 7 _ OR crm_clients.national_insurance != '' _ OR crm_clients.id != '' " ``` Apologies in advance if it's something I've missed, but anything other than simple selects, inserts/deletes and updates (some of the other features of SQL like joins etc.) don't seem to work for me in VBA. If anyone knows where I've gone wrong that'll be great, and if I've done it fine and these are unsupported features, even an explanation of why would be great so I can relate it back to my work friend, who believes it works. Thanks in advance, guys.
``` Dim query As String query = "SELECT " & _ "crm_clients.id, " & _ "crm_clients.national_insurance, " & _ "crm_clients.total_hmrc, " & _ "(SELECT " & _ " crm_crmuseractions.Title " & _ "FROM " & _ " dev_pfands.crm_crmuseractions " & _ "WHERE crm_crmuseractions.`id` = crm_clients.status) AS 'status' " & _ "FROM " & _ "dev_pfands.crm_clients " & _ "INNER JOIN crm_client_cheques " & _ " ON crm_clients.id = crm_client_cheques.client_id " & _ "INNER JOIN dev_pfands.crm_payments " & _ " ON crm_clients.id = crm_payments.client_id " & _ "INNER JOIN dev_pfands.crm_self_assesments " & _ " ON crm_clients.id = crm_self_assesments.client_id " & _ "WHERE crm_clients.status = 9 " & _ "OR crm_clients.status = 8 " & _ "OR crm_clients.status = 7 " & _ "OR crm_clients.national_insurance != '' " & _ "OR crm_clients.id != ''" ```
You need to concatenate the strings of data across the multiple lines like so: ``` strText = "This is the first line " & _ "This is the second line" ``` Given the large amount of text you have, you may run into an error `Too many line continuations` (as the maximum amount of line continuations is 25). In which case you can concatenate the strings like so: ``` strText = "This is the first line" strText = strText & "This is the second line" ``` **Note** As you are writing a SQL statement, you need to make sure that you include spaces in the correct places i.e. you most likely need to leave a space at the end of each line.
VBA SQL query strings, how much is supported?
[ "", "sql", "vba", "excel", "" ]
When I try to calculate the date difference using the DATEDIFF function, it shows "invalid identifier". ``` SELECT DATEDIFF(day,'2008-08-05','2008-06-05') AS DiffDate from da_static_trade. Error : invalid identifier. ``` Can you please tell me which function calculates the date difference?
There is no `DATEDIFF()` function in Oracle. In Oracle, it is simple arithmetic: ``` select DATE1-DATE2 from table ```
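As an analogue (not Oracle syntax), the same day-count arithmetic can be checked in SQLite, where `julianday` converts date strings to day numbers that can be subtracted directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# difference in days between the two dates from the question
diff = conn.execute(
    "SELECT julianday('2008-08-05') - julianday('2008-06-05')"
).fetchone()[0]
print(diff)  # 61.0 days
```

Oracle's `DATE1-DATE2` on DATE columns returns the same kind of value: a (possibly fractional) number of days.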
Oracle supports the mathematical subtract `-` operator on the DATE datatype. You may directly put the following statement in your select clause: ``` to_char(s.last_upd - s.created, '999999D99') ``` Check the [EXAMPLE](http://crackingsiebel.wordpress.com/2010/12/01/oracle-sql-date-difference-in-days/) for more visibility. In case you need the output in terms of hours, then the below might help: ``` Select to_number(substr(numtodsinterval([END_TIME]-[START_TIME],'day'),2,9))*24 + to_number(substr(numtodsinterval([END_TIME]-[START_TIME],'day'),12,2)) ||':'|| to_number(substr(numtodsinterval([END_TIME]-[START_TIME],'day'),15,2)) from [TABLE_NAME]; ```
How to calculate difference between two dates in oracle 11g SQL
[ "", "sql", "oracle11g", "" ]
See the below example: ``` create table data(name varchar, value int); insert into data values('joe',1); insert into data values('bob',2); insert into data values('jane',3); insert into data values('anne',4); insert into data values('kate',5); ``` And if I execute ``` select * from data limit 2; ``` I will get ``` name | value ------+------- joe | 1 bob | 2 (2 rows) ``` So, how can I get the last 2 rows from `select * from data`? --- What I'm expecting is: ``` name | value ------+------- anne | 4 kate | 5 (2 rows) ```
You have two options depending on your need: ``` select * from data order by value desc limit 2 ``` Or [**LIMIT and OFFSET**](http://www.postgresql.org/docs/9.3/static/queries-limit.html): if you want the 4th and 5th rows, just offset the first 3 so that the 4th row becomes the start of our set, and specify a limit to say that you only want 2 rows from that. ``` select * from data offset 3 limit 2; /* The order of LIMIT and OFFSET does not matter. This gives the same result */ select * from data limit 2 offset 3; ```
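Both options behave the same way in SQLite, which makes for an easy check with the question's exact data (an `ORDER BY` is added to the OFFSET variant so the result is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE data (name TEXT, value INTEGER)")
cur.executemany("INSERT INTO data VALUES (?, ?)",
                [("joe", 1), ("bob", 2), ("jane", 3), ("anne", 4), ("kate", 5)])

# option 1: sort descending and take two (comes back in reversed order)
desc_limit = cur.execute(
    "SELECT * FROM data ORDER BY value DESC LIMIT 2").fetchall()
print(desc_limit)  # [('kate', 5), ('anne', 4)]

# option 2: skip the first three rows of the ordered set
offset = cur.execute(
    "SELECT * FROM data ORDER BY value LIMIT 2 OFFSET 3").fetchall()
print(offset)  # [('anne', 4), ('kate', 5)]
```

Note that option 1 returns the last rows in reverse; re-sorting them (as in the nested-select answer below this one on the original page is not assumed — simply wrapping in an outer `ORDER BY value ASC` works) restores the original order.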
I know I'm answering a six-year-old question, and I know the accepted answer says to use an offset; however, that's only useful if you know the length of the table. Looking at the wording of the question and the example given, I assumed that, like myself, the author wanted the last entries in a table based on an `id` or other value in the order that they were entered. The best solution I've found so far that orders the values as expected is using a subquery as follows: ``` SELECT * FROM ( SELECT * FROM data ORDER BY VALUE DESC LIMIT 2) AS _ ORDER BY VALUE ASC; ``` You'd substitute `VALUE` for whichever column you'd want to sort by to get the "last" entries. In my experience, this method is an order of magnitude quicker than using a count to find an offset.
PostgreSQL:How get last rows from a select query
[ "", "sql", "postgresql", "" ]
Is there a way for an SQL statement to search whether a column string holding multiple items contains a certain item, without also matching another item of which it is a substring? The following is the current SQL statement that I am using. ``` select * from tbltest where platform like '%item%' ``` `platform` is the column string that can hold multiple items, and `item` is the specific item that I am searching for in the platform string. The following is an example of what I am describing and the items that I am searching for. Items to search for in the string (these are in a dropdownlist for the user to select): ASP, ASP.NET, PHP, HTML, J2EE. Notice that ASP would be considered a substring of ASP.NET, so if the user searches for ASP in the column string of items, the records returned would also include the ASP.NET items based on the SQL statement I wrote above. So is there a way to add a condition in the where portion of an SQL statement that would do what I am describing above, specifically, based on the example above, search for the ASP items without returning the ASP.NET items? UPDATE: Can the solution also account for the case where the column string contains both ASP and ASP.NET? UPDATE 2: This is a better description of what I'm looking for. Thanks. [SQL search column where one item in column is substring of another item Update](https://stackoverflow.com/questions/25747107/sql-search-column-where-one-item-in-column-is-substring-of-another-item-update)
If you want to require that an item be surrounded by spaces you can add them on either side of your list as well as your term: ``` select * from tbltest where ' '+platform+' ' like '% item %' ``` Ideally data is not stored in lists, as this searching will not be terribly efficient.
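A hedged sketch of this space-padding trick with invented sample rows (SQLite uses `||` for string concatenation where T-SQL uses `+`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbltest(id INT, platform TEXT);
INSERT INTO tbltest VALUES
 (1, 'ASP PHP'), (2, 'ASP.NET HTML'), (3, 'ASP ASP.NET');
""")
# Padding both the column and the search term with spaces means 'ASP' can only
# match as a whole word, so 'ASP.NET' rows are excluded unless they also
# contain a bare 'ASP'
rows = conn.execute("""
    SELECT id FROM tbltest
    WHERE ' ' || platform || ' ' LIKE '% ASP %'
    ORDER BY id
""").fetchall()
matched = [r[0] for r in rows]
print(matched)  # [1, 3]
```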
``` select * from tbltest where (platform like 'item%' or platform like '% item%') and (platform like '%item' or platform like '%item %') ``` What this does is check whether the `item` is surrounded by spaces, or sits at the beginning and/or the end of the string. A requirement is that no `item` contains a space and that you always split on a space. Otherwise you'd need another character to split on.
SQL search column where one item in column is substring of another item
[ "", "sql", "" ]
I need to execute a stored procedure at the end of each calendar month. It should take the end of the current month as the finish date and the end of the previous month as the start date. Here are examples: ``` exec my_report @ReportStartDate = '20140731', @ReportEndDate='20140831' exec my_report @ReportStartDate = '20140131', @ReportEndDate='20131231' exec my_report @ReportStartDate = '20140228', @ReportEndDate='20140131' ``` My aim is to store the results in a table, so I need to create a new stored procedure to call the current stored procedure. I could not find a way to schedule the `my_report` stored procedure, so I created a new stored procedure. My aim is to call `caller_sp` each day and check the dates inside the caller stored procedure. Here is my caller stored procedure. I have good knowledge of Oracle but I am new to SQL Server. 1. Is there a way to schedule `my_report` at the end of each calendar month and pass the begin and end dates? 2. Is there a decent version of my code below? Code: ``` declare @reportstartyear VARCHAR(4) = null declare @ReportEndDate DATETIME = null declare @ReportStartDate DATETIME = null if month(getdate()) = '01' Begin if DAY(getdate()) = '31' Begin set @reportstartyear = year(getdate())-1 set @ReportStartDate = cast(@reportstartyear + '1231' as Datetime) exec [LTR].[LetterOfGuaranteeProceedsReport] @ReportStartDate, @ReportEndDate = cast(select getdate()) end end else if month(getdate())='02' begin if year(getdate())%4=0 begin if day(getdate())='29' begin set @reportstartyear=year(getdate()) set @ReportStartDate=cast(@reportstartyear+'0131' as Datetime) exec [LTR].[LetterOfGuaranteeProceedsReport] @ReportStartDate,@ReportEndDate=cast(select getdate()) end end end else if day(getdate())='28' begin set @reportstartyear=year(getdate()) set @ReportStartDate=cast(@reportstartyear+'0131' as Datetime) exec [LTR].[LetterOfGuaranteeProceedsReport] @ReportStartDate,@ReportEndDate=cast(select getdate()) end else if month(getdate())='03' begin if day(getdate())='31' begin if
year(getdate())%4=0 begin set @reportstartyear=year(getdate()) set @ReportStartDate=cast(@reportstartyear+'0229' as Datetime) exec [LTR].[LetterOfGuaranteeProceedsReport] @ReportStartDate,@ReportEndDate=cast(select getdate()) end else begin set @reportstartyear=year(getdate()) set @ReportStartDate=cast(@reportstartyear+'0228' as Datetime) exec [LTR].[LetterOfGuaranteeProceedsReport] @ReportStartDate,@ReportEndDate=cast(select getdate()) end end end ```
Your script seems more complicated than necessary. ``` DECLARE @ReportStartDate date, @ReportEndDate date -- for sqlserver 2012 SELECT @ReportStartDate = EOmonth(getdate(), -1), @ReportEndDate = EOmonth(getdate()) -- for earlier versions SELECT @ReportStartDate = dateadd(month, datediff(m, 0, getdate()), -1), @ReportEndDate = dateadd(month, datediff(m, -1, getdate()), -1) EXEC my_report @ReportStartDate, @ReportEndDate ``` To execute the job on the last day of every month, create a SQL Server Agent job and set its schedule frequency to: Occurs: Monthly, The Last Day of every 1 month.
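For reference, the two `EOMONTH` values can be sketched outside SQL as a plain date calculation (this is an illustration in Python, not SQL Server code):

```python
import datetime

def report_dates(today):
    """End of previous month (start date) and end of current month (end date)."""
    first = today.replace(day=1)
    start = first - datetime.timedelta(days=1)                 # EOMONTH(today, -1)
    next_first = (first + datetime.timedelta(days=32)).replace(day=1)
    end = next_first - datetime.timedelta(days=1)              # EOMONTH(today)
    return start, end

s, e = report_dates(datetime.date(2014, 2, 10))
print(s, e)  # 2014-01-31 2014-02-28
```

The leap-year cases the original script handles by hand fall out automatically: `report_dates(datetime.date(2016, 2, 15))` ends on 2016-02-29.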
Here is my script. I changed it to a stored procedure and it works fine. Thanks, everyone. ``` declare @ReportStartDate DATETIME=EOmonth(getdate(), -1), @ReportEndDate DATETIME=EOmonth(getdate()) if @ReportEndDate=getdate() Begin insert into report_log (col1, col2, col3 ) exec Report @ReportStartDate, @ReportEndDate,@username=null,@LanguageId=null update report_log set reportstartdate=@ReportStartDate, reportenddate=@ReportEndDate where reportenddate is null end ```
SQL Server : how to run a stored procedure at the end of each calendar month
[ "", "sql", "sql-server", "t-sql", "" ]
How can I convert the following query to an inner join and avoid using distinct, to optimise performance? ``` select distinct(GroupId) from BatchQuotaIndividualQuotas where BatchQuotaCommonSettingsID = 58 and GroupId not in ( select distinct(groupid) from BatchQuotaIndividualQuotas where BatchQuotaCommonSettingsID = 58 and ObjectiveFunctionTotalResultID is null ) ``` GroupId is not a primary key; there are multiple rows corresponding to a single GroupId. I want to select the GroupIds for which none of the ObjectiveFunctionTotalResultID values is null.
If you remove `distinct` from the subquery in the `where` clause, your query will run faster. Here are two better ways of writing it. `count(ObjectiveFunctionTotalResultID)` counts only non-null values: ``` SELECT GroupId FROM BatchQuotaIndividualQuotas WHERE BatchQuotaCommonSettingsID = 58 GROUP BY GroupId HAVING count(ObjectiveFunctionTotalResultID) = count(*) ``` or, using `EXCEPT`, which removes duplicates implicitly: ``` SELECT GroupId FROM BatchQuotaIndividualQuotas WHERE BatchQuotaCommonSettingsID = 58 EXCEPT SELECT GroupId FROM BatchQuotaIndividualQuotas WHERE BatchQuotaCommonSettingsID = 58 AND ObjectiveFunctionTotalResultID is null ```
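A small runnable illustration of the `COUNT(column) = COUNT(*)` trick, sketched in SQLite with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE BatchQuotaIndividualQuotas(GroupId INT, ResultId INT);
INSERT INTO BatchQuotaIndividualQuotas VALUES
 (1, 10), (1, 11), (2, 20), (2, NULL), (3, NULL);
""")
# COUNT(ResultId) skips NULLs while COUNT(*) does not, so equality
# means "this group has no NULL results at all"
groups = [r[0] for r in conn.execute("""
    SELECT GroupId FROM BatchQuotaIndividualQuotas
    GROUP BY GroupId
    HAVING COUNT(ResultId) = COUNT(*)
""")]
print(groups)  # [1]
```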
First of all, distinct is not a function, but a modifier of the select statement. You select a distinct combination of field values rather than a distinct single field. The parentheses will actually cause a syntax error if you have more fields. So it's not ``` select distinct(groupid) ``` but ``` select distinct groupid ``` Secondly, you don't need the inner distinct at all. Duplicates may exist in an `in` list, and the dbms is likely to optimize this for you already. By adding `distinct` yourself in the inner query, you are actively preventing the optimizer from doing its job. Thirdly, you will still need `distinct` on the outer query if you want to have distinct group ids. Inner join doesn't change that. Besides, since you want to have records that must *not* match, an inner join wouldn't do the trick. A left join could do it, but if I interpret your query correctly, you could just write it as: ``` select distinct GroupId from BatchQuotaIndividualQuotas where BatchQuotaCommonSettingsID = 58 and ObjectiveFunctionTotalResultID is not null ``` You *could* change it to `group by` like I do below, but the effect is the same. Group by just creates groups, and has to remove duplicates as well. Group by is mostly used for aggregations. I'm not sure about the inner workings, but group by is theoretically a harder process, because the database needs to create actual groups for these aggregations, instead of just filtering out duplicates. Probably this will be optimized as well, so in the end, the query below will perform and behave the same as the one above. ``` select GroupId from BatchQuotaIndividualQuotas where BatchQuotaCommonSettingsID = 58 and ObjectiveFunctionTotalResultID is not null group by GroupId ```
Changing not in clause to inner join
[ "", "sql", "sql-server", "" ]
I have built a cube in Business Intelligence Development Studio (BIDS) using a database named 'Test_cube', which consists of a Products dimension table, a Customers dimension table and an Orders fact table. The Products table attributes: prodID (primary key), prodname, prodtype, prodcost. The Customers table attributes: custID (primary key), custname, custloc. The Orders table attributes: orderID, prodID, custID, quantity, unitprice, totalprice (a calculated query column), where the primary key was set to orderID, prodID, custID. But I got some errors while trying to deploy the cube, like: **Internal error: The operation terminated unsuccessfully. The datasource, 'Test_cube', contains an ImpersonationMode that is not supported for processing operations. Errors in the high-level relational engine. A connection could not be made to the datasource with the DataSourceID of 'Test_cube', Name of 'Test_cube'. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'products', Name of 'products' was being processed. Errors in the OLAP storage engine: An error occurred while the 'prodID' attribute of the 'products' dimension from the 'Test_cube' database was being processed. Server: The current operation was cancelled because another operation in the transaction failed.**
Try this: double-click your data source under Solution Explorer -> navigate to the `Impersonation Information` tab -> choose `Inherit`. Now try processing it again.
I used 'Use a specific username and password' and typed in the username and password I use to log into my PC and it worked
Trying to process the cube but getting the error at impersonation information
[ "", "sql", "bids", "ssas-2008", "" ]
Here is my current SQL statement ``` SELECT Customers.Customer, SUM(CASE WHEN Products.ProductID in ('ab6','ab5','ab4','ab3','ab2','ab1','abc'.......many others THEN 1 ELSE 0 END) AS Product_10, FROM Customers INNER JOIN Products ON Customers.ProductID = Products.ProductID GROUP BY Customers.Customer ``` So basically it will count these items but I want the results to show me which one it matched whether it be all of them or one of them. Is this possible?
Most databases have a way of combining strings. In MySQL, the function is `group_concat()`, so you can get what you want using: ``` SELECT c.Customer, SUM(CASE WHEN p.ProductID in ('ab6','ab5','ab4','ab3','ab2','ab1','abc'.......many others THEN 1 ELSE 0 END) AS Product_10, GROUP_CONCAT(DISTINCT p.ProductId) FROM Customers c INNER JOIN Products p ON c.ProductID = p.ProductID GROUP BY c.Customer; ``` By the way, you have a problem with your data model. Your query suggests that a customer can have multiple rows in the `Customers` table. That doesn't seem right. `Customer` should be a primary key on this table. You are missing a junction table to map between customers and products, which I would call `CustomerProducts`.
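A hedged SQLite sketch of the `GROUP_CONCAT` idea with invented sample rows (note the exact ordering inside the concatenated string is not guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cp(Customer TEXT, ProductID TEXT);
INSERT INTO cp VALUES ('alice','ab1'), ('alice','ab2'), ('bob','ab1');
""")
# One row per customer: how many listed products matched, and which ones
rows = conn.execute("""
    SELECT Customer,
           SUM(CASE WHEN ProductID IN ('ab1','ab2') THEN 1 ELSE 0 END),
           GROUP_CONCAT(DISTINCT ProductID)
    FROM cp GROUP BY Customer ORDER BY Customer
""").fetchall()
for name, hits, which in rows:
    print(name, hits, which)
```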
``` SELECT Customers.Customer, Products.ProductID, SUM(CASE WHEN Products.ProductID in ('ab6','ab5','ab4','ab3','ab2','ab1','abc'.......many others THEN 1 ELSE 0 END) AS Product_10, FROM Customers INNER JOIN Products ON Customers.ProductID = Products.ProductID WHERE Products.ProductID in ('ab6','ab5','ab4','ab3','ab2','ab1','abc'......) GROUP BY Customers.Customer, Products.ProductID ```
SUM(CASE WHEN Products.ProductID in (show which product matched) SQL
[ "", "sql", "count", "case", "" ]
I have the following log table for user messages (simplified form) in Postgres 9.2: ``` CREATE TABLE log ( log_date DATE, user_id INTEGER, payload INTEGER ); ``` It contains up to one record per user and per day. There will be approximately 500K records per day for 300 days. payload is ever increasing for each user (if that matters). I want to efficiently retrieve the latest record for each user before a specific date. My query is: ``` SELECT user_id, max(log_date), max(payload) FROM log WHERE log_date <= :mydate GROUP BY user_id ``` which is extremely slow. I have also tried: ``` SELECT DISTINCT ON(user_id), log_date, payload FROM log WHERE log_date <= :mydate ORDER BY user_id, log_date DESC; ``` which has the same plan and is equally slow. So far I have a single index on `log(log_date)`, but it doesn't help much. And I have a `users` table with all users included. I also want to retrieve the result for some users only (those with `payload > :value`). Is there any other index I should use to speed this up, or any other way to achieve what I want?
For best read performance you need a [multicolumn index](https://www.postgresql.org/docs/current/indexes-multicolumn.html): ``` CREATE INDEX log_combo_idx ON log (user_id, log_date DESC NULLS LAST); ``` To make **[index only scans](https://wiki.postgresql.org/wiki/Index-only_scans)** possible, add the otherwise not needed column `payload` in a [covering index](https://www.postgresql.org/docs/current/indexes-index-only-scans.html) with the `INCLUDE` clause (Postgres 11 or later): ``` CREATE INDEX log_combo_covering_idx ON log (user_id, log_date DESC NULLS LAST) INCLUDE (payload); ``` See: * [Do covering indexes in PostgreSQL help JOIN columns?](https://dba.stackexchange.com/a/190151/3684) Fallback for older versions: ``` CREATE INDEX log_combo_covering_idx ON log (user_id, log_date DESC NULLS LAST, payload); ``` Why `DESC NULLS LAST`? * [Unused index in range of dates query](https://dba.stackexchange.com/a/90183/3684) For ***few*** rows per `user_id` or small tables `DISTINCT ON` is typically fastest and simplest: * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) For ***many*** rows per `user_id` an [**index skip scan** (or **loose index scan**)](https://wiki.postgresql.org/wiki/Loose_indexscan) is (much) more efficient. That's not implemented up to Postgres 15 [(work is ongoing)](https://commitfest.postgresql.org/23/1741/). But there are ways to emulate it efficiently. [Common Table Expressions](https://www.postgresql.org/docs/current/queries-with.html) require Postgres **8.4+**. [`LATERAL`](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-LATERAL) requires Postgres **9.3+**. The following solutions go beyond what's covered in the [**Postgres Wiki**](https://wiki.postgresql.org/wiki/Loose_indexscan). ## 1. No separate table with unique users *With a separate `users` table, solutions in **2.** below are typically simpler and faster. 
Skip ahead.* ### 1a. Recursive CTE with `LATERAL` join ``` WITH RECURSIVE cte AS ( ( -- parentheses required SELECT user_id, log_date, payload FROM log WHERE log_date <= :mydate ORDER BY user_id, log_date DESC NULLS LAST LIMIT 1 ) UNION ALL SELECT l.* FROM cte c CROSS JOIN LATERAL ( SELECT l.user_id, l.log_date, l.payload FROM log l WHERE l.user_id > c.user_id -- lateral reference AND log_date <= :mydate -- repeat condition ORDER BY l.user_id, l.log_date DESC NULLS LAST LIMIT 1 ) l ) TABLE cte ORDER BY user_id; ``` This is simple to retrieve arbitrary columns and probably best in current Postgres. More explanation in chapter *2a.* below. ### 1b. Recursive CTE with correlated subquery ``` WITH RECURSIVE cte AS ( ( -- parentheses required SELECT l AS my_row -- whole row FROM log l WHERE log_date <= :mydate ORDER BY user_id, log_date DESC NULLS LAST LIMIT 1 ) UNION ALL SELECT (SELECT l -- whole row FROM log l WHERE l.user_id > (c.my_row).user_id AND l.log_date <= :mydate -- repeat condition ORDER BY l.user_id, l.log_date DESC NULLS LAST LIMIT 1) FROM cte c WHERE (c.my_row).user_id IS NOT NULL -- note parentheses ) SELECT (my_row).* -- decompose row FROM cte WHERE (my_row).user_id IS NOT NULL ORDER BY (my_row).user_id; ``` Convenient to retrieve a *single column* or the *whole row*. The example uses the whole row type of the table. Other variants are possible. To assert a row was found in the previous iteration, test a single NOT NULL column (like the primary key). *More explanation for this query in chapter 2b. below.* Related: * [Query last N related rows per row](https://stackoverflow.com/questions/25957558/querying-last-n-related-records-in-postgres/25965393#25965393) * [GROUP BY one column, while sorting by another in PostgreSQL](https://dba.stackexchange.com/a/74811/3684) ## 2. With separate `users` table Table layout hardly matters as long as exactly one row per relevant `user_id` is guaranteed. 
Example: ``` CREATE TABLE users ( user_id serial PRIMARY KEY , username text NOT NULL ); ``` Ideally, the table is physically sorted in sync with the `log` table. See: * [Optimize Postgres query on timestamp range](https://stackoverflow.com/questions/13998139/optimize-postgres-timestamp-query-range/14007963#14007963) Or it's small enough (low cardinality) that it hardly matters. Else, sorting rows in the query can help to further optimize performance. [See Gang Liang's addition.](https://stackoverflow.com/a/36223670/939860) If the physical sort order of the `users` table happens to match the index on `log`, this may be irrelevant. ### 2a. `LATERAL` join ``` SELECT u.user_id, l.log_date, l.payload FROM users u CROSS JOIN LATERAL ( SELECT l.log_date, l.payload FROM log l WHERE l.user_id = u.user_id -- lateral reference AND l.log_date <= :mydate ORDER BY l.log_date DESC NULLS LAST LIMIT 1 ) l; ``` [`JOIN LATERAL`](https://www.postgresql.org/docs/current/sql-select.html) allows referencing preceding `FROM` items on the same query level. See: * [What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-and-a-subquery-in-postgresql/28557803#28557803) Results in one index (-only) look-up per user. Returns no row for users missing in the `users` table. Typically, a **foreign key** constraint enforcing referential integrity would rule that out. Also, no row for users without matching entry in `log` - conforming to the original question. To keep those users in the result use **`LEFT JOIN LATERAL ... ON true`** instead of `CROSS JOIN LATERAL`: * [Call a set-returning function with an array argument multiple times](https://stackoverflow.com/questions/26107915/call-a-set-returning-function-with-an-array-argument-multiple-times/26514968#26514968) Use **`LIMIT n`** instead of `LIMIT 1` to retrieve **more than one row** (but not all) per user.
Effectively, all of these do the same: ``` JOIN LATERAL ... ON true CROSS JOIN LATERAL ... , LATERAL ... ``` The last one has lower priority, though. Explicit `JOIN` binds before comma. That subtle difference can matter with more join tables. See: * ["invalid reference to FROM-clause entry for table" in Postgres query](https://stackoverflow.com/questions/34597700/invalid-reference-to-from-clause-entry-for-table-in-postgres-query/34598292#34598292) ### 2b. Correlated subquery Good choice to retrieve a **single column** from a **single row**. Code example: * [Optimize groupwise maximum query](https://stackoverflow.com/questions/24244026/optimize-groupwise-maximum-query/24377356#24377356) The same is possible for **multiple columns**, but you need more smarts: ``` CREATE TEMP TABLE combo (log_date date, payload int); SELECT user_id, (combo1).* -- note parentheses FROM ( SELECT u.user_id , (SELECT (l.log_date, l.payload)::combo FROM log l WHERE l.user_id = u.user_id AND l.log_date <= :mydate ORDER BY l.log_date DESC NULLS LAST LIMIT 1) AS combo1 FROM users u ) sub; ``` Like `LEFT JOIN LATERAL` above, this variant includes *all* users, even without entries in `log`. You get `NULL` for `combo1`, which you can easily filter with a `WHERE` clause in the outer query if need be. Nitpick: in the outer query you can't distinguish whether the subquery didn't find a row or all column values happen to be NULL - same result. You need a `NOT NULL` column in the subquery to avoid this ambiguity. A correlated subquery can only return a **single value**. You can wrap multiple columns into a composite type. But to decompose it later, Postgres demands a well-known composite type. Anonymous records can only be decomposed providing a column definition list. Use a registered type like the row type of an existing table. Or register a composite type explicitly (and permanently) with `CREATE TYPE`.
Or create a temporary table (dropped automatically at end of session) to register its row type temporarily. Cast syntax: `(log_date, payload)::combo` Finally, we do not want to decompose `combo1` on the same query level. Due to a weakness in the query planner this would evaluate the subquery once for each column (still true in Postgres 12). Instead, make it a subquery and decompose in the outer query. Related: * [Get values from first and last row per group](https://stackoverflow.com/questions/25170215/get-values-from-first-and-last-row-per-group/25173081#25173081) Demonstrating all 4 queries with 100k log entries and 1k users: *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_11&fiddle=de6abcb0f57d39553f8b58791b9f8518)* - pg 11 Old [sqlfiddle](http://sqlfiddle.com/#!17/fad8f/7)
This is not a standalone answer but rather a comment to @Erwin's [answer](https://stackoverflow.com/a/25536748/2684232). For 2a, the lateral join example, the query can be improved by sorting the `users` table to exploit the locality of the index on `log`. ``` SELECT u.user_id, l.log_date, l.payload FROM (SELECT user_id FROM users ORDER BY user_id) u, LATERAL (SELECT log_date, payload FROM log WHERE user_id = u.user_id -- lateral reference AND log_date <= :mydate ORDER BY log_date DESC NULLS LAST LIMIT 1) l; ``` The rationale is that index lookup is expensive if `user_id` values are random. By sorting out `user_id` first, the subsequent lateral join would be like a simple scan on the index of `log`. Even though both query plans look alike, the running time would differ much especially for large tables. The cost of the sorting is minimal especially if there is an index on the `user_id` field.
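Not every engine has `DISTINCT ON` or `LATERAL`; as a hedged alternative sketch, the same "latest row per user before a date" can be taken with a standard window function (shown here in SQLite >= 3.25, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log(user_id INT, log_date TEXT, payload INT);
INSERT INTO log VALUES
 (1,'2014-01-01',10),(1,'2014-02-01',20),
 (2,'2014-01-15',30),(2,'2014-03-01',40);
""")
# Rank each user's rows by date descending, keep only the newest one
rows = conn.execute("""
    SELECT user_id, log_date, payload
    FROM (SELECT log.*, ROW_NUMBER() OVER
            (PARTITION BY user_id ORDER BY log_date DESC) AS rn
          FROM log WHERE log_date <= '2014-02-15')
    WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, '2014-02-01', 20), (2, '2014-01-15', 30)]
```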
Optimize GROUP BY query to retrieve latest row per user
[ "", "sql", "postgresql", "indexing", "greatest-n-per-group", "postgresql-performance", "" ]
I have a UDF that calculates a score based on a range of values that are going into the table. The UDF has to be applied/called during the table creation process, which is where I'm having a little trouble. The UDF had to be created using case-when methods only, so I can't change much on that, but I'm sure I've just got something a little off. I'm not sure if the answer is already out there or not, but I haven't stumbled across it yet, so apologies if this is something already answered. Here is the UDF, which is created first: ``` --Create UDF create function [dbo].[cupidscoreUDF] ( @gender char(1), @name varchar(15), @dob datetime, @weight int, @height int, @smoker bit, @salary int ) returns int as begin declare @score int -- To determine age in years declare @Age int select @Age = DATEDIFF(YEAR, @dob, GETDATE()) select @score = case when @Age between 20 and 30 then 5 when @Age between 31 and 40 then 4 when @Age between 41 and 50 then 3 when @Age > 50 then 2 else 0 end -- To determine the height/weight ratio declare @WeightHeight int set @WeightHeight = @weight / @height set @score = @score + case when @WeightHeight between 20 and 25 then 1 when @WeightHeight between 25 and 30 then 3 when @WeightHeight between 30 and 35 then 4 when @WeightHeight between 35 and 40 then 2 else 0 end -- If non-smoker add 2 points if @smoker = 0 set @Score = @Score + 2 -- To determine score by salary set @score = @score + case when @salary < 50000 then 1 when @salary between 50001 and 60000 then 2 when @salary between 60001 and 70000 then 3 when @salary > 70000 then 4 end return @score end ; ``` Now here's what I've got for the table creation process ``` -- Create Member_Profile table create table Member_Profile ( MemberID int primary key, Gender varchar(6), Name varchar(50), Dob datetime, Weight int, Height int, Smoker bit, Salary int, Cupid as dbo.cupidscoreUDF ) GO insert into Member_Profile (Gender, Name, Dob, Weight, Height, Smoker, Salary) values ('Male','James',19931115, 75, 180,
0, 80000); insert into Member_Profile (Gender, Name, Dob, Weight, Height, Smoker, Salary) values ('Female','Rosie',19870912, 45, 150, 0, 100000); insert into Member_Profile (Gender, Name, Dob, Weight, Height, Smoker, Salary) values ('Male','Richard',19630402, 95, 168, 1, 200000); select * from Member_Profile ``` The UDF takes the member's info and then calculates their 'cupid' score from that, which is then inserted along with everything else into the table. Any help using the UDF would be great.
Try using this for the `Cupid` column: ``` Cupid as dbo.cupidscoreUDF(Gender, Name, Dob, Weight, Height, Smoker, Salary) ```
I would very seriously suggest moving the calling of the UDF to the insert/update instead of the table definition. To make this easier you could make a simple procedure to insert/update that calls the UDF. That way you avoid having to give the data twice. Here is a very generic example. In this example it looks a bit weird to do all the extra steps for something so simple, but that quickly changes when you apply actual scenarios. ``` use demo go create table dbo.example_table ( id int identity(1,1) primary key, some_value_1 int, some_value_2 int, some_calculation int) go create function dbo.calculator_function ( @value1 int, @value2 int) returns int as begin declare @result int = (select @value1 + @value2 as result) return @result end go create procedure dbo.insert_example @value1 int, @value2 int as insert dbo.example_table (some_value_1,some_value_2,some_calculation) select @value1, @value2, dbo.calculator_function(@value1,@value2) go exec dbo.insert_example 1,2 go select * from example_table ```
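The same insert-time pattern can be sketched in Python/SQLite with a deliberately simplified, hypothetical scoring rule (the real `cupidscoreUDF` logic is in the question; only the smoker and salary parts are mimicked here):

```python
import sqlite3

def cupid_score(smoker, salary):
    # Hypothetical, cut-down scoring rule just for this sketch
    score = 0 if smoker else 2          # non-smoker bonus
    score += 4 if salary > 70000 else 1  # top salary bracket
    return score

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Member_Profile(Name TEXT, Smoker INT, Salary INT, Cupid INT)")
# The score is computed at insert time, so the row never stores a stale value
name, smoker, salary = "James", 0, 80000
conn.execute(
    "INSERT INTO Member_Profile VALUES (?, ?, ?, ?)",
    (name, smoker, salary, cupid_score(smoker, salary)),
)
cupid = conn.execute("SELECT Cupid FROM Member_Profile").fetchone()[0]
print(cupid)  # 6
```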
Calling a UDF during table creation
[ "", "sql", "sql-server", "t-sql", "user-defined-functions", "" ]
I have a case where I want to return the exchange rate of a currency, but if the date of the job is before 1/1/1999, then I want to just return the rate of 1/1/1999 since the exchange\_rate table has no information before 1/1/1999. So in this case, if the returned date from the COALESCE is 1/1/1985, the query should just instead join against 1/1/1999. ``` SELECT j.offer_date, j.accepted_date, j.reported_date, j.start_date, salary, salary_format_id, j.currency_id FROM job j LEFT JOIN exchange_rate c1 ON CAST(COALESCE(j.offer_date, j.accepted_date, j.reported_date, j.start_date) AS date) = c1.date AND c1.currency_id = j.currency_id LEFT JOIN exchange_rate c2 ON CAST(COALESCE(j.offer_date, j.accepted_date, j.reported_date, j.start_date) AS date) = c2.date AND c2.currency_id = 1 WHERE j.job_id = 4793 ```
Do this: ``` CASE WHEN COALESCE(j.offer_date, j.accepted_date, j.reported_date, j.start_date) < '1999-01-01' THEN '1999-01-01' ELSE COALESCE(j.offer_date, j.accepted_date, j.reported_date, j.start_date) END ``` And, yes, I know that's repetitive and a lot to type, but it will get the job done.
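The clamp can be verified standalone; here is a hedged SQLite sketch (ISO-format date strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
clamp = """
    CASE WHEN COALESCE(?, ?) < '1999-01-01'
         THEN '1999-01-01'
         ELSE COALESCE(?, ?) END
"""
# A 1985 job date gets pushed up to the first date the rate table knows about;
# a 2005 date passes through unchanged
early = conn.execute(f"SELECT {clamp}", (None, '1985-06-01') * 2).fetchone()[0]
late = conn.execute(f"SELECT {clamp}", ('2005-03-15', None) * 2).fetchone()[0]
print(early, late)  # 1999-01-01 2005-03-15
```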
I personally would use case statement in select part ``` SELECT CASE WHEN date < '1999-01-01' THEN (SELECT TOP 1 [COLUMN_YOU_WANT] FROM exchange_rate AS c3 WHERE date = '1999-01-01' AND c3.currency_id = j.currency_id) ELSE c1.[COLUMN_YOU_WANT] END, j.offer_date, j.accepted_date, ... ``` Basically it says if date < '1999-01-01' select the rate for '1999-01-01', otherwise just use what you get from JOIN.
SQL Conditional Date COALESCE
[ "", "sql", "sql-server", "join", "" ]
I have the following tables ``` Employee ID QueueID OverrideQueueID 1 1 NULL 2 2 3 3 1 3 4 4 NULL Queue ID 1 2 3 4 ``` I need to join these tables together. Basically the join needs to check whether there is an OverrideQueueID; if so, use that OverrideQueueID in the join to the Queue table; if not, use the QueueID in the join. The QueueID will always exist but the OverrideQueueID will sometimes be NULL. This is going to be millions of records joined to a table with about 10 records in it, so I didn't want to go the COALESCE route as I have seen performance degradation in the past from using functions in joins. I assume if I went the COALESCE route a table scan would always be performed to generate the value from the COALESCE. This is similar to the question below that I put out earlier, but now I have 1 table joining to 1 table instead of 1 table conditionally joining to multiple tables. I have made some progress from the original question. [Conditional join to two different tables based on 2 columns in 1 table](https://stackoverflow.com/questions/25569626/conditional-join-to-two-different-tables-based-on-2-columns-in-1-table)
COALESCE or ISNULL in the join is nonetheless the way to go. Analyse the execution plan of your query if the performance is not up to your expectations, and edit your post with the result.
To address your perf concerns, another option would be to add a computed, persisted column to the employee table, then index this column and do all joins against it instead of either QueueId or OverrideQueueId. For example: ``` ALTER TABLE dbo.Employee ADD ActiveQueueId AS COALESCE(OverrideQueueId, QueueId) PERSISTED; GO CREATE NONCLUSTERED INDEX IX_Employee_ActiveQueueId ON dbo.Employee (ActiveQueueId); ```
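A runnable sketch of the COALESCE join itself, using the question's sample data in SQLite (the override wins when present, otherwise QueueID is used):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee(ID INT, QueueID INT, OverrideQueueID INT);
CREATE TABLE Queue(ID INT);
INSERT INTO Employee VALUES (1,1,NULL),(2,2,3),(3,1,3),(4,4,NULL);
INSERT INTO Queue VALUES (1),(2),(3),(4);
""")
# COALESCE picks the override when it is non-NULL, else falls back to QueueID
rows = conn.execute("""
    SELECT e.ID, q.ID
    FROM Employee e
    JOIN Queue q ON q.ID = COALESCE(e.OverrideQueueID, e.QueueID)
    ORDER BY e.ID
""").fetchall()
print(rows)  # [(1, 1), (2, 3), (3, 3), (4, 4)]
```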
Joining 2 columns in 1 table to 1 column in another table
[ "", "sql", "sql-server", "sql-server-2012", "ssms", "" ]
The structure of the table and data is as follows: ``` CityID cityname CountryID A1 abc IN A2 bcd IN A2 cde US A2 def UK A3 efg SL A4 fgh SL A4 ghi NZ ``` Here some `CityID` values are duplicated, and the output should be: ``` CityID cityname CountryID A2 bcd IN A2 cde US A2 def UK A4 fgh SL A4 ghi NZ ``` I am only able to find the count of duplicate IDs using the query below: ``` SELECT CityID,COUNT(CityID) IDcount FROM tbl_City GROUP BY CityID HAVING (COUNT(CityID) >1) ```
Try the following query: ``` SELECT CityID, cityname, CountryID FROM tab WHERE CityID IN (SELECT CityID FROM tab GROUP BY CityID HAVING COUNT(CityID) > 1); ```
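A runnable check of this query against the question's sample data (sketched in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_City(CityID TEXT, cityname TEXT, CountryID TEXT);
INSERT INTO tbl_City VALUES
 ('A1','abc','IN'),('A2','bcd','IN'),('A2','cde','US'),('A2','def','UK'),
 ('A3','efg','SL'),('A4','fgh','SL'),('A4','ghi','NZ');
""")
# Keep every row whose CityID appears more than once
rows = conn.execute("""
    SELECT CityID, cityname, CountryID FROM tbl_City
    WHERE CityID IN (SELECT CityID FROM tbl_City
                     GROUP BY CityID HAVING COUNT(CityID) > 1)
    ORDER BY CityID, cityname
""").fetchall()
print([r[0] for r in rows])  # ['A2', 'A2', 'A2', 'A4', 'A4']
```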
Try this using [OVER](http://msdn.microsoft.com/en-us/library/ms189461.aspx) clause: ``` DECLARE @tbl_city TABLE ( [CityID] CHAR(2) ,[CityName] VARCHAR(12) ,[CountryID] CHAR(2) ) INSERT INTO @tbl_city VALUES ('A1', 'abc', 'IN') ,('A2', 'bcd', 'IN') ,('A2', 'cde', 'US') ,('A2', 'def', 'UK') ,('A3', 'efg', 'SL') ,('A4', 'fgh', 'SL') ,('A4', 'ghi', 'NZ') ;WITH DataSource AS ( SELECT CityID ,cityname ,CountryID ,COUNT(CityID) OVER(PARTITION BY CityID) IDcount FROM @tbl_city ) SELECT * FROM DataSource WHERE IDcount > 1 ```
how to get duplicate rows of a single column including other columns in sql server
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to write a query that's pulling dates from one table of schedules and seeing if they had followup contact from another table within X number of days. I have a query that's something like this but it's not doing the checks properly. My logic seems right in my head but I guess it's not right: ``` select appointments.person_id ,appointments.date ,nextDate.minDate ,case when datediff(d,appointments.date,nextdate.minDate) <= 3 then 'yes' else 'no' end as 'within3days' from appointments left join ( select person_id, min(date) as minDate from calls group by person_id ) as nextDate on appointments.person_id = nextDate.person_id and appointments.date <= nextDate.mindate ``` Any ideas? I'm thinking I'm not exposing the appointments date properly to the join to the calls table <http://sqlfiddle.com/#!3/404fa/5>
I would recommend that you use `apply` for this purpose. I think it is the easiest way to express your logic: ``` select a.person_id, a.datetime, c.datetime as nextdatetime, (case when datediff(d, a.datetime, c.datetime) <= 3 then 'yes' else 'no' end) as within3days from appointments a outer apply (select top 1 c.* from calls c where c.person_id = a.person_id and c.datetime > a.datetime order by c.datetime ) c; ```
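`APPLY` is SQL Server-specific; on engines without it, the same "first call after the appointment" lookup can be written as a correlated scalar subquery. A runnable sketch with sqlite3, using made-up sample dates (the column is renamed `dt` here purely for readability):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE appointments (person_id INTEGER, dt TEXT);
    CREATE TABLE calls (person_id INTEGER, dt TEXT);
    INSERT INTO appointments VALUES (1,'2014-01-01'), (2,'2014-01-01'), (3,'2014-01-01');
    INSERT INTO calls VALUES (1,'2014-01-02'), (2,'2014-01-10');
""")

# The correlated subquery finds the earliest call strictly after the appointment;
# julianday() turns the date difference into a number of days.
rows = con.execute("""
    SELECT a.person_id,
           (SELECT MIN(c.dt) FROM calls c
            WHERE c.person_id = a.person_id AND c.dt > a.dt) AS next_call,
           CASE WHEN julianday((SELECT MIN(c.dt) FROM calls c
                                WHERE c.person_id = a.person_id AND c.dt > a.dt))
                     - julianday(a.dt) <= 3
                THEN 'yes' ELSE 'no' END AS within3days
    FROM appointments a
    ORDER BY a.person_id
""").fetchall()
```

Person 3 has no follow-up call, so the comparison is unknown and falls through to 'no'.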
I think this is what you want: ``` select a.person_id, a.date, c.date as first_call_after, case when datediff(d,a.date,c.date) <= 3 then 'yes' else 'no' end as within3days from appointments a left join calls c on a.person_id = c.person_id and c.date = (select min(x.date) from calls x where x.person_id = c.person_id and x.date >= a.date) order by a.person_id, a.date ``` **Fiddle:** <http://sqlfiddle.com/#!3/404fa/14/0> You are joining to the person's first call, any call. What you really want is the first call **occurring after the appointment on the given row.** Edit, just changed x.date > a.date to x.date >= a.date in the event someone places a call immediately after their appointment. In my above statement I should have said "occurring on or after..."
SQL find next followup within X number of days (joins comparing datetimes)
[ "", "sql", "sql-server", "join", "" ]
I am having difficulties writing a Postgres function, as I am not familiar with it. I have multiple tables to import into Postgres with this format: ``` id | 1960 | 1961 | 1962 | 1963 | ... ____________________________________ 1 23 45 87 99 2 12 31 ... ``` which I need to convert into this format: ``` id | year | value _________________ 1 1960 23 1 1961 45 1 1962 87 ... 2 1960 12 2 1961 31 ... ``` I would imagine the function to read like this: ``` SELECT all-years FROM imported_table; CREATE a new_table; FROM min-year TO max-year LOOP EXECUTE "INSERT INTO new_table (id, year, value) VALUES (id, year, value)"; END LOOP; ``` However, I'm having real trouble writing the nitty-gritty details for this. It would be easier for me to do that in PHP, but I am convinced that it's cleaner to do it directly in a Postgres function. The years (start and end) vary from table to table. And sometimes, I can even have years only for every fifth year or so ...
A **completely dynamic** version requires dynamic SQL. Use a plpgsql function with `EXECUTE`: For **Postgres 9.2 or older** (before `LATERAL` was implemented): ``` CREATE OR REPLACE FUNCTION f_unpivot_years92(_tbl regclass, VARIADIC _years int[]) RETURNS TABLE(id int, year int, value int) AS $func$ BEGIN RETURN QUERY EXECUTE ' SELECT id , unnest($1) AS year , unnest(ARRAY["'|| array_to_string(_years, '","') || '"]) AS val FROM ' || _tbl || ' ORDER BY 1, 2' USING _years; END $func$ LANGUAGE plpgsql; ``` For **Postgres 9.3 or later** (with `LATERAL`): ``` CREATE OR REPLACE FUNCTION f_unpivot_years(_tbl regclass, VARIADIC _years int[]) RETURNS TABLE(id int, year int, value int) AS $func$ BEGIN RETURN QUERY EXECUTE (SELECT 'SELECT t.id, u.year, u.val FROM ' || _tbl || ' t LEFT JOIN LATERAL ( VALUES ' || string_agg(format('(%s, t.%I)', y, y), ', ') || ') u(year, val) ON true ORDER BY 1, 2' FROM unnest(_years) y ); END $func$ LANGUAGE plpgsql; ``` About `VARIADIC`: * [Return rows matching elements of input array in plpgsql function](https://stackoverflow.com/questions/17978310/return-rows-matching-elements-of-input-array-in-plpgsql-function) Call for arbitrary years: ``` SELECT * FROM f_unpivot_years('tbl', 1961, 1964, 1963); ``` Same, passing an actual array: ``` SELECT * FROM f_unpivot_years('tbl', VARIADIC '{1960,1961,1962,1963}'::int[]); ``` For a long list of sequential years: ``` SELECT * FROM f_unpivot_years('t', VARIADIC ARRAY(SELECT generate_series(1950,2014))); ``` For a long list with regular intervals (example for every 5 years): ``` SELECT * FROM f_unpivot_years('t', VARIADIC ARRAY(SELECT generate_series(1950,2010,5))); ``` Output as requested. The function takes: *1.* A valid table name - double-quoted if it's otherwise illegal (like `'"CaMeL"'`). Using the object identifier type `regclass` to assert correctness and **defend against SQL injection**. You may want to schema-qualify the table name to be unambiguous (like `'public."CaMeL"'`).
More: * [Table name as a PostgreSQL function parameter](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349) *2.* **Any** list of numbers corresponding to (double-quoted) column names. ***Or*** an actual array, prefixed with the keyword `VARIADIC`. The array of columns does not have to be sorted in any way, but table and columns must exist or an exception is raised. Output is sorted by `id` and `year` (as `integer`). If you want years to be sorted according to the sort order of the input array, make it just `ORDER BY 1`. Sort order according to array is not strictly guaranteed, but works in the current implementation. More about that: * [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450) Also works for `NULL` values. [**SQL Fiddle**](http://sqlfiddle.com/#!17/d5a96/1) for both with examples. ### References: * [Is there something like a zip() function in PostgreSQL that combines two arrays?](https://stackoverflow.com/questions/12414750/is-there-something-like-a-zip-function-in-postgresql-that-combines-two-arrays/12414884#12414884) * [Table name as a PostgreSQL function parameter](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349) * [PostgreSQL generate\_series() with SQL function as arguments](https://stackoverflow.com/questions/7955407/postgresql-generate-series-with-sql-function-as-arguments/7955877#7955877)
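To see the kind of SQL the dynamic function generates, here is the static equivalent for two year columns, a plain UNION ALL unpivot, sketched in Python's sqlite3 (the table name `wide` is made up; quoted identifiers like `"1960"` are valid column names there too):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript('''
    CREATE TABLE wide (id INTEGER, "1960" INTEGER, "1961" INTEGER);
    INSERT INTO wide VALUES (1, 23, 45), (2, 12, 31);
''')

# One SELECT per year column: the bare number is the literal year,
# the quoted number is the column holding that year's value.
rows = con.execute('''
    SELECT id, 1960 AS year, "1960" AS value FROM wide
    UNION ALL
    SELECT id, 1961, "1961" FROM wide
    ORDER BY id, year
''').fetchall()
```

The dynamic plpgsql versions above simply spare you writing one branch per year by hand.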
`PostgreSQL 9.3` offers neat [JSON functions](http://www.postgresql.org/docs/9.3/static/functions-json.html) which can be used for such tasks without defining new functions or knowing the number of columns. ``` SELECT id, (k).key as year, (k).value as value FROM (SELECT j->>'id' as id, json_each_text(j) as k FROM ( SELECT row_to_json(tbl) as j FROM tbl) as q) as r WHERE (k).key <> 'id'; ``` <http://sqlfiddle.com/#!15/1714b/13>
How to unpivot a table in PostgreSQL
[ "", "sql", "postgresql", "pivot", "dynamic-sql", "unpivot", "" ]
Consider the following output: ``` ObjectNumber Field1 Field2 Field3 ------------------------------------ 1 1 0 0 1 0 0 1 2 0 0 0 2 1 0 0 2 0 1 1 ``` I need to "merge" these rows into 1 row per ObjectNumber. Assume that 1. all Fields have either `0` or `1` as value and 2. per ObjectNumber there is only one `1` per field The expected output is adding up all the fields: ``` ObjectNumber Field1 Field2 Field3 ------------------------------------ 1 1 0 1 2 1 1 1 ``` How can this be done?
In this case both max() and sum() will work, but sum() seems more appropriate since you say that the output is 'adding up all the fields'. ``` SELECT ObjectNumber, sum(Field1) AS Field1, sum(Field2) AS Field2, sum(Field3) AS Field3 FROM youroutput t GROUP BY ObjectNumber; ```
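The aggregation can be verified end to end with the question's rows; a sqlite3 sketch (the table is named `tbl` here since the question doesn't name it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl (ObjectNumber INTEGER, Field1 INTEGER, Field2 INTEGER, Field3 INTEGER);
    INSERT INTO tbl VALUES
        (1,1,0,0), (1,0,0,1), (2,0,0,0), (2,1,0,0), (2,0,1,1);
""")

# With at most one 1 per field per ObjectNumber, SUM collapses the rows.
rows = con.execute("""
    SELECT ObjectNumber, SUM(Field1), SUM(Field2), SUM(Field3)
    FROM tbl
    GROUP BY ObjectNumber
    ORDER BY ObjectNumber
""").fetchall()
```

This reproduces the expected two-row output from the question.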
Just use an aggregation query with `max()`: ``` select ObjectNumber, max(Field1) as Field1, max(Field2) as Field2, max(Field3) as Field3 from youroutput t group by ObjectNumber; ```
Merge multiple rows into 1
[ "", "sql", "sql-server", "sql-server-2008", "" ]
[Summary of the question: 2 SQL statements produce the same results, but at different speeds. One statement uses `JOIN`, the other uses `IN`. `JOIN` is faster than `IN`] I tried 2 kinds of `SELECT` statements on 2 tables, named *booking\_record* and *inclusions*. The table *inclusions* has a many-to-one relation with table *booking\_record*. (Table definitions not included for simplicity.) First statement: (using `IN` clause) ``` SELECT id, agent, source FROM booking_record WHERE id IN ( SELECT DISTINCT foreign_key_booking_record FROM inclusions WHERE foreign_key_bill IS NULL AND invoice_closure <> FALSE ) ``` Second statement: (using `JOIN`) ``` SELECT id, agent, source FROM booking_record JOIN ( SELECT DISTINCT foreign_key_booking_record FROM inclusions WHERE foreign_key_bill IS NULL AND invoice_closure <> FALSE ) inclusions ON id = foreign_key_booking_record ``` With 300,000+ rows in the *booking\_record* table and 6,100,000+ rows in the *inclusions* table, the 2nd statement delivered 127 rows in just 0.08 seconds, but the 1st statement took nearly 21 minutes for the same records. Why is `JOIN` so much faster than the `IN` clause?
This behavior is well-documented. See [here](http://dev.mysql.com/doc/refman/5.6/en/subquery-optimization.html). The short answer is that until MySQL version 5.6.6, MySQL did a poor job of optimizing these types of queries. What would happen is that the subquery would be run each time for every row in the outer query. Lots and lots of overhead, running the same query over and over. You could improve this by using good indexing and removing the `distinct` from the `in` subquery. This is one of the reasons that I prefer `exists` instead of `in`, if you care about performance.
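The `IN` and `EXISTS` forms are logically equivalent, which is easy to demonstrate on a toy data set. A sqlite3 sketch (column values are made up, and `<> FALSE` is written as `<> 0` since older SQLite has no boolean literals):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE booking_record (id INTEGER, agent TEXT, source TEXT);
    CREATE TABLE inclusions (foreign_key_booking_record INTEGER,
                             foreign_key_bill INTEGER, invoice_closure INTEGER);
    INSERT INTO booking_record VALUES (1,'a','w'), (2,'b','x'), (3,'c','y');
    INSERT INTO inclusions VALUES (1, NULL, 1), (1, NULL, 1), (3, 7, 1), (2, NULL, 0);
""")

in_rows = con.execute("""
    SELECT id FROM booking_record
    WHERE id IN (SELECT foreign_key_booking_record FROM inclusions
                 WHERE foreign_key_bill IS NULL AND invoice_closure <> 0)
    ORDER BY id
""").fetchall()

exists_rows = con.execute("""
    SELECT id FROM booking_record b
    WHERE EXISTS (SELECT 1 FROM inclusions i
                  WHERE i.foreign_key_booking_record = b.id
                    AND i.foreign_key_bill IS NULL AND i.invoice_closure <> 0)
    ORDER BY id
""").fetchall()
```

Both forms return the same booking; the difference on old MySQL versions was purely in how the optimizer executed them.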
EXPLAIN should give you some clues ([Mysql Explain Syntax](http://dev.mysql.com/doc/refman/5.0/en/explain.html)). I suspect that the IN version is constructing a list which is then scanned by each item (IN is generally considered a very inefficient construct; I only use it if I have a short list of items to manually enter). The JOIN is more likely constructing a temp table for the results, making it more like normal JOINs between tables.
MySQL(version 5.5): Why `JOIN` is faster than `IN` clause?
[ "", "mysql", "sql", "join", "" ]
I have a query where I am trying to select the contents of another row and insert it into the table but changing particular values. Inside this I am trying to use a `Replace()` function to replace certain characters in the given column. Is this correct or will I need to take this out and do this via an `Update` statement? This is my SQL statement: ``` INSERT INTO [dbo].[PurchaseLogic] ([column1] ,[column2] ,[column3] ,[column4]) SELECT [column1] ,[column2] ,replace(column3, 'TextA','TextB') ,[column4] FROM dbo.purchaselogic WHERE column1 = 1 ``` Thanks. EDIT Sorry, this is the error I am getting when executing it: > Argument data type text is invalid for argument 1 of replace function.
This looks correct to me and should do what you are wanting to do. But is there some reason why you simply can't try this yourself and check the results? If Column3 is of data type 'Text', this isn't going to work, since Replace doesn't work on that data type. You could cast the column to varchar(max) to make this work. For example: ``` Replace(Cast(column3 as varchar(max)),'TextA','TextB') ```
``` INSERT INTO [dbo].[PurchaseLogic] ([column1] ,[column2] ,[column3] ,[column4]) SELECT [column1] ,[column2] ,CAST(REPLACE(CAST(column3 as Varchar(MAX)),'TextA','TextB') AS Text) ,[column4] FROM dbo.purchaselogic WHERE column1 = 1 ``` I got the essence of it [here](https://stackoverflow.com/questions/4341613/alternatives-to-replace-on-a-text-or-ntext-datatype)
SQL Server - Insert Select
[ "", "sql", "sql-server", "" ]
The `service_option` table and the `room` table share the same column `room_key`, and the `room` table and the `user` table share the same column `user_key`. What I want is to obtain users' information from service options. ``` SELECT user_key, user_id, user_status, user_company_name FROM user WHERE user_key IN ( SELECT user_key FROM room WHERE room_key IN ( SELECT room_key FROM service_option WHERE service_option_key = 3 AND ordered_service_option_status !=0 ) ) ``` Using an IN nested in another IN turned out to be very inefficient.
This is probably the query you're looking for: ``` SELECT U.user_key ,U.user_id ,U.user_status ,U.user_company_name FROM [user] U INNER JOIN room R ON R.user_key = U.user_key INNER JOIN service_option S ON S.room_key = R.room_key AND S.service_option_key = 3 AND S.ordered_service_Option_status != 0 ``` The `INNER JOIN` approach will be much more performant and make the query more readable. Hope this will help you.
Do not use WHERE IN, but a join: ``` SELECT u.user_key, u.user_id, u.user_status, u.user_company_name FROM user u, room r, service_option so WHERE u.user_key = r.user_key AND r.room_key = so.room_key AND so.service_option_key = 3 AND so.ordered_service_option_status != 0 ```
how to reconstruct this sql query
[ "", "mysql", "sql", "" ]
I have the following sequence of actions for users and a `DateTime` time stamp of each action. Below is an example table ``` Actions: +------------+-------------+----------------------+ | session_id | action_name | time | +------------+-------------+----------------------+ | 123abcd | ADD | 2014-08-27 13:41:02 | +------------+-------------+----------------------+ | 123abcd | LIKE | 2014-08-27 13:43:02 | +------------+-------------+----------------------+ | 123abcd | DELETE | 2014-08-27 13:45:02 | +------------+-------------+----------------------+ | 123abcd | EMAIL | 2014-08-27 13:48:02 | +------------+-------------+----------------------+ | 123abcd | LIKE | 2014-08-27 13:52:02 | +------------+-------------+----------------------+ | 546erqr | ADD | 2014-08-27 14:02:02 | +------------+-------------+----------------------+ | 546erqr | EMAIL | 2014-08-27 14:12:02 | +------------+-------------+----------------------+ | 546erqr | LIKE | 2014-08-27 14:14:02 | +------------+-------------+----------------------+ ``` As you can see, a `user` can go through a `session` performing certain actions. I am interested in **calculating the time spent before the user sends the email for each session**. Essentially, I want to do a `TIMESTAMPDIFF(SECOND, min(time), max(time))/60` till the `action = EMAIL`. I tried executing it using a `where` clause in the SQL statement, but it didn't work. For the output, I want something like: ``` +------------+-----------------------+ | session_id | time_till_email_mins | +------------+-----------------------+ | 123abcd | 7 | +------------+-----------------------+ | 546erqr | 10 | +------------+-----------------------+ ``` In general, if one were to do computations like this (be it time difference calculations or counting something) *till* a certain action, how does one go about it? Hope this makes sense. Please let me know if you have any questions.
I think you probably want this: ``` select session_id, min(time) as first_action_time, min(case when action_name = 'EMAIL' then time end) as first_email_time, timestampdiff(second,min(time),min(case when action_name = 'EMAIL' then time end)) as diff_in_secs from actions group by session_id ``` This is the difference in seconds between the first action for the session and the first action for the session where the action name is 'EMAIL'.
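SQLite has no TIMESTAMPDIFF, but the same conditional-aggregation idea can be checked with `strftime('%s', ...)`. The sketch below reproduces the expected 7 and 10 minutes from the question's data (the column is renamed `t` here to keep it distinct from the `time` function name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE actions (session_id TEXT, action_name TEXT, t TEXT);
    INSERT INTO actions VALUES
        ('123abcd','ADD','2014-08-27 13:41:02'),
        ('123abcd','LIKE','2014-08-27 13:43:02'),
        ('123abcd','DELETE','2014-08-27 13:45:02'),
        ('123abcd','EMAIL','2014-08-27 13:48:02'),
        ('123abcd','LIKE','2014-08-27 13:52:02'),
        ('546erqr','ADD','2014-08-27 14:02:02'),
        ('546erqr','EMAIL','2014-08-27 14:12:02'),
        ('546erqr','LIKE','2014-08-27 14:14:02');
""")

# MIN(CASE ...) ignores NULLs, so it picks the first EMAIL time per session;
# strftime('%s', ...) converts timestamps to epoch seconds for subtraction.
rows = con.execute("""
    SELECT session_id,
           (strftime('%s', MIN(CASE WHEN action_name = 'EMAIL' THEN t END))
            - strftime('%s', MIN(t))) / 60 AS time_till_email_mins
    FROM actions
    GROUP BY session_id
    ORDER BY session_id
""").fetchall()
```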
You can do this with conditional aggregation: ``` select session_id, timestampdiff(second, min(time), min(case when action_name = 'EMAIL' then time end)) / 60 from actions a group by session_id; ```
MySQL get time duration up to a selected action
[ "", "mysql", "sql", "" ]
I've been stuck on this issue for a while now. I'm really close I think, but there's something I'm missing. A Transaction can have zero or many TransactionErrors. I am trying to display all Transactions only once, and I'm also trying to display only the latest error message if there is one. ``` SELECT [Transaction].[TransactionID] ,[FileName] ,[DestinationSystem] ,[CreatedOn] ,LEFT([TransactionError].[ErrorMessage], 300) AS LatestErrorMessage --Gets only the first 300 characters of the error message FROM [WM01DB].[dbo].[Transaction] INNER JOIN SourceSystem ON SourceSystem.SourceSystemId = Transaction.SourceSystemId LEFT JOIN TransactionError ON TransactionError.TransactionId = Transaction.TransactionId WHERE Transaction.CreatedOn >= '2014-08-01 00:00:00.000' AND Transaction.CreatedOn < '2014-09-02 00:00:00.000' ORDER BY [CreatedOn], [Transaction].[TransactionID] ``` When I run this query, I get most of the results I want, but I get duplicate transactions because these transactions have multiple TransactionErrors. It looks like this... ``` TransactionID FileName DestinationSystem CreatedOn LatestErrorMessage 18124 201408131541517937_DC_TEST_3339376-4.1.xml TEST 2014-08-18 18:31:19.993 U_BOL and Tracking Number are blank 18124 201408131541517937_DC_TEST_3339376-4.1.xml TEST 2014-08-18 18:31:19.993 FRT_CHG_TYPE is blank 18125 201408111521484448_DC_TEST_3339375-2.1.xml TEST 2014-08-19 16:04:58.467 NULL 18126 201408111521484448_DC_TEST_3339375-2.1.xml TEST 2014-08-19 16:09:00.467 NULL ``` Ugh... Bad looking code block... As you can see, there are duplicate TransactionIDs as demonstrated with 18124. I would like 18124 to display only once with the latest error message. The only way to get the latest error message would be to use the latest TransactionErrorID for a particular TransactionID... Please help! :(
I have a similar solution to Krishnraj Rana. However, I think that you need to avoid having the row-number filter in the WHERE clause because that will make it behave like an inner join: ``` ; with Errors as (SELECT [TransactionId] , [ErrorMessage] , Row_Number() over (Partition By TransactionId order by TransactionErrorId Desc) as id FROM TransactionError ) SELECT [Transaction].[TransactionID] ,[FileName] ,[DestinationSystem] ,[CreatedOn] ,LEFT([ErrorMessage], 300) AS LatestErrorMessage --Gets only the first 300 characters of the error message FROM [WM01DB].[dbo].[Transaction] INNER JOIN SourceSystem ON SourceSystem.SourceSystemId = [Transaction].SourceSystemId LEFT JOIN Errors ON Errors.TransactionId = [Transaction].[TransactionId] and Errors.id = 1 WHERE [Transaction].CreatedOn >= '2014-08-01 00:00:00.000' AND [Transaction].CreatedOn < '2014-09-02 00:00:00.000' ORDER BY [CreatedOn], [Transaction].[TransactionID] ```
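Where window functions aren't available, the same "latest error per transaction" can be picked with a correlated MAX on the child key. A self-contained sqlite3 sketch (the table is shortened to `Trans` since `Transaction` is a reserved word in several dialects; the sample rows come from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Trans (TransactionID INTEGER, FileName TEXT);
    CREATE TABLE TransactionError (TransactionErrorID INTEGER,
                                   TransactionID INTEGER, ErrorMessage TEXT);
    INSERT INTO Trans VALUES (18124, 'a.xml'), (18125, 'b.xml');
    INSERT INTO TransactionError VALUES
        (1, 18124, 'U_BOL and Tracking Number are blank'),
        (2, 18124, 'FRT_CHG_TYPE is blank');
""")

# The correlated subquery keeps only the row with the highest
# TransactionErrorID per transaction; the LEFT JOIN preserves
# transactions that have no errors at all.
rows = con.execute("""
    SELECT t.TransactionID, e.ErrorMessage
    FROM Trans t
    LEFT JOIN TransactionError e
      ON e.TransactionID = t.TransactionID
     AND e.TransactionErrorID = (SELECT MAX(x.TransactionErrorID)
                                 FROM TransactionError x
                                 WHERE x.TransactionID = t.TransactionID)
    ORDER BY t.TransactionID
""").fetchall()
```

Transaction 18124 appears once with its latest error, and 18125 appears with NULL.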
You can achieve it by using **ROW\_NUMBER()** with **PARTITION BY** clause like this - ``` SELECT [Transaction].[TransactionID] ,[FileName] ,[DestinationSystem] ,[CreatedOn] ,LEFT([TransactionError].[ErrorMessage], 300) AS LatestErrorMessage --Gets only the first 300 characters of the error message ,ROW_NUMBER() OVER ( PARTITION BY [Transaction].[TransactionID] ORDER BY [CreatedOn] ,[Transaction].[TransactionID] DESC ) AS SrNo FROM [WM01DB].[dbo].[Transaction] INNER JOIN SourceSystem ON SourceSystem.SourceSystemId = TRANSACTION.SourceSystemId LEFT JOIN TransactionError ON TransactionError.TransactionId = TRANSACTION.TransactionId WHERE TRANSACTION.CreatedOn >= '2014-08-01 00:00:00.000' AND TRANSACTION.CreatedOn < '2014-09-02 00:00:00.000' AND SrNo = 1 ORDER BY [CreatedOn] ,[Transaction].[TransactionID] ```
Display Only One Record That May Or May Not Have Children
[ "", "sql", "sql-server", "" ]
I have an Item table: ``` ItemID ItemNumber CategoryID ---------------------------------- 1 1 1 2 2 1 3 3 1 4 1 2 5 2 2 6 3 2 7 1 3 8 2 3 9 3 3 ``` A Category table: ``` CategoryID CategoryNumber GenreID --------------------------------------- 1 1 1 2 2 2 3 3 3 ``` And a Genre table which is not relevant. The requirement is for me to make sure that ItemNumber 1 is unique to GenreID 1. i.e., there can be only one Item with an ItemNumber of 1 that is in a category belonging to the Genre with a GenreID of 1. How do I write a SQL query to get a count of Items with the same ItemNumber in the same Genre, or Items that violate this business rule? It's really boggling my mind and I need expert SQL help.
You can try the query below: ``` select i.ItemNumber, c.GenreID, count(*) from Item i, Category c where i.CategoryID = c.CategoryID group by i.ItemNumber, c.GenreID having count(*) > 1 ```
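Running the rule check against the question's sample data, plus one deliberately violating row (a hypothetical Category 4 in Genre 1 and Item 10, so the HAVING clause has something to report), via sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Item (ItemID INTEGER, ItemNumber INTEGER, CategoryID INTEGER);
    CREATE TABLE Category (CategoryID INTEGER, CategoryNumber INTEGER, GenreID INTEGER);
    INSERT INTO Item VALUES
        (1,1,1),(2,2,1),(3,3,1),(4,1,2),(5,2,2),(6,3,2),(7,1,3),(8,2,3),(9,3,3),
        (10,1,4);  -- extra item that duplicates ItemNumber 1 within Genre 1
    INSERT INTO Category VALUES (1,1,1),(2,2,2),(3,3,3),(4,4,1);
""")

# Any (ItemNumber, GenreID) pair appearing more than once violates the rule.
violations = con.execute("""
    SELECT i.ItemNumber, c.GenreID, COUNT(*) AS n
    FROM Item i
    JOIN Category c ON i.CategoryID = c.CategoryID
    GROUP BY i.ItemNumber, c.GenreID
    HAVING COUNT(*) > 1
""").fetchall()
```

Only the planted duplicate is flagged; the original nine items pass.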
You need to join the tables together and count the distinct genres per item. The resulting query is something like this, if I understand your problem correctly: ``` select i.itemnumber, count(distinct c.genreid) as numgenres from items i join categories c on i.categoryid = c.categoryid group by i.itemnumber having count(distinct c.genreid) > 1 order by numgenres desc; ```
SQL count duplicate values across multiple tables
[ "", "sql", "sql-server", "" ]
I am trying to select all the rows in which COLUMN has a value between 1 and 9. The problem is the datatype for COLUMN is text and I have used the following code to get those rows: ``` SELECT * FROM TABLE WHERE COLUMN BETWEEN '1' AND '9'; ``` Instead of getting rows with COLUMN value between 1 and 9, I also get rows with COLUMN values 10, 11 and so forth. How can I make it give me only rows with COLUMN values between 1 and 9?
For example like this: ``` SELECT * FROM TABLE WHERE COLUMN BETWEEN '1' AND '9' and LEN(COLUMN) = 1; ```
If all of the values in the column are **guaranteed to be integers**™ you could do this: ``` SELECT * FROM TABLE WHERE CInt(COLUMN) BETWEEN 1 AND 9 ```
Using between operator with numeric text values in Microsoft Access SQL
[ "", "sql", "ms-access", "between", "" ]
I am having some trouble getting my SQL to run efficiently with an IN statement. If I run the two statements separately, and manually paste in the series of results (in this case there are 30 vendor\_id's), the vendor\_master query runs instantly, and the invoices query runs in about 2 seconds. ``` select * FROM invoices where vendor_id IN ( select vendor_id from vendor_master WHERE vendor_master_id = 12345 ); ``` So what causes the HUGE slowdown, more than 60 seconds, and often times out? Is there a way to put the results in a variable with commas? Or to get the inner statement to execute first?
> *So what causes the HUGE slowdown, more than 60 seconds and often times out?* The `IN` clause works well when the data set inside the `IN` condition is "small" and "deterministic". That's because the condition is evaluated *once* per row. So, assuming that the query in the `IN` clause returns 100 rows and the table in the `FROM` clause has 1000 rows, the server will have to perform `100 * 1000 = 100,000` comparisons to filter out your data. Too much effort to filter too little data, don't you think? Of course, if your data sets (both in `from` and `in` clauses) are bigger, you can imagine the effect. By the way, when you use a subquery as an `in` condition, there's also an additional overhead: the subquery needs to be executed *once* for each row. So the sequence is something like this: * row 1 + execute subquery + check if the value of row 1 matches a value of the result of the subquery + if the above is true, keep the row in the result set; exclude it otherwise * row 2 + execute subquery + check if the value of row 2 matches a value of the result of the subquery + if the above is true, keep the row in the result set; exclude it otherwise * ... Too much work to do, don't you think? --- > *Is there a way to put the results in a variable with commas?* Yes, there's a way... but would you *really* want to do that? Let's see: First, create a list with the values you want to filter: ``` set @valueList = (select group_concat(vendor_id separator ',') from (select vendor_id from vendor_master where vendor_master_id = 12345) as a) ``` Then, create an SQL expression: ``` set @sql = concat('select * from invoices where vendor_id in (', @valueList, ')'); ``` Finally, create a prepared statement and execute it: ``` prepare stmt from @sql; execute stmt; -- when you're done, don't forget to deallocate the statement: -- deallocate prepare stmt; ``` I ask you again: do you *really* want to do all this?
--- > *Or to get the inner statement to execute first?* All the other answers are pointing you in the right direction: instead of using `in` use `inner join`: ``` select i.* from invoices as i inner join ( select distinct vendor_id from vendor_master where vendor_master_id = 12345 ) as vm on i.vendor_id = vm.vendor_id; ``` If, for some reason, this still is too slow, the only alternative that comes to my mind is: Create a temporary table (sort of "divide-and-conquer" strategy): ``` drop table if exists temp_vm; create temporary table temp_vm select distinct vendor_id from vendor_master where vendor_master_id = 12345; alter table temp_vm add index vi(vendor_id); select i.* from invoices as i inner join temp_vm as vm on i.vendor_id = vm.vendor_id; ``` Remember: temp tables are only visible to the connection that creates them, and are dropped when the connection is closed or terminated. --- In any case, your performance will be improved if you ensure that your tables are properly indexed; specifically, you need to ensure that `invoices.vendor_id` and `vendor_master.vendor_master_id` are indexed.
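From application code there is a third option worth knowing: build the IN list with bound placeholders rather than string-concatenated values, which avoids both the quoting headaches of the group_concat approach and SQL injection. A Python sqlite3 sketch (the invoice data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE invoices (invoice_id INTEGER, vendor_id INTEGER);
    INSERT INTO invoices VALUES (1, 10), (2, 11), (3, 12), (4, 10);
""")

vendor_ids = [10, 12]  # e.g. fetched from vendor_master in a prior query

# One "?" placeholder per value; the driver binds each value safely.
placeholders = ",".join("?" for _ in vendor_ids)
sql = (f"SELECT invoice_id FROM invoices "
       f"WHERE vendor_id IN ({placeholders}) ORDER BY invoice_id")
rows = con.execute(sql, vendor_ids).fetchall()
```

Only the statement shape is built dynamically; the values themselves never touch the SQL string.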
Prior to MySQL 5.6.6, `in` was optimized quite inefficiently. Use `exists` instead: ``` select * FROM invoices i where exists (select 1 from vendor_master vm where i.vendor_id = vm.vendor_id and vm.vendor_master_id = 12345 ); ``` For best performance, you want an index on `vendor_master(vendor_id, vendor_master_id)`.
SQL IN Statement Slowness
[ "", "mysql", "sql", "" ]
I have two tables: Users: ``` id name isSpecial 1 Tal 1 2 Jorden 0 3 John 1 4 Paige 0 ``` Details: ``` id userId Country zipCode 1 1 Israel 4564 2 3 US 554654 ``` I want to get all the data for Users named Jorden OR with isSpecial = 1, shown like this: Result: ``` id name Country zipCode 1 Tal Israel 4564 2 Jorden 3 John US 554654 ``` I know it's supposed to be a simple query, but I can't get the results that I want!
You can use a `LEFT JOIN`: ``` SELECT u.id, u.name, u.isSpecial, d.country, d.zipCode FROM Users u LEFT JOIN Details d ON u.id = d.userId WHERE u.name = 'Jorden' OR u.isSpecial = 1 ```
Like this ``` SELECT u.id,u.name,d.country,d.zipCode FROM Users u LEFT outer JOIN Details d ON u.id = d.userId WHERE u.name = 'Jorden' OR u.isSpecial = '1' ```
Select from two tables statement
[ "", "mysql", "sql", "" ]
I have two SQL statements. The first one: ``` SELECT course.CourseNum, course.CourseName, course.CreditHours FROM course WHERE course.TypeID=1 ``` The result is: ``` Course Num------Course Name---------Credit Hours 1101----------Arabic Language----------3 1103----------English LanguageI--------3 1104----------English LanguageII-------3 1106----------Hebrew LanguageI---------3 1125----------Scientific Research------3 ``` The second one just shows the results that have a grade: ``` SELECT course.CourseNum, course.CourseName, course.CreditHours, studentcoursecomplete.CourseGrade FROM course, studentcoursecomplete WHERE studentcoursecomplete.SID=1 And studentcoursecomplete.CourseID = course.CourseID AND course.TypeID=1 ``` The result is: ``` Course Num---------Course Name----------Credit Hours-----Grade 1101---------------Arabic Language-----------3------------60 1103---------------English LanguageI---------3------------65 1104---------------English LanguageII--------3------------70 ``` So now I need an SQL statement that shows the rows from the first result EXCEPT those in the second. The SQL statement must show this result: ``` Course Num------Course Name---------Credit Hours 1106----------Hebrew LanguageI---------3 1125----------Scientific Research------3 ```
You can do a left join and look for NULL. Also use table aliases when possible to make the code more readable. ``` select c.CourseNum, c.CourseName, c.CreditHours from course c left join studentcoursecomplete cc on cc.CourseID = c.CourseID and cc.SID = 1 where c.TypeID=1 and cc.CourseID is null ```
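The anti-join pattern can be verified against the question's course list. A sqlite3 sketch (the `CourseID` values 1 through 5 are assumed, since the question doesn't show that column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE course (CourseID INTEGER, CourseNum INTEGER, CourseName TEXT,
                         CreditHours INTEGER, TypeID INTEGER);
    CREATE TABLE studentcoursecomplete (SID INTEGER, CourseID INTEGER, CourseGrade INTEGER);
    INSERT INTO course VALUES
        (1, 1101, 'Arabic Language', 3, 1),
        (2, 1103, 'English LanguageI', 3, 1),
        (3, 1104, 'English LanguageII', 3, 1),
        (4, 1106, 'Hebrew LanguageI', 3, 1),
        (5, 1125, 'Scientific Research', 3, 1);
    INSERT INTO studentcoursecomplete VALUES (1, 1, 60), (1, 2, 65), (1, 3, 70);
""")

# Courses with no matching completion row for SID 1 survive the NULL filter.
rows = con.execute("""
    SELECT c.CourseNum, c.CourseName, c.CreditHours
    FROM course c
    LEFT JOIN studentcoursecomplete cc
      ON cc.CourseID = c.CourseID AND cc.SID = 1
    WHERE c.TypeID = 1 AND cc.CourseID IS NULL
    ORDER BY c.CourseNum
""").fetchall()
```

Exactly the two uncompleted courses come back, matching the expected result.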
Try this code. It should give your desired result. ``` SELECT course.CourseNum, course.CourseName, course.CreditHours FROM course course left join studentcoursecomplete studentcoursecomplete on studentcoursecomplete.CourseID = course.CourseID and studentcoursecomplete.SID=1 where studentcoursecomplete.CourseID is null and course.TypeID=1; ```
how can i make this select statement
[ "", "mysql", "sql", "join", "except", "" ]
I am having a problem understanding the `=` and `is` operators in SQL Server. Consider the following example queries, which have different behaviors in their respective output: ``` SELECT * FROM tableName WHERE colName IS NULL; SELECT * FROM tableName WHERE colName = NULL; ``` The first query will provide the required output, i.e. select those records for which `colName` has a `null` value. But the second query will result in zero matching records. Please clarify the different uses of these operators, with pros and cons. **EDIT** Here, most of the answers are claiming that `=` doesn't work with `null`, but the following statement will work with `null` and `=`: ``` SET ANSI_NULLS OFF SELECT * FROM tableName WHERE colName = NULL; ``` This will provide the same result as the statement with the `is` operator.
Nothing equals `null`. Not even `null` equals `null`. `null` is not a value, it is more like a concept, or a mark, meaning unknown value. As such, you need two operators for this, one for equality, and one for checking the concept of `null`. Once you start to think of `null` as "unknown value" a lot of the other behavior also makes sense. `10 + null`? Add an unknown value to 10? Obviously you will have another unknown value as a result. For more information, please check [the documentation of the equality operator in T-SQL](http://msdn.microsoft.com/en-us/library/ms175118.aspx). Additionally, see the documentation for [SET ANSI\_NULL](http://msdn.microsoft.com/en-us/library/ms188048.aspx). Note that the documentation is in conflict about the behavior of `x = null` between the equality operator (documentation says it will always be `false` if x is non-`null`) whereas `SET ANSI_NULLS` documentation says that `x = null` will behave equivalent to `x is null` when the option is turned on.
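The three-valued logic described above is easy to observe directly. The sqlite3 sketch below (SQLite follows the standard ANSI_NULLS-ON behavior) shows `= NULL` evaluating to unknown while `IS NULL` evaluates to true:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# NULL compared with = yields NULL ("unknown"), which sqlite3 returns as None.
eq_result = con.execute("SELECT NULL = NULL").fetchone()[0]

# IS NULL is the dedicated test and yields a definite true (1).
is_result = con.execute("SELECT NULL IS NULL").fetchone()[0]

# A WHERE clause keeps only rows whose condition is true,
# so '= NULL' filters every row out, including the NULL one.
con.executescript("CREATE TABLE t (col INTEGER); INSERT INTO t VALUES (NULL), (1);")
eq_rows = con.execute("SELECT col FROM t WHERE col = NULL").fetchall()
is_rows = con.execute("SELECT col FROM t WHERE col IS NULL").fetchall()
```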
From <http://msdn.microsoft.com/en-us/library/ms188795.aspx>: `To determine whether an expression is NULL, use IS NULL or IS NOT NULL instead of comparison operators (such as = or !=). Comparison operators return UNKNOWN when either or both arguments are NULL.` **EDIT** The originating question was updated to note that `SET ANSI_NULLS OFF` allows: `select * from tableName where colName = null`. This may be true at the moment but future versions of SQL server will set `ANSI_NULLS` always `ON` and any calls to `SET ANSI_NULLS OFF` will result in an error. Source: <http://msdn.microsoft.com/en-us/library/ms188048.aspx>
Difference between "=" and "is" in sql server
[ "", "sql", "sql-server", "" ]
I'm trying to generate a column based on the title of a table. Is there a way to display the table name specified in the FROM clause in the query results? Thanks
From the comments, it seems that you have multiple tables, all with the same structure and similar names. This is almost always a sign of poor database design. You should have a single table `LOAD_UNISTATS_ACCREDITATION` with a column for the year. If you cannot change the database structure, then perhaps you can create a view: ``` create view v_LOAD_UNISTATS_ACCREDITATION as select lua.*, 2013 as year from LOAD_UNISTATS_2013_ACCREDITATION lua union all select lua.*, 2012 as year from LOAD_UNISTATS_2012_ACCREDITATION lua . . .; ``` But the answer to your question is "no". There is no automated way to specify a table name in a query, and for a simple reason: the columns in a query are defined in the `SELECT`, but tables are defined in the `FROM`, and a query can have multiple tables. One could imagine a function such as `OBJECT_ID` for this purpose, but columns are not first-class objects in the database.
The dynamic SQL approach, where you could pass in a parameter with the table name (assuming the name always has the same structure and length), would be something like this: **Create Script:** ``` create table LOAD_UNISTATS_2013_ACCREDITATION (value1 int, value2 int) insert into LOAD_UNISTATS_2013_ACCREDITATION (value1, value2) values (1,2) ``` **Dynamic SQL:** ``` declare @sql nvarchar(200) declare @tblname nvarchar(50) set @tblname = 'LOAD_UNISTATS_2013_ACCREDITATION' set @sql = 'SELECT value1,value2,' + substring(@tblname, 15, 4) + ' as TableName FROM ' + @tblname execute(@sql) ``` ## [Demo Sql Fiddle](http://sqlfiddle.com/#!6/6ab37/12)
Display Table Name in Query Results; SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
My current query runs very slowly when accessing a DB with pretty large tables: ``` SELECT * FROM table1 WHERE timestamp BETWEEN 635433140000000000 AND 635433150000000000 AND ID IN ('element1', 'element2', 'element3', ... , 'element 3002'); ``` As you can see, the `IN` clause has several thousand values. This query is executed roughly every second. Is there another way to write it to improve performance?
Add the elements of the `IN` list to an indexed temporary table (if the elements change) or a permanent table (if the elements are static), and inner join on it.
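A sketch of that approach using a SQL Server temp table (the table name `#ids` and the column type are illustrative assumptions, not from the question):

```sql
-- Load the values once (e.g. from the application) into an indexed temp table;
-- the primary key gives us the index.
CREATE TABLE #ids (ID varchar(50) PRIMARY KEY);
INSERT INTO #ids (ID) VALUES ('element1'), ('element2'), ('element3');

-- Then join against it instead of repeating a huge IN list every second.
SELECT t.*
FROM table1 t
INNER JOIN #ids i
        ON i.ID = t.ID
WHERE t.timestamp BETWEEN 635433140000000000 AND 635433150000000000;
```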
This is your query: ``` SELECT * FROM table1 WHERE timestamp BETWEEN 635433140000000000 AND 635433150000000000 AND ID IN ('element1', 'element2', 'element3', ... , 'element 3002'); ``` The query is fine. Add an index on `table1(id, timestamp)`.
How can I make large IN clauses more efficient in SQL Server?
[ "", "sql", "sql-server", "performance", "" ]
Example Table: ``` a | b ----- 1 | 1 1 | 2 2 | 1 3 | 0 3 | 1 4 | 1 4 | 2 4 | 3 5 | 0 ``` I'd like to get all `DISTINCT` values of column `a` which are associated with values from column `b` where `b = 1 AND b = 2`. The result should be: ``` a - 1 4 ``` Does this work with `GROUP BY`? I tried the following SQL, but it always gives 0 results: ``` SELECT t.a FROM table t GROUP BY t.a, t.b HAVING t.b = 1 AND t.b = 2 ``` MySQL seems to apply the `AND` conditions to the same row and not to the grouped `b` rows. Any ideas? `:)`
You are very close. Try this: ``` SELECT t.a FROM table t GROUP BY t.a HAVING SUM(t.b = 1) > 0 AND SUM(t.b = 2) > 0; ``` Each of the `SUM()` conditions counts the number of rows where the condition is true. You want both, hence the `> 0` and `AND`. **EDIT** removed `t.b` from `GROUP BY`
One way of doing it is as follows: ``` SELECT t.a FROM table t GROUP BY t.a HAVING SUM(CASE WHEN b=1 THEN 1 ELSE 0 END) <> 0 AND SUM(CASE WHEN b=2 THEN 1 ELSE 0 END) <> 0 ``` Note that you need to remove `t.b` from `GROUP BY`, otherwise the results would be incorrect. [Demo.](http://www.sqlfiddle.com/#!2/39348/1)
Group By with multiple Having on the same field
[ "", "mysql", "sql", "group-by", "having", "" ]
I have an SQL query here in which I need to select the id of a user and the status value in the table. Here is my SQL: ``` SELECT * FROM user WHERE user_id='".$id."' AND status='add' OR status='delete' ``` But the outcome is not what I want, because it also shows other `id`s that have a status value of `delete`, which is my last condition. Is there any way to filter on just the `id` and the `status`?
Organize your query so that the user id condition will not be disturbed by your multiple ORs: ``` SELECT * FROM user WHERE user_id='".$id."' AND (status='add' OR status='delete') ``` Or use `IN`: ``` SELECT * FROM user WHERE user_id='".$id."' AND status IN('add','delete') ```
Here the query you're looking for: ``` SELECT * FROM user U WHERE U.user_id = '".$id"' AND U.status IN ('add', 'delete') ``` The use of `IN` clause is highly recommended instead of multiple `OR` conditions for the same column. Hope this will help you.
How to get 2 values from the same table with a parameter of id
[ "", "mysql", "sql", "" ]
I have 2 tables: `item_status_log` and `items` with the following structure: ``` items(itemid, status, ordertype) item_status_log(itemid, date_time, new_status, old_status) ``` Basically, when the `status` is changed in my program, a record is logged in the `item_status_log` with the `old_status`, the `new_status`, and the `date_time`. What I want is to be able to view a table of items grouped by the date they were updated. I have the following sql which works perfectly: ``` select to_char(date_time, 'MM-DD-YYYY') as "Shipment Date", count(*) as "TOTAL Items" from item_status_log i where old_status = 'ONORDER' group by "Shipment Date" order by "Shipment Date" desc ``` this gives me: ``` Shipment Date | TOTAL Items ------------------------------ 09/02/2014 | 4 09/01/2014 | 23 ``` However, I want to add 2 columns to the above table, which breaks down how many of the items have a status in the items table of 'INVENTORY' and 'ORDER'. I'm looking for this: ``` Shipment Date | TOTAL Items | Inventory | Ordered --------------------------------------------------------- 09/02/2014 | 4 | 3 | 1 09/01/2014 | 23 | 20 | 3 ``` Here is what I have tried, but got the '*subquery uses ungrouped column "i.date\_time" from the outer query*' error: ``` select to_char(date_time, 'MM-DD-YYYY') as "Shipment Date", count(*) as "TOTAL Items", (select count(*) from item_status_log t where date(t.date_time) = date(i.date_time) and itemid in ( select itemid from items where ordertype = 'ORDER' )) as "Customer", (select count(*) from item_status_log t where date(t.date_time) = date(i.date_time) and itemid in ( select itemid from items where ordertype = 'INVENTORY' )) as "Inventory" from item_status_log i where old_status = 'ONORDER' group by "Shipment Date" order by "Shipment Date" desc ```
I think you just need conditional aggregation: ``` select to_char(date_time, 'MM-DD-YYYY') as "Shipment Date", count(*) as "TOTAL Items", sum(case when i.ordertype = 'ORDER' then 1 else 0 end) as NumOrders, sum(case when i.ordertype = 'INVENTORY' then 1 else 0 end) as NumInventory from item_status_log il join items i on il.itemid = i.itemid where old_status = 'ONORDER' group by "Shipment Date" order by "Shipment Date" desc; ```
Try: ``` select to_char(date_time, 'MM-DD-YYYY') as "Shipment Date", count(*) as "TOTAL Items", sum(case when ordertype = 'INVENTORY' then 1 else 0 end) as "Inventory", sum(case when ordertype = 'ORDER' then 1 else 0 end) as "Ordered" from item_status_log i where old_status = 'ONORDER' group by "Shipment Date" order by "Shipment Date" desc ```
subquery uses ungrouped column "i.date_time" from outer query
[ "", "sql", "subquery", "" ]
I have the following four tables in SQL Server 2008R2: ``` DECLARE @ParentGroup TABLE (ParentGroup_ID INT, ParentGroup_Name VARCHAR(100)); DECLARE @ChildGroup TABLE (ChildGroup_id INT, ChildGroup_name VARCHAR(100), ParentGroup_id INT); DECLARE @Entity TABLE ([Entity_id] INT, [Entity_name] VARCHAR(100)); DECLARE @ChildGroupEntity TABLE (ChildGroupEntity_id INT, ChildGroup_id INT, [Entity_ID] INT); INSERT INTO @parentGroup VALUES (1, 'England'), (2, 'USA'); INSERT INTO @ChildGroup VALUES (10, 'Sussex', 1), (11, 'Essex', 1), (12, 'Middlesex', 1); INSERT INTO @entity VALUES (100, 'Entity0'),(101, 'Entity1'),(102, 'Entity2'),(103, 'Entity3'),(104, 'Entity4'),(105, 'Entity5'),(106, 'Entity6'); INSERT INTO @ChildGroupEntity VALUES (1000, 10, 100), (1001, 10, 101), (1002, 10, 102), (1003, 11, 103), (1004, 11, 104), (1005, 12, 100), (1006, 12, 105), (1007, 12, 106); /* SELECT * FROM @parentGroup SELECT * FROM @ChildGroup SELECT * FROm @entity SELECT * FROM @ChildGroupEntity */ ``` The relationships between the tables as below: ``` SELECT ParentGroup_Name, ChildGroup_name, [Entity_name], 0 [ChildGroupSequence], 0 [EntitySequence] FROM @ChildGroupEntity cge INNER JOIN @ChildGroup cg ON cg.ChildGroup_id=cge.ChildGroup_id INNER JOIN @parentGroup pg ON pg.parentGroup_id=cg.parentGroup_id INNER JOIN @entity e ON e.[entity_id]=cge.[Entity_ID] ORDER BY ParentGroup_Name, ChildGroup_name, [Entity_name] ``` The output of the above query is: ``` ------------------------------------------------------------------------------- ParentGroup_Name|ChildGroup_name|Entity_name|ChildGroupSequence|EntitySequence| ------------------------------------------------------------------------------- England |Essex |Entity3 |0 |0 | England |Essex |Entity4 |0 |0 | England |Middlesex |Entity0 |0 |0 | England |Middlesex |Entity5 |0 |0 | England |Middlesex |Entity6 |0 |0 | England |Sussex |Entity0 |0 |0 | England |Sussex |Entity1 |0 |0 | England |Sussex |Entity2 |0 |0 | 
------------------------------------------------------------------------------- ``` Now, I want to find out the child groups and all entities associated with the child groups for parent group 1. Also, I want to calculate the [ChildGroupSequence] and [EntitySequence] as per the logic below: 1. The ChildGroupSequence column should represent the child group’s sequence within the parent group, starting from 1000 and incrementing by 100. I.e. the first subgroup will be 1000, the second subgroup will be 1100. 2. The EntitySequence column should represent the entity sequence within the child group, starting from 100 and incrementing by one, resetting for each subgroup. I.e. the first entity in childgroup 1 starts at 100, as does the first entity in childgroup 2. So, the output should be in the following format: ``` ------------------------------------------------------------------------------- ParentGroup_Name|ChildGroup_name|Entity_name|ChildGroupSequence|EntitySequence| ------------------------------------------------------------------------------- England |Essex |Entity3 |1000 |100 | England |Essex |Entity4 |1000 |101 | England |Middlesex |Entity0 |1100 |100 | England |Middlesex |Entity5 |1100 |101 | England |Middlesex |Entity6 |1100 |102 | England |Sussex |Entity0 |1200 |100 | England |Sussex |Entity1 |1200 |101 | England |Sussex |Entity2 |1200 |102 | ------------------------------------------------------------------------------- ``` I can do this easily by reading values into the application layer (.Net program), but I want to learn SQL Server by experimenting with a few little things like this. Could anyone help me in writing this SQL query? Any help would be much appreciated. Thanks in advance. EDIT: My sample data didn't seem to correctly reflect the first rule: the rule states that ChildGroupSequence should be incremented by 100, while the sample output increments by 1. The second query reflects the increment by 100. @jpw: Thank you very much for pointing this out.
I believe this can be accomplished using [partitioning](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.100%29.aspx) and [ranking](http://msdn.microsoft.com/en-us/library/ms189798%28v=sql.100%29.aspx) functions like this: ``` SELECT ParentGroup_Name, ChildGroup_name, [Entity_name], 999 + DENSE_RANK() OVER(PARTITION BY ParentGroup_Name ORDER BY ChildGroup_name) AS [ChildGroupSequence], 99 + ROW_NUMBER() OVER(PARTITION BY ParentGroup_Name, ChildGroup_name ORDER BY ChildGroup_name, Entity_name) AS [EntitySequence] FROM @ChildGroupEntity cge INNER JOIN @ChildGroup cg ON cg.ChildGroup_id=cge.ChildGroup_id INNER JOIN @parentGroup pg ON pg.parentGroup_id=cg.parentGroup_id INNER JOIN @entity e ON e.[entity_id]=cge.[Entity_ID] ORDER BY ParentGroup_Name, ChildGroup_name, [Entity_name] ``` This query generates the sample output you described. Your sample data does not seem to correctly reflect the first rule though as the rule states that ChildGroupSequence should be incremented by 100 and the sample output increments by 1. The second query reflects the increment by 100: ``` SELECT ParentGroup_Name, ChildGroup_name, [Entity_name], 900 + 100 * DENSE_RANK() OVER(PARTITION BY ParentGroup_Name ORDER BY ChildGroup_name) AS [ChildGroupSequence], 99 + ROW_NUMBER() OVER(PARTITION BY ParentGroup_Name, ChildGroup_name ORDER BY ChildGroup_name, Entity_name) AS [EntitySequence] FROM @ChildGroupEntity cge INNER JOIN @ChildGroup cg ON cg.ChildGroup_id=cge.ChildGroup_id INNER JOIN @parentGroup pg ON pg.parentGroup_id=cg.parentGroup_id INNER JOIN @entity e ON e.[entity_id]=cge.[Entity_ID] ORDER BY ParentGroup_Name, ChildGroup_name, [Entity_name] ``` Please see this [sample SQL Fiddle](http://www.sqlfiddle.com/#!3/9cf9a/2) for examples of both queries. 
Maybe the query should partition by ID and not Name, if so Sussex will come before Essex as it has a lower ID and the query would be this: ``` SELECT ParentGroup_Name, ChildGroup_name, [Entity_name], 900 + 100 * DENSE_RANK() OVER(PARTITION BY pg.ParentGroup_ID ORDER BY cg.ChildGroup_ID) AS [ChildGroupSequence], 99 + ROW_NUMBER() OVER(PARTITION BY pg.ParentGroup_ID, cg.ChildGroup_ID ORDER BY cg.ChildGroup_ID, cge.Entity_ID) AS [EntitySequence] FROM @ChildGroupEntity cge INNER JOIN @ChildGroup cg ON cg.ChildGroup_id=cge.ChildGroup_id INNER JOIN @parentGroup pg ON pg.parentGroup_id=cg.parentGroup_id INNER JOIN @entity e ON e.[entity_id]=cge.[Entity_ID] ORDER BY pg.ParentGroup_ID, cg.ChildGroup_ID, [Entity_name] ```
You can solve this by using [**ranking functions**](http://msdn.microsoft.com/en-us/library/ms189798.aspx) ``` SELECT ParentGroup_Name, ChildGroup_name, [Entity_name], 899 + DENSE_RANK() OVER(PARTITION BY ParentGroup_Name ORDER BY ChildGroup_name) + 100 * DENSE_RANK() OVER(ORDER BY ParentGroup_Name ASC) AS ChildGroupSequence, 99 + ROW_NUMBER() OVER(PARTITION BY ParentGroup_Name, ChildGroup_name ORDER BY [Entity_name]) AS [EntitySequence] FROM @ChildGroupEntity cge INNER JOIN @ChildGroup cg ON cg.ChildGroup_id=cge.ChildGroup_id INNER JOIN @parentGroup pg ON pg.parentGroup_id=cg.parentGroup_id INNER JOIN @entity e ON e.[entity_id]=cge.[Entity_ID] ORDER BY ParentGroup_Name, ChildGroup_name, [Entity_name] ``` Result you can find here [`SQL Fiddle`](http://sqlfiddle.com/#!3/d41d8/38540)
SQL server: calculate sequence in a SQL query
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2008-r2", "" ]
I am new to SQL programming and have the following scenario. I have three tables which are linked to each other as follows: **ForceInstance** ``` ForceInstanceID (PK) ForceID (FK to Force) Description CreatedBy CreatedOn ``` **Force** ``` ForceID (PK) ForceTypeID (FK to ForceType) Description CreatedBy CreatedOn ``` **ForceType** ``` ForceTypeID (PK) Description ``` I want to make a join of the three tables. I tried this query - ``` Select * from ForceInstance FI inner join Force F inner join ForceType FT on FI.ForceID = F.ForceID and F.ForceTypeID = FT.ForceTypeID ``` but was unable to get any result. I am new to SQL and wanted to know how to achieve this.
``` Select * from ForceInstance FI inner join Force F on FI.ForceID = F.ForceID inner join ForceType FT on F.ForceTypeID = FT.ForceTypeID ```
``` SELECT * FROM ForceInstance FI INNER JOIN Force F ON FI.ForceID = F.ForceID INNER JOIN ForceType FT ON F.ForceTypeID = FT.ForceTypeID ```
Inner Join on three tables linked by foreignkeys
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to run a basic insert SQL query. Here is the insert SQL: ``` Insert into tblABC( X1,X2,X3) Select X1,X2,X3 from tblXX ``` The problem is that `tblXX` doesn't have an `X3` column. Sometimes `X3` will be there and sometimes it will be missing. When the column is missing, push a default value such as `0`. I'm trying to use a CASE statement: ``` CASE WHEN EXISTS (SELECT 1 From information_schema.Columns C Inner Join Sys.Objects O On ( O.Object_Id = Object_Id(N'tblXX') And O.Name = C.Table_Name) and C.Column_Name = 'X3') THEN X3 ELSE 0 END ``` I get an "Invalid column name" error on the X3 column.
You can try the following: ``` if (exists (select 1 from information_schema.columns where table_name = 'tblXX' and column_name = 'X3')) begin Insert into tblABC(X1, X2, X3) Select X1, X2, X3 from tblXX; end else begin Insert into tblABC(X1, X2, X3) Select X1, X2, 0 from tblXX; end; ``` If this doesn't work (because the `if` fails), then you would need to resort to dynamic SQL. An important note: you would seem to have a poor application design if you are randomly adding and removing columns from tables. I would suggest that you fix this problem by fixing the underlying design of the system, so such efforts are not needed.
I'm not sure exactly what you're looking for. If you just want to fill the table `tblABC` with data from table `tblXX` when there is no value for the third column, here is a query that uses a default value: ``` INSERT INTO tblABC(X1, X2, X3) SELECT T.X1 ,T.X2 ,0 FROM tblXX T ``` If you just want to use a default value when one is not present in `tblXX`, the query will be the following: ``` INSERT INTO tblABC(X1, X2, X3) SELECT T.X1 ,T.X2 ,CASE WHEN T.X3 IS NULL THEN 0 ELSE T.X3 END FROM tblXX T ``` Hope this will help you.
SQL Insert query with missing column and Invalid column name
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I need to update these NULL values: ``` PK | CODE ---+------- 1 | 20 2 | NULL 3 | NULL 4 | 30 5 | NULL 6 | NULL 7 | NULL 8 | 40 9 | NULL ``` Like this: ``` PK | CODE -----+------------ 1 | 20 2 | 20 3 | 20 4 | 30 5 | 30 6 | 30 7 | 30 8 | 40 9 | 40 ``` Each NULL should be filled with the most recent non-NULL value above it. I have tried the code below, but it only updates rows that immediately follow a row that already had a value. ## **QUERY** ``` UPDATE TT SET CODE = (SELECT CODE FROM #TSPV_TEMP T2 with(nolock) WHERE T2.KEY = (tt.KEY -1)) FROM #TSPV_TEMP TT with (nolock) WHERE tt.CODE IS NULL ```
You can do this as: ``` UPDATE TT SET CODE = (SELECT TOP 1 CODE FROM #TSPV_TEMP T2 with(nolock) WHERE T2.KEY < tt.KEY AND CODE IS NOT NULL ORDER BY KEY DESC ) FROM #TSPV_TEMP TT with (nolock) where tt.CODE IS NULL; ``` Note the differences in the subquery. This looks for the previous non-NULL value for `CODE` for the update.
Another way using a derived table which for every pk with a null code contains the maximum lesser pk with a non-null code. ``` update t1 set t1.code = t3.code from tt t1 join (select t1.pk, max(t2.pk) max_pk from tt t1 join tt t2 on t1.pk > t2.pk and t2.code is not null and t1.code is null group by t1.pk) t2 on t2.pk = t1.pk join tt t3 on t3.pk = t2.max_pk ```
Update based on previous value SQL SERVER 2005
[ "", "sql", "sql-server", "sql-server-2005", "sql-update", "logic", "" ]
I need to update a table every time a certain action is taken. **MemberTable** * `Name varchar 60` * `Phone varchar 20` * `Title varchar 20` * `Credits int` <-- **the one that needs constant updates** * etc with all the relevant member columns 10 - 15 total Should I update this table with: ``` UPDATE Members SET Credits = Credits - 1 WHERE Id = 1 ``` or should I create another table called account with only two columns like: **Account table** * `Id int` * `MemberId int` <-- foreign key to members table * `Credits int` and update it with: ``` UPDATE Accounts SET Credits = Credits - 1 WHERE MemberId = 1 ``` Which one would be faster and more efficient? I have read that SQL Server must read the whole row in order to update it. I'm not sure if that's true. Any help would be greatly appreciated
I know that this doesn't directly answer the question but I'm going to throw this out there as an alternative solution. Are you bothered about historic transactions? Not everyone will be, but in case you or other future readers are, here's how I would approach the problem: ``` CREATE TABLE credit_transactions ( member_id int NOT NULL , transaction_date datetime NOT NULL CONSTRAINT df_credit_transactions_date DEFAULT Current_Timestamp , credit_amount int NOT NULL , CONSTRAINT pk_credit_transactions PRIMARY KEY (member_id, transaction_date) , CONSTRAINT fk_credit_transactions_member_id FOREIGN KEY (member_id) REFERENCES member (id) , CONSTRAINT ck_credit_transaction_amount_not_zero CHECK (credit_amount <> 0) ); ``` In terms of write performance... ``` INSERT INTO credit_transactions (member_id, credit_amount) VALUES (937, -1) ; ``` Pretty simple, eh! No row locks required. The downside to this method is that to work out a members "balance", you have to perform a bit of a calculation. ``` CREATE VIEW member_credit AS SELECT member_id , Sum(credit) As credit_balance , Max(transaction_date) As latest_transaction FROM credit_transactions GROUP BY member_id ; ``` However using a view makes things nice and simple and can be optimized appropriately. Heck, you might want to throw in a NOLOCK (read up about this before making your decision) on that view to reduce locking impact. # TL;DR: **Pros:** quick write speed, transaction history available **Cons:** slower read speed
Actually, the latter way would be faster. If your number of transactions is very large, to the extent that millisecond precision is very important, it's better to do it this way. Or maybe some members will not have credits, and you might save some space here as well. However, if that's not the case, it's good to keep your table structure normalized. If every account will always have a `credit`, it's better to include it as a column in table `Member`. Try not to have an unnecessary intermediate table, which will consume more space (with all those foreign keys and additional IDs). Furthermore, it also makes your schema a little bit more complex. In the end, it depends on your requirements.
Which SQL Update is faster/ more efficient
[ "", "sql", "sql-server", "" ]
Is there a convenient way to use the ARRAY\_CONTAINS function in hive to search for multiple entries in an array column rather than just one? So rather than: ``` WHERE ARRAY_CONTAINS(array, val1) OR ARRAY_CONTAINS(array, val2) ``` I would like to write: ``` WHERE ARRAY_CONTAINS(array, val1, val2) ``` The full problem is that I need to read `val1` and `val2` dynamically from the command line arguments when I run the script and I generally don't know how many values will be conditioned on. So you can think of `vals` being a comma separated list (or array) containing values `val1`, `val2`, `...`, and I want to write ``` WHERE ARRAY_CONTAINS(array, vals) ``` Thanks in advance!
There is a UDF [here](https://github.com/klout/brickhouse/blob/master/src/main/java/brickhouse/udf/collect/ArrayIntersectUDF.java) that will let you take the intersection of two arrays. Assuming your values have the structure ``` values_array = [val1, val2, ..., valn] ``` You could then do ``` where array_intersection(array, values_array)[0] is not null ``` If they don't have any elements in common, `[]` will be returned and therefore `[][0]` will be `null`
``` CREATE TABLE tmp_cars AS SELECT make, COLLECT_LIST(TRIM(model)) AS model_list FROM default.cars GROUP BY make; SELECT ARRAY_CONTAINS(model_list, CAST('Rainier' AS varchar(40))) FROM default.tmp_cars t WHERE make = 'Buick'; ``` Sample data for make = 'Buick': [" Rainier"," Rendezvous CX"," Century Custom 4dr"," LeSabre Custom 4dr"," Regal LS 4dr"," Regal GS 4dr"," LeSabre Limited 4dr"," Park Avenue 4dr"," Park Avenue Ultra 4dr"] The query returns true.
ARRAY_CONTAINS multiple values in hive
[ "", "sql", "hive", "" ]
This is the table structure: ``` id int name varchar insert_date date update_date TIMESTAMP ``` The idea is that `insert_date` should default to the current time when the row is inserted, while `update_date` should be set to the current time on insert and refreshed on every edit. How can I make `insert_date` default to the current date and time, but keep its value unchanged on update? If I make it a TIMESTAMP with ON UPDATE, its value will change. In short: insert_date -- set to the current time once, at insert only. Regards
You may use the `NOW()` function in your respective fields.
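A sketch of a table definition along those lines (assuming MySQL 5.6.5 or later, where more than one TIMESTAMP column per table may reference CURRENT_TIMESTAMP; `insert_date` is declared TIMESTAMP here because a DATE column cannot default to NOW() in these versions):

```sql
CREATE TABLE my_table (
  id INT,
  name VARCHAR(50),
  -- set once when the row is inserted, never touched afterwards
  insert_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  -- set at insert time and refreshed automatically on every UPDATE
  update_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
```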
If you want to insert only the time, use CURTIME(); if you want the current date and time, use NOW(); and if you want only the current date, use CURDATE(). For example, the following SELECT statement: SELECT NOW(), CURDATE(), CURTIME() will result in something like this: NOW() = 2008-11-11 12:45:34, CURDATE() = 2008-11-11, CURTIME() = 12:45:34. Choose the format you want.
mysql insert current date, only in insert time
[ "", "mysql", "sql", "" ]
I have two tables: * Table 1 with columns `Number` and `TEXT` * Table 2 with columns `Number` and `TEXT` Now Table 1 has `Nr = 12345AB` and Table 2 has `Nr = 00012345AB`. How can I find all rows from **Table 1** that are not in **Table 2**?
Try this select: ``` select * from table1 t1 left join table2 t2 on t1.number=t2.number where t2.number is null ```
Try ***exists***: ``` select t1.* from Table1 t1 where not exists (select 1 from Table2 t2 where t2.Number = t1.Number) ```
Search for same Values in two SQL TABLES
[ "", "sql", "" ]
I need to parse a row (with spaces) into two different columns in SQL. The rows look like this: **FullName** John Mayer Rajon Jennings Donna Fraye Michael Moore So the expected result is: **Name** John Rajon Donna Michael **Surname** Mayer Jennings Fraye Moore How can I do that in SQL?
If your requirement is that the first name should be the string before the first space and everything that follows should go into the last name, you can update the two columns as (subtracting 1 from the CHARINDEX so the trailing space is not included in the first name): ``` Update T Set T.FirstName = Left(ltrim(rtrim(FullName)), CHARINDEX(' ',ltrim(rtrim(FullName))) - 1) ,T.LastName = Right(ltrim(rtrim(FullName)),len(FullName)- CHARINDEX(' ',ltrim(rtrim(FullName)))) From Test T ``` [Check Demo here..](http://rextester.com/RWGD72923)
You can use a split function. This function must be created first. Here are some examples: <http://ole.michelsen.dk/blog/split-string-to-table-using-transact-sql/> [Split function by comma in SQL Server 2008](https://stackoverflow.com/questions/20512855/split-function-by-comma-sql-server-2008)
How to parse a row in SQL into columns (separator-spaces)
[ "", "sql", "sql-server", "function", "parsing", "row", "" ]
I've got the following table in a Postgres database: Table **Example**: ``` --------------------------- name | number --------------------------- DefaultName | 1 DefaultName | 2 DefaultName | 3 DefaultName | 4 Charlie | 1 Charlie | 3 Charlie | 4 Charlie | 5 Amanda | 2 Amanda | 3 Amanda | 4 Amanda | 5 ``` I need to get the "number"s that are present for 'DefaultName' but missing for each "name" other than 'DefaultName'. In this case, it would return: ``` --------------------------- names | numbers --------------------------- Charlie | 2 Amanda | 1 ``` I am trying a LEFT JOIN like the one below, but I can't figure out a way to cross the DefaultName numbers with a negation against the other names'... ``` SELECT Test_Configs.name, Default_Configs.number FROM Example AS Test_Configs LEFT JOIN Example AS Default_Configs ON Default_Configs.name = 'DefaultName' ```
I would generate the whole range per name and `LEFT JOIN` to the base table to eliminate the present ones: ``` SELECT n.name, nr.number FROM ( SELECT DISTINCT name FROM example WHERE name <> 'DefaultName' ) n -- all names except 'DefaultName' CROSS JOIN ( SELECT number -- assuming distinct numbers for 'DefaultName' FROM example WHERE name = 'DefaultName' ) nr -- combine with numbers from 'DefaultName' LEFT JOIN example x USING (name, number) WHERE x.number IS NULL; -- minus existing ones ``` To list only the gaps for each name individually: ``` SELECT n.name, nr.number FROM ( SELECT name, min(number) AS min_nr, max(number) AS max_nr FROM example GROUP BY 1 ) n , generate_series(n.min_nr, n.max_nr) AS nr(number) LEFT JOIN example x USING (name, number) WHERE x.number IS NULL; ``` [**SQL Fiddle.**](http://sqlfiddle.com/#!15/8aa67/1) Here are the basic techniques to exclude rows existing in another table (a derived table in this example): * [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
It will take a few passes, select default, group names that aren't default, then left join and check for null values. Check the [SQLFiddle](http://sqlfiddle.com/#!15/689d0/1) Example. ``` select Names.name, DefaultConfigs.number from Example DefaultConfigs cross join ( select name from Example where name != 'DefaultName' group by name ) Names left join Example Missing on Missing.name = Names.name and Missing.number = DefaultConfigs.number where DefaultConfigs.name = 'DefaultName' and Missing.name is null ; ```
JOIN using negation clauses
[ "", "sql", "postgresql", "" ]
Here are my tables. Table Suppliers in Database A: ``` ID(AUTONUMBER) | SupplierCode(Unique) | SupplierName 001 supp001 TestA 002 supp002 TestB 003 supp003 TestC ``` Table Suppliers in Database B: ``` ID(AUTONUMBER) | SupplierCode(Unique) | SupplierName 001 supp001 TestA 003 supp003 TestC ``` In this case, I want to insert supp002 into table Suppliers in Database B, and it should skip supp001 and supp003 because those SupplierCodes already exist. Can anyone help me with this? Note: this is a SQL Server query, not MySQL.
Might as well make it an answer ``` insert into databaseB..suppliers Select * from databaseA..suppliers where id not in (select id from databaseB..suppliers) ``` The table references should be correct, but I can't verify. You might want to go databasea.dbo.suppliers if that's the correct full name
Assuming the two databases are on the same server and both tables are part of the "dbo" schema, inserting only those records in the A table that don't already exist in B into the B table can be handled like this: ``` INSERT INTO DatabaseB.dbo.Suppliers SELECT ID,SupplierCode, SupplierName FROM DatabaseA.dbo.Suppliers WHERE SupplierCode NOT IN (SELECT SupplierCode FROM DatabaseB.dbo.Suppliers) ``` If the tables in A and B belong to a different schema, replace the "dbo" above with the appropriate schema name(s). If the databases reside on different servers, this article that discusses creating linked servers may be useful but the syntax will be similar. [Create Linked Servers](http://msdn.microsoft.com/en-us/library/ff772782.aspx "Create Linked Servers")
How to Insert from one table to another table from difference Database
[ "", "sql", "sql-server", "" ]
Hello, I am not so good with SQL, so I need your help. I have one table called property. Now I want to retrieve rows with different values in the same field. I tried with **LIKE** and **AND** but I'm not getting any results. **TABLE:** ``` id user_id title approved -- ------- ----- -------- 1 60 one 1 2 60 one 1 3 60 one 1 4 60 two 1 5 60 three 1 ``` **SQL QUERY:** ``` SELECT * FROM `property` WHERE `user_id` =60 AND `approved` =1 AND `title` like '%one%' AND `title` like '%two%' ``` **I want this result:** ``` id user_id title approved -- ------- ----- -------- 1 60 one 1 2 60 one 1 3 60 one 1 4 60 two 1 ``` Any idea? Thanks.
With `title like '%one%' AND title like '%two%'` you are looking for a title that contains both "one" and "two" together, but you want titles with "one" OR "two": ``` SELECT * FROM `property` WHERE `user_id` =60 AND `approved` =1 AND (`title` like '%one%' OR `title` like '%two%') ```
You have to use an OR (`||`) condition. Also note that you are using a LIKE condition to retrieve the result; that would also match a title like SIXTYONE, because SIXTYONE contains the word "ONE". If that is what you want, use LIKE as in your post; otherwise you should use = (equal to) for an exact result, matching only rows whose title is exactly "one". Can you try this? ``` SELECT * FROM `property` WHERE `user_id` =60 AND `approved` =1 AND (`title`='one' || `title`='two') ```
Select rows with different values in the same field
[ "", "mysql", "sql", "" ]
I'm trying to return NOT NULL and NULL as the result of a CASE statement, but I can't find the right way to do it. What I have below is incorrect and underlines the `NOT` keyword: ``` SELECT ROW_NUMBER() OVER ( ORDER BY d.cnumber DESC) AS RowNum, d.cnumber, d.firstname, d.lastname, d.current_email, d.updated_email, d.optedin, d.activated FROM Customer d WHERE (d.optedin = CASE WHEN @query = 'UNSUBMITTED' OR @query = 'SUBMITTED' THEN d.optedin WHEN @query = 'OPTEDIN' THEN 1 ELSE 0 END) AND (d.activated = CASE WHEN @query = 'OPTEDIN' OR @query = 'OPTEDOUT' THEN d.activated WHEN @query = 'UNSUBMITTED' THEN NULL ELSE NOT NULL END) ``` `activated` is a nullable datetime field. `@query` is a varchar parameter.
Using a `case` in a `where` clause is usually an indication you've taken the wrong path. I believe that you'll find you can solve this with more standard logic: ``` WHERE (@query in ( 'UNSUBMITTED', 'SUBMITTED') OR (@query = 'OPTEDIN' AND d.optedin = 1) OR d.optedin = 0) AND (@query in ('OPTEDIN', 'OPTEDOUT') OR (@query = 'UNSUBMITTED' AND d.activated IS NULL) OR d.activated IS NOT NULL) ```
I suggest you modify the condition for d.activated to this: ``` AND (@query = 'UNSUBMITTED' AND d.activated IS NULL OR @query <> 'UNSUBMITTED' AND d.activated IS NOT NULL) ``` Please note that you do not need to specifically test for this condition: ``` CASE WHEN @query = 'OPTEDIN' OR @query = 'OPTEDOUT' THEN d.activated ``` As it will be covered by `OR d.activated IS NOT NULL` in the condition that I have proposed BTW, for d.optedin you can use the similar: ``` (@query = 'OPTEDIN' AND d.optedin = 1 OR @query <> 'OPTEDIN' AND d.optedin = 0) ```
SQL Case NOT NULL as value
[ "", "sql", "sql-server", "" ]
I am doing some data mapping, and out of my 15 columns only 5 will have data; the rest will be empty. When I export my file as CSV, the columns that have no data return commas. My vendor needs those trailing commas removed. I tried using TRIM on one of the columns and that did not work. Sample: > Clientid,schoolc,schname,District,state,NCESid,PartnerID,Action,Reserved1,Reserved2,Reserved3,Reserved4,Reserved5,Reserved6,Reserved7,Reserved8,Reserved9,reserved10 > ca-rosev42441,26,barns Elementary,Sample City School District,CA,163364418255,,,,,,,,,,,,
There is very likely a much better approach to get the formatting you're after, but assuming that - the first five columns will have some data with length >= 1, and - two commas together indicate the beginning of the end (of the line, at least), you can handle each line/row of data individually in a manner similar to the LEFT(...) call below. Something like the example snippet below should work. Note that the DECLARE, SET, and PRINT statements are only required to set up the example; the LEFT(...) is all you need if @line contains the row of data you want to output. If we can see how the data is being pulled out of the table(s), there may be a much more sensible and efficient way to drop the extraneous commas. ``` DECLARE @line nvarchar(max) SET @line = 'Clientid,schoolc,schname,District,state,NCESid,PartnerID,Action,Reserved1,Reserved2,Reserved3,Reserved4,Reserved5,Reserved6,Reserved7,Reserved8,Reserved9,reserved10 ca-rosev42441,26,barns Elementary,Sample City School District,CA,163364418255,,,,,,,,,,,,' SET @line = LEFT(@line,CHARINDEX(',,',@line)-1) ---NoTrailingCommas PRINT @line ```
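If the trimming can happen after the export instead of inside SQL Server, the same idea is a one-liner in, say, Python — a sketch, assuming each exported row is available as a string. Note that `rstrip` removes only the run of commas at the very end of the line, so interior empty fields are untouched, unlike the `CHARINDEX(',,')` cut above:

```python
def strip_trailing_commas(line: str) -> str:
    # Drop only the commas at the end of the line.
    return line.rstrip(',')

row = ('ca-rosev42441,26,barns Elementary,Sample City School District,'
       'CA,163364418255,,,,,,,,,,,,')
print(strip_trailing_commas(row))
# ca-rosev42441,26,barns Elementary,Sample City School District,CA,163364418255
```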
**See this example**: ``` CREATE FUNCTION [dbo].[ufn_TrimLeadingCharacters_reversed] ( @Input VARCHAR(50), @LeadingCharacter CHAR(1) ) RETURNS VARCHAR(50) AS BEGIN RETURN reverse(REPLACE(LTRIM(REPLACE(reverse(@Input), ISNULL(@LeadingCharacter, '0'), ' ')), ' ', ISNULL(@LeadingCharacter, '0'))) END select dbo.ufn_TrimLeadingCharacters_reversed('jazz,guitar,bass,strings,,,,,',',') ```
remove trailing commas using sql
[ "", "sql", "" ]
I got the following statement: ``` SELECT XYZ FROM ABC WHERE ID IN (123) ``` At the moment I made the 123 a configuration parameter in a separate table: ``` SELECT XYZ FROM ABC WHERE ID IN (SELECT CONTENT FROM c_configuration WHERE IDENTIFIER = "ID") ``` Now the content of the c_configuration parameter has changed to "123, 456". Is there any better way than splitting the field content at ",", inserting the parts into an array, and putting the array in the `WHERE ID IN ( ARRAY )` part? Thank you in advance, Matt
try out the following code: ``` SELECT XYZ FROM ABC WHERE ID IN ( SELECT DISTINCT TO_NUMBER(REGEXP_SUBSTR(CONTENT,'[^,]+', 1, level)) FROM c_configuration WHERE IDENTIFIER = 'ID' CONNECT BY REGEXP_SUBSTR(CONTENT, '[^,]+', 1, level) IS NOT NULL ) ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!4/d2330/1)
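What the `CONNECT BY`/`REGEXP_SUBSTR` trick computes is just "split the stored string on commas and treat each piece as a number". For comparison, here is the same step outside the database — a sketch, assuming `CONTENT` holds the value from the question:

```python
content = "123, 456"  # value stored in c_configuration.CONTENT

# Split on commas and convert each piece; int() tolerates the spaces.
ids = [int(part) for part in content.split(',')]
print(ids)          # [123, 456]
print(456 in ids)   # True
```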
You'll have to split up the csv value or use dynamic sql.
WHERE IN with column content
[ "", "sql", "plsql", "" ]
This is the function which I designed. It compiled successfully: ``` CREATE OR REPLACE FUNCTION "F_CHECK" ( p_id IN VARCHAR2 p_name IN VARCHAR2)RETURN VARCHAR2 is v_id VARCHAR2; v_name VARCHAR2; cnt pls_integer; BEGIN IF id IS NULL THEN RETURN NULL; END IF; SELECT COUNT(*) INTO cnt from emp_new where id = p_id; IF (cnt > 0) THEN SELECT id, name INTO v_id, v_name from emp_new where id=p_id; IF (v_id is null and p_id is null and v_name is null and p_name is null) THEN return NULL; ELSE IF (v_name =trunc(p_name)) then return NULL; else insert into employees values(p_id,p_name,sysdate); end if; end if; end if; exception when DUP_VAL_ON_INDEX THEN raise application error (-20001, 'NAME EXISTS UNDER DIFFERENT ID'); END F_CHECK; ``` But I'm not getting the expected result when I execute the function: ``` select F_CHECK(1,1) from dual; ``` The error I'm getting is: ``` SQL Error: ORA-06503: PL/SQL : FUNCTION RETURNED WITHOUT VALUE ```
One of the possible causes of exceptions when you run this code: if your SELECT INTO doesn't return a value, you will get an exception — an unhandled exception. Even though you have a RETURN NULL at the end of the function, you still need to catch all the exceptions that might occur. The area where you need to take care is: ``` SELECT id, name INTO v_id, v_name from emp_new where id=p_id; ``` Surround it with a `BEGIN ... EXCEPTION WHEN NO_DATA_FOUND THEN ... END;` block. Also, your INSERT statement might cause an exception if you violate some constraint on your table, so you might need to handle that too. Have a look at [Error Handling in Oracle](http://docs.oracle.com/cd/B10500_01/appdev.920/a96624/07_errs.htm). Below is your updated code, which also fixes the extra parenthesis. **Edited** ``` CREATE OR REPLACE FUNCTION F_CHECK ( P_ID EMP_NEW.ID%TYPE, P_NAME EMP_NEW.NAME%TYPE ) RETURN VARCHAR2 IS V_ID EMP_NEW.ID%TYPE; V_NAME EMP_NEW.NAME%TYPE; CNT NUMBER; BEGIN --IF ID IS NULL THEN -- What is ID ?? is it P_ID --Changed below IF P_ID IS NULL THEN RETURN 'Error: Add Value For Id'; END IF; IF P_NAME IS NULL THEN RETURN 'Error: Add Value For Name'; END IF; SELECT COUNT(*) INTO CNT FROM EMPLOYEES WHERE ID = P_ID; IF (CNT > 0) THEN SELECT ID, NAME INTO V_ID, V_NAME FROM EMP_NEW WHERE ID=P_ID; ---------------------------------------------------------------------------------------- --IF V_ID IS NULL AND P_ID IS NULL AND V_NAME IS NULL AND P_NAME IS NULL THEN --The code above will always evaluate to False because P_ID at this stage is not null! --Also, if P_Name must have a value, check it at the beginning along with the ID, not here ---------------------------------------------------------------------------------------- IF V_ID IS NULL AND V_NAME IS NULL THEN RETURN 'Error: Not found details'; ELSE --Details are found IF (V_NAME = TRUNC(P_NAME)) THEN RETURN 'Name already assigned to this id'; ELSE --Its a new name, add it INSERT INTO EMPLOYEES VALUES(P_ID,P_NAME,SYSDATE); --Make sure your columns are only three and in the correct order as the Values specified END IF; END IF; END IF; RETURN 'Ok, added'; EXCEPTION WHEN DUP_VAL_ON_INDEX THEN RAISE_APPLICATION_ERROR(-20001, 'NAME EXISTS UNDER DIFFERENT ID'); END F_CHECK; ```
You must return a value when the execution flow reaches (around line 22) ``` ... else insert into employees values(p_id,p_name,sysdate); end if; ... ``` In this case, the function does not return a value which is expected by the caller, hence the error. You can instruct the PL/SQL compiler to warn you about such code (and other problems) with `ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';` prior to compilation.
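The failure mode is not Oracle-specific: any code path that reaches the end of a function without an explicit return is trouble. Here is a minimal Python analogue of the same bug — in Python the function silently yields `None` instead of raising ORA-06503, but the cause is the identical missing-return path:

```python
def f_check(cnt: int):
    # Mirrors the bug: the cnt == 0 path falls off the end with no return.
    if cnt > 0:
        return 'found'
    # no return here

print(f_check(1))  # found
print(f_check(0))  # None
```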
Function returned without value
[ "", "sql", "oracle", "plsql", "" ]
I feel my problem is very complicated. I have a table A with a column named AbsenceLinks. The table looks like this:

```
Description       AbsenceLinks
----------------  ------------
Illness           14/3 15/9
Education         19/3 18/9
Leave of Absence  20/3
Holiday           8/3
```

I have another table B where I have a column named AbsenceID that matches the number before the slash symbol in the AbsenceLinks column. (Table A AbsenceLinks value '20/9' matches AbsenceID 20 in table B.) This table looks like this:

```
Absence                 AbsenceID
----------------------  ---------
Illness (Days)          14
Illness                 15
Leave of Absence        20
Holiday Without Salary  8
```

I tried to see how I could retrieve some of the string from AbsenceLinks and made a CASE statement:

```
CASE WHEN LEN(AbsenceLink) = 3 THEN SUBSTRING(AbsenceLink,1,1) --1/3
WHEN LEN(AbsenceLink) = 4 and SUBSTRING(AbsenceLink,1,4) LIKE '%/' THEN SUBSTRING(AbsenceLink,1,1)--1/10
WHEN LEN(AbsenceLink) = 4 AND SUBSTRING(AbsenceLink,1,4) LIKE '%/%' THEN SUBSTRING(AbsenceLink,1,2)--17/3
WHEN LEN(AbsenceLink) = 8 AND SUBSTRING(AbsenceLink,1,2) like '%/' AND SUBSTRING(AbsenceLink,5,2) like '%/' THEN SUBSTRING(AbsenceLink,1,1)+', '+SUBSTRING(AbsenceLink,5,1)--2/9 1/10
WHEN LEN(AbsenceLink) = 8 AND SUBSTRING(AbsenceLink,1,3) like '%/' AND SUBSTRING(AbsenceLink,5,3) like '%/' THEN SUBSTRING(AbsenceLink,1,2)+', '+SUBSTRING(AbsenceLink,5,2)--10/3 9/9
WHEN LEN(AbsenceLink) = 9 AND SUBSTRING(AbsenceLink,1,3) like '%/' AND SUBSTRING(AbsenceLink,5,4) like '%' THEN SUBSTRING(AbsenceLink,1,2)+', '+SUBSTRING(AbsenceLink,5,3)--14/3 15/9
End AS AbsLink
```

I have to compare these values in a report with some statistics for a customer. **I need these two tables to be linked, and these columns are the only columns which can be linked**. I want to do something like this:

```
SELECT A.col1, A.col2, B.col1, B.col2
FROM TableA A, TableB B
WHERE A.AbsenceLink = B.Absence
```

## The problem is:

* The value of AbsenceLink is an nvarchar value like '20 / 3 1/9 '; there may be one or many spaces before or after the AbsenceID
* Absence is an int value like 20 and 1.
* I want 20 and 1 from '20/3 1/9' to be compared and linked with the Absence value.
* It is a database at my work and I can't change the data or make another table.

**So dearest wise and clever programmers - what can I do?**
With help from Jaugar Chang I found the solution to my problem. The code below produces the following result:

```
Description       AbsenceID
----------------  ---------
Education         19
Education         18
Holiday           8
Illness           14
Illness           15
Leave of Absence  20
```

``` WITH links AS ( SELECT Description, SUBSTRING(AbsenceLink,0,CHARINDEX('/',AbsenceLink, 0)) AS AbsenceID, CASE WHEN CHARINDEX(' ',AbsenceLink, 0) > 0 THEN SUBSTRING(AbsenceLink,CHARINDEX(' ',AbsenceLink, 0)+1,255) ELSE '' END AS linkAbsNr FROM TableA UNION ALL SELECT Description, SUBSTRING(linkAbsNr,0,CHARINDEX('/',linkAbsNr, 0)) AS AbsenceID, CASE WHEN CHARINDEX(' ',linkAbsNr, 0) > 0 THEN SUBSTRING(linkAbsNr,CHARINDEX(' ',linkAbsNr, 0)+1,255) ELSE '' END AS RightAbsNr FROM links where linkAbsNr <> '' ) SELECT Description, AbsenceID FROM links ORDER BY Description ```
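What the recursive CTE computes — peel off one `id/part` token per iteration and keep the digits before the slash — can be sanity-checked outside the database. A sketch in Python using a sample value from the question (the asker of course still needs the SQL version, since the data can't be moved):

```python
def absence_ids(absence_links: str):
    # split() with no argument collapses any run of spaces,
    # which handles the stray padding the asker described.
    return [token.split('/')[0] for token in absence_links.split()]

print(absence_ids('  14/3   15/9 '))  # ['14', '15']
print(absence_ids('20/3'))            # ['20']
```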
## UPDATE: Method with substring and join only. There is a similar question, [expand-comma-separated-values-into-separate-rows](https://stackoverflow.com/a/703125/3630826), answered by @KM. that we could reference. Your case is a little different from that one: you should take all the parts preceded by some spaces and followed by a slash. That's a little harder than splitting a string on a single character, but you can divide it into several steps to solve it. * **Extract every part stopped by '/' into rows** + The method of @KM. works well for doing this. * **Take the last part with spaces ahead.** + There may be spaces before the last '/', so we need to find the position of the last space after trimming the spaces before the last '/'. For that we use the trick of reverse, like this: `charindex(' ', reverse(rtrim(left(AbsenceLink,number-1))),0)`. Here is the result: `SQLFiddle` ``` with numbers as (select 1 as number union all select number +1 from numbers where number<100) select Description , right(rtrim(left(AbsenceLink,number-1)),charindex(' ', reverse(rtrim(left(AbsenceLink,number-1))),0)) as AbsenceID from (select Description, ' '+AbsenceLink as AbsenceLink from t) as t1 left join numbers on number<= len(AbsenceLink) where substring(AbsenceLink,number,1)='/' ``` **NOTE:** * If you care about performance, creating a permanent `numbers` table instead of a temp table may be helpful. * There is a more specific explanation in [this answer](https://stackoverflow.com/a/25624014/3630826). 
## Method with CTE recursive query `SqlFiddle` ``` with links as (select Description, substring(AbsenceLink,1,charindex('/',AbsenceLink, 0)-1) as AbsenceID, case when charindex(' ',AbsenceLink, 0) > 0 then substring(AbsenceLink,charindex(' ',AbsenceLink, 0)+1,255) else '' end as left_links from (select convert(varchar(255),ltrim(rtrim(substring(AbsenceLink,1,charindex('/',AbsenceLink, 0)-1) )) + '/' +ltrim(substring(AbsenceLink,charindex('/',AbsenceLink, 0)+1, 255) ) )as AbsenceLink, Description from t) as t1 union all select Description, substring(left_links,1,charindex('/',left_links, 0)-1) as AbsenceID, case when charindex(' ',left_links, 0) > 0 then substring(left_links,charindex(' ',left_links, 0)+1,255) else '' end as left_links from (select convert(varchar(255),ltrim(rtrim(substring(left_links,1,charindex('/',left_links, 0)-1) )) + '/' +ltrim(substring(left_links,charindex('/',left_links, 0)+1, 255) ) )as left_links, Description from links where left_links<>'') as t2 ) select * from links order by Description ``` * With a recursive CTE table, we get the first link's absence id in every iteration until all links are processed. * If there is more than one space before or after `/`, `ltrim(rtrim(substring(AbsenceLink,1,charindex('/',AbsenceLink, 0)-1) )) + '/' + ltrim(substring(AbsenceLink,charindex('/',AbsenceLink, 0)+1, 255) )` can get rid of all the spaces before or after the first `/` in each recursive step. * To convert varchar to integer, you can refer to the [SQL Server documentation here](http://msdn.microsoft.com/en-us/library/ms187928.aspx) about the convert function. Both methods get the same result, like this: ``` DESCRIPTION ABSENCEID Illness 14 Illness 15 Education 18 Education 19 Holiday 8 Leave of Absence 20 ```
How to extract and split IDs from nvarchar column surrounded by spaces and a slash to compare to other table's value
[ "", "sql", "sql-server-2008", "" ]
I have a table that contains Phone# like this: ``` Phone# ------------------ 1234567890 2548796584 9999 2126543133 9999999999 999999 999 9853 9999999 ``` Now I want a query that will return rows having only number 9 in it, in this case result should be. ``` Phone# ------------------ 9999 9999999999 999999 999 9999999 ``` I have heard about regex in Oracle but not in Sql Server. I'm using SQL server 2008 R2, can anyone help me please, thanks in advance.
Here is a much simpler way than using CLR for this. The reason this LIKE pattern works is that we are finding rows that do NOT contain any character that is not 9. The caret ( ^ ) inside the brackets means NOT. ``` create table #Phones ( Num varchar(20) ) insert #Phones select '1234567890' union all select '2548796584' union all select '9999' union all select '2126543133' union all select '9999999999' union all select '999999' union all select '999' union all select '9853' union all select '9999999' union all select '12345' select * from #Phones where Num not like '%[^9]%' ```
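To see why the double negative works: `'%[^9]%'` matches any value containing at least one non-9 character, so `NOT LIKE` keeps exactly the all-9 strings. The same predicate expressed in Python, run against the question's sample data (a sketch for verification only — the bracket class is a T-SQL extension to LIKE, not standard SQL):

```python
def only_nines(s: str) -> bool:
    # True when the string is non-empty and every character is '9'.
    return bool(s) and set(s) == {'9'}

nums = ['1234567890', '2548796584', '9999', '2126543133',
        '9999999999', '999999', '999', '9853', '9999999']
kept = [n for n in nums if only_nines(n)]
print(kept)  # ['9999', '9999999999', '999999', '999', '9999999']
```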
You can use a query like this: ``` select * from Phones where 1 = dbo.RegExpLike(number, '^9+$') ``` You can check more examples about using regex here: <http://www.codeproject.com/Articles/42764/Regular-Expressions-in-MS-SQL-Server>
SQL Server Regex Select
[ "", "sql", "sql-server", "regex", "sql-server-2008", "" ]
I am working on a project and need to join results from 2 tables into one set. The tables are structured as such: ``` gameData: [Id, TeamID, data..........] players: [Id (same as above), name, data.....] ``` I need to do something like: ``` SELECT * FROM gameData and SELECT data FROM players WHERE gameData.Id = players.Id ``` And here is what I have thus far: ``` SELECT * FROM gameData AS A LEFT OUTER JOIN players AS B on A.playerID = B.Id; ``` I want it to return all of the values from A, and only the data from B. I know that the syntax is not correct; I have little experience working with SQL joins, so any advice would be greatly appreciated! Edit: Trying both answers now. Thanks! Edit2: Can I do something like: "Select a.\* from tableA as a"
``` Select a.Id, a.TeamID, a.data, b.data FROM gameData as a LEFT OUTER JOIN players b On a.ID = b.ID ```
You could enumerate the fields that you select and alias the tables, like: ``` select a.Id, a.TeamId, a.data, b.data from tableA a join tableB b on a.Id = b.Id ```
Please help - SQL Join query
[ "", "sql", "sqlite", "" ]
I am trying the query below, but I am getting an error like "incorrect syntax" at `SqlDataReader reader = command.ExecuteReader()`. Please tell me where I am wrong. ``` string querystring = "SELECT ImageName FROM dbo.ProfilePic WHERE UserName = @UserName & IsActive = 'Y' order by ID Desc"; SqlCommand command = new SqlCommand(querystring, con); command.Parameters.AddWithValue("@UserName", Session["UserName"].ToString()); con.Open(); SqlDataReader reader = command.ExecuteReader(); while (reader.Read()) { Console.WriteLine(String.Format("{0}", reader[0])); ViewBag.ProfilePic = reader[0]; } reader.Close(); ``` Updated with all suggested answers.
SQL doesn't need the double equal signs. ``` SELECT ImageName FROM dbo.ProfilePic WHERE Username = @UserName ```
In `SQL` you don't need to use a double equal sign to compare values, and `&&` is wrong. So this is wrong: ``` string querystring = "select ImageName from dbo.ProfilePic where UserName == @UserName && IsActive == 'Y' order by ID descending"; ``` and this is correct: ``` string querystring = "select ImageName from dbo.ProfilePic where UserName = @UserName AND IsActive = 'Y' order by ID desc"; ```
sql query error incorrect syntax
[ "", "sql", "" ]
I am trying to create a view that will point to the most recent version of a table. For instance, if there are two tables, tblZipCode2014 and tblZipCode2013, I would want the view to return everything in tblZipCode2014. Then, if a new table called tblZipCode2015 is created the view would return everything in tblZipCode2015. This way a user could always use the view to get the current Zip Code information. The following code should get the most recent table name, but I can't figure out how to get the view to reference this table dynamically. > `SELECT name > FROM ( SELECT name,ROW_NUMBER() OVER(ORDER BY name desc) AS RowNum > FROM Geospatial.sys.tables > WHERE name like 'tblZipCode____') AS x > WHERE RowNum = 1` Any ideas on how to do this?
Why are you storing similar data in multiple tables? You should have one table `tblZipCode` that has a column for the year. Then you could just do: ``` create view v_tblZipCode select * from tblZipCode where year = (select max(year) from tblZipcode); ``` (The subquery would actually be fast using an index.) But, to be honest, you can solve your particular problem by defining a view or synonym in a regularly scheduled job. This job would use dynamic SQL to get the most recent table and then define the view for the users. I think something like this would work: ``` declare @sql nvarchar(max); select top 1 @sql = 'create view myview as select * from sys.' + table_name from information_schema.tables t where table_name like 'tblZipCode____' and schema_name = 'sys' order by table_name desc; exec(@sql); ``` Just put this in a regularly scheduled job, and the most recent table will get used.
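The scheduled-job idea — look up the newest table name in the catalog, then build the `CREATE VIEW` statement as a string — can be sketched in any engine. Here it is against SQLite via Python (hypothetical table contents; in SQLite the catalog is `sqlite_master` rather than `sys.tables`/`information_schema`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for year in (2013, 2014):
    conn.execute(f"CREATE TABLE tblZipCode{year} (zip TEXT)")
    conn.execute(f"INSERT INTO tblZipCode{year} VALUES ('{year}-data')")

# Find the most recent tblZipCodeNNNN table, then define the view dynamically.
latest = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' "
    "AND name LIKE 'tblZipCode____' ORDER BY name DESC LIMIT 1").fetchone()[0]
conn.execute(f"CREATE VIEW vZipCode AS SELECT * FROM {latest}")

print(latest)  # tblZipCode2014
print(conn.execute("SELECT zip FROM vZipCode").fetchone()[0])  # 2014-data
```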
This is not possible to do in a view. You cannot make a table name dynamic in a select statement unless you are using dynamic sql and you cannot use dynamic SQL in a view, because you cannot use the exec command. You might try something like this instead: * Create a table called tblZipCodeCurrent (create a synonym or view to change the name to something users would prefer) * When you are adding a new table, change the name of the old one to have the year and add the new one as tblZipCodeCurrent
Creating a view which references the most recent version of a table?
[ "", "sql", "sql-server", "" ]
I am sure this questions has been answered before, but I have no idea what to search for. I have 2 tables (coming from Wordpress), Users and User Meta Data. I need to create a query that takes the Meta Data and includes that with the other user data. I am developing independent solution in MS Access that will use WP data, so I cannot use any Wordpress specific functions. Basically, I need to take these 2 tables: ``` user_id | username ------------------- 01 | bob24 02 | james112 meta_id | user_id | meta_key | meta_value ----------------------------------------- 01 | 01 | first_name | Bob 02 | 01 | last_name | Smith 03 | 02 | first_name | James 04 | 02 | last_name | Jones ``` And turn it into this: ``` user_id | username | first_name | last_name ----------------------------------------- 01 | bob24 | Bob | Smith 02 | james112 | James | Jones ``` I am sure there is a word for this, but I don't know what it is called. Can anyone point me in the right direction? Thanks!
Thanks so much for @GolezTrol for pointing me in the right direction. I did need to pivot the data. I searched and Access has this feature. I created a "cross-tab query." Here is the query in case anyone else has the same problem. ``` TRANSFORM Last(meta.meta_value) AS LastOfmeta_value SELECT meta.[user_id], Last(meta.[umeta_id]) AS [Total Of umeta_id] FROM meta GROUP BY meta.[user_id] PIVOT meta.[meta_key]; ```
You are probably looking for *pivot table*, which is a slow and complex process in MySQL. If you like to read more about that, you can read [this question](https://stackoverflow.com/questions/7674786/mysql-pivot-table). But since you seem to have exactly two fields, maybe a join will do just fine for you. To do it with a join, join the meta table twice. Once for the first name and once for the last name: ``` select u.user_id, u.username, fn.meta_value as first_name, ln.meta_value as last_name from User u left join Meta fn on fn.user_id = u.user_id and fn.meta_key = 'first_name' left join Meta ln on ln.user_id = u.user_id and ln.meta_key = 'last_name' ``` If you want to have only those users that do indeed have a last name, you can change the join to an inner join, which might be slightly faster too.
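Both the cross-tab and this double-join approach produce the flattened shape the asker wanted. A runnable sketch of the join version (SQLite via Python, using the question's sample rows rather than the real WordPress schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INT, username TEXT)")
conn.execute("CREATE TABLE meta (meta_id INT, user_id INT, meta_key TEXT, meta_value TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 'bob24'), (2, 'james112')])
conn.executemany("INSERT INTO meta VALUES (?, ?, ?, ?)",
                 [(1, 1, 'first_name', 'Bob'), (2, 1, 'last_name', 'Smith'),
                  (3, 2, 'first_name', 'James'), (4, 2, 'last_name', 'Jones')])

# Join the meta table once per attribute we want as a column.
rows = conn.execute("""
    SELECT u.user_id, u.username, fn.meta_value, ln.meta_value
    FROM users u
    LEFT JOIN meta fn ON fn.user_id = u.user_id AND fn.meta_key = 'first_name'
    LEFT JOIN meta ln ON ln.user_id = u.user_id AND ln.meta_key = 'last_name'
    ORDER BY u.user_id""").fetchall()
print(rows)  # [(1, 'bob24', 'Bob', 'Smith'), (2, 'james112', 'James', 'Jones')]
```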
Database Query - One to Many
[ "", "sql", "ms-access", "" ]
I have a stored procedure with a nested query that checks whether "`category`" from the main table matches a "`category`" in a sub table. So there can either be one match or none. How can I return Yes if there is a match and the sub query returns something and No if there is no match and the sub query returns nothing ? I tried the following which works in general but only if there is a match as otherwise this returns nothing. **My SQL (shortened):** ``` SELECT A.categoryID, A.category, A.[description], ( SELECT 'Yes' AS subscribed FROM MOC_Categories_Subscribers D WHERE D.category = A.category FOR XML PATH(''), ELEMENTS, TYPE ) FROM MOC_Categories A ```
If subquery doesn't return any rows then your result will be NULL. Thus you need to check it. In SQL Server you can do this by using functions [`ISNULL`](http://msdn.microsoft.com/en-us/library/ms184325.aspx) and [`COALESCE`](http://msdn.microsoft.com/en-us/library/ms190349.aspx), it depends on version that you're using ``` SELECT A.categoryID, A.category, A.[description], COALESCE((SELECT TOP 1 'Yes' FROM MOC_Categories_Subscribers D WHERE D.category = A.category), 'No') AS Result FROM MOC_Categories A ```
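The same `COALESCE`-over-a-scalar-subquery pattern, demonstrated end to end in SQLite via Python (table and column names shortened from the question; `LIMIT 1` plays the role of `TOP 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories (category TEXT)")
conn.execute("CREATE TABLE subscribers (category TEXT)")
conn.executemany("INSERT INTO categories VALUES (?)", [('news',), ('sport',)])
conn.execute("INSERT INTO subscribers VALUES ('news')")

# An empty scalar subquery yields NULL, which COALESCE turns into 'No'.
rows = conn.execute("""
    SELECT a.category,
           COALESCE((SELECT 'Yes' FROM subscribers d
                     WHERE d.category = a.category LIMIT 1), 'No')
    FROM categories a ORDER BY a.category""").fetchall()
print(rows)  # [('news', 'Yes'), ('sport', 'No')]
```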
``` SELECT A.categoryID, A.category, A.[description], ( SELECT case when count(subscribed) > 0 then 'Yes' else 'No' end FROM MOC_Categories_Subscribers D WHERE D.category = A.category ) FROM MOC_Categories A ```
How to return Yes or No if nested query has result or not in SQL Server?
[ "", "sql", "sql-server", "stored-procedures", "subquery", "" ]
I would like to do an SQL request like this: ``` SELECT p.ID, p.post_title FROM ".$wpdb->prefix."posts AS p INNER JOIN ".$wpdb->prefix."postmeta pm ON pm.post_id=p.ID AND pm.meta_key='shops_list' INNER JOIN ".$wpdb->prefix."follow AS f ON pm.meta_value LIKE CONCAT('%',f.shop_followed_id,'%') ``` `shop_followed_id` is the column name. It doesn't work, but when I test with `CONCAT('%',163,'%')`, it worked. Do you know where the syntax error is? Example of meta_value: `a:3:{i:0;s:3:"168";i:1;s:2:"22";i:2;s:2:"43";}` Example of shop_followed_id: `168`
It looks to me like you got the LIKE backwards. Try: ``` INNER JOIN ".$wpdb->prefix."follow AS f ON f.shop_followed_id LIKE CONCAT('%',pm.meta_value,'%') ```
Use casting for `f.shop_followed_id`: ``` CONCAT('%', CAST(f.shop_followed_id AS VARCHAR), '%') ``` Maybe this will work.
SQL concat with columns name
[ "", "mysql", "sql", "concatenation", "" ]
I am an SQL newbie and I'm struggling with the following (simplified) query. It must extract an account number from a master file (Master) and the associated transactions from a Current table (nTrans) and a History table (nTransArc) using the common account number fields. This works fine. The part I can't get to work is that it also needs to extract VAT transactions from the VatTrans (Current) and VatTransArc (History) tables using a link between the TranCode in the nTrans/nTransArc tables and the VTranCode in the VatTrans/VatTransArc tables. This is the code in red between the lines (and the later INNER JOIN). Can anyone help? ``` SELECT Mast.MAccNo, Tran.TAccNo,Tran.TranCode, Vat.VTranCode,Vat.TaxCode FROM (SELECT AccNo,TranCode FROM nTrans UNION ALL SELECT AccNo,TranCode FROM nTransArc) '-------------------------------------------------------' FROM (SELECT VTranCode,TaxCode FROM VatTrans UNION ALL SELECT VTranCode,TaxCode FROM VatTransArc) '-------------------------------------------------------' Tran JOIN Master Mast ON Tran.TAccNo = Mast.MAccNo INNER JOIN Vat ON Vat.VTranCode = Tran.TranCode ORDER BY Mast.MAccNo ```
my best guess: ``` SELECT m.MAccNo, t.TranCode, v.TaxCode, vt.taxcode FROM Master m inner JOIN (SELECT AccNo,TranCode FROM nTrans UNION ALL SELECT AccNo,TranCode FROM nTransArc) t ON m.MAccNo = t.TAccNo INNER JOIN (select TaxCode, VTranCode from Vat union all SELECT TaxCode, VTranCode FROM VatTrans UNION ALL SELECT TaxCode, VTranCode FROM VatTransArc) v ON v.VTranCode = t.TranCode ORDER BY m.MAccNo ```
As I read your question I believe the following will get you what you want: ``` SELECT m.MAccNo , t.TAccNo , t.TranCode , v.VTranCode , v.TaxCode FROM Master m INNER JOIN nTrans t ON t.TAccNo = m.MAccNo INNER JOIN VatTrans v ON v.VTranCode = t.TranCode UNION ALL SELECT m.MAccNo , t.TAccNo , t.TranCode , v.VTranCode , v.TaxCode FROM Master m INNER JOIN nTransArc t ON t.TAccNo = m.MAccNo INNER JOIN VatTransArc v ON v.VTranCode = t.TranCode ORDER BY m.MAccNo ``` This gets all your current data in one query and then gets all your historical data in the second query and then combines the results.
SQL Server: multiple SELECTs in a single query
[ "", "sql", "sql-server", "" ]
This is my first question since every question I have ever had has already had an answer on here. please forgive the poor formatting. The query runs in 1ms by itself which is great. It produces about 600,000 results from about 3 million entries while the database is getting inserted into about 10 per second. I know this is not very much for a database so I assume load is not an issue. I have other large queries that insert just fine into a file. This one specifically, when "SELECT \* INTO OUTFILE" is added, runs in about 11 hours. This is way too long for the query to run and I have no idea why. Table: container\_table -`Primary Key: containerID(bigint), mapID(int), cavityID(int)` -`Index: timestamp(datetime)` Table: cont\_meas\_table -`Primary Key: containerID(bigint), box(int), probe(int), inspectionID(int), measurementID(int)` Table: cavity\_map -`Primary Key: mapID(int), gob(char), section(int), cavity(int)` Query: ``` (SELECT 'containerID','timestamp','mapID','lineID','fp','fpSequence','pocket','cavityID', 'location','inspResult', 'otgMinThickMeasValuePrb2_1','otgMaxThickMeasValuePrb2_1','RatioPrb2_1','otgOORMeasValuePrb2_1', 'otgMinThickMeasValuePrb2_2','otgMaxThickMeasValuePrb2_2','RatioPrb2_2','otgOORMeasValuePrb2_2', 'otgMinThickMeasValuePrb2_3','otgMaxThickMeasValuePrb2_3','RatioPrb2_3') UNION (SELECT * INTO OUTFILE 'testcsv.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' FROM (SELECT containerID, timestamp, groupmeas.mapID, lineID, fp, fpSequence, pocket, cavityID, CONCAT(MIN(section), MIN(gob)) AS location, inspResult, otgMinThickMeasValuePrb2_1, otgMaxThickMeasValuePrb2_1, (COALESCE(otgMaxThickMeasValuePrb2_1/NULLIF(CAST(otgMinThickMeasValuePrb2_1 AS DECIMAL(10,5)), 0), 0)) AS RatioPrb2_1, otgOORMeasValuePrb2_1, otgMinThickMeasValuePrb2_2, otgMaxThickMeasValuePrb2_2, (COALESCE(otgMaxThickMeasValuePrb2_2/NULLIF(CAST(otgMinThickMeasValuePrb2_2 AS DECIMAL(10,5)), 0), 0)) AS RatioPrb2_2, otgOORMeasValuePrb2_2, 
otgMinThickMeasValuePrb2_3, otgMaxThickMeasValuePrb2_3, (COALESCE(otgMaxThickMeasValuePrb2_3/NULLIF(CAST(otgMinThickMeasValuePrb2_3 AS DECIMAL(10,5)), 0), 0)) AS RatioPrb2_3 FROM (SELECT dbad.container_table.containerID, dbad.container_table.timestamp, dbad.container_table.mapID, dbad.container_table.lineID, dbad.container_table.fp, dbad.container_table.fpSequence, dbad.container_table.pocket, dbad.container_table.cavityID, dbad.container_table.inspResult, CASE WHEN aggMeas.otgMinThickMeasValuePrb2_1 IS NULL THEN - 1 ELSE aggMeas.otgMinThickMeasValuePrb2_1 END AS otgMinThickMeasValuePrb2_1, CASE WHEN aggMeas.otgMaxThickMeasValuePrb2_1 IS NULL THEN - 1 ELSE aggMeas.otgMaxThickMeasValuePrb2_1 END AS otgMaxThickMeasValuePrb2_1, CASE WHEN aggMeas.otgOORMeasValuePrb2_1 IS NULL THEN - 1 ELSE aggMeas.otgOORMeasValuePrb2_1 END AS otgOORMeasValuePrb2_1, CASE WHEN aggMeas.otgMinThickMeasValuePrb2_2 IS NULL THEN - 1 ELSE aggMeas.otgMinThickMeasValuePrb2_2 END AS otgMinThickMeasValuePrb2_2, CASE WHEN aggMeas.otgMaxThickMeasValuePrb2_2 IS NULL THEN - 1 ELSE aggMeas.otgMaxThickMeasValuePrb2_2 END AS otgMaxThickMeasValuePrb2_2, CASE WHEN aggMeas.otgOORMeasValuePrb2_2 IS NULL THEN - 1 ELSE aggMeas.otgOORMeasValuePrb2_2 END AS otgOORMeasValuePrb2_2, CASE WHEN aggMeas.otgMinThickMeasValuePrb2_3 IS NULL THEN - 1 ELSE aggMeas.otgMinThickMeasValuePrb2_3 END AS otgMinThickMeasValuePrb2_3, CASE WHEN aggMeas.otgMaxThickMeasValuePrb2_3 IS NULL THEN - 1 ELSE aggMeas.otgMaxThickMeasValuePrb2_3 END AS otgMaxThickMeasValuePrb2_3, CASE WHEN aggMeas.otgOORMeasValuePrb2_3 IS NULL THEN - 1 ELSE aggMeas.otgOORMeasValuePrb2_3 END AS otgOORMeasValuePrb2_3 FROM dbad.container_table LEFT OUTER JOIN (SELECT containerID, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 0) AND (meas.probe = 0) THEN meas.value END), - 1) AS otgMinThickMeasValuePrb2_1, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 1) AND (meas.probe = 0) THEN meas.value END), - 1) AS 
otgMaxThickMeasValuePrb2_1, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 2) AND (meas.probe = 0) THEN meas.value END), - 1) AS otgOORMeasValuePrb2_1, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 0) AND (meas.probe = 1) THEN meas.value END), - 1) AS otgMinThickMeasValuePrb2_2, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 1) AND (meas.probe = 1) THEN meas.value END), - 1) AS otgMaxThickMeasValuePrb2_2, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 2) AND (meas.probe = 1) THEN meas.value END), - 1) AS otgOORMeasValuePrb2_2, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 0) AND (meas.probe = 2) THEN meas.value END), - 1) AS otgMinThickMeasValuePrb2_3, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 1) AND (meas.probe = 2) THEN meas.value END), - 1) AS otgMaxThickMeasValuePrb2_3, COALESCE(MIN(CASE WHEN (meas.inspectionID = 1) AND (meas.measurementID = 2) AND (meas.probe = 2) THEN meas.value END), - 1) AS otgOORMeasValuePrb2_3 FROM (SELECT containerID, inspectionID, measurementID, probe, value, threshold, calibration FROM dbad.cont_meas_table AS a) AS meas GROUP BY containerID) AS aggMeas ON dbad.container_table.containerID = aggMeas.containerID) AS groupmeas INNER JOIN dbad.cavity_map ON groupmeas.mapID=dbad.cavity_map.mapID AND groupmeas.cavityID=dbad.cavity_map.cavity WHERE timestamp LIKE '2014-08-29%' AND otgMinThickMeasValuePrb2_1 BETWEEN 1 AND 499 AND otgMinThickMeasValuePrb2_2 BETWEEN 1 AND 499 AND otgMinThickMeasValuePrb2_3 BETWEEN 1 AND 499 AND otgMaxThickMeasValuePrb2_1 BETWEEN 1 AND 499 AND otgMaxThickMeasValuePrb2_2 BETWEEN 1 AND 499 AND otgMaxThickMeasValuePrb2_3 BETWEEN 1 AND 499 GROUP BY containerID) AS outside) ``` I have gotten rid of any `COUNT()` or `DISTINCT` and removed the leading '%' in my `WHERE timestamp LIKE '2014-08-29%'` so that timestamp's index can be used. 
Unfortunately, this hasn't helped.

EDIT: After adding `WHERE timestamp >= '2014-08-29' AND timestamp < '2014-08-29' + INTERVAL 1 DAY`, the query actually takes longer. I know this shouldn't be the case, so I must be doing something terribly wrong in this query.
Just to make sure your database is properly configured to process this kind of workload, run the open source tool mysqltuner and look at the suggestions. Your problem description sounds like you possibly want different tmp\_table\_size and max\_heap\_table\_size in my.cnf You can find the tool here: <https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl>
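If the tuner does flag temporary-table spills to disk, the two variables mentioned above can also be adjusted at runtime for a quick experiment. This is only a sketch with hypothetical sizes — pick values that fit your server's available memory:

```
-- Hypothetical sizes; MySQL uses the smaller of the two for implicit
-- in-memory temporary tables, so raise them together.
SET GLOBAL tmp_table_size      = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;

-- Confirm the running values:
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';
```

Note that `SET GLOBAL` only affects new connections and does not survive a server restart; to make a change permanent it would also go in my.cnf.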
One thing jumps up and smacks me in the face here: ``` WHERE timestamp LIKE '2014-08-29%' /* slow! */ ``` This defeats the use of the index on the `timestamp` column, because it implicitly casts `timestamp` to a string. Try using this instead: ``` WHERE timestamp >= '2014-08-29' AND timestamp < '2014-08-29' + INTERVAL 1 DAY ``` That will allow the query to use an index range scan on the `timestamp` which may help a lot. It works because it casts the constant dates to the same datatype as `timestamp` instead of the other way around. The point of an index is to avoid what's called a full table scan, in which the MySQL server has to romp through every row of your table looking for matching data. Omitting a `WHERE` clause also makes the server look at every row of a table.
MySQL query slows way down when inserting into file
[ "", "mysql", "sql", "performance", "optimization", "query-optimization", "" ]
I have frequently used the `integer = ANY(integer[])` syntax, but now the ANY operator doesn't work. This is the first time I have used it to compare a scalar with an integer returned from a CTE, but I thought this shouldn't cause problems. My query:

```
WITH bar AS (
    SELECT array_agg(b) AS bs FROM foo WHERE c < 3
)
SELECT a FROM foo
WHERE b = ANY (SELECT bs FROM bar);
```

When I run it, it throws the following error:

> ERROR: operator does not exist: integer = integer[]: WITH bar AS ( SELECT array\_agg(b) AS bs FROM foo WHERE c < 3 ) SELECT a FROM foo WHERE b = ANY ( SELECT bs FROM bar)

Details in this [SQL Fiddle](http://sqlfiddle.com/#!15/71479/2). So what am I doing wrong?
Based on the error message portion `operator does not exist: integer = integer[]`, it appears that the `bs` column needs to be `unnest`ed, in order to get the right hand side back to an `integer` so the comparison operator can be found: ``` WITH bar AS ( SELECT array_agg(b) AS bs FROM foo WHERE c < 3 ) SELECT a FROM foo WHERE b = ANY ( SELECT unnest(bs) FROM bar); ``` This results in the output: ``` A 2 3 ``` Given the doc for the [ANY function](http://www.postgresql.org/docs/current/static/functions-subquery.html#FUNCTIONS-SUBQUERY-ANY-SOME): > The right-hand side is a parenthesized subquery, which must return > exactly one column. The left-hand expression is evaluated and compared > to each row of the subquery result using the given operator, which > must yield a Boolean result. The result of ANY is "true" if any true > result is obtained. The result is "false" if no true result is found > (including the case where the subquery returns no rows). ... the error makes sense, as the left-hand expression is an `integer` -- column `b` -- while the right-hand expression is an array of `integer`s, or `integer[]`, and so the comparison ends up being of the form `integer` = `integer[]`, which doesn't have an operator, and therefore results in the error. `unnest`ing the `integer[]` value makes the left- and right-hand expressions `integer`s, and so the comparison can continue. Modified [SQL Fiddle](http://sqlfiddle.com/#!15/71479/14/0). **Note:** that the same behavior is seen when using `IN` instead of `= ANY`.
without unnest ``` WITH bar AS ( SELECT array_agg(b) AS bs FROM foo WHERE c < 3 ) SELECT a FROM foo WHERE ( SELECT b = ANY (bs) FROM bar); ```
Operator does not exist: integer = integer[] in a query with ANY
[ "", "sql", "postgresql", "" ]
I have a table in SQLite:

```
/* Create a table called EVENTS */
CREATE TABLE EVENTS(Id integer, Eventtype integer, value integer, Timestamp DATETIME);

/* Create a few records in this table */
INSERT INTO EVENTS VALUES(1,2,1,'2009-01-01 10:00:00'); --ROW1
INSERT INTO EVENTS VALUES(1,2,2,'2007-01-01 10:00:00'); --ROW2
INSERT INTO EVENTS VALUES(2,2,3,'2008-01-01 10:00:00'); --ROW3
```

What is needed from the query is ROW1 and ROW3. The query should take the latest row, based on Timestamp, for each duplicate Id & Eventtype combination. ROW1 and ROW2 have the same Eventtype and Id, but ROW1 is the latest, so it should be picked.
In SQLite 3.7.11 or later, you can use GROUP BY with MAX() to select which row in a group to return:

```
SELECT *, MAX(timestamp)
FROM events
GROUP BY id, eventtype
```

In earlier versions, you have to look up some unique ID of the largest row in a group with a subquery (as in your answer).
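A sketch of that subquery approach, using the table from the question (note that if two rows in a group tie on the maximum Timestamp, both are returned):

```
-- Keep each row whose Timestamp equals the maximum for its (Id, Eventtype) group.
SELECT e.*
FROM EVENTS e
WHERE e.Timestamp = (SELECT MAX(e2.Timestamp)
                     FROM EVENTS e2
                     WHERE e2.Id = e.Id
                       AND e2.Eventtype = e.Eventtype);
```

With the sample data this returns ROW1 and ROW3, and it works on any SQLite version.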
I'm a bit late to this question, but I wasn't satisfied with the current answers, as they mostly use [correlated subqueries](https://en.wikipedia.org/wiki/Correlated_subquery), which can seriously ruin performance. In many situations, single-column analytical functions can be simulated using a standard join:

```
SELECT e.*
FROM events e
JOIN (
    -- Our simulated window with analytical result
    SELECT id, eventtype, MAX(timestamp) AS timestamp
    FROM events
    GROUP BY id, eventtype
) win USING (id, eventtype, timestamp)
```

In general, the pattern is:

```
SELECT main.*
FROM main
JOIN (
    SELECT partition_columns, FUNCTION(analyzed_column) AS analyzed_column
    FROM main
    GROUP BY partition_columns
) win USING (partition_columns, analyzed_column)
```

These simulated windows aren't perfect:

1. If your data has ties for your analyzed column result, then you may need to remove duplicates from your result set. Otherwise you'll select *every* row from your partition that matches your analyzed column result.
2. If your analytical function requires ordering by more than one column, you will need to use correlated subqueries instead.

The other answers can be modified to achieve the desired result.
SQLITE equivalent of ROW_NUMBER
[ "", "sql", "sqlite", "greatest-n-per-group", "" ]
I have a temp table where I have changed some values in the database, and I am trying to insert these back into the real table, but it is not working. I have this below:

```
SELECT * INTO dbo.PackageProduct FROM #PackageP
```

where #PackageP is my temp table. I am getting this error after the second run:

```
There is already an object named 'PackageProduct' in the database.
```
If the table already exists, you have to use `INSERT`, as `SELECT...INTO` creates a new table:

```
INSERT dbo.PackageProduct (<columns...>)
SELECT (<columns...>)
FROM #PackageP
```
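Filled in with purely hypothetical column names (the real list has to match the actual `PackageProduct` definition), the statement would look something like:

```
-- Column names here are illustrative only; use the real table's columns.
INSERT INTO dbo.PackageProduct (PackageId, ProductId, Quantity)
SELECT PackageId, ProductId, Quantity
FROM #PackageP;
```

Listing the columns explicitly, rather than relying on `SELECT *`, keeps the statement working even if the column order of the two tables ever differs.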
TRY ``` INSERT INTO dbo.PackageProduct SELECT * FROM #PackageP ```
Insert contents from #Temp table into real table
[ "", "sql", "sql-server", "ssms", "" ]
I'm having trouble with something that looks like a simple thing. I'm trying to find the first row that satisfies the WHERE part of the query and UPDATE it.

```
UPDATE Donation SET Available=0 WHERE Available != 0 and BloodGroup='" + bloodGroup + "' LIMIT 1"
```

bloodGroup is a variable that gets filled automatically using C# and holds the string value of the selected blood group. When I try to run this I get "incorrect syntax near 'LIMIT'". What am I doing wrong? Is it possible to use LIMIT in an UPDATE query like this? During debugging I got a query like this:

```
UPDATE Donation SET Available=0 WHERE Available != 0 AND BloodGroup='AB-' LIMIT 1
```
Because C# is often used with SQL Server, perhaps the question is mistagged. The syntax looks fine for MySQL. In SQL Server, you can do this as: ``` UPDATE TOP (1) Donation SET Available = 0 WHERE Available <> 0 AND BloodGroup = 'AB-'; ``` Note that this chooses an arbitrary matching row, as does your original query (there is no `order by`).
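If "first" should be deterministic rather than arbitrary, SQL Server also allows updating through a CTE that combines `TOP` with `ORDER BY` — a sketch, assuming a hypothetical `Id` column defines the desired order:

```
WITH FirstMatch AS (
    SELECT TOP (1) Available
    FROM Donation
    WHERE Available <> 0
      AND BloodGroup = 'AB-'
    ORDER BY Id   -- Id is an assumed column; order by whatever defines "first"
)
UPDATE FirstMatch
SET Available = 0;
```

The update applied to the CTE affects exactly the one underlying row the CTE selects.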
Can you try this? It is a way of getting a row number:

```
UPDATE Donation d1
JOIN (SELECT id, (SELECT @Row:=0) as row, (@Row := @Row + 1) AS row_number
      FROM Donation
      WHERE Available <> 0 AND BloodGroup='AB-') d2 ON d1.id=d2.id
SET d1.Available='three'
WHERE d1.Available <> 0 AND d1.BloodGroup='AB-' AND d2.row_number='1'
```
Update first row in SQL using LIMIT
[ "", "mysql", "sql", "limit", "" ]
I have a column stop\_execution\_date in the sysjobactivity table. I need to get details only of jobs which ran after 4:10 AM. I tried using:

```
cast(DATEPART(hour, sja.stop_execution_date) as varchar)+cast(DATEPART(minute, sja.stop_execution_date) as varchar) > 410
```

But if a job completes at 9:05 AM it is not accepted, because datepart(hour) is 9 and datepart(minute) is 5. By using + we get 95 instead of 905, which is less than 410. Can you please suggest a good way to do this?
How about:

```
(DATEPART(hour, sja.stop_execution_date) = 4
 AND DATEPART(minute, sja.stop_execution_date) > 10)
OR DATEPART(hour, sja.stop_execution_date) > 4
```
Could cast as a time. ``` cast(sja.stop_execution_date as time) > '04:10:00.0000000' ```
How can I compare only hour and minute in sql
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2008-r2", "" ]
This is my table/ source data, ``` |---------------------------------------------------| |ID | DT | DAY | ATTENDANCE | |-------+---------------+-----------+---------------| |89 | 2014-08-23 | NULL | 1 | |90 | 2014-08-24 | Sunday | NULL | |91 | 2014-08-25 | NULL | 1 | |92 | 2014-08-26 | NULL | 1 | |93 | 2014-08-27 | NULL | 0 | |94 | 2014-08-28 | NULL | 1 | |95 | 2014-08-29 | NULL | 0 | |96 | 2014-08-30 | NULL | 1 | |97 | 2014-08-31 | Sunday | NULL | |98 | 2014-08-01 | NULL | 1 | |99 | 2014-08-02 | NULL | 1 | |100 | 2014-08-03 | NULL | 0 | |101 | 2014-08-04 | NULL | 0 | |102 | 2014-08-05 | NULL | 1 | |103 | 2014-08-06 | NULL | 1 | |104 | 2014-08-07 | Sunday | NULL | |105 | 2014-08-08 | NULL | 1 | |106 | 2014-08-09 | NULL | 1 | |107 | 2014-08-10 | NULL | 1 | |---------------------------------------------------| ``` I want a result as given. The 5th column `[Streak]` is what I am trying to calculate. It is a value calculated based on attendance. At any day, If `[ATTENDANCE] = 0`, `[Streak]` becomes reset to 0. ``` |-------------------------------------------------------------| |ID | DT | DAY | ATTENDANCE | Streak | |-------+---------------+-----------+---------------+---------| |89 | 2014-08-23 | NULL | 1 | 1 | |90 | 2014-08-24 | Sunday | NULL | | |91 | 2014-08-25 | NULL | 1 | 2 | |92 | 2014-08-26 | NULL | 1 | 3 | |93 | 2014-08-27 | NULL | 0 | 0 | |94 | 2014-08-28 | NULL | 1 | 1 | |95 | 2014-08-29 | NULL | 0 | 0 | |96 | 2014-08-30 | NULL | 1 | 1 | |97 | 2014-08-31 | Sunday | NULL | | |98 | 2014-08-01 | NULL | 1 | 2 | |99 | 2014-08-02 | NULL | 1 | 3 | |100 | 2014-08-03 | NULL | 0 | 0 | |101 | 2014-08-04 | NULL | 0 | 0 | |102 | 2014-08-05 | NULL | 1 | 1 | |103 | 2014-08-06 | NULL | 1 | 2 | |104 | 2014-08-07 | Sunday | NULL | | |105 | 2014-08-08 | NULL | 1 | 3 | |106 | 2014-08-09 | NULL | 1 | 4 | |107 | 2014-08-10 | NULL | 1 | 5 | |-------------------------------------------------------------| ``` This is what I so far did. 
For me, Sunday is also getting incremented as a day. Any help in solving it would be appreciated.

**SQL**

```
SELECT X.*, X.ID - LU.FROMID + 1
FROM @TAB X
LEFT JOIN
(
    SELECT (SELECT MIN(ID) FROM @TAB) FROMID, MIN(ID) TOID
    FROM @TAB
    WHERE ATTENDANCE = 0
    UNION
    SELECT A.ID FROMID, B.ID TOID
    FROM (SELECT ID, ROW_NUMBER() OVER (ORDER BY ID) R FROM @TAB WHERE ATTENDANCE = 0) A
    CROSS JOIN (SELECT ID, ROW_NUMBER() OVER (ORDER BY ID) R FROM @TAB WHERE ATTENDANCE = 0) B
    WHERE A.R = (B.R - 1)
    UNION
    SELECT MAX(ID), (SELECT MAX(ID) FROM @TAB)
    FROM @TAB
    WHERE ATTENDANCE = 0
    UNION
    SELECT MAX(ID), MAX(ID) + 1 FROM @TAB
) LU ON X.ID >= LU.FROMID AND X.ID < LU.TOID
```

**Source Data for testing:**

```
SET DATEFORMAT DMY
DECLARE @TAB TABLE (ID INT IDENTITY(89,1), DT DATE, DAY VARCHAR(15), ATTENDANCE BIT)
INSERT INTO @TAB VALUES
('23-08-2014',Null,1),
('24-08-2014','Sunday',Null),
('25-08-2014',Null,1),
('26-08-2014',Null,1),
('27-08-2014',Null,0),
('28-08-2014',Null,1),
('29-08-2014',Null,0),
('30-08-2014',Null,1),
('31-08-2014','Sunday',Null),
('01-08-2014',Null,1),
('02-08-2014',Null,1),
('03-08-2014',Null,0),
('04-08-2014',Null,1),
('05-08-2014',Null,0),
('06-08-2014',Null,1),
('07-08-2014','Sunday',Null),
('08-08-2014',Null,1),
('09-08-2014',Null,1),
('10-08-2014',Null,1)
```

Thanks in advance.
@HHH, I added another temp table around @TAB. This works; please test it and confirm.

```
DECLARE @TAB2 TABLE (MASTERID INT IDENTITY(1,1), ID INT, DT DATE, DAY VARCHAR(15), ATTENDANCE BIT)

INSERT INTO @TAB2
SELECT * FROM @TAB WHERE DAY IS NULL

SELECT Y.*, LU2.Streak
FROM @TAB Y
LEFT JOIN
(
    SELECT X.ID, X.MASTERID - LU.FROMID + 1 [Streak]
    FROM @TAB2 X
    LEFT JOIN
    (
        SELECT (SELECT MIN(MASTERID) FROM @TAB2) FROMID, MIN(MASTERID) TOID
        FROM @TAB2
        WHERE ATTENDANCE = 0
        UNION
        SELECT A.MASTERID FROMID, B.MASTERID TOID
        FROM (SELECT MASTERID, ROW_NUMBER() OVER (ORDER BY MASTERID) R FROM @TAB2 WHERE ATTENDANCE = 0) A
        CROSS JOIN (SELECT MASTERID, ROW_NUMBER() OVER (ORDER BY MASTERID) R FROM @TAB2 WHERE ATTENDANCE = 0) B
        WHERE A.R = (B.R - 1)
        UNION
        SELECT MAX(MASTERID), (SELECT MAX(MASTERID) FROM @TAB2)
        FROM @TAB2
        WHERE ATTENDANCE = 0
        UNION
        SELECT MAX(MASTERID), MAX(MASTERID) + 1 FROM @TAB2
    ) LU ON X.MASTERID >= LU.FROMID AND X.MASTERID < LU.TOID
) LU2 ON Y.ID = LU2.ID
```

---

RESULT:

![enter image description here](https://i.stack.imgur.com/BrBdj.jpg)
As Gordon said, you need to identify groups of consecutive attendance days. But this is another version:

```
select t.*, t2.Seq,
       (case ATTENDANCE
            when 1 then ROW_NUMBER() over (partition by t2.Seq, t.attendance order by t.id)
            when 0 then 0
            else null
        end) Streak
from @TAB t
outer apply (select count(1) as Seq
             from @tab t2
             where t2.id < t.ID and t2.ATTENDANCE = 0) as t2
order by t.id
```
Calculate Running count in SQL Server excluding certain rows
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I am trying to understand how to UPDATE multiple rows with different values, and I just don't get it. The solution is everywhere, but to me it looks difficult to understand.

For instance, three updates in 1 query:

```
UPDATE table_users
SET cod_user = '622057', date = '12082014'
WHERE user_rol = 'student' AND cod_office = '17389551';

UPDATE table_users
SET cod_user = '2913659', date = '12082014'
WHERE user_rol = 'assistant' AND cod_office = '17389551';

UPDATE table_users
SET cod_user = '6160230', date = '12082014'
WHERE user_rol = 'admin' AND cod_office = '17389551';
```

I [read](https://web.archive.org/web/20161027063005/http://www.karlrixon.co.uk/writing/update-multiple-rows-with-different-values-and-a-single-sql-query/) an example, but I really don't understand how to make the query, i.e.:

```
UPDATE table_to_update
SET cod_user = IF(cod_office = '17389551','622057','2913659','6160230')
    ,date = IF(cod_office = '17389551','12082014')
WHERE ?? IN (??);
```

I'm not entirely clear on how to do the query if there are multiple conditions in the WHERE and in the IF condition. Any ideas?
You can do it this way:

```
UPDATE table_users
    SET cod_user = (case when user_rol = 'student' then '622057'
                         when user_rol = 'assistant' then '2913659'
                         when user_rol = 'admin' then '6160230'
                    end),
        date = '12082014'
    WHERE user_rol in ('student', 'assistant', 'admin') AND
          cod_office = '17389551';
```

I don't understand your date format. Dates should be stored in the database using native date and time types.
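On that last point: if the existing strings are day-month-year like `'12082014'`, MySQL's `STR_TO_DATE` can convert them during a migration — a sketch, assuming that digit order:

```
-- '%d%m%Y' assumes day, month, 4-digit year with no separators.
SELECT STR_TO_DATE('12082014', '%d%m%Y');   -- 2014-08-12
```

A one-time migration could add a proper DATE column, fill it with `STR_TO_DATE(date, '%d%m%Y')`, and then drop the old string column.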
MySQL allows a more readable way to combine multiple updates into a single query. This seems to better fit the scenario you describe, is much easier to read, and avoids those difficult-to-untangle multiple conditions. ``` INSERT INTO table_users (cod_user, date, user_rol, cod_office) VALUES ('622057', '12082014', 'student', '17389551'), ('2913659', '12082014', 'assistant','17389551'), ('6160230', '12082014', 'admin', '17389551') ON DUPLICATE KEY UPDATE cod_user=VALUES(cod_user), date=VALUES(date) ``` This assumes that the `user_rol, cod_office` combination is a primary key. If only one of these is the *primary key*, then add the other field to the UPDATE list. If neither of them is a primary key (that seems unlikely) then this approach will always create new records - probably not what is wanted. However, this approach makes prepared statements easier to build and more concise.
UPDATE multiple rows with different values in one query in MySQL
[ "", "mysql", "sql", "sql-update", "" ]
I have two tables with similar column names, and I need to return records from the left table which are not found in the right table. I have a primary key (column) which will help me to compare both tables. Which join is preferred?
If you are asking about T-SQL, then let's look at the fundamentals first. There are three types of joins here, each with its own set of logical query processing phases:

1. A `cross join` is the simplest of all. It implements only one logical query processing phase, a `Cartesian Product`. This phase operates on the two tables provided as inputs to the join and produces a Cartesian product of the two. That is, each row from one input is matched with all rows from the other. So if you have m rows in one table and n rows in the other, you get m×n rows in the result.
2. Then there are `Inner joins`: They apply two logical query processing phases: a `Cartesian product` between the two input tables as in a cross join, and then a `filter` on rows based on a predicate that you specify in the `ON` clause (also known as the `join condition`).
3. Next comes the third type of joins, `Outer Joins`: In an `outer join`, you mark a table as a `preserved` table by using the keywords `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, or `FULL OUTER JOIN` between the table names. The `OUTER` keyword is `optional`. The `LEFT` keyword means that the rows of the `left table` are preserved; the `RIGHT` keyword means that the rows in the `right table` are preserved; and the `FULL` keyword means that the rows in `both` the `left` and `right` tables are preserved. The third logical query processing phase of an `outer join` identifies the rows from the preserved table that did not find matches in the other table based on the `ON` predicate. This phase adds those rows to the result table produced by the first two phases of the join, and uses `NULL` marks as placeholders for the attributes from the nonpreserved side of the join in those outer rows.

Now if we look at the question: to return records from the left table which are not found in the right table, use a `LEFT OUTER JOIN` and keep only the rows with `NULL` values for the attributes from the right side of the join.
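As a sketch with generic placeholder names (substitute your own tables and the shared primary key):

```
SELECT l.*
FROM left_table l
LEFT OUTER JOIN right_table r
       ON r.key_col = l.key_col   -- the primary key mentioned in the question
WHERE r.key_col IS NULL;          -- right side was NULL-filled: no match found
```

The `WHERE r.key_col IS NULL` test exploits exactly the third phase described above: unmatched preserved rows carry `NULL` placeholders on the right side, so filtering for them leaves only the left-table rows with no match.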
Try This ``` SELECT f.* FROM first_table f LEFT JOIN second_table s ON f.key=s.key WHERE s.key is NULL ``` For more please read this article : [Joins in Sql Server](https://medium.com/@CodAffection/complete-guide-on-joins-in-sql-server-726a76a1274d) [![enter image description here](https://i.stack.imgur.com/HbJTb.jpg)](https://medium.com/@CodAffection/complete-guide-on-joins-in-sql-server-726a76a1274d)
How to return rows from left table not found in right table?
[ "", "sql", "join", "left-join", "outer-join", "" ]
So I have these 3 tables.

t\_student, which looks like this:

```
STUDENT_ID | FIRST_NAME | LAST_NAME
-----------------------------------
1          | Ivan       | Petrov
2          | Ivan       | Ivanov
3          | Georgi     | Georgiev
```

t\_course, which looks like this:

```
course_id | NAME        | LECTURER_NAME
----------------------------------------
1         | Basics      | Vasilev
2         | Photography | Loyns
```

t\_enrolment, which looks like this:

```
enrolment_id | student_fk | course_fk | Avarage_grade
------------------------------------------------------
1            | 1          | 1         |
2            | 3          | 1         |
3            | 4          | 1         |
4            | 2          | 1         |
5            | 1          | 2         | 5.50
6            | 2          | 2         | 5.40
7            | 5          | 2         | 6.00
```

I need to write a 'select' statement that presents the number of students per course. The result should be:

```
Count_students | Course_name
-----------------------------
4              | Basics
3              | Photography
```
Select all courses from your course table, join the enrolment table, and group by your course id. With count() you can select the number of students:

```
SELECT MAX(t_course.NAME) AS Course_name, COUNT(t_enrolment.student_fk) AS Count_students
FROM t_course
LEFT JOIN t_enrolment ON t_enrolment.course_fk = t_course.course_id
GROUP BY t_course.course_id;
```

If you want to count the same student in one course only once (in case more than one enrolment can happen), you can use COUNT(DISTINCT t\_enrolment.student\_fk).

**UPDATE**

To make it work not only in MySQL, I added an aggregate function to the name column. Depending on the SQL database you are using, you will have to add quotes or backticks.
You need a select statement with a join to the course table (for the `Course_name`). Group by `t_course.name` and use the `COUNT(*)` function. This will work:

```
SELECT COUNT(*) AS Count_students, c.NAME AS Course_name
FROM t_enrolment e
JOIN t_course c ON e.course_fk = c.course_id
GROUP BY c.NAME
```

**More information**

[Count function](http://dev.mysql.com/doc/refman/5.1/de/counting-rows.html)
[Group by](http://dev.mysql.com/doc/refman/5.1/de/group-by-modifiers.html)
[Join](http://dev.mysql.com/doc/refman/5.1/de/join.html)
How to get number of students per course in sql?
[ "", "sql", "count", "" ]
For example, the string is `abc123CD`, and I need to find a method to read only the *numbers* in the string, i.e.:

```
select a_postgres_function('abc123CD')
------
Result
------
123
```

---

*My try:*

```
select substring('abc123CD' from '%#"^[0-9]+$#"%' for '#')
```
Try this: ``` select (regexp_matches('abc123CD', '\d+'))[1]; ``` Since `regexp_matches` returns array of text, you should access the first element by `[1]`.
As per [ntalbs's Answer](https://stackoverflow.com/a/25681789/3682599) --- Wrap that query into a *Function* ``` create or replace function shownums(text) returns integer as $$ select (regexp_matches($1,'\d+'))[1]::int; $$ language sql; ```
Return Only number from a string if it contains numbers
[ "", "sql", "regex", "postgresql", "substring", "" ]
There are two tables. I need to select the columns from ONLY the table where the color Red is in the Color or ColorName column.

```
Columns
-------
moms: id, Name, Number, Color
dads: id, FullName, Phone, ColorName

Data
----
moms: 1, Jill, 832, Red
dads: 1, Jack, 123, Blue

--
SELECT * FROM moms,dads WHERE Color = 'Red' or ColorName = 'Red'
returns rows from both tables.
--
```

I only want all rows from the moms table to be returned. The above returns all rows from both tables. It seems like I need some type of reverse IN clause (i.e. where values are in column names). Do I need an IF or IF EXISTS clause?
**Q: Do I need an IF or IF EXISTS clause?**

**A:** No. What you need to recognize is that your query is returning a semi-Cartesian product. The syntax you have is equivalent to:

```
SELECT m.*
     , d.*
FROM moms m
CROSS JOIN dads d
WHERE m.Color = 'Red'
   OR d.ColorName = 'Red'
```

Basically, *every* row from `moms` is being matched to *every* row from `dads`. (Run the query without a WHERE clause: if you have 10 rows in `moms` and 10 rows in `dads`, you'll get back a total of 100 rows.) The `WHERE` clause is just filtering out the rows that don't meet the specified criteria.

Usually, when we use a SQL join operation, we include some predicates that specify which rows from one table are to match which rows from another table. (Typically, a foreign key matches a primary key.) (I recommend you ditch the old-school comma syntax for the JOIN operation, and use the newer `JOIN` keyword instead.)

---

Firstly, what result set do you expect to be returned? If you want rows from `moms`, you could get those with this query:

```
SELECT m.*
FROM moms m
WHERE m.Color = 'Red'
```

If you want rows from `dads`, then this:

```
SELECT d.*
FROM dads d
WHERE d.ColorName = 'Red'
```

If the columns in the `moms` and `dads` tables "line up", in terms of the number of columns, the order of the columns, and the datatypes of the columns, you could use a UNION ALL set operator to combine the results from the two queries, although we would typically also include a discriminator column, to tell which query each row was returned from:

```
SELECT 'moms' AS `source`
     , m.id AS `id`
     , m.Name AS `name`
     , m.Number AS `number`
     , m.Color AS `color`
FROM moms m
WHERE m.Color = 'Red'
UNION ALL
SELECT 'dads' AS `source`
     , d.id AS `id`
     , d.FullName AS `name`
     , d.Phone AS `number`
     , d.ColorName AS `color`
FROM dads d
WHERE d.ColorName = 'Red'
```

Beyond those suggestions, it's hard to provide any more help, absent a clearer definition of the actual result set you are attempting to return.
You can use Union: ``` select color as c from moms where color = 'Red' union all select colorName as c from dads where colorName = 'Red' ```
How to select data from columns in only one table when checking two tables limited by where clause in MySQL
[ "", "mysql", "sql", "select", "multiple-tables", "" ]
Hoping someone can help me out here. I'm trying to have subtotals in an sql query, but not as another column. See screenshot with the results I get and the explanation ![Screenshot](https://i.stack.imgur.com/mKJoj.png) This is the query I have so far ``` SELECT ar.Person_Id AS 'Id', pa.Serial_Number, ar.Family_Name + ', '+ar.First_Name AS 'Name', CASE WHEN ar.Line_Type = 'A' THEN 'Activity: '+ar.Item_Name ELSE 'Hotel: '+ar.Item_Name END AS 'Description', CAST(IsNull(ar.Old_Amount_Excl_VAT,0) AS DECIMAL(10,2)) AS 'Amount Excl VAT', CAST(IsNull(ar.Old_Amount_Incl_VAT,0) AS DECIMAL(10,2)) AS 'Amount Incl VAT', CAST(IsNull(ar.Old_Amount_Incl_VAT,0) - IsNull(ar.Old_Amount_Excl_VAT,0) AS DECIMAL(10,2)) AS 'VAT Amount', p.Currency_Code, ( SELECT TOP 1 pt.Payment_Type + ' ' + pt.Description AS Payment_Type FROM PaymentsPerPerson ppp LEFT OUTER JOIN PaymentCodes pc ON ppp.Client_Id = pc.Client_Id AND ppp.Project_Id = pc.Project_Id AND ppp.Payment_Code = pc.Payment_Code LEFT OUTER JOIN PaymentTypes pt ON ppp.Client_Id = pt.Client_Id AND ppp.Project_Id = pt.Project_Id AND pt.Payment_Type = pc.Payment_Type WHERE ppp.Client_Id = ar.Client_Id AND ppp.Project_Id = ar.Project_Id AND ppp.Person_Id = ar.Person_Id AND ppp.Line_Number IN (SELECT MAX(Line_Number) FROM PaymentsPerPerson WHERE Person_Id = ar.Person_Id) ) AS Payment_Type, ( SELECT TOP 1 convert(varchar, Payment_Date, 120) AS Payment_Date FROM PaymentsPerPerson WHERE Client_Id = ar.Client_Id AND Project_Id = ar.Project_Id AND Person_Id = ar.Person_Id ORDER BY Line_Number DESC ) AS Payment_Date, ( SELECT TOP 1 Card_Reference FROM PaymentsPerPerson WHERE Client_Id = ar.Client_Id AND Project_Id = ar.Project_Id AND Person_Id = ar.Person_Id ORDER BY Line_Number DESC ) AS Transaction_Id, convert(varchar, getdate(), 120) AS Date FROM AccountingReport ar LEFT OUTER JOIN Participants pa ON ar.Client_Id = pa.Client_Id AND ar.Project_Id = pa.Project_Id AND ar.Person_Id = pa.Person_Id AND pa.Date_Registered IS NOT NULL LEFT OUTER JOIN 
Projects p ON ar.Client_Id = p.Client_Id AND ar.Project_Id = p.Project_Id RIGHT OUTER JOIN PaymentsPerPerson ppp ON ar.Client_Id = ppp.Client_Id AND ar.Project_Id = ppp.Project_Id AND ar.Person_Id = ppp.Person_Id WHERE ar.Client_Id = 'CLIENTID' AND ar.Project_Id = 'PROJECTID' AND (IsNull(Old_Amount_Excl_VAT,0) <> 0 OR IsNull(Old_Amount_Incl_VAT,0) <> 0) AND pa.Date_Registered IS NOT NULL ORDER BY ar.Person_Id, Item_Id, SubItem_Id, SubSubItem_Id ``` I've tried using ROLLUP but this is what I get ![Screenshot 2](https://i.stack.imgur.com/FzcsK.png) This is the query I've used using rollup ``` SELECT CASE WHEN (GROUPING(ar.Person_Id) = 1) THEN 0 ELSE ISNULL(ar.Person_Id, 'UNKNOWN') END AS 'Id', CASE WHEN (GROUPING(pa.Serial_Number) = 1) THEN 0 ELSE ISNULL(pa.Serial_Number, 'UNKNOWN') END AS Serial_Number, CASE WHEN (GROUPING(ar.Family_Name + ', '+ar.First_Name) = 1) THEN 'ALL' ELSE ISNULL(ar.Family_Name + ', '+ar.First_Name, 'UNKNOWN') END AS 'Name', CASE WHEN ar.Line_Type = 'A' THEN 'Activity: '+ar.Item_Name ELSE 'Hotel: '+ar.Item_Name END AS 'Description', SUM(CAST(IsNull(ar.Old_Amount_Excl_VAT,0) AS DECIMAL(10,2))) AS 'Amount Excl VAT', SUM(CAST(IsNull(ar.Old_Amount_Incl_VAT,0) AS DECIMAL(10,2))) AS 'Amount Incl VAT', SUM(CAST(IsNull(ar.Old_Amount_Incl_VAT,0) - IsNull(ar.Old_Amount_Excl_VAT,0) AS DECIMAL(10,2))) AS 'VAT Amount', CASE WHEN (GROUPING(p.Currency_Code) = 1) THEN 'ALL' ELSE ISNULL(p.Currency_Code, 'UNKNOWN') END AS Currency_Code, ( SELECT TOP 1 pt.Payment_Type + ' ' + pt.Description AS Payment_Type FROM PaymentsPerPerson ppp LEFT OUTER JOIN PaymentCodes pc ON ppp.Client_Id = pc.Client_Id AND ppp.Project_Id = pc.Project_Id AND ppp.Payment_Code = pc.Payment_Code LEFT OUTER JOIN PaymentTypes pt ON ppp.Client_Id = pt.Client_Id AND ppp.Project_Id = pt.Project_Id AND pt.Payment_Type = pc.Payment_Type WHERE ppp.Client_Id = ar.Client_Id AND ppp.Project_Id = ar.Project_Id AND ppp.Person_Id = ar.Person_Id AND ppp.Line_Number IN (SELECT MAX(Line_Number) FROM 
PaymentsPerPerson WHERE Person_Id = ar.Person_Id) ) AS Payment_Type, ( SELECT TOP 1 convert(varchar, Payment_Date, 120) AS Payment_Date FROM PaymentsPerPerson WHERE Client_Id = ar.Client_Id AND Project_Id = ar.Project_Id AND Person_Id = ar.Person_Id ORDER BY Line_Number DESC ) AS Payment_Date, ( SELECT TOP 1 Card_Reference FROM PaymentsPerPerson WHERE Client_Id = ar.Client_Id AND Project_Id = ar.Project_Id AND Person_Id = ar.Person_Id ORDER BY Line_Number DESC ) AS Transaction_Id, CONVERT(varchar, GETDATE(), 120) AS Date FROM AccountingReport ar LEFT OUTER JOIN Participants pa ON ar.Client_Id = pa.Client_Id AND ar.Project_Id = pa.Project_Id AND ar.Person_Id = pa.Person_Id AND pa.Date_Registered IS NOT NULL LEFT OUTER JOIN Projects p ON ar.Client_Id = p.Client_Id AND ar.Project_Id = p.Project_Id RIGHT OUTER JOIN PaymentsPerPerson ppp ON ar.Client_Id = ppp.Client_Id AND ar.Project_Id = ppp.Project_Id AND ar.Person_Id = ppp.Person_Id WHERE ar.Client_Id = 'CLIENTID' AND ar.Project_Id = 'PROJECTID' AND (IsNull(Old_Amount_Excl_VAT,0) <> 0 OR IsNull(Old_Amount_Incl_VAT,0) <> 0) AND pa.Date_Registered IS NOT NULL GROUP BY ar.Client_Id, ar.Project_Id, ar.Person_Id, pa.Serial_Number, ar.Line_Type, ar.Item_Name, ar.Item_Id, ar.SubItem_Id, ar.SubSubItem_Id, p.Currency_Code, ar.Family_Name + ', '+ar.First_Name WITH ROLLUP ORDER BY ar.Person_Id, Item_Id, SubItem_Id, SubSubItem_Id ``` Any ideas would be great, since I'm no SQL expert. Is it possible to do with SQL Server? Or do I need some script (asp,php) to generate this? It'd be better if it can be done with SQL queries, since we export the results as Excel Reports when the client clicks on a button.
Compute the subtotal rows in a separate CTE and `UNION` them back onto the detail rows; the `ORDER BY ID, Description` makes each `'Subtotal'` row sort after its group's detail rows:

```
WITH DataSet AS
(
    -- Your original query. For simplicity, I hard-coded the results.
    SELECT '4142722' AS ID, 1 AS Serial_Number, 'Name1' AS Name, 'Activity: Description' AS Description,
           10000.00 AS AmountExclVAT, 10000.00 AS AmountInclVAT, 0 AS VATAmount, 'EUR' AS Currency_Code,
           NULL AS Payment_Type, NULL AS Payment_Date, NULL AS Transaction_ID, NULL AS Date
    UNION
    SELECT '4142722', 1, 'Name1', 'Activity: Description1', 2000.00, 2000.00, 0, 'EUR', NULL, NULL, NULL, NULL
    UNION
    SELECT '4142722', 1, 'Name1', 'Activity: Description', -1000.00, -1000.00, 0, 'EUR', NULL, NULL, NULL, NULL
    UNION
    SELECT '4142724', 3, 'Name2', 'Activity: Description', 5000.00, 5000.00, 0, 'EUR', NULL, NULL, NULL, NULL
    UNION
    SELECT '4142724', 3, 'Name2', 'Activity: Description', 2000.00, 2000.00, 0, 'EUR', NULL, NULL, NULL, NULL
),
SubTotals AS
(
    SELECT ID,
           NULL AS Serial_Number,
           NULL AS Name,
           'Subtotal' AS Description,
           SUM(AmountExclVAT) AS sum_AmountExclVAT,
           SUM(AmountInclVAT) AS sum_AmountInclVAT,
           SUM(VATAmount) AS sum_VATAmount,
           NULL AS Currency_Code,
           NULL AS Payment_Type,
           NULL AS Payment_Date,
           NULL AS Transaction_ID,
           NULL AS Date
    FROM DataSet
    GROUP BY ID, Serial_Number, Name
)
SELECT * FROM DataSet
UNION
SELECT * FROM SubTotals
ORDER BY ID, Description
```
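The detail-rows-plus-UNION-ed-subtotals pattern above is easy to sanity-check outside SQL Server. The sketch below reproduces it with Python's stdlib `sqlite3`; the table name, columns, and amounts are hypothetical stand-ins for the real query's output, not the actual schema:

```python
import sqlite3

# Hypothetical stand-in for the real report query's output (not the actual schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dataset (id TEXT, name TEXT, description TEXT, amount REAL)")
con.executemany(
    "INSERT INTO dataset VALUES (?, ?, ?, ?)",
    [
        ("4142722", "Name1", "Activity: Description",  10000.00),
        ("4142722", "Name1", "Activity: Description1",  2000.00),
        ("4142722", "Name1", "Activity: Description",  -1000.00),
        ("4142724", "Name2", "Activity: Description",   5000.00),
        ("4142724", "Name2", "Activity: Description",   2000.00),
    ],
)

# Detail rows UNION ALL one subtotal row per id; 'Subtotal' sorts after
# 'Activity: ...' alphabetically, so each group's subtotal lands last.
rows = con.execute("""
    SELECT id, name, description, amount FROM dataset
    UNION ALL
    SELECT id, NULL, 'Subtotal', SUM(amount) FROM dataset GROUP BY id
    ORDER BY id, description
""").fetchall()

for row in rows:
    print(row)
```

The same alphabetical-ordering trick carries over to the T-SQL version: as long as the subtotal label sorts after the detail descriptions, no extra sort-key column is needed.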
If you're exporting to Excel, I think your best bet would be to export the detail rows as-is from your first query and leverage Excel's built-in subtotaling (under the Data menu). You could automate this with an Excel macro, or in your application code when the user clicks the "Export" button.
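If the subtotaling moves into application code before the export, the pattern is just "emit each group's rows, then append one subtotal row". A minimal sketch with the stdlib `itertools.groupby` (row layout and values are hypothetical):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical export rows: (person_id, description, amount), already sorted by person_id
# (groupby only groups adjacent rows, so sorted input is required).
details = [
    ("4142722", "Activity: Description",  10000.00),
    ("4142722", "Activity: Description1",  2000.00),
    ("4142724", "Activity: Description",   5000.00),
]

export_rows = []
for person_id, group in groupby(details, key=itemgetter(0)):
    group = list(group)
    export_rows.extend(group)
    # Subtotal row appended after each person's block, before the next person starts.
    export_rows.append((person_id, "Subtotal", sum(r[2] for r in group)))

for row in export_rows:
    print(row)
```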
SQL Query with multiple subtotals
[ "sql", "sql-server", "sql-server-2008" ]