`insert into friends (user_id1,user_id2) select user_id from user where UserName='summer' or UserName='winter'` This gives an error. I want to insert the user\_id of 'summer' into user\_id1 and the user\_id of 'winter' into user\_id2. Please help?
The SELECT must return the same number of columns as the INSERT specifies, therefore: ``` INSERT INTO friends (user_id1,user_id2) SELECT (SELECT user_id FROM user WHERE UserName='Summer') AS user_id1, (SELECT user_id FROM user WHERE UserName='Winter') AS user_id2 ``` should do the trick.
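A runnable sketch of this scalar-subquery approach (SQLite here, with the question's table names; any engine that supports scalar subqueries behaves the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (user_id INTEGER, UserName TEXT)")
con.execute("CREATE TABLE friends (user_id1 INTEGER, user_id2 INTEGER)")
con.executemany("INSERT INTO user VALUES (?, ?)", [(1, "summer"), (2, "winter")])

# The outer SELECT produces exactly one row; each column is filled
# by its own scalar subquery, so the column counts match the INSERT.
con.execute("""
    INSERT INTO friends (user_id1, user_id2)
    SELECT (SELECT user_id FROM user WHERE UserName = 'summer'),
           (SELECT user_id FROM user WHERE UserName = 'winter')
""")
print(con.execute("SELECT * FROM friends").fetchall())  # [(1, 2)]
```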
``` insert into friends (user_id1,user_id2) select user_id, -1 from user where UserName='summer'; update friends set user_id2 = (select user_id from user where UserName='winter') where user_id2 = -1; ```
SQL query...I need to insert 2 values from same column of table1 to 2 different columns of table2
[ "", "mysql", "sql", "" ]
I followed advice [in this question](https://stackoverflow.com/questions/12146267/select-one-cell-from-sql-database), but for my purposes I don't want a `WHERE`. I don't know the value, so I cannot say `rawQuery("... WHERE x = ?", y)`, I don't care what `y` is, it's just a cell I want, and it is known that there is a single row. If it is not possible to lose the condition (perhaps because of causing an indeterminate number of results?) - then how can I say "from column z and row 0"? I'm lacking either terminology, or outright understanding, because my searches are turning up nothing. ***Edit:*** Eclipse doesn't complain at: ``` result = db.rawQuery("SELECT col FROM tbl", my_unused_string_array); ``` I'm not at a testing stage yet, and I can't enter this into the SQL db reader I was using to test `SELECT col FROM tbl` and ~ with `WHERE`.. will it work?
As per your edit, you don't need to specify a `WHERE` clause if you want to get all the records from a table: ``` result = db.rawQuery("SELECT col FROM tbl", new String[0]); ```
The SQL query "SELECT \* FROM table" will return the entire table. "SELECT colX, colY FROM table" will return columns colX and colY for all the rows in the table. If your table contains just one row, "SELECT col FROM table" will return the value of col for that one row. To use the SQLiteDatabase API to make that query, you would say: ``` result = db.rawQuery("SELECT col FROM tbl", null); ``` ... because you are not supplying any query parameters. Assuming that there is just one row seems dangerous to me. I would **not** use the "LIMIT" clause, because, while that will always get one row, it will hide the fact that there is more than one row, if that happens. Instead, I suggest that you assert that the cursor contains one row, like this: ``` if (1 != result.getCount()) { throw new Exception("something's busted"); } ```
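The one-row assertion can be sketched outside Android too; here is a plain-Python SQLite version of the same idea (table and column names from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (col TEXT)")
con.execute("INSERT INTO tbl VALUES ('only-value')")

rows = con.execute("SELECT col FROM tbl").fetchall()
# Assert exactly one row rather than hiding extras with LIMIT.
if len(rows) != 1:
    raise RuntimeError("something's busted")
value = rows[0][0]
print(value)  # only-value
```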
SELECT entry in SQLite, without a WHERE condition?
[ "", "android", "sql", "sqlite", "" ]
I'm wondering how to retrieve the top 10% athletes in terms of points, without using any clauses such as TOP, Limit etc, just a plain SQL query. My idea so far: Table Layout: **Score**: ``` ID | Name | Points ``` Query: ``` select * from Score s where 0.10 * (select count(*) from Score x) > (select count(*) from Score p where p.Points < s.Points) ``` Is there an easier way to do this? Any suggestions?
Try: ``` select s1.id, s1.name, s1.points from score s1 left join score s2 on s2.points > s1.points group by s1.id, s1.name, s1.points having count(s2.points) <= (select count(*)*.1 from score) ``` Basically this counts the players with a higher score than the current row (the left join keeps the top scorer, whose count is zero), and if that count is less than or equal to 10% of the count of all scores, the row is in the top 10%.
In most databases, you would use the ANSI standard window functions: ``` select s.* from (select s.*, count(*) over () as cnt, row_number() over (order by points desc) as seqnum from Score s ) s where seqnum * 10 <= cnt; ```
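The correlated-subquery form from the question can also be made to work in standard SQL; note it must count the athletes strictly *ahead* of each row (the question's `p.Points < s.Points` would select the bottom 10% instead). A runnable SQLite sketch with 20 hypothetical athletes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Score (ID INTEGER, Name TEXT, Points INTEGER)")
con.executemany("INSERT INTO Score VALUES (?, ?, ?)",
                [(i, f"athlete{i}", i * 10) for i in range(1, 21)])  # 20 rows

# A row is in the top 10% when fewer than 10% of all athletes
# have strictly more points than it does.
top = con.execute("""
    SELECT ID, Name, Points FROM Score s
    WHERE 0.10 * (SELECT COUNT(*) FROM Score) >
          (SELECT COUNT(*) FROM Score p WHERE p.Points > s.Points)
    ORDER BY Points DESC
""").fetchall()
print(top)  # [(20, 'athlete20', 200), (19, 'athlete19', 190)]
```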
Query in sql to get the top 10 percent in standard sql (without limit, top and the likes, without window functions)
[ "", "sql", "correlated-subquery", "" ]
I want to create a basic ranking and sort the results of a stored procedure from the most appearing presenter to the lowest. I tried the below which works fine if I remove the groupCount and the GROUP BY but I can't get it to include this so that it groups by the presenters. Basically what I want is to see the person who presented most (i.e. max of groupCount) on top and then the lower ranks up to the person who presented least (i.e. min of groupCount). The error I am getting is the following: ``` Msg 8120, Level 16, State 1, Procedure CountPresenters, Line 17 Column 'MeetingDetails.topic' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. ``` My stored procedure so far: ``` ALTER PROCEDURE [dbo].[CountPresenters] @title nvarchar(200) AS BEGIN SET NOCOUNT ON; SELECT B.presenter, B.topic, A.meetingDate, A.title, COUNT(*) AS groupCount FROM MeetingDetails B INNER JOIN MeetingDates A ON B.meetingID = A.meetingID WHERE B.itemStatus = 'active' AND A.title LIKE '%'+@title+'%' GROUP BY B.presenter ORDER BY groupCount desc, B.presenter, A.meetingDate desc FOR XML PATH('presenters'), ELEMENTS, TYPE, ROOT('ranks') END ``` Many thanks for any help with this, Tim.
I got this resolved by using temp tables. Thanks anyway!
Every non-aggregated column in the SELECT list must also appear in the GROUP BY clause. Here is the modified query; hope it works: ``` ALTER PROCEDURE [dbo].[CountPresenters] @title nvarchar(200) AS BEGIN SET NOCOUNT ON; SELECT B.presenter, B.topic, A.meetingDate, A.title, COUNT(*) AS groupCount FROM MeetingDetails B INNER JOIN MeetingDates A ON B.meetingID = A.meetingID WHERE B.itemStatus = 'active' AND A.title LIKE '%'+@title+'%' GROUP BY B.presenter, B.topic, A.meetingDate, A.title ORDER BY groupCount desc, B.presenter, A.meetingDate desc FOR XML PATH('presenters'), ELEMENTS, TYPE, ROOT('ranks') END ```
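The core pattern here (count per group, then order by that count) can be sketched with hypothetical data; SQLite is laxer than SQL Server about bare columns, but the portable form is the same:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MeetingDetails (presenter TEXT, topic TEXT)")
con.executemany("INSERT INTO MeetingDetails VALUES (?, ?)",
                [("alice", "t1"), ("alice", "t2"), ("bob", "t3")])

# Count rows per presenter, most frequent presenter first.
rows = con.execute("""
    SELECT presenter, COUNT(*) AS groupCount
    FROM MeetingDetails
    GROUP BY presenter
    ORDER BY groupCount DESC, presenter
""").fetchall()
print(rows)  # [('alice', 2), ('bob', 1)]
```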
SQL Server: calculate sum of count and order by this
[ "", "sql", "sql-server", "stored-procedures", "count", "group-by", "" ]
I am use this query for join table, and it works but return only 1 value and I want color wise data not all data. Here is my Query and its **[fiddle](http://sqlfiddle.com/#!2/b9a92/2/0)** ``` SELECT *, (SELECT pname FROM tbl_product WHERE id = tbl_productcolor.pid ) as productname, (SELECT image FROM tbl_product WHERE id = tbl_productcolor.pid ) as image FROM tbl_productcolor WHERE id = 1 ORDER BY id ASC; ``` And Here Is My Two Table **tbl\_product** ``` id category pcode pname image ----------------------------------------------------- 1 cat1 567 product1 img1.jpg 2 cat2 544 product2 img2.jpg 3 cat3 5665 product3 img3.jpg 4 cat1 5665 t1 img4.jpg 5 cat1 567 t5 img5.jpg 6 cat3 75 t8 img6.jpg 7 cat4 12 name img7.jpg ``` **tbl\_productcolor** ``` id pid color ----------------------------- 1 1 Black (79) 2 1 Grey (205) 3 1 Green (69) 4 1 Blue (89) 5 1 Purple (14) 6 1 Yellow (31) 7 1 Brown (124) 8 1 Beige (95) 9 1 Cream (164) 10 1 White (09) 11 2 Multi-coloured (113) 12 2 Black (79) 13 3 Pattern (84) 14 3 Metallic (48) 15 3 Black (79) 16 3 Orange (29) 17 3 Beige (95) 18 4 Green (69) 19 4 Red (33) 20 4 Yellow (31) 21 4 Brown (124) 22 4 Beige (95) 23 4 White (09) 24 5 Black (79) 25 5 Green (69) 26 5 Purple (14) 27 5 Red (33) 28 6 Metallic (48) 29 6 Black (79) 30 6 Grey (205) 31 6 Green (69) 32 6 Purple (14) 33 6 Pink (20) 34 6 Red (33) 35 7 Pattern (84) 36 7 Multi-coloured (113) 37 7 Metallic (48) 38 7 Black (79) 39 7 Grey (205) 40 7 Green (69) 41 7 Blue (89) 42 7 Purple (14) 43 7 Pink (20) 44 7 Red (33) 45 7 Orange (29) 46 7 Yellow (31) 47 7 Brown (124) 48 7 Beige (95) 49 7 Cream (164) 50 7 White (09) ```
If I've got it right, you need to normalize the `tbl_productcolor` table to make your life a little bit easier. You should split your table in two: ``` tbl_productcolor id pid color_id ----------------------------- 1 1 1 29 6 1 2 1 2 30 6 2 ``` and ``` tbl_colors id name ---------------------- 1 Black (79) 2 Grey (205) ``` For your current structure, here is the query I think you need: ``` select C1.ID, C1.PID, c1.color, p.pname, p.image FROM tbl_productcolor C1 JOIN tbl_productcolor C2 ON C1.Color=C2.Color AND C2.ID=1 JOIN tbl_product p ON C1.PID=p.ID order by p.id asc ; ``` `SQLFiddle demo`
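A runnable sketch of that self-join (SQLite, with a few rows taken from the question's data): C2 pins the color of row id=1, and C1 finds every product row sharing that color.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_product (id INTEGER, pname TEXT, image TEXT)")
con.execute("CREATE TABLE tbl_productcolor (id INTEGER, pid INTEGER, color TEXT)")
con.executemany("INSERT INTO tbl_product VALUES (?, ?, ?)",
                [(1, "product1", "img1.jpg"), (2, "product2", "img2.jpg")])
con.executemany("INSERT INTO tbl_productcolor VALUES (?, ?, ?)",
                [(1, 1, "Black (79)"), (2, 1, "Grey (205)"), (12, 2, "Black (79)")])

# Row id=1 is Black, so every product with a Black row is returned.
rows = con.execute("""
    SELECT C1.id, C1.pid, C1.color, p.pname, p.image
    FROM tbl_productcolor C1
    JOIN tbl_productcolor C2 ON C1.color = C2.color AND C2.id = 1
    JOIN tbl_product p ON C1.pid = p.id
    ORDER BY p.id ASC
""").fetchall()
print(rows)
# [(1, 1, 'Black (79)', 'product1', 'img1.jpg'),
#  (12, 2, 'Black (79)', 'product2', 'img2.jpg')]
```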
What exactly do you want as output? If you remove `where id = 1` then you get all results: ``` SELECT prdc.id, prdc.pid, prdc.color, pr.pname, pr.image FROM tbl_productcolor prdc INNER JOIN tbl_productcolor prd ON prdc.color = prd.color AND prd.id = 2 INNER JOIN tbl_product pr ON prdc.pid = pr.id; ``` You can use this query; it should work.
SQL Table Join, Query Issue
[ "", "mysql", "sql", "sql-server", "join", "" ]
I notice in VS2010's Tools > Options dialog, there is a section on how to format the SQL: ![PL/SQL Options](https://i.stack.imgur.com/lEyV6.jpg) I am working with a very large stored procedure on our server that is not formatted well at all. I was thinking it would be great to create a new SQL File in my project, copy and paste the text from the stored procedure in there, and have the text editor apply auto formatting to all of the SQL. When I call "Add > New Item" and get that dialog, there does not seem to be any Installed Template with the .sql extension. ![Add New Item Dialog](https://i.stack.imgur.com/NNTkB.jpg) I see the **SQL Server Database** template, but that is not what I need. The **LINQ to SQL Classes** is not right, either. What template do I need to use to use the auto formatting built into the VS2010 interface?
Judging from your screenshot, you have a C# project (e.g. a library dll) open. This won't show an option to add a .sql file, as those files are not normally associated with C# projects. One way around it is: 1. In the VS2010 main menu, go to File -> New -> File. In the General tab, there's a SqlFile file type. ![enter image description here](https://i.stack.imgur.com/PzqNR.png) 2. Add a file and save it to disk in the location of your project. 3. Right-click on your project and select Add -> Existing Item. In the open file dialog, change the extension to `*.*` to show your .sql file. 4. Add the file to the project. If needed, change the "Build Action" and "Copy to Output Directory" properties to control how it behaves during the build.
I hate Visual Studio's T-SQL designer, even with SQL Data Tools installed. I opt to open my project's .sql files in Management Studio. Right-click any .sql file in VS, navigate to "Open with...", and choose "Add". In the Program dialog type "explorer.exe", type whatever you want for the friendly name (I use Management Studio), click "OK", then highlight the new record and choose "Set as default". Now double-clicking any .sql file in VS will open whatever program opens when you double-click a .sql file outside of VS. Using this method, I'm able to edit, add, and modify my SQL files in Management Studio and save them back to the project seamlessly. Hope this helps.
Where is the SQL File Template in Visual Studio 2010?
[ "", "sql", "visual-studio-2010", "" ]
I have different simple SQL requests that each return only one value. Example: ``` SELECT COUNT(*) FROM Person ``` OR ``` SELECT COUNT(*) FROM Category ``` I would like to get all these infos in a unique request, with a column per request... I tried something like this but it doesn't work: ``` SELECT COUNT(C.CategoryId) As nbPeople, COUNT(P.PersonID) As nbCategories FROM Category C, Person P ``` This works but I get only one column, and a row per request: ``` SELECT COUNT(*) FROM Person UNION SELECT COUNT(*) FROM Category ``` How can I simply do that? Thanks
When using SQL Server, you can try this (note the aliases: the Person count is nbPeople, the Category count is nbCategories): ``` SELECT ( select COUNT(P.PersonID) from Person P ) As nbPeople , ( select COUNT(C.CategoryId) from Category C ) As nbCategories ``` In Oracle, for example, you need to add this at the bottom: ``` FROM dual ```
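A runnable sketch of the scalar-subquery form (SQLite, which also allows a FROM-less SELECT; hypothetical row counts):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Person (PersonID INTEGER)")
con.execute("CREATE TABLE Category (CategoryID INTEGER)")
con.executemany("INSERT INTO Person VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO Category VALUES (?)", [(1,), (2,)])

# Each count runs in its own scalar subquery, so there is no cross join
# multiplying the two tables' row counts together.
row = con.execute("""
    SELECT (SELECT COUNT(*) FROM Person)   AS nbPeople,
           (SELECT COUNT(*) FROM Category) AS nbCategories
""").fetchone()
print(row)  # (3, 2)
```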
Try this. ``` select * from (select count(*) cnt1 from Table1) t1 join (select count(*) as cnt2 from Table2) t2 on 1=1 ```
One SQL request with a column per query
[ "", "sql", "" ]
The query below is taking a long time and is frequently being logged to the slow query log. Is there any possible way to rewrite it, I mean better than the current query? ``` select p . *, pt.pretty_name, pt.seo_name, pt.description, pt.short_description, pt.short_description_2 from cat_products p, cat_product_catalog_map pcm, cat_current_product_prices cpp, cat_product_text pt where pcm.catalog_id = 2 and pcm.product_id = p.product_id and p.owner_catalog_id = 2 and cpp.product_id = p.product_id and cpp.currency = 'GBP' and cpp.catalog_id = 2 and cpp.state <> 'unavail' and pt.product_id = p.product_id and pt.language_id = 'EN' and p.product_id not in (select distinct product_id from cat_product_detail_map where short_value in ('ft_section' , 'ft_product')) order by pt.pretty_name limit 200 , 200; ```
Firstly I would switch to ANSI 92 explicit join syntax rather than the ANSI 89 implicit join syntax you are using, as the name suggests this is over 20 years out of date: ``` select ... from cat_products p INNER JOIN cat_product_catalog_map pcm ON pcm.product_id=p.product_id INNER JOIN cat_current_product_prices cpp ON cpp.product_id = p.product_id INNER JOIN cat_product_text pt ON pt.product_id=p.product_id WHERE .... ``` This won't affect performance but will make your query more legible, and less prone to accidental cross joins. Aaron Bertrand has written a [good article on the reasons to switch](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins) that is worth a read (it is aimed at SQL Server but many of the principles are universal). Then I would remove the `NOT IN (Subquery)` MySQL does not optimise subqueries like this well. It will rewrite it to: ``` AND NOT EXISTS (SELECT 1 FROM cat_product_detail_map WHERE short_value in ('ft_section','ft_product') AND cat_product_detail_map.product_id = p.product_id ) ``` It will then execute this subquery once for every row. The inverse of this scenario (`WHERE <expression> IN (Subquery)` is described in the article [Optimizing Subqueries with EXISTS Strategy](http://dev.mysql.com/doc/refman/5.0/en/subquery-optimization-with-exists.html)) You can exclude these product\_ids using the `LEFT JOIN/IS NULL` method which performs better in MySQL as it avoids a subquery completely: ``` SELECT ... FROM cat_products p LEFT JOIN cat_product_detail_map exc ON exc.product_id = p.product_id AND exc.short_value in ('ft_section','ft_product') WHERE exc.product_id IS NULL ``` This allows for better use of indexes and means that you don't have to execute a subquery for every row in the outer query. 
So your full query would then be: ``` SELECT p.*, pt.pretty_name, pt.seo_name, pt.description, pt.short_description, pt.short_description_2 FROM cat_products p INNER JOIN cat_product_catalog_map pcm ON pcm.product_id = p.product_id INNER JOIN cat_current_product_prices cpp ON cpp.product_id = p.product_id INNER JOIN cat_product_text pt ON pt.product_id = p.product_id LEFT JOIN cat_product_detail_map exc ON exc.product_id = p.product_id AND exc.short_value in ('ft_section','ft_product') WHERE exc.product_id IS NULL AND pcm.catalog_id = 2 AND p.owner_catalog_id = 2 AND cpp.currency = 'GBP' AND cpp.catalog_id = 2 AND cpp.state <> 'unavail' AND pt.language_id = 'EN' ORDER BY pt.pretty_name limit 200,200; ``` The final thing to look at would be the indexes on your tables, I don't know what you already have but I'd suggest an index on product\_id on each of your tables as a bare minimum, and perhaps on the columns you are filtering on.
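The LEFT JOIN / IS NULL exclusion on its own can be sketched like this (SQLite, hypothetical product ids):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cat_products (product_id INTEGER)")
con.execute("CREATE TABLE cat_product_detail_map (product_id INTEGER, short_value TEXT)")
con.executemany("INSERT INTO cat_products VALUES (?)", [(1,), (2,), (3,), (4,)])
con.executemany("INSERT INTO cat_product_detail_map VALUES (?, ?)",
                [(2, "ft_section"), (3, "something_else")])

# Products with a matching excluded detail row get a non-NULL exc.product_id
# and are filtered out; product 3 survives because its short_value is allowed.
rows = con.execute("""
    SELECT p.product_id
    FROM cat_products p
    LEFT JOIN cat_product_detail_map exc
           ON exc.product_id = p.product_id
          AND exc.short_value IN ('ft_section', 'ft_product')
    WHERE exc.product_id IS NULL
    ORDER BY p.product_id
""").fetchall()
print(rows)  # [(1,), (3,), (4,)]
```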
To start with: your statement is not as readable as it should be. As you don't use the ANSI join syntax, it is some work to see how the tables are related. My first guess was that you link table to table to get from cat\_products to cat\_product\_text. That turned out wrong. What you do is select fields from cat\_products and cat\_product\_text only. So why join with the other two tables at all? Are you trying to see if records in these tables exist? Then use the EXISTS clause for that. When joining tables on the catalogue id, show the dbms that this links your tables (pcm.catalog\_id = p.owner\_catalog\_id) rather than making it seem they are not related (pcm.catalog\_id = 2, p.owner\_catalog\_id = 2). You should not have to use DISTINCT for an IN query. The dbms should decide by itself whether it is advantageous to remove duplicates in that set. The following query doesn't necessarily produce the same result as your original query. It depends on the purpose of joining the tables cat\_product\_catalog\_map and cat\_current\_product\_prices. I assume it is just a check for existence. So the query can be re-written as follows. It should be faster, because you don't have to join tables that don't add fields to the result. But even this depends on table sizes etc.
``` SELECT p.*, pt.pretty_name, pt.seo_name, pt.description, pt.short_description, pt.short_description_2 FROM cat_products p JOIN cat_product_text pt ON pt.product_id = p.product_id and pt.language_id = 'EN' WHERE p.owner_catalog_id = 2 AND p.product_id NOT IN ( SELECT product_id FROM cat_product_detail_map WHERE short_value in ('ft_section','ft_product') ) AND EXISTS ( SELECT * FROM cat_product_catalog_map pcm WHERE pcm.product_id = p.product_id AND pcm.catalog_id = p.owner_catalog_id ) AND EXISTS ( SELECT * FROM cat_current_product_prices cpp WHERE cpp.product_id = p.product_id AND cpp.catalog_id = p.owner_catalog_id AND cpp.currency = 'GBP' AND cpp.state <> 'unavail' ) ORDER BY pt.pretty_name LIMIT 200,200; ```
Any possible way to rewrite this query for best performance?
[ "", "mysql", "sql", "" ]
I am trying to write a stored procedure which returns a result combining 2 table variables which looks something like this. ``` Name | LastName | course | course | course | course <- Columns Name | LastName | DVA123 | DVA222 | nothing | nothing <- Row1 Pete Steven 200 <- Row2 Steve Lastname 50 <- Row3 ``` From these 3 tables Table **Staff**: ``` Name | LastName | SSN | Steve Lastname 234 Pete Steven 132 ``` Table **Course Instance**: ``` Course | Year | Period | DVA123 2013 1 DVA222 2014 2 ``` Table **Attended by**: ``` Course | SSN | Year | Period | Hours | DVA123 234 2013 1 200 DVA222 132 2014 2 50 ``` I am taking `@year` as a parameter that will decide what year in the course will be displayed in the result. ``` ALTER proc [dbo].[test4] @year int as begin -- I then declare the 2 tables which I will then store the values from the tables DECLARE @Table1 TABLE( Firstname varchar(30) NOT NULL, Lastname varchar(30) NOT NULL ); DECLARE @Table2 TABLE( Course varchar(30) NULL ); ``` Declare @variable varchar(max) -- variable for saving the cursor value and then set the course1 to 4 I want at highest 4 results/course instances which I later order by the period of the year ``` declare myCursor1 CURSOR for SELECT top 4 period from Course instance where year = @year open myCursor1 fetch next from myCursor1 into @variable --print @variable while @@fetch_status = 0 Begin UPDATE @Table2 SET InstanceCourse1 = @variable where current of myCursor1 fetch next from myCursor1 into @variable print @variable End Close myCursor1 deallocate myCursor1 insert into @table1 select 'Firstname', 'Lastname' insert into @table1 select Firstname, Lastname from staff order by Lastname END select * from @Table1 -- for testing purposes select * from @Table2 -- for testing purposes --Then i want to combine these tables into the output at the top ``` This is how far I've gotten, I don't know how to get the courses into the columns and then get the amount of hours for each staff member. 
If anyone can help guide me in the right direction I would be very grateful. My idea about the cursor was to get the top (0-4) values from the top4 course periods during that year and then add them to the `@table2`.
Ok. This is not pretty. It is a really ugly dynamic sql, but in my testing it seems to be working. I have created an extra subquery to get the courses values as the first row and then Union with the rest of the result. The top four courses are gathered by using ROW\_Number() and order by Year and period. I had to make different versions of the courses string I am creating in order to use them for both column names, and in my pivot. Give it a try. Hopefully it will work on your data as well. ``` DECLARE @Year INT SET @Year = 2014 DECLARE @Query NVARCHAR(2000) DECLARE @CoursesColumns NVARCHAR(2000) SET @CoursesColumns = (SELECT '''' + Course + ''' as c' + CAST(ROW_NUMBER() OVER(ORDER BY Year, Period) AS nvarchar(50)) + ',' AS 'data()' FROM AttendedBy where [Year] = @Year for xml path('')) SET @CoursesColumns = LEFT(@CoursesColumns, LEN(@CoursesColumns) -1) SET @CoursesColumns = CASE WHEN CHARINDEX('c1', @CoursesColumns) = 0 THEN @CoursesColumns + 'NULL as c1, NULL as c2, NULL as c3, NULL as c4' WHEN CHARINDEX('c2', @CoursesColumns) = 0 THEN @CoursesColumns + ',NULL as c2, NULL as c3, NULL as c4' WHEN CHARINDEX('c3', @CoursesColumns) = 0 THEN @CoursesColumns + ', NULL as c3, NULL as c4' WHEN CHARINDEX('c4', @CoursesColumns) = 0 THEN @CoursesColumns + ', NULL as c4' ELSE @CoursesColumns END DECLARE @Courses NVARCHAR(2000) SET @Courses = (SELECT Course + ' as c' + CAST(ROW_NUMBER() OVER(ORDER BY Year, Period) AS nvarchar(50)) + ',' AS 'data()' FROM AttendedBy where [Year] = @Year for xml path('')) SET @Courses = LEFT(@Courses, LEN(@Courses) -1) SET @Courses = CASE WHEN CHARINDEX('c1', @Courses) = 0 THEN @Courses + 'NULL as c1, NULL as c2, NULL as c3, NULL as c4' WHEN CHARINDEX('c2', @Courses) = 0 THEN @Courses + ',NULL as c2, NULL as c3, NULL as c4' WHEN CHARINDEX('c3', @Courses) = 0 THEN @Courses + ', NULL as c3, NULL as c4' WHEN CHARINDEX('c4', @Courses) = 0 THEN @Courses + ', NULL as c4' ELSE @Courses END DECLARE @CoursePivot NVARCHAR(2000) SET @CoursePivot = (SELECT 
Course + ',' AS 'data()' FROM AttendedBy where [Year] = @Year for xml path('')) SET @CoursePivot = LEFT(@CoursePivot, LEN(@CoursePivot) -1) SET @Query = 'SELECT Name, LastName, c1, c2, c3, c4 FROM ( SELECT ''Name'' as name, ''LastName'' as lastname, ' + @CoursesColumns + ' UNION SELECT Name, LastName,' + @Courses + ' FROM( SELECT s.Name ,s.LastName ,ci.Course ,ci.Year ,ci.Period ,CAST(ab.Hours AS NVARCHAR(100)) AS Hours FROM Staff s LEFT JOIN AttendedBy ab ON s.SSN = ab.SSN LEFT JOIN CourseInstance ci ON ab.Course = ci.Course WHERE ci.Year=' + CAST(@Year AS nvarchar(4)) + ' ) q PIVOT( MAX(Hours) FOR Course IN (' + @CoursePivot + ') )q2 )q3' SELECT @Query execute(@Query) ``` Edit: Added some where clauses so only courses from given year is shown. Added Screenshot of my results. ![enter image description here](https://i.stack.imgur.com/lH49u.png)
Try this (note the join condition must relate the two tables, `a.SSN = s.SSN`): ``` DECLARE @CourseNameString varchar(max), @query AS NVARCHAR(MAX); SET @CourseNameString='' select @CourseNameString = STUFF((SELECT distinct ',' + QUOTENAME(Course) FROM Attended where [Year]= 2013 FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = ' select Name,LastName,'+@CourseNameString+' from Staff as e inner join ( SELECT * FROM (SELECT [Hours],a.SSN,a.Course as c FROM Attended as a inner JOIN Staff as s ON a.SSN = s.SSN) p PIVOT(max([Hours])FOR c IN ('+@CourseNameString+')) pvt)p ON e.SSN = p.SSN' execute(@query) ```
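PIVOT is SQL Server syntax; the portable equivalent is conditional aggregation, sketched here in SQLite with the question's sample data (course names hard-coded where the dynamic SQL above builds them at runtime):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Staff (Name TEXT, LastName TEXT, SSN INTEGER)")
con.execute("CREATE TABLE Attended (Course TEXT, SSN INTEGER, Hours INTEGER)")
con.executemany("INSERT INTO Staff VALUES (?, ?, ?)",
                [("Steve", "Lastname", 234), ("Pete", "Steven", 132)])
con.executemany("INSERT INTO Attended VALUES (?, ?, ?)",
                [("DVA123", 234, 200), ("DVA222", 132, 50)])

# One output column per course: MAX(CASE ...) picks the hours for that course.
rows = con.execute("""
    SELECT s.Name, s.LastName,
           MAX(CASE WHEN a.Course = 'DVA123' THEN a.Hours END) AS DVA123,
           MAX(CASE WHEN a.Course = 'DVA222' THEN a.Hours END) AS DVA222
    FROM Staff s
    LEFT JOIN Attended a ON a.SSN = s.SSN
    GROUP BY s.Name, s.LastName
    ORDER BY s.Name
""").fetchall()
print(rows)
# [('Pete', 'Steven', None, 50), ('Steve', 'Lastname', 200, None)]
```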
Stored procedure that returns a table from 2 combined table variables
[ "", "sql", "sql-server-2008", "stored-procedures", "cursor", "" ]
I have the following statement ``` SELECT di_id, di_name, di_location, ig_name, in_latitude, in_longitude FROM dam_info LEFT JOIN instrument_group ON ig_diid = di_id LEFT JOIN instruments ON in_igid = ig_id; ``` which returned the result as follows ``` di_id di_name di_location ig_name in_latitude in_longitude 13 Macap "Kluang, Johor" "Standpipe Piezometer" 1.890895 103.266853 13 Macap "Kluang, Johor" "Standpipe Piezometer" 1.888353 103.267067 1 "Timah Tasoh" "Kangar, Perlis" NULL NULL NULL 2 "Padang Saga" "Langkawi, Kedah" NULL NULL NULL 3 "Bukit Kwong" "Pasir Mas, Kelantan" NULL NULL NULL 4 "Bukit Merah" "Kerian, Perak" NULL NULL NULL 5 Gopeng "Gopeng, Perak" NULL NULL NULL 6 Repas "Bentong, Pahang" NULL NULL NULL 7 Batu "Gombak, Selangor" NULL NULL NULL 8 Pontian "Rompin, Pahang" NULL NULL NULL 9 "Anak Endau" "Rompin, Pahang" NULL NULL NULL 10 Labong "Mersing, Johor" NULL NULL NULL 11 Bekok "Batu Pahat, Johor" NULL NULL NULL 12 Sembrong "Batu Pahat, Johor" NULL NULL NULL 14 Perting "Bentong, Pahang" NULL NULL NULL 15 Beris "Sik, Kedah" NULL NULL NULL ``` As you can see from the result, there are repeated rows; I would like to eliminate all but one of those, and if the `instruments` table has more rows to return, then I only want one. What is the correct statement to achieve that? Thanks in advance
You could also use a GROUP BY clause: ``` SELECT di_id, di_name, di_location, ig_name, in_latitude, in_longitude FROM dam_info LEFT JOIN instrument_group ON ig_diid = di_id LEFT JOIN instruments ON in_igid = ig_id GROUP BY di_id; ``` Note this relies on MySQL's permissive GROUP BY handling: the non-grouped columns are taken from an arbitrary row within each group.
Looking at your schema it seems to me that it is on purpose that one `di_id` contains multiple instruments, i.e. the `instruments` table contains multiple records per `in_igid`, as the phrase 'instrument\_group' suggests. One meaningful thing you could do is to return the bounding box of your instruments: ``` SELECT di_id, di_name, di_location, ig_name, MIN(in_latitude) as min_latitude, MAX(in_latitude) as max_latitude, MIN(in_longitude) as min_longitude, MAX(in_longitude) as max_longitude FROM dam_info left join instrument_group on ig_diid = di_id left join instruments on in_igid = ig_id GROUP BY di_id, di_name, di_location, ig_name; ```
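The aggregation idea can be sketched quickly (SQLite, with just the two repeated instrument rows from the question's output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE instruments (in_igid INTEGER, in_latitude REAL, in_longitude REAL)")
con.executemany("INSERT INTO instruments VALUES (?, ?, ?)",
                [(1, 1.890895, 103.266853), (1, 1.888353, 103.267067)])

# Collapse the repeated rows into one bounding box per group.
row = con.execute("""
    SELECT in_igid,
           MIN(in_latitude)  AS min_lat, MAX(in_latitude)  AS max_lat,
           MIN(in_longitude) AS min_lon, MAX(in_longitude) AS max_lon
    FROM instruments
    GROUP BY in_igid
""").fetchone()
print(row)  # (1, 1.888353, 1.890895, 103.266853, 103.267067)
```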
SQL statement Join three tables
[ "", "mysql", "sql", "innodb", "tokudb", "" ]
Below is my query which I am running against Postgres database. ``` select distinct(col1),col2,col3,col4 from tableNot where col5 = 100 and col6 = '78' order by col4 DESC limit 100; ``` And below is the output I am getting back on the console- ``` col1 col2 col3 col4 entry.one 1.2.3 18 subject entry.one 1.2.8 18 newSubject entry.two 3.4.9 20 lifePerfect entry.two 3.4.5 17 helloPartner ``` * Now if you see my above output, `entry.one` and `entry.two` is coming twice so I would compare `col2` value for `entry.one` and whichever is higher, I will keep that row. So for `entry.one` I would compare `1.2.3` with `1.2.8` and `1.2.8` is higher for `entry.one` so I would keep this row only for `entry.one`. * Similarly with `entry.two`, I would compare `3.4.9` with `3.4.5` and `3.4.9` is higher for `entry.two` so I would keep this row only for `entry.two`. And below is the output I would like to see on the console - ``` col1 col2 col3 col4 entry.one 1.2.8 18 newSubject entry.two 3.4.9 20 lifePerfect ``` Is this possible to do in SQL? P.S Any Fiddle example would be great.
You are probably looking for `distinct on`, which is a Postgres-specific extension to the `distinct` operator. ``` select distinct on (col1) col1, col2, col3, col4 from tableNot where col5 = 100 and col6 = '78' order by col1, col2 DESC limit 100; ``` The `order by col1, col2 DESC` makes `distinct on` keep the row with the highest col2 for each col1. <http://sqlfiddle.com/#!15/04698/1> --- Please note that `distinct` is **not** a function. `select distinct (col1), col2, col3` is exactly the same thing as `select distinct col1, col2, col3`. The (standard SQL) `distinct` operator always operates on *all* columns of the select list, not just one. The difference between `select distinct (col1), col2` and `select distinct col1, col2` is the same as the difference between `select (col1), col2` and `select col1, col2`.
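Since `distinct on` is Postgres-specific, here is a portable sketch of the same greatest-row-per-group result (SQLite, with the question's sample rows; note that taking MAX of the version string works here only because these particular versions compare correctly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableNot (col1 TEXT, col2 TEXT, col3 INTEGER, col4 TEXT)")
con.executemany("INSERT INTO tableNot VALUES (?, ?, ?, ?)", [
    ("entry.one", "1.2.3", 18, "subject"),
    ("entry.one", "1.2.8", 18, "newSubject"),
    ("entry.two", "3.4.9", 20, "lifePerfect"),
    ("entry.two", "3.4.5", 17, "helloPartner"),
])

# Keep the row whose col2 is the maximum within its col1 group.
rows = con.execute("""
    SELECT t.* FROM tableNot t
    WHERE t.col2 = (SELECT MAX(x.col2) FROM tableNot x WHERE x.col1 = t.col1)
    ORDER BY t.col1
""").fetchall()
print(rows)
# [('entry.one', '1.2.8', 18, 'newSubject'), ('entry.two', '3.4.9', 20, 'lifePerfect')]
```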
Try this query: ``` SELECT * FROM (select distinct(col1),col2,col3,col4 from tableNot where col5 = 100 and col6 = '78' order by col4 DESC limit 100)t1 WHERE t1.col2 = (SELECT MAX(t2.col2) FROM (select distinct(col1),col2,col3,col4 from tableNot where col5 = 100 and col6 = '78' order by col4 DESC limit 100)t2 WHERE t1.col1 = t2.col1 group by t2.col1) ``` ## [SQL Fiddle](http://sqlfiddle.com/#!15/47c3b/2)
How to compare two rows of same entry in PostgreSQL?
[ "", "sql", "postgresql", "comparison", "aggregate-functions", "" ]
I have a table of bus routes. This table has fields like bus no., route code, starting point, end point, and up to 10 halts (halt1, halt2, ..., halt10). I have filled this table with data. Now I want to select all rows containing two given values, for example jaipur and vasai. In my table there are two rows that have both jaipur and vasai: in one row, jaipur is in column halt2 and vasai in halt9; similarly, another row has jaipur in the halt4 column and vasai in the halt10 column. Please help me find the SQL query. I am using MS SQL Server. Script: ``` USE [JaipuBus] GO /****** Object: Table [dbo].[MyRoutes] Script Date: 02/24/2014 13:28:54 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[MyRoutes]( [id] [int] IDENTITY(1,1) NOT NULL, [Route_No] [nvarchar](50) NULL, [Route_Code] [nvarchar](50) NULL, [Color] [nvarchar](50) NULL, [Start_Point] [nvarchar](200) NULL, [End_Point] [nvarchar](200) NULL, [halt1] [nvarchar](50) NULL, [halt2] [nvarchar](50) NULL, [halt3] [nvarchar](50) NULL, [halt4] [nvarchar](50) NULL, [halt5] [nvarchar](50) NULL, [halt6] [nvarchar](50) NULL, [halt7] [nvarchar](50) NULL, [halt8] [nvarchar](50) NULL, [halt9] [nvarchar](50) NULL, [halt10] [nvarchar](50) NULL, CONSTRAINT [PK_MyRoutes] PRIMARY KEY CLUSTERED ( [id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ``` ![enter image description here](https://i.stack.imgur.com/x7e96.png)
Use `CONTAINS` (note it needs a FROM clause and requires a full-text index covering the searched columns): ``` SELECT * FROM MyRoutes WHERE CONTAINS((Start_Point,End_Point,halt1,halt2,halt3,halt4,halt5,halt6,halt7,halt8,halt9,halt10), 'jaipur') AND CONTAINS((Start_Point,End_Point,halt1,halt2,halt3,halt4,halt5,halt6,halt7,halt8,halt9,halt10), 'vasai'); ```
``` SELECT * FROM bus_rout WHERE (halt1='aaa' OR halt2='aaa' OR .... halt10='aaa') AND (halt1='bbb' OR halt2='bbb' OR ....... halt10='bbb') ``` The where clause could be generated by code.
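The "generated by code" remark can be sketched like this (SQLite, a hypothetical minimal version of the table, parameterized to avoid SQL injection): one OR-chain per city, ANDed together.

```python
import sqlite3

halts = [f"halt{i}" for i in range(1, 11)]
cols = ["Start_Point", "End_Point"] + halts

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyRoutes (id INTEGER, "
            + ", ".join(f"{c} TEXT" for c in cols) + ")")
# Route 1 passes both cities; route 2 passes only one of them.
row1 = [1] + ["" for _ in cols]
row1[cols.index("halt2") + 1] = "jaipur"
row1[cols.index("halt9") + 1] = "vasai"
row2 = [2] + ["" for _ in cols]
row2[cols.index("halt4") + 1] = "jaipur"
ph = ", ".join("?" for _ in range(len(cols) + 1))
con.executemany(f"INSERT INTO MyRoutes VALUES ({ph})", [row1, row2])

def any_col_matches():
    # One OR-chain over every stop column, as in the answer above.
    return "(" + " OR ".join(f"{c} = ?" for c in cols) + ")"

sql = f"SELECT id FROM MyRoutes WHERE {any_col_matches()} AND {any_col_matches()}"
params = ["jaipur"] * len(cols) + ["vasai"] * len(cols)
print(con.execute(sql, params).fetchall())  # [(1,)]
```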
Select all rows from table having two given values anywhere in rows
[ "", "sql", "sql-server-2008-r2", "" ]
I have two tables (I'll list only the fields that I want to search). MySQL version: 5.0.96-community ``` Table 1: Clients (860 rows) ID_CLIENT (varchar 10) ClientName (text) Table 2: Details (22380 rows) ID_CLIENT (varchar 10) Details (varchar 1000) ``` The `Details` table can have multiple rows for the same client. I need to search those two tables and retrieve the ids of clients that match a search value. If I have a search value "blue", it has to match `ClientName` (ex the `Blue`berries Corp), or the `Details` in the second table (ex "it has a `blue` logo"). The result should be a list of client ids that match the criteria. If I make a query for one table, it takes a decent time ``` #0.002 sec select a.`ID_CLIENT` from clienti a where a.`ClientName` LIKE '%blue%' #0.1 sec SELECT b.`ID_CLIENT` FROM Details b WHERE b.`Details` LIKE '%blue%' GROUP BY b.`GUID_CLIENT` ``` But if I try to join those two queries it takes ages. My question(s): 1. What's the best way of doing what I need here, to get a list of IDs based on the search result from both tables 2. What to change to improve search performance in the Details table; I know that %..% is not fast, but I need partial matches too. Edit (based on the answers) ``` #~0.2 sec (SELECT a.`ID_CLIENT` FROM `clienti` a where a.`ClientName` like '%blue%') union (SELECT distinct b.`ID_CLIENT` FROM `Details` b where b.`Details` like '%blue%') ``` It returns a list of IDs from both tables filtered by the search value. Edit 2: final query And with that list of ids I can filter the client table, to get only the clients that are in both tables based on their id ``` select cl.`ID_CLIENT`, `ClientName`, `OtherField` from clients cl join ((SELECT a.`ID_CLIENT` FROM `clients` a where a.`ClientName` like '%blue%') union (SELECT distinct b.`ID_Client` FROM `Details` b where b.`Detail` like '%blue%' )) rez on cl.`ID_CLIENT` = rez.`ID_CLIENT` ```
If your two queries work, just use `union`: ``` select a.`ID_CLIENT` from clienti a where a.`ClientName` LIKE '%blue%' union SELECT b.`ID_CLIENT` FROM Details b WHERE b.`Details` LIKE '%blue%'; ``` The `union` will remove all duplicates, so you don't need a separate `group by` query. Why are the two search strings different for the two tables? The question suggests searching for `blue` in both of them. If the individual queries don't perform well, you might need to switch to a full text index.
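A runnable sketch of the union (SQLite, hypothetical ids; note a client matching in both tables still appears only once):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clienti (ID_CLIENT TEXT, ClientName TEXT)")
con.execute("CREATE TABLE Details (ID_CLIENT TEXT, Details TEXT)")
con.executemany("INSERT INTO clienti VALUES (?, ?)",
                [("c1", "Blueberries Corp"), ("c2", "Red Inc")])
con.executemany("INSERT INTO Details VALUES (?, ?)",
                [("c1", "it has a blue logo"), ("c2", "a blue banner")])

# UNION (unlike UNION ALL) removes duplicate ids across both branches.
ids = con.execute("""
    SELECT ID_CLIENT FROM clienti WHERE ClientName LIKE '%blue%'
    UNION
    SELECT ID_CLIENT FROM Details WHERE Details LIKE '%blue%'
""").fetchall()
print(sorted(ids))  # [('c1',), ('c2',)]
```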
If both queries are functioning as you wish, simply union the results together. This is much faster than 'or' on a query that cannot effectively use indexes and it will also allow you to remove duplicates in the same statement.
SQL query searching two tables, performance issue
[ "", "sql", "sqlperformance", "qsqlquery", "" ]
I have a table t, which has an "after insert" trigger called trgInsAfter. How exactly do I debug it? I'm no expert on this, so the question and steps performed might look silly. The steps I performed so far are:

1. connect to the `server instance` via `SSMS` (using a Windows Admin account)
2. right click the trigger node in the left-hand tree in SSMS and double click to open it; the code of the trigger is opened in a new query window (call this Window-1) as: blah....,

```
ALTER TRIGGER trgInsAfter
AS
....
BEGIN
...
END
```

3. open another query window (call this Window-2), enter the SQL to insert a row into table t:

```
insert t(c1,c2) values(1,'aaa')
```

4. set a break point in Window-1 (in the trigger's code)
5. set a break point in Window-2 (the insert SQL code)
6. click the Debug button on the toolbar while Window-2 is the current window

the insert SQL code's breakpoint is hit, but when I look at Window-1, the break point in the trigger's code has a tooltip saying `'unable to bind SQL breakpoint, object containing the breakpoint not loaded'`

I can sort of understand the issue: how can `SSMS` know that the code in Window-1 is the trigger I want to debug? I can't see where to tell SSMS 'hey, the code in this query editor is table t's insert trigger's code'

Any suggestions? Thanks
You're actually over-thinking this. I first run this query in one window (to set things up): ``` create table X(ID int not null) create table Y(ID int not null) go create trigger T_X on X after insert as insert into Y(ID) select inserted.ID go ``` I can then discard that window. I open a new query window, write: ``` insert into X(ID) values (1),(2) ``` And set a breakpoint on that line. I then *start* the debugger (`Debug` from menu or toolbar or Alt-F5) and wait (for a while, the debugger's never been too quick) for it to hit that breakpoint. And then, having hit there, I choose to `Step Into` (F11). And lo (after another little wait) a new window is opened which is my trigger, and the next line of code where the debugger stops is the `insert into Y...` line in the trigger. I can now set any further breakpoints I want to within the trigger.
There is a DEBUG menu in SSMS, but you'll likely need to be on the server to be able to debug, so if it is remote access, it's probably not going to be set up for it. That debug option will allow you to execute code, and step into your trigger and debug it in that manner (as you'd debug most any other code). ![Debug menu](https://i.stack.imgur.com/FFDuQ.png) If you don't have access to the debug menu/function, you'll have to debug "manually": First ensure your trigger is running correctly by inserting the input of the trigger into a debug table. Then you can verify that it's called correctly. Then you can debug the query of the trigger as you would any other SQL query, using the values from the debug table.
How to debug a T-SQL trigger?
[ "", "sql", "" ]
I have this table: ``` ID | name | result | -------------------- 1 | A | 1 | -------------------- 2 | B | 2 | -------------------- 3 | C | 1 | -------------------- 1 | A | 2 | -------------------- 4 | E | 2 | -------------------- ``` I want to add a new temporary column next to |result|, and where result=1 the value should be 100, and where result=2 the value should be 80 so it should look like this: ``` ID | name | result | NewColumn| ------------------------------- 1 | A | 1 | 100 | ------------------------------- 2 | B | 2 | 80 | ------------------------------- 3 | C | 1 | 100 | ------------------------------- 1 | A | 2 | 80 | ------------------------------- 4 | E | 2 | 80 | ------------------------------- ``` How can I query this in SQL ?
Use a [`CASE` expression](http://technet.microsoft.com/en-us/library/ms181765.aspx) in your `SELECT`'s column list - something like this: ``` SELECT ID, name, result, CASE WHEN result = 1 THEN 100 WHEN result = 2 THEN 80 ELSE NULL END AS NewColumn FROM YourTable ``` Add additional `WHEN` expressions or alter the `ELSE` expression as needed.
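The linked page is SQL Server's, but the expression is portable; here is the same query replayed on the question's sample rows with SQLite (a sketch, not part of the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (ID INT, name TEXT, result INT)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?, ?)",
                 [(1, 'A', 1), (2, 'B', 2), (3, 'C', 1), (1, 'A', 2), (4, 'E', 2)])

# the CASE expression computes the temporary column on the fly;
# nothing is added to the stored table
rows = conn.execute("""
    SELECT ID, name, result,
           CASE WHEN result = 1 THEN 100
                WHEN result = 2 THEN 80
                ELSE NULL
           END AS NewColumn
    FROM YourTable
""").fetchall()
```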
You could add a `case` statement to your query: ``` SELECT id, name, result, CASE result WHEN 1 THEN 100 WHEN 2 THEN 80 ELSE NULL END from my_table ```
Add a temporary column in SQL where the values depend on another column
[ "", "mysql", "sql", "mysql-workbench", "" ]
I use a SQL Server 2008 database. I have two tables with columns like these:

`Table A`:

```
request_id|order_id|e-mail
100       |2567    |jack@abc.com
100       |4784    |jack@abc.com
450       |2578    |lisa@abc.com
450       |8432    |lisa@abc.com
600       |9032    |john@abc.com
600       |9033    |john@abc.com
```

`Table B` also has `request_id` and `order_id` columns and many other columns:

`Table B`:

```
request_id|order_id|e-mail
100       |2563    |oscar@abc.com
300       |4784    |peter@abc.com
600       |9032    |john@abc.com
650       |2578    |bob@abc.com
850       |8432    |alice@abc.com
```

As you can see, a given `request_id` in table A can occur more than once (see the 100 & 450 records). I need to find all records from table A which are not present in table B by `order_id`, but have equal `request_id` column values. For the above example I expect something like this:

`Output`:

```
request_id|order_id|e-mail
100       |2567    |jack@abc.com
100       |4784    |jack@abc.com
450       |2578    |lisa@abc.com
450       |8432    |lisa@abc.com
600       |9033    |john@abc.com
```

As you can see, the above records from table A are not present in table B. This criterion is only satisfied by the records with request_id=600. I created a sketch of a T-SQL query:

```
select D.request_id, D.order_id
from table A AS D
where D.request_id = 600 and
      D.order_id not in (select M.order_id
                         from table B AS M
                         where M.request_id = 600)
```

Unfortunately I have no idea how to transform my query for all `request_id` values. My first thought was to use a while loop over all `request_id` values from table A, but that doesn't seem smart in the SQL world. Thank you for any help
Try this - ``` SELECT a.* FROM table_a a LEFT JOIN table_b b ON a.request_id = b.request_id AND a.order_id = b.order_id WHERE b.request_id IS NULL ``` Check here - [SQL Fiddle](http://sqlfiddle.com/#!6/7dec7/1/0)
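Not from the answer — the same anti-join replayed on the question's sample rows (e-mail column omitted for brevity) in an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (request_id INT, order_id INT)")
conn.execute("CREATE TABLE table_b (request_id INT, order_id INT)")
conn.executemany("INSERT INTO table_a VALUES (?, ?)",
                 [(100, 2567), (100, 4784), (450, 2578), (450, 8432),
                  (600, 9032), (600, 9033)])
conn.executemany("INSERT INTO table_b VALUES (?, ?)",
                 [(100, 2563), (300, 4784), (600, 9032), (650, 2578), (850, 8432)])

# keep the rows of A that found no (request_id, order_id) partner in B
rows = conn.execute("""
    SELECT a.request_id, a.order_id
    FROM table_a a
    LEFT JOIN table_b b
      ON a.request_id = b.request_id AND a.order_id = b.order_id
    WHERE b.request_id IS NULL
""").fetchall()
```

Only (600, 9032) has an exact partner in B, so it is the single row excluded.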
```
select request_id, order_id
from table_a
except
select request_id, order_id
from table_b
```

EDIT: the following does not work in MS SQL. If you want the email addresses as well:

```
select request_id, order_id, email
from table_a
where (request_id, order_id) not in (
    select request_id, order_id
    from table_b
)
```
How to find records from table A that are not present in table B?
[ "", "sql", "database", "sql-server-2008", "t-sql", "" ]
I have several values in my `name` column within the `contacts` table similar to this one: `test 3100509 DEMO NPS` I want to return only the numeric piece of each value from `name`. I tried this: `select substring(name FROM '^[0-9]+|.*') from contacts` But that doesn't do it. Any thoughts on how to strip all characters that are not numeric from the returned values?
Try this : ``` select substring(name FROM '[0-9]+') from contacts ```
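`substring(... FROM pattern)` is Postgres-specific, but the pattern itself can be sanity-checked anywhere; an equivalent extraction in Python's `re` module (a sketch, not part of the answer):

```python
import re

def numeric_part(name):
    """Return the first run of digits in name, or None if there is none."""
    match = re.search(r'[0-9]+', name)
    return match.group(0) if match else None
```

For the question's sample value, `numeric_part('test 3100509 DEMO NPS')` picks out the digits only.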
`select regexp_replace(name , '[^0-9]*', '', 'g') from contacts;` This should do it. It will work even if you have more than one numeric sequences in the name. Example: ``` create table contacts(id int, name varchar(200)); insert into contacts(id, name) values(1, 'abc 123 cde 555 mmm 999'); select regexp_replace(name , '[^0-9]*', '', 'g') from contacts; ```
Return only numeric values from column - Postgresql
[ "", "sql", "postgresql", "pattern-matching", "" ]
I have multiple databases with the same tables. I have a table called Invoices. The way I am doing my query now is like:

```
Select * from [Db1].dbo.Invoices where [Id] = 'someId'
UNION ALL
Select * from [Db2].dbo.Invoices where [Id] = 'someId'
UNION ALL
Select * from [Db3].dbo.Invoices where [Id] = 'someId'
```

That query throws an error if Db3 does not exist, for example. I was hoping to create something like

```
IF db_id('Db1') is not null -- if database Db1 exists
Select * from [Db1].dbo.Invoices where [Id] = 'someId'
IF db_id('Db2') is not null
UNION ALL
Select * from [Db2].dbo.Invoices where [Id] = 'someId'
IF db_id('Db3') is not null
UNION ALL
Select * from [Db3].dbo.Invoices where [Id] = 'someId'
```

That query does not work, but I hope it illustrates what I am trying to accomplish

## Edit

**Thanks a lot for the help! If the first database is not found, my query will start with `UNION ALL`, thus giving an error. How can I prevent that?**
You can use dynamic SQL for that ``` DECLARE @sql NVARCHAR(MAX) = '' IF db_id('Db1') is not null -- if database Db1 exists SET @sql = @sql + 'Select * from [Db1].dbo.Invoices where [Id] = ''someId''' IF db_id('Db2') is not null BEGIN IF LEN(@sql) > 0 SET @sql = @sql + N' UNION ALL ' SET @sql = @sql + 'Select * from [Db2].dbo.Invoices where [Id] = ''someId''' END IF db_id('Db3') is not null BEGIN IF LEN(@sql) > 0 SET @sql = @sql + N' UNION ALL ' SET @sql = @sql + 'Select * from [Db3].dbo.Invoices where [Id] = ''someId''' END exec sp_executesql @sql ```
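The `LEN(@sql) > 0` guard is what keeps the statement from ever starting with `UNION ALL`. Outside T-SQL the same assembly idea is just a join over the existing parts — a Python sketch, with an invented `existing_dbs` set standing in for the `db_id()` checks:

```python
def build_invoice_query(existing_dbs, wanted=("Db1", "Db2", "Db3")):
    """Build one SELECT per database that exists, glued with UNION ALL."""
    parts = [
        f"SELECT * FROM [{db}].dbo.Invoices WHERE [Id] = @id"
        for db in wanted
        if db in existing_dbs
    ]
    # join() puts the separator only *between* parts, so the query can
    # never begin with UNION ALL, no matter which databases are missing
    return " UNION ALL ".join(parts)
```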
You can't do this because SQL server parses the query before execution and is trying to validate the dbs/tables in your select script. So the only way you can do this is to do it using dynamic SQL: ``` DECLARE @sql NVARCHAR(MAX) SET @sql = '' IF db_id('Db1') is not null -- if database Db1 exists SET @sql+='Select * from [Db1].dbo.Invoices where [Id] = ''someId''' IF db_id('Db2') is not null SET @sql+='UNION ALL Select * from [Db2].dbo.Invoices where [Id] = ''someId''' IF db_id('Db3') is not null SET @sql+='UNION ALL Select * from [Db3].dbo.Invoices where [Id] = ''someId''' EXEC (@sql) ``` Stu.
Union only if database exists
[ "", "sql", "sql-server", "" ]
I am trying to assign a value returned from a SQL command in Visual Studio to a variable. This will then allow me to have a tiered login system. My current code is:

```
Imports System.Data.SqlClient
Imports System.Data.OleDb
Public Class Form1

    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles cmdLogin.Click
        Dim Con As SqlConnection
        Dim cmd As New OleDbCommand
        Dim sqlstring As String
        Dim connstring As String
        Dim ds As DataSet
        Dim da As SqlDataAdapter
        Dim inc As Integer = 0
        Dim userlevel As Integer

        connstring = "Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\Assignment.mdf;Integrated Security=True;Connect Timeout=30"
        Con = New SqlConnection(connstring)
        Con.Open()
        sqlstring = ("SELECT level FROM Users WHERE Id='" & _txtUsername.Text & "' and pass='" & _txtPassword.Text & "'")
        da = New SqlDataAdapter(sqlstring, Con)
        ds = New DataSet
        da.Fill(ds, "Users")
        If ds.Tables(0).Rows.Count > 0 Then
            MsgBox("Username Correct. Welcome!")
        Else
            MsgBox("Hackers are not welcome! Shoooo")
        End If
    End Sub
```

This works. However, I am unsure how to assign the level to a Visual Basic variable. Any help gratefully received.
You could do ``` If ds.Tables(0).Rows.Count > 0 userlevel = CInt(ds.Tables(0).Rows(0)("level")) ``` However, since your query returns a single value, I would use [`ExecuteScalar`](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executescalar%28v=vs.110%29.aspx) instead of filling the dataset. Note however that your query is susceptible to Sql Injection attacks. You should replace the bindings with a [parameterized approach](http://msdn.microsoft.com/en-us/library/yy6y35y8%28v=vs.110%29.aspx?cs-save-lang=1&cs-lang=vb#code-snippet-2).
You already have it in the table: ``` MessageBox.Show("Level = " & ds.Tables(0).Rows(0)("level")) ``` Btw., even though you do not welcome Hackers, Hackers would highly welcome your code. Read about SQL Injection, and then use parameters. Try ``` 0'; UPDATE Users SET pass='welcomeHackers'; -- ``` in the username field.
Assigning a value returned from a SQL command to a variable
[ "", "sql", "vb.net", "visual-studio-2013", "" ]
I have a table with records that I want to group. Some columns can contain slightly different values, for example:

ID---PRODUCT---AMOUNT
1----Candy--------23.44
1----CAND--------14.42
1----CAND---------8.18

I want to group the records by ID and sum the amount; finally, I want to ignore the product by simply saying I want to use the first value. This is my SQL statement.

```
SELECT [ID]
      ,[PRODUCT]
      ,sum([AMOUNT]) AMOUNT
FROM [TABLE]
GROUP BY [ID], [PRODUCT]
```

If I don't write [PRODUCT] in my group by I get this error:

> "Column 'PRODUCT' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause."
You need to aggregate all columns that are not in your `group by` clause. You could use `min()` to get the alphabetically smallest ``` SELECT [ID] ,min([PRODUCT]) PRODUCT ,sum([AMOUNT]) AMOUNT FROM [TABLE] GROUP BY [ID] ``` Or remove the `product` entirely ``` SELECT [ID], sum([AMOUNT]) AMOUNT FROM [TABLE] GROUP BY [ID] ```
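Not part of the answer — the `MIN`/`SUM` behaviour on the question's rows can be checked in SQLite (the table name is quoted because TABLE is a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "TABLE" (ID INT, PRODUCT TEXT, AMOUNT REAL)')
conn.executemany('INSERT INTO "TABLE" VALUES (?, ?, ?)',
                 [(1, 'Candy', 23.44), (1, 'CAND', 14.42), (1, 'CAND', 8.18)])

# MIN picks the alphabetically smallest product ('CAND' sorts before
# 'Candy' because uppercase letters compare lower than lowercase ones)
row = conn.execute("""
    SELECT ID, MIN(PRODUCT) AS PRODUCT, SUM(AMOUNT) AS AMOUNT
    FROM "TABLE"
    GROUP BY ID
""").fetchone()
```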
When you use GROUP BY, each column in the SELECT clause must either be defined via an aggregate function (like MAX, MIN, AVG, ...) or be included in the GROUP BY clause. Solution: add all columns in the SELECT clause to the GROUP BY clause, or remove the column that is causing the problem from the SELECT clause.
How to ignore different values in a column you want to group in SQL
[ "", "sql", "sql-server", "" ]
This is a table in MySQL.

sub_table:

```
id   name   parent_id
1    sub    0          // it means this sub has no parent
2    sub1   1          // it means the parent is sub
3    sub2   1
4    sub3   3
5    sub4   4
6    sub5   0
7    sub6   6
```

How can I give an ID number and get its root parent ID? For example:

if ID=5 return 1

if ID=6 return 6

if ID=7 return 6

```
SELECT id from table sub_table
WHILE parent_id != 0
BEGIN
...?..
END
```
With PHP, you can achieve this in the following way:

```
$id = 5; $parent_id = 5; /* set defaults to the values you want */

while ($parent_id != 0) {
    $sql = "SELECT id, parent_id FROM sub_table WHERE id = $parent_id";
    $row = $mysqli->query($sql)->fetch_assoc(); /* fetch details */

    $old_id = $row['id']; /* this saves the id fetched in the current iteration */
    $parent_id = $row['parent_id'];
}
```

So once you come out of the loop, you will have your result saved in `$old_id`.
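The same parent walk in plain Python, with a dict standing in for the table (this sketch is mine, not part of the answer):

```python
def root_parent(rows, start_id):
    """Follow parent_id links until a row whose parent_id is 0 is reached."""
    parents = {row_id: parent_id for row_id, parent_id in rows}
    current = start_id
    while parents[current] != 0:
        current = parents[current]
    return current

# (id, parent_id) pairs from the question's sub_table
sub_table = [(1, 0), (2, 1), (3, 1), (4, 3), (5, 4), (6, 0), (7, 6)]
```

`root_parent(sub_table, 5)` walks 5 → 4 → 3 → 1 and stops at 1, matching the question's expected results.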
``` DROP FUNCTION IF EXISTS HATEST; DELIMITER // CREATE FUNCTION HATEST(input_id INT UNSIGNED) RETURNS INT UNSIGNED BEGIN DECLARE in_id INT UNSIGNED; DECLARE v_pid INT UNSIGNED; SET in_id := input_id; WHILE in_id > 0 DO SET v_pid := in_id; SELECT parent_id into in_id FROM TABLE1 WHERE id = in_id LIMIT 1; END WHILE; RETURN v_pid; END// DELIMITER ; ```
Return parent ID in a nested table in sql
[ "", "mysql", "sql", "" ]
The sales person enters their daily reports of visits to the companies. The same companies may be visited by different sales persons. I want to display the company name, sales person name and their number of visits to the company. The problem with the below query is that it groups by `company_name` even though a company may have more than one sales person; I want to separate them.

```
SELECT `employee_name`, COUNT(`cid`)
FROM `tbl_reports`
GROUP BY `company_name`
```

Thank you!
You want to group it by `company_name` **and** `employee_name` - just do it. ``` SELECT `company_name`, `employee_name`, COUNT(`cid`) FROM `tbl_reports` GROUP BY `company_name`, `employee_name`; ```
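Not from the answer — a quick SQLite check with invented visit rows, showing one count per (company, employee) pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_reports (cid INT, company_name TEXT, employee_name TEXT)")
conn.executemany("INSERT INTO tbl_reports VALUES (?, ?, ?)",
                 [(1, 'Acme', 'Alice'), (2, 'Acme', 'Alice'), (3, 'Acme', 'Bob')])

# grouping by both columns keeps Alice's and Bob's visits separate
rows = conn.execute("""
    SELECT company_name, employee_name, COUNT(cid)
    FROM tbl_reports
    GROUP BY company_name, employee_name
""").fetchall()
```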
Add `company_name` to your query, because you need to display `company_name`:

```
SELECT `company_name`, `employee_name`, COUNT(`cid`)
FROM `tbl_reports`
GROUP BY `company_name`, `employee_name`;
```

# [SQL FIDDLE](http://sqlfiddle.com/#!2/a2e6c/1)
What is the SQL query for the below condition?
[ "", "mysql", "sql", "" ]
I have the following database schema

```
posts(id_post, post_content, creation_date)
comments(id_comment, id_post, comment_content, creation_date)
```

What I want to accomplish is to query all posts sorted by the most recent activity date, which basically means the first post fetched will be the latest post added OR the one with the latest comment added. Hence, my first thought was something like

```
Select * From posts P, comments C
WHERE P.id_post=c.id_post
GROUP BY P.id_post
ORDER BY C.creation_date
```

The problem is the following: if a post doesn't have a comment, it won't be fetched, because `P.id_post=C.id_post` won't match. Basically, the idea is the following: I order by creation_date. If there are comments, creation_date will be the comment's creation date; otherwise it will be the post's creation date. Any help?
Start by using proper `join` syntax. Then return appropriate columns. You are aggregating by `id_post` so there is only one row for each post. Putting an arbitrary comment on the same row would not (in general) be a sensible thing to do. The answer to your question, though, is to order by the greater of the two dates. The two dates are `p.creation_date` and `max(c.creation_date)`.

```
select P.*, max(C.creation_date)
From posts P left join
     comments C
     on P.id_post = C.id_post
group by P.id_post
order by greatest(P.creation_date, coalesce(max(C.creation_date), P.creation_date));
```

The `coalesce()` is necessary because of the `left outer join`; the comment creation date could be `NULL`. EDIT: If you assume that comments come after posts, you can simplify the `order by` to:

```
order by coalesce(max(C.creation_date), P.creation_date);
```
Use `LEFT JOIN` ``` Select * ,(CASE WHEN C.creation_date IS NULL THEN P.creation_date ELSE C.creation_date END) new_creation_date From posts P LEFT JOIN comments C ON (P.id_post=c.id_post) GROUP BY P.id_post ORDER BY new_creation_date DESC ``` or ``` Select * From posts P LEFT JOIN comments C ON (P.id_post=c.id_post) GROUP BY P.id_post ORDER BY CASE WHEN C.creation_date IS NULL THEN P.creation_date ELSE C.creation_date END DESC ```
SQL Query to order posts by post date OR comments date
[ "", "mysql", "sql", "join", "sql-order-by", "" ]
I have a query displaying a result, and I need to find the value of one of the columns in that query by its field name. The query displays all the fields in a table (*along with other fields from other tables*) using (**[Table name].\***) in the query builder (*because it has too many fields*). So if it is possible to add a criterion under this one, that would be helpful, but I also welcome any other suggestions.

***Edit:***

> I didn't post my problem clearly; here is the situation: the table "Default Payments" has a Name field and 60 other fields named like: 1/1/2013, 1/2/2013 ... 1/12/2017. Under each of these fields is the payment that should be paid on that date.
>
> Now my report should display the real payment from another table and the default payment, depending on a date given to it that matches the field names, for example let's say 1/10/2015.
>
> The query should find the field that is named like 1/10/2015 and give me its value. (The search is filtered based on the name and that date, so the query displays only one row with one value under that field.) The answers all depend on knowing the field name, but it will not be hard-coded in the query; it will be given to it.
OK, I think I was overthinking this problem; here is a way in case anyone faces something like it. The problem was providing the field name, which was changing, but I forgot to say it was calculated using the DateSerial function to get the first day of a date. And instead of searching in a query, I searched in the main table directly using the DLookup function. The field name property of that function uses DateSerial to get the column name, then the table name, and finally the criteria, which is the client name that is provided to the report, and it gave the right value I needed.

**Thanks all for your efforts.**
Just use this query...Is this what you want? ``` SELECT FieldName1,FieldName2 FROM TableName WHERE FieldName1='Whatever'; ``` Just select whatever field names you have, from just 1 to every field name.
Search a value from an Access query result by its column (field) name
[ "", "sql", "ms-access", "criteria", "" ]
I have the following three tables:

> **Customers**:
> Cust_ID,
> Cust_Name
>
> **Products**:
> Prod_ID,
> Prod_Price
>
> **Orders**:
> Order_ID,
> Cust_ID,
> Prod_ID,
> Quantity,
> Order_Date

How do I display each customer and how much they spent, excluding their very first purchase?

[A] - I can get the total by multiplying Products.Prod_Price and Orders.Quantity, then GROUP BY Cust_ID

[B] - I can also get the first purchase by using TOP 1 on Order_Date for each customer.

But I couldn't figure out how to produce [A]-[B] in one query. Any help will be greatly appreciated.
For SQL-Server 2005, 2008 and 2008R2: ``` ; WITH cte AS ( SELECT c.Cust_ID, c.Cust_Name, Amount = o.Quantity * p.Prod_Price, Rn = ROW_NUMBER() OVER (PARTITION BY c.Cust_ID ORDER BY o.Order_Date) FROM Customers AS c JOIN Orders AS o ON o.Cust_ID = c.Cust_ID JOIN Products AS p ON p.Prod_ID = o.Prod_ID ) SELECT Cust_ID, Cust_Name, AmountSpent = SUM(Amount) FROM cte WHERE Rn >= 2 GROUP BY Cust_ID, Cust_Name ; ``` For SQL-Server 2012, using the `FIRST_VALUE()` analytic function: ``` SELECT DISTINCT c.Cust_ID, c.Cust_Name, AmountSpent = SUM(o.Quantity * p.Prod_Price) OVER (PARTITION BY c.Cust_ID) - FIRST_VALUE(o.Quantity * p.Prod_Price) OVER (PARTITION BY c.Cust_ID ORDER BY o.Order_Date) FROM Customers AS c JOIN Orders AS o ON o.Cust_ID = c.Cust_ID JOIN Products AS p ON p.Prod_ID = o.Prod_ID ; ``` Another way (that works in 2012 only) using `OFFSET FETCH` and `CROSS APPLY`: ``` SELECT c.Cust_ID, c.Cust_Name, AmountSpent = SUM(x.Quantity * x.Prod_Price) FROM Customers AS c CROSS APPLY ( SELECT o.Quantity, p.Prod_Price FROM Orders AS o JOIN Products AS p ON p.Prod_ID = o.Prod_ID WHERE o.Cust_ID = c.Cust_ID ORDER BY o.Order_Date OFFSET 1 ROW -- FETCH NEXT -- not needed, -- 20000000000 ROWS ONLY -- can be removed ) AS x GROUP BY c.Cust_ID, c.Cust_Name ; ``` Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!6/42344/8)** Note that the second solution returns also the customers with only one order (with the `Amount` as `0`) while the other two solutions do not return those customers.
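Outside the database, the underlying 'drop the earliest row per customer, sum the rest' step comes down to a few lines; a plain-Python sketch of the logic with invented order tuples (it mirrors the idea, not any particular query above):

```python
from itertools import groupby
from operator import itemgetter

def spend_after_first(orders):
    """orders: (cust_id, order_date, amount) tuples, amount = quantity * price."""
    totals = {}
    ordered = sorted(orders, key=itemgetter(0, 1))   # by customer, then by date
    for cust, group in groupby(ordered, key=itemgetter(0)):
        amounts = [amount for _, _, amount in group]
        totals[cust] = sum(amounts[1:])              # skip the first purchase
    return totals
```

A customer with only one order ends up with a total of 0, matching the second solution's behaviour.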
Which version of SQL? If 2012 you might be able to do something interesting with OFFSET 1, but I'd have to ponder much more how that works with grouping.

---

EDIT: Adding a 2012-specific solution inspired by @ypercube

I wanted to be able to use OFFSET 1 within the WINDOW to do it all in one step, but the syntax I want isn't valid:

```
SUM(o.Quantity * p.Prod_Price) OVER (PARTITION BY c.Cust_ID ORDER BY o.Order_Date OFFSET 1)
```

Instead I can specify the row frame, but then have to filter the result set down to the correct rows. The query plan is different from @ypercube's, but they both show 50% when run together. They each run twice as fast as my original answer below.

```
WITH cte AS
(
    SELECT c.Cust_ID
        ,c.Cust_Name
        ,SUM(o.Quantity * p.Prod_Price) OVER(PARTITION BY c.Cust_ID ORDER BY o.Order_ID
            ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) AmountSpent
        ,rn = ROW_NUMBER() OVER(PARTITION BY c.Cust_ID ORDER BY o.Order_ID)
    FROM Customers AS c
    INNER JOIN Orders AS o
        ON o.Cust_ID = c.Cust_ID
    INNER JOIN Products AS p
        ON p.Prod_ID = o.Prod_ID
)
SELECT Cust_ID
    ,Cust_Name
    ,ISNULL(AmountSpent, 0) AmountSpent
FROM cte
WHERE rn = 1
```

---

My more general solution is similar to peter.petrov's, but his didn't work "out of the box" on my sample data. That might be an issue with my sample data or not. Differences include use of a CTE and a NOT EXISTS with a correlated subquery.
``` CREATE TABLE Customers (Cust_ID INT, Cust_Name VARCHAR(10)) CREATE TABLE Products (Prod_ID INT, Prod_Price MONEY) CREATE TABLE Orders (Order_ID INT, Cust_ID INT, Prod_ID INT, Quantity INT, Order_Date DATE) INSERT INTO Customers SELECT 1 ,'Able' UNION SELECT 2, 'Bob' UNION SELECT 3, 'Charlie' INSERT INTO Products SELECT 1, 10.0 INSERT INTO Orders SELECT 1, 1, 1, 1, GetDate() UNION SELECT 2, 1, 1, 1, GetDate() UNION SELECT 3, 1, 1, 1, GetDate() UNION SELECT 4, 2, 1, 1, GetDate() UNION SELECT 5, 2, 1, 1, GetDate() UNION SELECT 6, 3, 1, 1, GetDate() ;WITH CustomersFirstOrder AS ( SELECT Cust_ID ,MIN(Order_ID) Order_ID FROM Orders GROUP BY Cust_ID ) SELECT c.Cust_ID ,c.Cust_Name ,ISNULL(SUM(Quantity * Prod_Price),0) CustomerOrderTotalAfterInitialPurchase FROM Customers c LEFT JOIN ( SELECT Cust_ID ,Quantity ,Prod_Price FROM Orders o INNER JOIN Products p ON o.Prod_ID = p.Prod_ID WHERE NOT EXISTS (SELECT 1 FROM CustomersFirstOrder a WHERE a.Order_ID=o.Order_ID) ) b ON c.Cust_ID = b.Cust_ID GROUP BY c.Cust_ID ,c.Cust_Name DROP TABLE Customers DROP TABLE Products DROP TABLE Orders ```
Sum of all values except the first
[ "", "sql", "sql-server", "t-sql", "exception", "sum", "" ]
I have the following table in my SQL Server database:

```
Product Week       Units  Extra units effective  Extra units
A515    2014-01-11 51     2014-01-25             23.24
A515    2014-01-11 51     2014-01-11             25.86
A515    2014-01-18 52     2014-01-25             23.24
A515    2014-01-18 52     2014-01-11             25.86
A515    2014-01-25 50     2014-01-25             23.24
A515    2014-01-25 50     2014-01-11             25.86
A515    2014-02-01 45     2014-01-25             23.24
A515    2014-02-01 45     2014-01-11             25.86
```

The values in the `week` and `units` columns repeat, which I don't want. The duplicate records should be deleted. I want the `extra units effective` column to start at the earliest date corresponding to the `week` column. Basically, I want a table from the above table that looks like this:

```
Product Week       Units  Extra units effective  Extra units
A515    2014-01-11 51     2014-01-11             25.86
A515    2014-01-18 52     2014-01-11             25.86
A515    2014-01-25 50     2014-01-25             23.24
A515    2014-02-01 45     2014-01-25             23.24
```

I thought about building a query table from the original table to create the last table, but I am not sure how. Any help is much appreciated.
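Stated as a rule, the expected table keeps one row per (week, units) pair and attaches the latest 'extra units effective' date that is not after the week. A plain-Python sketch of that rule, using the question's dates (the code itself is not from the question):

```python
def pick_effective(week, effectives):
    """Latest (effective_date, extra_units) with effective_date <= week."""
    eligible = [e for e in effectives if e[0] <= week]  # ISO dates compare as text
    # fall back to the earliest entry if none is on or before the week
    return max(eligible) if eligible else min(effectives)

effectives = [('2014-01-11', 25.86), ('2014-01-25', 23.24)]
weeks = ['2014-01-11', '2014-01-18', '2014-01-25', '2014-02-01']
resolved = {w: pick_effective(w, effectives) for w in weeks}
```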
Just a slightly more complex use of [`ROW_NUMBER`](http://technet.microsoft.com/en-us/library/ms186734.aspx) and a CTE seem to be in order: ``` ;With Ordered as ( SELECT *,ROW_NUMBER() OVER (PARTITION BY Week,Units ORDER BY CASE WHEN [Exta units effective] >= Week THEN 0 ELSE 1 END, [Exta units effective]) as rn FROM [Unknown Table] ) select * from Ordered where rn = 1 ``` Normally this would just be an `ORDER BY` in the `ROW_NUMBER()` expression to select the earliest date for each `Week` and `Units` combiation, but I see that we need to exclude any `[Exta units effective]` which pre-date the `Week` value so I've added a `CASE` expression to filter those to the end of the row numbering.
You can do this with `row_number()`, which assigns a sequential value within each group ordered by the date you want. Then just choose the first one: ``` select product, week, units, extraunitseffective, extraunits from (select t.*, row_number() over (partition by product, week order by extraunitseffective) as seqnum from table t ) t where seqnum = 1; ```
Create a sequential table from another table in SQL Server
[ "", "sql", "sql-server", "" ]
I'm working on a simple ordering system in MySQL and I came across this snag that I'm hoping some SQL genius can help me out with. I have a table for Orders, Payments (with a foreign key reference to the Order table), and OrderItems (also, with a foreign key reference to the Order table) and what I would like to do is get the total outstanding balance (Total and Paid) for the Order with a single query. My initial thought was to do something simple like this: ``` SELECT Order.*, SUM(OrderItem.Amount) AS Total, SUM(Payment.Amount) AS Paid FROM Order JOIN OrderItem ON OrderItem.OrderId = Order.OrderId JOIN Payment ON Payment.OrderId = Order.OrderId GROUP BY Order.OrderId ``` However, if there are multiple Payments or multiple OrderItems, it messes up Total or Paid, respectively (eg. One OrderItem record with an amount of 100 along with two Payment Records will produce a Total of 200). In order to overcome this, I can use some subqueries in the following way: ``` SELECT Order.OrderId, OrderItemGrouped.Total, PaymentGrouped.Paid FROM Order JOIN ( SELECT OrderItem.OrderId, SUM(OrderItem.Amount) AS Total FROM OrderItem GROUP BY OrderItem.OrderId ) OrderItemGrouped ON OrderItemGrouped.OrderId = Order.OrderId JOIN ( SELECT Payment.OrderId, SUM(Payment.Amount) AS Paid FROM Payment GROUP BY Payment.OrderId ) PaymentGrouped ON PaymentGrouped.OrderId = Order.OrderId ``` As you can imagine (and as an `EXPLAIN` on this query will show), this is not exactly an optimal query so, I'm wondering, is there any way to convert these two subqueries with `GROUP BY` statements into `JOIN`s?
The following is likely to be faster with the right indexes:

```
select o.OrderId,
       (select sum(oi.Amount)
        from OrderItem oi
        where oi.OrderId = o.OrderId
       ) as Total,
       (select sum(p.Amount)
        from Payment p
        where p.OrderId = o.OrderId
       ) as Paid
from Order o;
```

The right indexes are `OrderItem(OrderId, Amount)` and `Payment(OrderId, Amount)`. I don't like writing aggregation queries this way, but it can sometimes help performance in MySQL.
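Not from the answer — the correlated-subquery shape can be replayed on a tiny SQLite database to see that each total is computed independently, so two payments can no longer double the item total:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders    (OrderId INT);
CREATE TABLE OrderItem (OrderId INT, Amount REAL);
CREATE TABLE Payment   (OrderId INT, Amount REAL);
INSERT INTO Orders    VALUES (1);
INSERT INTO OrderItem VALUES (1, 100.0);            -- one item worth 100
INSERT INTO Payment   VALUES (1, 60.0), (1, 40.0);  -- two payments
""")

# each scalar subquery aggregates its own table once per order row
rows = conn.execute("""
    SELECT o.OrderId,
           (SELECT SUM(oi.Amount) FROM OrderItem oi WHERE oi.OrderId = o.OrderId) AS Total,
           (SELECT SUM(p.Amount)  FROM Payment p   WHERE p.OrderId  = o.OrderId) AS Paid
    FROM Orders o
""").fetchall()
```

With the question's original double join, the same data would have reported a Total of 200.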
Some answers have already suggested using a correlated subquery, but have not really offered an explanation as to why. MySQL does not materialise correlated subqueries, but it will materialise a derived table. That is to say, with a simplified version of your query as it is now:

```
SELECT Order.OrderId, OrderItemGrouped.Total
FROM Order
JOIN (
    SELECT OrderItem.OrderId, SUM(OrderItem.Amount) AS Total
    FROM OrderItem
    GROUP BY OrderItem.OrderId
) OrderItemGrouped
ON OrderItemGrouped.OrderId = Order.OrderId;
```

At the start of execution MySQL will put the results of your subquery into a temporary table, and hash this table on OrderId for faster lookups, whereas if you run:

```
SELECT Order.OrderId,
       ( SELECT SUM(OrderItem.Amount)
         FROM OrderItem
         WHERE OrderItem.OrderId = Order.OrderId
       ) AS Total
FROM Order;
```

The subquery will be executed once for each row in Order. If you add something like `WHERE Order.OrderId = 1`, it is obviously not efficient to aggregate the entire OrderItem table and hash the result only to look up one value, but if you are returning all orders then the initial cost of creating the hash table will make up for itself by not having to execute the subquery for every row in the Order table. If you are selecting a lot of rows and feel the materialisation will be of benefit, you can simplify your JOIN query as follows:

```
SELECT Order.OrderId,
       SUM(OrderItem.Amount) AS Total,
       PaymentGrouped.Paid
FROM Order
INNER JOIN OrderItem
    ON OrderItem.OrderID = Order.OrderID
INNER JOIN (
    SELECT Payment.OrderId, SUM(Payment.Amount) AS Paid
    FROM Payment
    GROUP BY Payment.OrderId
) PaymentGrouped
ON PaymentGrouped.OrderId = Order.OrderId
GROUP BY Order.OrderId, PaymentGrouped.Paid;
```

Then you only have one derived table.
Converting Multiple subqueries with GROUP BY to JOIN
[ "", "mysql", "sql", "subquery", "" ]
I have a Profiles table, a Workers table and a Studios table. My Profiles table looks like this:

```
mysql> select * from Profiles;
| profile_id | owner_luser_id | profile_type | profile_name | profile_entity_name | description | profile_image_url | search_rating |
```

My Studios and Workers tables basically have extended data about the entity, depending on what profile_type (ENUM) is set to [Worker, Studio]. I need to know how I can do a single SQL query to retrieve (all) the profile data from Profiles, but only join onto the end of the result specific fields from the Studios or Workers table, depending on whether that entity is a Studio or a Worker. I've tried UNION with no success (UNION requires all tables to have the same number of fields, which seems not to be what I'm looking for). I've also tried JOIN with no success; it just shows Workers if I join on Profiles.profile_type = 'Worker'. I just want to be able to add additional columns to the result, from either the Worker or Studio tables, depending on what the Profiles.profile_type is. SQLFiddle of my schema is at <http://sqlfiddle.com/#!2/3f599/1> - I would like to (for example) SELECT * FROM Profiles, but depending on what profile_type is, join onto the end of that result either the Worker's or Studio's licence number (Workers.license_number, Studios.premises_license_number respectively), and call that column (for example) 'profilelicense_number'.
I think this is what you want: ``` SELECT p.*, case when w.license_number is not null then w.license_number when s.premises_license_number is not null then s.premises_license_number else null end as profilelicense_number FROM Profiles p LEFT JOIN Studios s on p.profile_id = s.profile_id LEFT JOIN Workers w on p.profile_id = w.profile_id ```
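A minimal sqlite3 sketch of the join above, with the schemas cut down to the columns involved and made-up rows; COALESCE gives the same fallback behaviour as the CASE chain when the license columns themselves are non-null:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Profiles (profile_id INTEGER PRIMARY KEY, profile_type TEXT);
CREATE TABLE Workers (profile_id INTEGER, license_number TEXT);
CREATE TABLE Studios (profile_id INTEGER, premises_license_number TEXT);
INSERT INTO Profiles VALUES (1, 'Worker'), (2, 'Studio');
INSERT INTO Workers VALUES (1, 'W-100');
INSERT INTO Studios VALUES (2, 'S-200');
""")
# Left-join both extension tables; only one of them matches per profile,
# so COALESCE picks whichever license column is non-null.
rows = cur.execute("""
SELECT p.profile_id,
       COALESCE(w.license_number, s.premises_license_number) AS profilelicense_number
FROM Profiles p
LEFT JOIN Workers w ON w.profile_id = p.profile_id
LEFT JOIN Studios s ON s.profile_id = p.profile_id
ORDER BY p.profile_id
""").fetchall()
print(rows)  # [(1, 'W-100'), (2, 'S-200')]
```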
You want to use an outer join: this will join the data if it exists, and join with an empty row if it doesn't. ``` SELECT * FROM Profiles p LEFT JOIN Studios s on p.profile_id = s.profile_id LEFT JOIN Workers w on p.profile_id = w.profile_id ``` Alternatively, if you want all profiles data but not all studios and workers data, you can use `p.*` followed by the specific columns from Studios and Workers, for instance: ``` SELECT p.*, s.phone, w.phone FROM Profiles p LEFT JOIN Studios s on p.profile_id = s.profile_id LEFT JOIN Workers w on p.profile_id = w.profile_id ```
MySQL Selecting data from table2 where table1.field is equal (and add it on to the result)
[ "", "mysql", "sql", "" ]
I have a table representing messages between users which is, roughly, as follows: ``` Message TABLE ( Id INT (PK) , User_1_Id INT (FK) , User_2_Id INT (FK) , ... ) ``` I would like to write a query which outputs a summary of how many unique conversations were held between any two users - regardless which direction the message went. To illustrate: Let's say we have 3 users: * User A (Id: 1), * User B (Id: 2), and * User C (Id: 3) In the table, we have the following entries: ``` Id User_1_Id User_2_Id ... 1 1 2 ... 2 2 1 ... 3 1 2 ... 4 2 3 ... 5 1 2 ... ``` The query I desire would indicate that there were two unique conversations: One between: * A) User A and User B, and * B) User B and User C. What I don't want is for the query to also say that there is a conversation between: * C) User B and User A (the combination has already been covered by A, above - but in the reverse order). This is easy if I'm working at the level of individual User Ids - but I can't figure out any kind of set-based method to achieve the outcome in single query. Currently, the best I've been able to do is isolate that messages have been sent between users in each direction (i.e. it's returning C in addition to A and B). **UPDATE** A conversation includes all messages between any two users - regardless of the order or position of the individual messages in the context of the whole table. I'm actually aiming to build a conversation table which probably should have been included in the original database model but was sadly left out. It wouldn't make sense to make the conversation direction-specific.
The solution is to use a `CASE` statement which compares the two columns and returns the larger value in the first column and the smaller value in the second column: ``` SELECT CASE WHEN User_1_Id > User_2_Id THEN User_1_Id ELSE User_2_Id END , CASE WHEN User_2_Id > User_1_Id THEN User_1_Id ELSE User_2_Id END FROM Messages ``` Hat-tip to @Strawberry for the answer which pointed me in the right direction. There might be some funny results if there are users who have messaged themselves I guess - but that shouldn't happen in practice...
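The same pair-normalisation idea, sketched with sqlite3 using the sample messages from the question (CASE is used here in place of LEAST/GREATEST; with DISTINCT it collapses both directions of a conversation into one row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Messages (Id INTEGER PRIMARY KEY, User_1_Id INTEGER, User_2_Id INTEGER);
INSERT INTO Messages VALUES (1,1,2),(2,2,1),(3,1,2),(4,2,3),(5,1,2);
""")
# Normalise each (user1, user2) pair so the smaller id always comes first,
# then deduplicate: (1,2) and (2,1) become the same conversation.
pairs = cur.execute("""
SELECT DISTINCT
       CASE WHEN User_1_Id < User_2_Id THEN User_1_Id ELSE User_2_Id END AS low_user,
       CASE WHEN User_1_Id < User_2_Id THEN User_2_Id ELSE User_1_Id END AS high_user
FROM Messages
ORDER BY low_user, high_user
""").fetchall()
print(pairs)  # [(1, 2), (2, 3)]
```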
The answer would appear to be equal to the number of rows returned by this query... ``` DROP TABLE IF EXISTS messages; CREATE TABLE messages (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,from_user INT NOT NULL ,to_user INT NOT NULL ,INDEX(from_user,to_user) ); INSERT INTO messages VALUES (1, 1, 2), (2 ,2 ,1), (3 ,1 ,2), (4 ,2 ,3); SELECT DISTINCT LEAST (from_user,to_user) user1,GREATEST(from_user,to_user) user2 FROM messages; +-------+-------+ | user1 | user2 | +-------+-------+ | 1 | 2 | | 2 | 3 | +-------+-------+ 2 rows in set (0.00 sec) ```
Identify Smallest Number of Combinations Between 2 Users in a SQL Server Conversation Table
[ "", "sql", "sql-server", "" ]
Consider the following recordset: ``` 1 1000 -1 2 500 2 3 1000 -1 4 500 3 5 500 2 6 1000 -1 7 500 1 ``` So 3x a number 1000 with -1, total -3. 4x a number 500 with different values Now I'm in need of a query which divides the sum of code 1000 over the 4 number 500 and removes code 1000. So the end result would look like: ``` 1 500 1.25 2 500 2.25 3 500 1.25 4 500 0.25 ``` The sum of code 1000 = -3 There's 4 times code 500 in the table over which -3 has to be divided. -3/4 = -0.75 so the record "2 500 2" becomes "2 500 (2+ -0.75)" = 1.25 etc As an SQL newbie I have no clue how to get this done, can anyone help?
``` Select col1,( (Select sum(col2 ) from tab where col1 =1000) / (Select count(*) from tab where col1 =500))+Col2 as new_value From tab Where col1=500 ``` Here tab, col1,col2 are table name, column with (1000 , 500) value, column with (1,2,3 value)
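A sqlite3 sketch of this answer using the sample rows from the question (`tab`, `col1`, `col2` as the answer names them; the `* 1.0` is an assumption added to force non-integer division):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tab (id INTEGER, col1 INTEGER, col2 REAL)")
cur.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                [(1, 1000, -1), (2, 500, 2), (3, 1000, -1), (4, 500, 3),
                 (5, 500, 2), (6, 1000, -1), (7, 500, 1)])
# Sum of the 1000-rows (-3) divided by the count of 500-rows (4) = -0.75,
# added to each 500-row's value.
rows = cur.execute("""
SELECT id, col1,
       col2 + (SELECT SUM(col2) FROM tab WHERE col1 = 1000) * 1.0
            / (SELECT COUNT(*)  FROM tab WHERE col1 = 500) AS new_value
FROM tab
WHERE col1 = 500
ORDER BY id
""").fetchall()
print(rows)  # [(2, 500, 1.25), (4, 500, 2.25), (5, 500, 1.25), (7, 500, 0.25)]
```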
You can use CTEs to do it "step-wise" and build your solution. Like this: ``` with sumup as ( select sum(colb) as s from table where cola = 1000 ), countup as ( select count(*) as c from table where cola = 500 ), change as ( select s / c as v from sumup, countup ) select cola, colb + v from table, change where cola = 500 ``` Two things to note: This might not be the fastest solution, but it is often close. You can test this code easily: just change the final select statement to select the name of the CTE and see what it is. For example this would be a good test if you are getting a bad result: ``` with sumup as ( select sum(colb) as s from table where cola = 1000 ), countup as ( select count(*) as c from table where cola = 500 ), change as ( select s / c as v from sumup, countup ) select * from change ```
sql query to divide a value over different records
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to figure out what SQL query I need to fetch records from my users table. ``` "SELECT * FROM users WHERE level!='4'" ``` Level 4 users are "COMPLETED" so by default, they are not shown. But I do want to show users that are COMPLETED if their ACTIVITY value is less than 5. Any user with an activity value of 5 is "Green" and does not require my attention. Knowing my luck, it's a very simple answer, but it's been a long day and my head is fried.
You can do like so: ``` SELECT * FROM users WHERE level <> '4' OR (level = '4' and Activity < 5); ``` I've switched the inequality operator - `<>` [is more portable](https://stackoverflow.com/a/723426/314291)
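A sqlite3 check of the condition (levels and activity stored as integers here rather than the quoted strings in the question; rows are made up — user 2 is a completed user with activity 5 and should be the only one excluded):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, level INTEGER, activity INTEGER)")
cur.executemany("INSERT INTO users VALUES (?, ?, ?)",
                [(1, 2, 5), (2, 4, 5), (3, 4, 3)])
# Keep everyone below level 4, plus level-4 users needing attention
rows = cur.execute("""
SELECT id FROM users
WHERE level <> 4 OR (level = 4 AND activity < 5)
ORDER BY id
""").fetchall()
print(rows)  # [(1,), (3,)]
```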
This should solve your problem > SELECT \* FROM users WHERE level != '4' OR (activity < '5' AND level = > '4' )
SQL Query to show records
[ "", "mysql", "sql", "sql-server", "" ]
I think this will be easier to explain as a story instead of sample code. * I have a table of Parents and Children. * A Parent could be the Child of another Parent, and so on. * There are top Level Parents. * Siblings are possible (2 or more Children of same Parent) I am trying to crawl this table in SQL and get all relationships. Below is a bulleted example of how the relationships would work. ``` o 6220 (Top Level Parent) - 6219 (Child1 of 6220) - 6221 (Child2 of 6220) * 6222 (Child1 of 6221/GrandChild1 of 6220) * 6223 (Child2 of 6221/GrandChild2 of 6220) x 6224 (Child1 of 6223/GrandChild1 of 6221/GreatGrandChild1 of 6220) o 6225 (Top Level Parent) ``` Here is how it looks in table form: ``` Parent, Child NULL, 6220 6220, 6221 6220, 6219 6221, 6222 6221, 6223 6223, 6224 NULL, 6225 ``` The result I am looking for (optimally) is: ``` Parent, Child, Level 6220, 6220, 0 6220, 6221, 1 6220, 6219, 1 6220, 6222, 2 6220, 6223, 2 6220, 6224, 3 6221, 6222, 1 6221, 6223, 1 6221, 6224, 2 6223, 6224, 1 6225, 6225, 0 ``` I've tried recursive CTE's and Loops, and I can get close, but not quite. Looking to get close to this or something similar that makes sense. I want to put this into a table that I can query to find all parents, grandparents, children, grandchildren, etc of a parent or child. Sibling relationships aren't important. Thanks for looking. Please let me know if I can clarify. 
Here's how I had been trying to do this: ``` DECLARE @InsertedCount INT DECLARE @RelationshipLevel TINYINT DECLARE @AgyCount INT DECLARE @LoopCount INT = 1 DECLARE @AgencyId INT SELECT @AgyCount = COUNT(AgencyId) FROM dbo.Agency AS A /* ============================================================================== Get All AgencyIDs into a table with an incremented column ============================================================================== */ IF (OBJECT_ID('tempdb.dbo.#tmpAllAgy') > 0) DROP TABLE #tmpAllAgy CREATE TABLE #tmpAllAgy (tmpAllAgyId INT IDENTITY(1,1) PRIMARY KEY, AgencyId INT) INSERT INTO #tmpAllAgy (AgencyId) SELECT AgencyId FROM dbo.Agency ORDER BY AgencyId /* ============================================================================== Temp table for the results of the matrix to compare to physical table ============================================================================== */ IF (OBJECT_ID('tempdb.dbo.#ChildAgencies') > 0) DROP TABLE #ChildAgencies CREATE TABLE #ChildAgencies (LoopOrder INT, ParentAgencyId INT, ChildAgencyId INT, RelationshipLevel TINYINT) /* ============================================================================== Outer loop to get the next Agency Id from Temp Table ============================================================================== */ WHILE @LoopCount <= @AgyCount BEGIN SET @RelationshipLevel = 0 SELECT @AgencyId = AgencyId FROM #tmpAllAgy WHERE tmpAllAgyId = @LoopCount INSERT INTO #ChildAgencies(LoopOrder, ParentAgencyId, ChildAgencyId, RelationshipLevel) SELECT @LoopCount, NULL, AgencyId, @RelationshipLevel FROM dbo.Agency WHERE AgencyId = @AgencyID SET @InsertedCount = 1--@@ROWCOUNT /* ============================================================================== Inner loop to create the hierarchy for each AgencyId ============================================================================== */ WHILE @InsertedCount > 0 BEGIN SET @InsertedCount = NULL SET @RelationshipLevel = @RelationshipLevel + 1 INSERT INTO #ChildAgencies (LoopOrder, ParentAgencyId, ChildAgencyId, RelationshipLevel) SELECT @LoopCount, @AgencyId, AgencyId, @RelationshipLevel FROM dbo.Agency WHERE AgencyId NOT IN (SELECT ChildAgencyId FROM #ChildAgencies) AND ParentAgencyId IN (SELECT ChildAgencyId FROM #ChildAgencies) AND StatusCode <> 109 /*QA-Deleted*/ SET @InsertedCount = @@ROWCOUNT END SET @LoopCount = @LoopCount + 1 END ``` A recursive CTE seems to only bring direct parents and children, while incrementing the count arbitrarily. The above looping code is close, but seems to do things in a weird order.
So, I finally got something to work. It's almost the same thing as I had been doing. Instead of the loop in a loop, I took the inner loop and made it a function. Then I created a SP that loops, and feeds the function an ID at a time and appends the results to a temp table. Here is basically the inner loop: ``` DECLARE @AgencyId INT = 6220 DECLARE @InsertedCount INT DECLARE @LevelCount INT = 0 CREATE TABLE #ChildAgencies(AgencyId INT, LevelCount INT) INSERT INTO #ChildAgencies(AgencyId, LevelCount) SELECT AgencyId, @LevelCount FROM dbo.Agency WHERE AgencyId = @AgencyId SET @InsertedCount = @@ROWCOUNT WHILE @InsertedCount > 0 BEGIN SET @InsertedCount = NULL SET @LevelCount = @LevelCount + 1 INSERT INTO #ChildAgencies(AgencyId, LevelCount) SELECT AgencyId, @LevelCount FROM dbo.Agency WHERE AgencyId NOT IN (SELECT AgencyId FROM #ChildAgencies) AND ParentAgencyId IN (SELECT AgencyId FROM #ChildAgencies) AND StatusCode <> 109 /*QA-Deleted*/ SET @InsertedCount = @@ROWCOUNT END ``` Which produces: ``` AgencyId LevelCount 6220 0 6219 1 6221 1 6222 2 6223 2 6224 3 ``` Obviously, I don't have the @AgencyId variable defaulted in the final product. I set @AgencyId in the stored procedure and pass it to this.
I believe what you are proposing is not possible using a SQL query. It is theoretically possible that your relationships have cycles. You are basically implementing graphs in a SQL table, since siblings are possible. Let's say that 6221 is parent of 6222, 6222 is parent of 6223, and 6223 is parent of 6221? Then you have a cycle. If such a structure is guaranteed not to exist, then you still have a fully connected graph to deal with in the worst case. Is there a maximum "age difference"? If so, then you can just make a finite number of joins to do this, using outer joins to ensure those without a relationship still get included. For each "level" of the relationship, you have to perform three joins: one for the parent, one for the siblings, and one for the children. You also need to make sure you don't traverse back into your original node. In short, assuming no cycles, you are implementing tree traversal in a table containing a forest of trees. I don't know how you could do this in SQL. You could do it in a programming language, or perhaps as a stored procedure using loops, but only if there are no cycles, or you implement cycle detection in some way. Best of luck. Edit: Also, unless you are given the tree roots (top level nodes) any node in a tree can be considered a root. If the base graph is a directed graph, however (parent-child relationship is explicit) then a root is a node with no parent. For an N-Ary tree with a defined root and finite depth, you could implement any standard graph search algorithm with a traversal cost of 1, until all nodes have a cost associated with them. There's breadth-first search, depth-first search, best-first search, A\*, etc. Edit: Try something like this: ``` create procedure update_node_depths as begin set nocount on update nodes set depth = null update nodes set depth = 0 where parent_id is null while exists (select node_id from nodes where depth is null) begin update nodes set depth = (select n2.depth+1 from nodes n2 where n2.node_id = nodes.parent_id) where parent_id in (select node_id from nodes n3 where n3.depth is not null) end select * from nodes end go ``` You can then do something like this to get your table, automatically updating depths when you do so. ``` execute update_node_depths go ```
SQL Crawl Hierarchy - Get All Relationships
[ "", "sql", "loops", "recursion", "hierarchy", "recursive-query", "" ]
Following are my tables with data: Table Name : jay ``` RollNo Name Age 58 Jay 18 ``` Table Name : ren ``` RollNo Name Age 23 Renish 20 ``` How can I extract the data in the following form: ``` RollNo Name Age 58 Jay 18 23 Renish 20 ``` Please help me.
Use [union](http://dev.mysql.com/doc/refman/5.0/en/union.html) ``` UNION is used to combine the result from multiple SELECT statements into a single result set. ``` --- ``` select * from jay UNION select * from Ren ``` # [SQl FIDDLE](http://sqlfiddle.com/#!2/3f37b5/2) **OUTPUT** ![enter image description here](https://i.stack.imgur.com/5ynGT.png)
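The same result, sketched with sqlite3 (UNION removes duplicate rows; use UNION ALL to keep them; an ORDER BY is added here only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE jay (RollNo INTEGER, Name TEXT, Age INTEGER);
CREATE TABLE ren (RollNo INTEGER, Name TEXT, Age INTEGER);
INSERT INTO jay VALUES (58, 'Jay', 18);
INSERT INTO ren VALUES (23, 'Renish', 20);
""")
# Stack the two same-shaped tables into one result set
rows = cur.execute("""
SELECT * FROM jay
UNION
SELECT * FROM ren
ORDER BY RollNo
""").fetchall()
print(rows)  # [(23, 'Renish', 20), (58, 'Jay', 18)]
```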
If it is SQL use below query ``` SELECT RollNo, Name, Age FROM jay UNION SELECT RollNo, Name, Age FROM ren ```
Extracting data from two tables with the same columns by appending the rows
[ "", "mysql", "sql", "database", "" ]
I have a MySQL database. I want to sum the values of one column over the matching rows. My query (which works): ``` SELECT * FROM test WHERE DATE_FORMAT(datetime,'%Y-%m-%d')=DATE(NOW()); ``` CONSO is the name of the column; over all the returned rows I want: ``` (CONSO + CONSO + CONSO + CONSO +...) * 12 ``` Can you help me please?
To sum up all the CONSO values, and multiply the sum by 12: ``` SELECT sum(conso) * 12 FROM test WHERE DATE_FORMAT(datetime, '%Y-%m-%d') = DATE(NOW()); ``` To sum up all the CONSO values, each multiplied by 12: ``` SELECT sum(conso * 12) FROM test WHERE DATE_FORMAT(datetime, '%Y-%m-%d') = DATE(NOW()); ```
I think what you want is ``` SELECT SUM(CONSO) AS consoTotal FROM test WHERE [condition] ``` You might be able to multiply by 12 there as well - I haven't tried it.....
Sum a variable over all SQL rows
[ "", "mysql", "sql", "" ]
I'm using this code to sum decimal values. ``` SUM(Amount) as TotalAmount ``` Amount column is decimal datatype ``` Amount decimal (10,4) ``` After summing up the columns I get values like this ``` 13500.8765 12005.0000 ``` My expected output should be like this ``` 13501 12005 ```
The ROUND() function is used to round a numeric field to the number of decimals specified. Syntax: ``` SELECT ROUND(column_name,decimals) FROM table_name; ``` For your problem, try this: ``` CONVERT(int,ROUND(SUM(Amount),0)) as TotalAmount --Converting to int to remove the fractional part ``` Read more about `ROUND` [here](http://technet.microsoft.com/en-us/library/ms175003.aspx).
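A quick sqlite3 check of the round-then-convert idea on the question's two sample values (T-SQL's CONVERT is spelled CAST in SQLite, but the effect — drop the fractional part after rounding — is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Round each value to 0 decimals, then cast away the fractional part
vals = conn.execute("""
SELECT CAST(ROUND(13500.8765, 0) AS INTEGER),
       CAST(ROUND(12005.0000, 0) AS INTEGER)
""").fetchone()
print(vals)  # (13501, 12005)
```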
If you want to round and remove the fractional part you can simply CAST it to a DECIMAL with different precision: ``` select cast(SUM(Amount) as DECIMAL(18,0)) as TotalAmount ```
Round sum of decimal column in sql
[ "", "sql", "sql-server", "" ]
I have a Table with 10 columns and in that table I have thousands/millions of rows. In some scenario, I want to update more than 10K records at a time. currently my scenario code works sequentially like, ``` for i in (primary key ids for all records to be updated) executeupdate(i) ``` what I thought is instead of running same query 10K times, I will add all ids in a string and run a single update query like, ``` executeupdate(all ids) ``` actual DB queries can be like this, suppose I have primary key ids like, ``` 10001,10002,10003,10004,10005 ``` so in first case My queries will be like ``` update tab1 set status="xyz" where Id="10001" update tab1 set status="xyz" where Id="10002" update tab1 set status="xyz" where Id="10003" update tab1 set status="xyz" where Id="10004" update tab1 set status="xyz" where Id="10005" ``` and My bulk update query will be like, ``` update tab1 set status="xyz" where id in ("10001","10002","10003","10004","10005") ``` so My question is, will I get any Performance improvement (executime time) by doing bulk update or total query execution time will be same as for each record index scan will happen and update will take place? **Note : I am using DB2 9.5 as database** Thanks.
In general, a "bulk" update will be faster, regardless of database. Of course, you can test the performance of the two, and report back. Each call to `update` requires a bunch of overhead, in terms of processing the query, setting up locks on tables/pages/rows. Doing a single update consolidates this overhead. The downside to a single `update` is that it might be faster overall, *but* it might lock underlying resources for longer periods of time. For instance, the single updates might take 10 milliseconds each, for an elapsed time of 10 seconds for 1,000 of them. However, no resource is locked for more than 10 milliseconds. The bulk update might take 5 seconds, but the resources would be locked for more of this period. To speed these updates, be sure that `id` is indexed. I should note. This is a general principle. I have not specifically tested single versus multiple update performance on DB2.
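A sqlite3 sketch of the single IN-list UPDATE built from a list of ids, as described above (table and ids taken from the question's example; how you build the placeholder string is an assumption about parameterisation, not part of the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tab1 (Id INTEGER PRIMARY KEY, status TEXT)")
cur.executemany("INSERT INTO tab1 VALUES (?, ?)",
                [(i, "old") for i in range(10001, 10006)])

# One round trip with an IN list instead of five separate UPDATE statements
ids = [10001, 10002, 10003, 10004, 10005]
placeholders = ",".join("?" * len(ids))  # "?,?,?,?,?"
cur.execute(f"UPDATE tab1 SET status = 'xyz' WHERE Id IN ({placeholders})", ids)
print(cur.rowcount)  # 5
```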
You will definitely see a performance improvement, because you will reduce the number of roundtrips. However, this approach does not scale very well; thousands of ID's in one statement could get a bit tricky. Also, there is a limit on the size of your query (could be 64k). You could consider to 'page' through your table and update - say - 100 records per update statement.
Is bulk update faster than single update in db2?
[ "", "sql", "database", "db2", "" ]
I have a query that works fine when searching for data for 1 game only `WHERE gameid = 36`, however once I try to join multiple games `gameid = 36 AND gameid = 37` the query runs fine but returns all 0's for all data ``` SELECT CONCAT_WS( '. ', SUBSTR( p.playerfname, 1, 1 ) , p.playerlname ) name, COALESCE( goalEvents.goals, 0 ) goals, COALESCE( a1.assists, 0 ) + COALESCE( a2.assists, 0 ) assists, COALESCE( goalEvents.goals, 0 ) + COALESCE( a1.assists, 0 ) + COALESCE( a2.assists, 0 ) points FROM players p LEFT JOIN (SELECT scorer, COUNT( scorer ) goals FROM goalEvents WHERE gameid = 36 AND gameid = 37 GROUP BY scorer) goalEvents ON p.playerid = goalEvents.scorer LEFT JOIN (SELECT assist1, COUNT( assist1 ) assists FROM goalEvents WHERE gameid = 36 AND gameid = 37 GROUP BY assist1) a1 ON p.playerid = a1.assist1 LEFT JOIN (SELECT assist2, COUNT( assist2 ) assists FROM goalEvents WHERE gameid = 36 AND gameid = 37 GROUP BY assist2) a2 ON p.playerid = a2.assist2 WHERE p.playerteam = 26 ORDER BY points DESC, goals DESC ``` The error must occur with the multiple game lines being called ``` FROM goalEvents WHERE gameid = 36 AND gameid = 37 ``` Am I searching by multiple games wrong? I'm basically trying to search by the last 4 gameids of my table
Try: ``` FROM goalEvents WHERE gameid = 36 OR gameid = 37 ``` You will need to use `OR` instead of `AND`, gameid can never be 36 `AND` 37. You want to find two different records, one with `gameid=36` and another with `gameid=37`. If you use `AND` they both fail the condition. If you have more than 2, you can also try `gameid IN (36, 37, ...)` it is short hand for `OR`, `OR`, `OR` etc
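A sqlite3 check that IN behaves like the OR chain (the goalEvents rows here are made up; game 38 is included only to show it is filtered out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE goalEvents (gameid INTEGER, scorer INTEGER)")
cur.executemany("INSERT INTO goalEvents VALUES (?, ?)",
                [(36, 1), (36, 1), (37, 2), (38, 3)])
rows = cur.execute("""
SELECT scorer, COUNT(*) AS goals
FROM goalEvents
WHERE gameid IN (36, 37)   -- equivalent to gameid = 36 OR gameid = 37
GROUP BY scorer
ORDER BY scorer
""").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```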
you can use `IN` CLAUSE I think that will do the trick like try this out ``` FROM goalEvents WHERE gameid in(36,37) ``` best of luck.
SQL query works but data is corrupting?
[ "", "mysql", "sql", "" ]
I have tables like this: Table: Factors ``` FactorID SellerID ContractorID --------------------------------- 1 -1 2 2 1 -1 3 -1 1 4 2 -1 ``` Table: Sellers ``` SellerID SellerName --------------------- 1 SellerA 2 SellerB 3 SellerC 4 SellerD ``` Table Contractors: ``` ContractorID ContractorName ------------------------------- 1 W 2 X 3 Y 4 Z ``` I want a query that returns something like this: ``` FactorID SellerID ContractorID ContractorName SellerName ------------------------------------------------------------------ 1 -1 2 X NULL 2 1 -1 NULL SellerA 3 -1 1 W NULL 4 2 -1 NULL SellerB ``` This is my query, but it does not work correctly: ``` SELECT Factors.* , Sellers.SellerName AS sellerName , Contractors.ContractorName AS contractorName FROM Factors , Sellers , Contractors WHERE Sellers.SellerID = CASE WHEN Factors.ContractorID = -1 THEN Factors.SellerID ELSE '' END AND Contractors.ContractorID = CASE WHEN Factors.SellerID = -1 THEN Factors.ContractorID ELSE '' END ```
No need to use `CASE`. [`LEFT OUTER JOIN`](http://technet.microsoft.com/en-us/library/aa213228%28v=sql.80%29.aspx) should be enough. That should do: ``` SELECT f.FactorID, f.SellerID, f.ContractorID, c.ContractorName, s.SellerName FROM Factors f LEFT OUTER JOIN Sellers s ON f.SellerID = s.SellerID LEFT OUTER JOIN Contractors c ON f.ContractorID = c.ContractorID ORDER BY f.FactorID ASC; ```
A simple JOIN statement should provide required result without any CASE statements ``` SELECT f.*, s.sellerName as sellerName, c.Contractor as contractorName FROM Factors f LEFT OUTER JOIN Sellers s ON(f.SellerId=s.SellerId) LEFT OUTER JOIN Contractors c ON(f.ContractorId=c.ContractorId) ```
SQL query for conditional select
[ "", "sql", "sql-server", "" ]
here is my code ``` SELECT h.ENTERED_DATE, d.DENOMINATION, sum(h.invoice_total-h.TOTTAL_DISCOUNT)as amount FROM sales_details d LEFT JOIN sales_header h ON d.invoice_id=h.invoice_id WHERE entered_by='2254' --HERE IS NEED TO GET DETAILS CURRENT MONTH 1st Date to Sysdate GROUP BY ENTERED_DATE, d.DENOMINATION ORDER BY entered_date, denomination ``` * In my application just send only **sysdate** as parameter. **no need to SYSDATE-30. need 1st date to SYSDATE** here shows my two tables **sales\_header table** ![enter image description here](https://i.stack.imgur.com/AWcn6.png) **sales\_details table** ![enter image description here](https://i.stack.imgur.com/viDLQ.png)
try this: ``` WHERE entered_by='2254' AND ENTERED_DATE BETWEEN trunc (sysdate, 'mm')/*current month*/ AND SYSDATE ```
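The same "first of the current month up to now" boundary, sketched in plain Python for readers without an Oracle instance — TRUNC(SYSDATE, 'mm') corresponds to replacing the day with 1 and zeroing the time (the dates below are made up):

```python
from datetime import datetime

now = datetime(2014, 3, 19, 14, 30, 0)   # stand-in for SYSDATE
month_start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
entered = datetime(2014, 3, 5, 9, 0, 0)  # a sample ENTERED_DATE
in_range = month_start <= entered <= now
print(month_start, in_range)  # 2014-03-01 00:00:00 True
```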
Mudassir's answer worked for me but I would also check the year to make sure you are not getting last year's records. ``` WHERE TO_CHAR(ENTERED_DATE, 'mm') = TO_CHAR(SYSDATE, 'mm') AND TO_CHAR(ENTERED_DATE, 'yyyy') = TO_CHAR(SYSDATE, 'yyyy') ``` Also, if you are running a large query, functions in your `WHERE` clause can slow it down. If so, you may want to consider a function based index.
How to get current month records from Oracle since the first date?
[ "", "sql", "oracle", "" ]
I was working in MSSQL server 2012. I wrote a query ``` select * from Mytable where col1 is not null and col1 != '' ``` and ``` select * from Mytable where col1 is not null and col1 <> '' ``` Both returns same value. I am just curious to know, what is the actual difference between `<>` and `!=` operators?
`!=` is not ANSI compliant. That's all. Use `<>` UPD. Oh, [here](http://msdn.microsoft.com/en-us/library/ms188074.aspx)
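A quick check that both spellings are accepted (here by SQLite; both comparisons are true, so both expressions evaluate to 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# <> is the standard inequality operator; != is a widely supported alias
vals = conn.execute("SELECT 1 <> 2, 1 != 2").fetchone()
print(vals)  # (1, 1)
```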
My understanding is that there is no difference. The `<>` operator is the ANSI SQL standard inequality operator, while Microsoft included `!=` to make it like some programming languages.
Difference between <> and != operators in MSSQL Server
[ "", "sql", "sql-server", "operators", "" ]
This query takes a long time to run on MS Sql 2008 DB with 70GB of data. If I run the 2 where clauses separately it takes a lot less time. EDIT - I need to change the 'select \*' to 'delete' afterwards, please keep it in mind when answering. thanks :) ``` select * From computers Where Name in ( select T2.Name from ( select Name from computers group by Name having COUNT(*) > 1 ) T3 join computers T2 on T3.Name = T2.Name left join policyassociations PA on T2.PK = PA.EntityId where (T2.EncryptionStatus = 0 or T2.EncryptionStatus is NULL) and (PA.EntityType <> 1 or PA.EntityType is NULL) ) OR ClientId in ( select substring(ClientID,11,100) from computers ) ```
Swapping `IN` for `EXISTS` will help. Also, as per Gordon's answer: `UNION` can out-perform `OR`. ``` SELECT computers.* FROM computers LEFT JOIN policyassociations ON policyassociations.entityid = computers.pk WHERE ( computers.encryptionstatus = 0 OR computers.encryptionstatus IS NULL ) AND ( policyassociations.entitytype <> 1 OR policyassociations.entitytype IS NULL ) AND EXISTS ( SELECT name FROM ( SELECT name FROM computers GROUP BY name HAVING Count(*) > 1 ) As duplicate_computers WHERE name = computers.name ) UNION SELECT * FROM computers As c WHERE EXISTS ( SELECT SubString(clientid, 11, 100) FROM computers WHERE SubString(clientid, 11, 100) = c.clientid ) ``` --- You've now updated your question asking to make this a delete. Well the good news is that instead of the "OR" you just make two `DELETE` statements (note that in T-SQL a delete over a join names the target table before the FROM clause): ``` DELETE computers FROM computers LEFT JOIN policyassociations ON policyassociations.entityid = computers.pk WHERE ( computers.encryptionstatus = 0 OR computers.encryptionstatus IS NULL ) AND ( policyassociations.entitytype <> 1 OR policyassociations.entitytype IS NULL ) AND EXISTS ( SELECT name FROM ( SELECT name FROM computers GROUP BY name HAVING Count(*) > 1 ) As duplicate_computers WHERE name = computers.name ) ; DELETE c FROM computers As c WHERE EXISTS ( SELECT SubString(clientid, 11, 100) FROM computers WHERE SubString(clientid, 11, 100) = c.clientid ) ; ```
`or` can be poorly optimized sometimes. In this case, you can just split the query into two subqueries, and combine them using `union`: ``` select * From computers Where Name in ( select T2.Name from ( select Name from computers group by Name having COUNT(*) > 1 ) T3 join computers T2 on T3.Name = T2.Name left join policyassociations PA on T2.PK = PA.EntityId where (T2.EncryptionStatus = 0 or T2.EncryptionStatus is NULL) and (PA.EntityType <> 1 or PA.EntityType is NULL) ) UNION select * From computers WHERE ClientId in ( select substring(ClientID,11,100) from computers ); ``` You might also be able to improve performance by replacing the subqueries with explicit `join`s. However, this seems like the shortest route to better performance. EDIT: I think the version with joins is: ``` select c.* From computers c left outer join (select c.Name from (select c.*, count(*) over (partition by Name) as cnt from computers c ) c left join policyassociations PA on c.PK = PA.EntityId and PA.EntityType <> 1 where (c.EncryptionStatus = 0 or c.EncryptionStatus is NULL) and c.cnt > 1 ) cpa on c.Name = cpa.Name left outer join (select substring(ClientID, 11, 100) as name from computers ) csub on c.Name = csub.name Where cpa.Name is not null or csub.Name is not null; ```
Is there a way to make this query more efficient performance wise?
[ "", "sql", "sql-server-2008", "" ]
I was asked this question in an interview: From the 2 tables below, write a query to pull customers with no sales orders. How many ways to write this query and which would have best performance. * Table 1: `Customer` - `CustomerID` * Table 2: `SalesOrder` - `OrderID, CustomerID, OrderDate` Query: ``` SELECT * FROM Customer C RIGHT OUTER JOIN SalesOrder SO ON C.CustomerID = SO.CustomerID WHERE SO.OrderID = NULL ``` Is my query correct and are there other ways to write the query and get the same results?
*Answering for MySQL instead of SQL Server, cause you tagged it later with SQL Server, so I thought (since this was an interview question, that it wouldn't bother you, for which DBMS this is). Note though, that the queries I wrote are standard sql, they should run in every RDBMS out there. How each RDBMS handles those queries is another issue, though.* I wrote this little procedure for you, to have a test case. It creates the tables customers and orders like you specified and I added primary keys and foreign keys, like one would usually do it. No other indexes, as every column worth indexing here is already primary key. 250 customers are created, 100 of them made an order (though out of convenience none of them twice / multiple times). A dump of the data follows, posted the script just in case you want to play around a little by increasing the numbers. ``` delimiter $$ create procedure fill_table() begin create table customers(customerId int primary key) engine=innodb; set @x = 1; while (@x <= 250) do insert into customers values(@x); set @x := @x + 1; end while; create table orders(orderId int auto_increment primary key, customerId int, orderDate timestamp, foreign key fk_customer (customerId) references customers(customerId) ) engine=innodb; insert into orders(customerId, orderDate) select customerId, now() - interval customerId day from customers order by rand() limit 100; end $$ delimiter ; call fill_table(); ``` For me, this resulted in this: ``` CREATE TABLE `customers` ( `customerId` int(11) NOT NULL, PRIMARY KEY (`customerId`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `customers` VALUES 
(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23),(24),(25),(26),(27),(28),(29),(30),(31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41),(42),(43),(44),(45),(46),(47),(48),(49),(50),(51),(52),(53),(54),(55),(56),(57),(58),(59),(60),(61),(62),(63),(64),(65),(66),(67),(68),(69),(70),(71),(72),(73),(74),(75),(76),(77),(78),(79),(80),(81),(82),(83),(84),(85),(86),(87),(88),(89),(90),(91),(92),(93),(94),(95),(96),(97),(98),(99),(100),(101),(102),(103),(104),(105),(106),(107),(108),(109),(110),(111),(112),(113),(114),(115),(116),(117),(118),(119),(120),(121),(122),(123),(124),(125),(126),(127),(128),(129),(130),(131),(132),(133),(134),(135),(136),(137),(138),(139),(140),(141),(142),(143),(144),(145),(146),(147),(148),(149),(150),(151),(152),(153),(154),(155),(156),(157),(158),(159),(160),(161),(162),(163),(164),(165),(166),(167),(168),(169),(170),(171),(172),(173),(174),(175),(176),(177),(178),(179),(180),(181),(182),(183),(184),(185),(186),(187),(188),(189),(190),(191),(192),(193),(194),(195),(196),(197),(198),(199),(200),(201),(202),(203),(204),(205),(206),(207),(208),(209),(210),(211),(212),(213),(214),(215),(216),(217),(218),(219),(220),(221),(222),(223),(224),(225),(226),(227),(228),(229),(230),(231),(232),(233),(234),(235),(236),(237),(238),(239),(240),(241),(242),(243),(244),(245),(246),(247),(248),(249),(250); CREATE TABLE `orders` ( `orderId` int(11) NOT NULL AUTO_INCREMENT, `customerId` int(11) DEFAULT NULL, `orderDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`orderId`), KEY `fk_customer` (`customerId`), CONSTRAINT `orders_ibfk_1` FOREIGN KEY (`customerId`) REFERENCES `customers` (`customerId`) ) ENGINE=InnoDB AUTO_INCREMENT=128 DEFAULT CHARSET=utf8; INSERT INTO `orders` VALUES (1,247,'2013-06-24 19:50:07'),(2,217,'2013-07-24 19:50:07'),(3,8,'2014-02-18 20:50:07'),(4,40,'2014-01-17 20:50:07'),(5,52,'2014-01-05 20:50:07'),(6,80,'2013-12-08 
20:50:07'),(7,169,'2013-09-10 19:50:07'),(8,135,'2013-10-14 19:50:07'),(9,115,'2013-11-03 20:50:07'),(10,225,'2013-07-16 19:50:07'),(11,112,'2013-11-06 20:50:07'),(12,243,'2013-06-28 19:50:07'),(13,158,'2013-09-21 19:50:07'),(14,24,'2014-02-02 20:50:07'),(15,214,'2013-07-27 19:50:07'),(16,25,'2014-02-01 20:50:07'),(17,245,'2013-06-26 19:50:07'),(18,182,'2013-08-28 19:50:07'),(19,166,'2013-09-13 19:50:07'),(20,69,'2013-12-19 20:50:07'),(21,85,'2013-12-03 20:50:07'),(22,44,'2014-01-13 20:50:07'),(23,103,'2013-11-15 20:50:07'),(24,19,'2014-02-07 20:50:07'),(25,33,'2014-01-24 20:50:07'),(26,102,'2013-11-16 20:50:07'),(27,41,'2014-01-16 20:50:07'),(28,94,'2013-11-24 20:50:07'),(29,43,'2014-01-14 20:50:07'),(30,150,'2013-09-29 19:50:07'),(31,218,'2013-07-23 19:50:07'),(32,131,'2013-10-18 19:50:07'),(33,77,'2013-12-11 20:50:07'),(34,2,'2014-02-24 20:50:07'),(35,45,'2014-01-12 20:50:07'),(36,230,'2013-07-11 19:50:07'),(37,101,'2013-11-17 20:50:07'),(38,31,'2014-01-26 20:50:07'),(39,56,'2014-01-01 20:50:07'),(40,176,'2013-09-03 19:50:07'),(41,223,'2013-07-18 19:50:07'),(42,145,'2013-10-04 19:50:07'),(43,26,'2014-01-31 20:50:07'),(44,62,'2013-12-26 20:50:07'),(45,195,'2013-08-15 19:50:07'),(46,153,'2013-09-26 19:50:07'),(47,179,'2013-08-31 19:50:07'),(48,104,'2013-11-14 20:50:07'),(49,7,'2014-02-19 20:50:07'),(50,209,'2013-08-01 19:50:07'),(51,86,'2013-12-02 20:50:07'),(52,110,'2013-11-08 20:50:07'),(53,204,'2013-08-06 19:50:07'),(54,187,'2013-08-23 19:50:07'),(55,114,'2013-11-04 20:50:07'),(56,38,'2014-01-19 20:50:07'),(57,236,'2013-07-05 19:50:07'),(58,79,'2013-12-09 20:50:07'),(59,96,'2013-11-22 20:50:07'),(60,37,'2014-01-20 20:50:07'),(61,207,'2013-08-03 19:50:07'),(62,22,'2014-02-04 20:50:07'),(63,120,'2013-10-29 20:50:07'),(64,200,'2013-08-10 19:50:07'),(65,51,'2014-01-06 20:50:07'),(66,181,'2013-08-29 19:50:07'),(67,4,'2014-02-22 20:50:07'),(68,123,'2013-10-26 19:50:07'),(69,108,'2013-11-10 20:50:07'),(70,55,'2014-01-02 20:50:07'),(71,76,'2013-12-12 
20:50:07'),(72,6,'2014-02-20 20:50:07'),(73,18,'2014-02-08 20:50:07'),(74,211,'2013-07-30 19:50:07'),(75,53,'2014-01-04 20:50:07'),(76,216,'2013-07-25 19:50:07'),(77,32,'2014-01-25 20:50:07'),(78,74,'2013-12-14 20:50:07'),(79,138,'2013-10-11 19:50:07'),(80,197,'2013-08-13 19:50:07'),(81,221,'2013-07-20 19:50:07'),(82,118,'2013-10-31 20:50:07'),(83,61,'2013-12-27 20:50:07'),(84,28,'2014-01-29 20:50:07'),(85,16,'2014-02-10 20:50:07'),(86,39,'2014-01-18 20:50:07'),(87,3,'2014-02-23 20:50:07'),(88,46,'2014-01-11 20:50:07'),(89,189,'2013-08-21 19:50:07'),(90,59,'2013-12-29 20:50:07'),(91,249,'2013-06-22 19:50:07'),(92,127,'2013-10-22 19:50:07'),(93,47,'2014-01-10 20:50:07'),(94,178,'2013-09-01 19:50:07'),(95,141,'2013-10-08 19:50:07'),(96,188,'2013-08-22 19:50:07'),(97,220,'2013-07-21 19:50:07'),(98,15,'2014-02-11 20:50:07'),(99,175,'2013-09-04 19:50:07'),(100,206,'2013-08-04 19:50:07'); ``` Okay, now to the queries. Three ways came to my mind, I omitted the `right join` that MDiesel did, because it's actually just another way of writing `left join`. It was invented for lazy sql developers, that don't want to switch table names, but instead just rewrite one word. 
Anyway, first query: ``` select c.* from customers c left join orders o on c.customerId = o.customerId where o.customerId is null; ``` Results in an execution plan like this: ``` +----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+ | 1 | SIMPLE | c | index | NULL | PRIMARY | 4 | NULL | 250 | Using index | | 1 | SIMPLE | o | ref | fk_customer | fk_customer | 5 | wtf.c.customerId | 1 | Using where; Using index | +----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+ ``` --- Second query: ``` select c.* from customers c where c.customerId not in (select distinct customerId from orders); ``` Results in an execution plan like this: ``` +----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+ | 1 | PRIMARY | c | index | NULL | PRIMARY | 4 | NULL | 250 | Using where; Using index | | 2 | DEPENDENT SUBQUERY | orders | index_subquery | fk_customer | fk_customer | 5 | func | 2 | Using index | +----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+ ``` --- Third query: ``` select c.* from customers c where not exists (select 1 from orders o where o.customerId = c.customerId); ``` Results in an execution plan like this: ``` 
+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+ | 1 | PRIMARY | c | index | NULL | PRIMARY | 4 | NULL | 250 | Using where; Using index | | 2 | DEPENDENT SUBQUERY | o | ref | fk_customer | fk_customer | 5 | wtf.c.customerId | 1 | Using where; Using index | +----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+ ``` --- We can see in all execution plans, that the customers table is read as a whole, but from the index (the implicit one as the only column is primary key). This may change, when you select other columns from the table, that are not in an index. The first one seems to be the best. For each row in customers only one row in orders is read. The `id` column suggests, that MySQL can do this in one step, as only indexes are involved. The second query seems to be the worst (though all 3 queries shouldn't perform too bad). For each row in customers the subquery is executed (the select\_type column tells this). The third query is not much different in that it uses a dependent subquery, but should perform better than the second query. Explaining the small differences would lead to far now. If you're interested, here's the manual page that explains what each column and their values mean here: [EXPLAIN output](http://dev.mysql.com/doc/refman/5.0/en/explain-output.html) **Finally:** I'd say, that the first query will perform best, but as always, in the end one has to measure, to measure and to measure.
I can think of two other ways to write this query: ``` SELECT C.* FROM Customer C LEFT OUTER JOIN SalesOrder SO ON C.CustomerID = SO.CustomerID WHERE SO.CustomerID IS NULL SELECT C.* FROM Customer C WHERE NOT C.CustomerID IN(SELECT CustomerID FROM SalesOrder) ```
SQL Query 2 tables null results
[ "", "sql", "sql-server", "" ]
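As a sanity check on the three forms above, the anti-join equivalence can be reproduced in miniature. The following is a hypothetical sketch (not part of the original answers) using SQLite through Python's sqlite3 module instead of MySQL, with an invented five-customer data set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customerId INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        orderId    INTEGER PRIMARY KEY AUTOINCREMENT,
        customerId INTEGER REFERENCES customers (customerId)
    );
    INSERT INTO customers VALUES (1), (2), (3), (4), (5);
    -- only customers 2 and 4 have ever ordered
    INSERT INTO orders (customerId) VALUES (2), (4), (4);
""")

left_join = conn.execute("""
    SELECT c.customerId FROM customers c
    LEFT JOIN orders o ON c.customerId = o.customerId
    WHERE o.customerId IS NULL
    ORDER BY c.customerId
""").fetchall()

# NOT IN misbehaves when the subquery can yield NULL; customerId never is here.
not_in = conn.execute("""
    SELECT c.customerId FROM customers c
    WHERE c.customerId NOT IN (SELECT customerId FROM orders)
    ORDER BY c.customerId
""").fetchall()

not_exists = conn.execute("""
    SELECT c.customerId FROM customers c
    WHERE NOT EXISTS (SELECT 1 FROM orders o
                      WHERE o.customerId = c.customerId)
    ORDER BY c.customerId
""").fetchall()

# All three return the customers without orders: 1, 3 and 5.
print(left_join)  # [(1,), (3,), (5,)]
```

Which variant wins on a real data set is still a question for the optimiser, as the execution plans in the answer above illustrate.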
I need to find the average number of days between records grouped by the `user_id` of the record. The PK for the table is `eval_id`, so my columns are `eval_id`, `user_id`, and `quality_date`. The output would be just the `user_id` and the average days. There is no consistent number of records per user, and the average would be one value for all records related to the user. Trying to find a dynamic approach has so far been unsuccessful. I am using `SQL Server 2012`, SSMS.
**Original Answer:** ``` select user_id, datediff(day, min(quality_date), max(quality_date)) / (count(*) - 1.0) from eval group by user_id ``` **Edit following Jason's comment:** ``` select user_id, datediff(day, min(quality_date), max(quality_date)) / NULLIF((count(*) - 1.0), 0) from eval group by user_id ``` will avoid a divide by zero error if there is only one record for a user. That user will have NULL in the average column. ``` select user_id, case when count(*) = 1 then 1 else datediff(day, min(quality_date), max(quality_date)) / (count(*) - 1.0) end from eval group by user_id ``` will also avoid the error, but gives 1 instead of NULL for the affected users.
You can get the average by taking the total span and dividing by one minus the number of records. ``` select user_id, (case when count(*) > 1 then datediff(day, min(quality_date), max(quality_date)) / (count(*) - 1.0) end) as AvgTimeBetween from eval group by user_id; ``` There are harder ways to do the calculation, such as calculating each span (using `lag()` say) and then using the `avg()` function.
Average Days between Records by User_ID
[ "", "sql", "sql-server", "t-sql", "" ]
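The span-divided-by-gaps idea from the accepted answer can be checked outside SQL Server as well. A minimal sketch, assuming SQLite (via Python's sqlite3), where `julianday()` stands in for `DATEDIFF(day, ...)`; the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE eval (
        eval_id      INTEGER PRIMARY KEY,
        user_id      INTEGER,
        quality_date TEXT
    );
    INSERT INTO eval (user_id, quality_date) VALUES
        (1, '2014-01-01'),   -- gaps of 10 and 20 days -> average 15
        (1, '2014-01-11'),
        (1, '2014-01-31'),
        (2, '2014-03-05');   -- single record -> NULL average
""")

rows = conn.execute("""
    SELECT user_id,
           (julianday(MAX(quality_date)) - julianday(MIN(quality_date)))
               / NULLIF(COUNT(*) - 1.0, 0) AS avg_days
    FROM eval
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()

print(rows)  # [(1, 15.0), (2, None)]
```

The `NULLIF` guard plays the same role as in the edited answer: a user with a single record gets NULL instead of a divide-by-zero error.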
I'm simply trying to use min() and max() functions in an expression like so: ``` SELECT WSN, MIN(TOP) - MAX(BASE) FROM PERFS GROUP BY WSN ``` But this doesn't appear to be valid SQL. I keep getting told I can't use aggregate functions in an expression. Could I get some help?
I threw the question onto the DBISAM forum at Elevate (not sure why I didn't do that to begin with), and this is what they came up with: Use two passes and create a memory table. ``` SELECT WSN, MAX(BASE) as MaxBase, MIN(TOP) as MinTop INTO memory FirstPass FROM PERFS GROUP BY WSN ; SELECT (MaxBase - MinTop) as Calc FROM memory FirstPass ``` Thanks for all of the help guys.
Without the database platform we can't tell that much, but try this: ``` select wsn, top - base from ( SELECT WSN , MIN(TOP) top , MAX(BASE) base FROM PERFS GROUP BY WSN ) ``` I think the actual answer can be found in the [documentation](http://www.elevatesoft.com/manual?action=viewtopic&id=dbisam4&product=d&version=7&topic=naming_conventions) where you can find that `TOP` is a keyword indeed and tables are escaped with `"`. Try this: ``` SELECT WSN , MIN("TOP") - MAX(BASE) FROM PERFS GROUP BY WSN ```
SQL Subtract Min And Max
[ "", "sql", "dbisam", "" ]
I have the below table with the following columns - ID-1, ID-2 and ID-3 - Key columns Rate - Shows the rate of a product based on the above IDs Date - Implies the date for which that Rate applies. ``` +-------+----------+-------------+----------+------------+ | ID-1 | ID-2 | ID-3 | Rate | Date | +-------+----------+-------------+----------+------------+ | 2000 | 1 | 100 | 50 | 12/30/2013 | +-------+----------+-------------+----------+------------+ | 2000 | 1 | 100 | 75 | 10/11/2013 | +-------+----------+-------------+----------+------------+ | 2000 | 1 | 100 | 100 | 12/15/2013 | +-------+----------+-------------+----------+------------+ | 2000 | 2 | 100 | 50 | 10/30/2013 | +-------+----------+-------------+----------+------------+ | 2000 | 2 | 100 | 75 | 10/11/2013 | +-------+----------+-------------+----------+------------+ | 2000 | 2 | 100 | 100 | 09/15/2013 | +-------+----------+-------------+----------+------------+ | 3000 | 2 | 200 | 25 | 1/1/2014 | +-------+----------+-------------+----------+------------+ | 4000 | 2 | 100 | 100 | 12/1/2013 | +-------+----------+-------------+----------+------------+ | 4000 | 1 | 200 | 75 | 1/1/2014 | +-------+----------+-------------+----------+------------+ | 4000 | 2 | 100 | 25 | 11/1/2014 | +-------+----------+-------------+----------+------------+ ``` For each combination of ID-1, ID-2 and ID-3 I want to output the 2 most recent rates in the following format - Previous Rate - is the rate for the second most recent date Current Rate - is the rate for the most recent date ``` +-------+----------+-------------+---------------+----------------+ | ID-1 | ID-2 | ID-3 | Previous Rate | Current Rate | +-------+----------+-------------+---------------+----------------+ | 2000 | 1 | 100 | 100 | 50 | +-------+----------+-------------+---------------+----------------+ | 2000 | 2 | 100 | 75 | 50 | +-------+----------+-------------+---------------+----------------+ | 3000 | 2 | 200 | | 25 | +-------+----------+-------------+---------------+----------------+ | 4000 | 1 | 200 | | 75 | +-------+----------+-------------+---------------+----------------+ | 4000 | 2 | 200 | 25 | 100 | +-------+----------+-------------+---------------+----------------+ ``` Any ideas?
[SQL Fiddle](http://sqlfiddle.com/#!4/a2833/5) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE test ( "ID-1", "ID-2", "ID-3", Rate, "Date" ) AS SELECT 2000 , 1 , 100 , 50 , TO_DATE( '12/30/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 2000 , 1 , 100 , 75 , TO_DATE( '10/11/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 2000 , 1 , 100 , 100 , TO_DATE( '12/15/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 2000 , 2 , 100 , 50 , TO_DATE( '10/30/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 2000 , 2 , 100 , 75 , TO_DATE( '10/11/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 2000 , 2 , 100 , 100 , TO_DATE( '09/15/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 3000 , 2 , 200 , 25 , TO_DATE( '1/1/2014', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 4000 , 2 , 100 , 100 , TO_DATE( '12/1/2013', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 4000 , 1 , 200 , 75 , TO_DATE( '1/1/2014', 'MM/DD/YYYY' ) FROM DUAL UNION ALL SELECT 4000 , 2 , 100 , 25 , TO_DATE( '11/1/2014', 'MM/DD/YYYY' ) FROM DUAL; ``` **Query 1**: ``` WITH rankings AS ( SELECT "ID-1", "ID-2", "ID-3", Rate, DENSE_RANK() OVER (PARTITION BY "ID-1", "ID-2", "ID-3" ORDER BY "Date" DESC, Rate DESC ) AS "Rank" FROM test ) SELECT "ID-1", "ID-2", "ID-3", MAX( CASE "Rank" WHEN 1 THEN Rate END ) AS current_rate, MAX( CASE "Rank" WHEN 2 THEN Rate END ) AS previous_rate FROM rankings GROUP BY "ID-1", "ID-2", "ID-3" ``` **[Results](http://sqlfiddle.com/#!4/a2833/5/0)**: ``` | ID-1 | ID-2 | ID-3 | CURRENT_RATE | PREVIOUS_RATE | |------|------|------|--------------|---------------| | 2000 | 1 | 100 | 50 | 100 | | 2000 | 2 | 100 | 50 | 75 | | 3000 | 2 | 200 | 25 | (null) | | 4000 | 1 | 200 | 75 | (null) | | 4000 | 2 | 100 | 25 | 100 | ```
Maybe something like this? ``` select id-1, id-2, id-3, first_value(rate) over (partition by id-1, id-2, id-3 order by date desc) as current_rate lead(rate,1) over (partition by id-1, id-2, id-3 order by date desc) as prev_rate from table ```
SQL - Compare Data from the same table
[ "", "sql", "oracle", "oracle11g", "" ]
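The DENSE_RANK-plus-conditional-aggregation pattern from the accepted answer works on any engine with window functions. Below is a minimal sketch, assuming SQLite 3.25 or newer (via Python's sqlite3), with simplified column names and a trimmed-down, invented data set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rates (id1 INTEGER, id2 INTEGER, rate INTEGER, rate_date TEXT);
    INSERT INTO rates VALUES
        (2000, 1, 75,  '2013-10-11'),
        (2000, 1, 100, '2013-12-15'),
        (2000, 1, 50,  '2013-12-30'),
        (3000, 2, 25,  '2014-01-01');
""")

rows = conn.execute("""
    WITH ranked AS (
        SELECT id1, id2, rate,
               DENSE_RANK() OVER (PARTITION BY id1, id2
                                  ORDER BY rate_date DESC) AS rnk
        FROM rates
    )
    SELECT id1, id2,
           MAX(CASE rnk WHEN 1 THEN rate END) AS current_rate,
           MAX(CASE rnk WHEN 2 THEN rate END) AS previous_rate
    FROM ranked
    GROUP BY id1, id2
    ORDER BY id1, id2
""").fetchall()

print(rows)  # [(2000, 1, 50, 100), (3000, 2, 25, None)]
```

A key with only one rate gets NULL for the previous rate, matching the blank cells in the question's expected output.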
``` SELECT * FROM Customers c INNER JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID AND c.State = 'NY' INNER JOIN Accounts a ON ca.AccountID = a.AccountID AND a.Status = 1 ``` Equivalent: ``` SELECT * FROM Customers c INNER JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID INNER JOIN Accounts a ON ca.AccountID = a.AccountID WHERE c.State = 'NY' AND a.Status = 1 ``` **Left Join:** ``` SELECT * FROM Customers c LEFT JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID AND c.State = 'NY' LEFT JOIN Accounts a ON ca.AccountID = a.AccountID AND a.Status = 1 ``` Equivalent: ``` SELECT * FROM Customers c LEFT JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID LEFT JOIN Accounts a ON ca.AccountID = a.AccountID WHERE c.State = 'NY' AND a.Status = 1 ``` **Right Join:** ``` SELECT * FROM Customers c RIGHT JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID AND c.State = 'NY' RIGHT JOIN Accounts a ON ca.AccountID = a.AccountID AND a.Status = 1 ``` Equivalent: ``` SELECT * FROM Customers c RIGHT JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID RIGHT JOIN Accounts a ON ca.AccountID = a.AccountID WHERE c.State = 'NY' AND a.Status = 1 ``` What difference does it make when we specify the join condition in the "WHERE" clause vs the "ON" clause? Do we get the same results in inner, left outer, and right outer joins when specifying the join conditions in the "ON" clause vs the "WHERE" clause? Please advise.
Well, what you call "equivalent" is not an equivalent for outer joins. Let's take the left join for example. Condition in JOIN: ``` SELECT * FROM Customers c LEFT JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID AND c.State = 'NY' LEFT JOIN Accounts a ON ca.AccountID = a.AccountID AND a.Status = 1 ``` vs WHERE: ``` SELECT * FROM Customers c LEFT JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID LEFT JOIN Accounts a ON ca.AccountID = a.AccountID WHERE c.State = 'NY' AND a.Status = 1 ``` Putting the conditions into the WHERE clause effectively makes the joins *INNER* joins, because the WHERE clause is a row *filter* that is applied *after* the joins have been made.
For an inner join, Oracle will choose which conditions to use to join and which to filter based on the cost-based optimiser's analysis. You are likely to see the same execution plan from the first two queries. It won't necessarily join using the `on` clause and then filter using the `where` clauses. (It rewrites it to its internal format, the pre-ANSI version, under the hood anyway - which you can see if you trace the query - and there is no distinction in that format). You can demonstrate that by looking at the explain plan. One interesting demonstration is if you have a foreign key relationship on two columns, and join the parent to the child with one of those related columns in the `on` and the other in the `where`. ``` create table parent (pid1 number, pid2 number, constraint parent_pk primary key (pid1, pid2)); create table child (cid number, pid1 number not null, pid2 number not null, constraint child_pk primary key (cid), constraint child_fk_parent foreign key (pid1, pid2) references parent (pid1, pid2)); create index child_fk_index on child (pid1, pid2); set autotrace on explain select * from parent p join child c on c.pid2 = p.pid2 where c.pid1 = p.pid1; ----------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 65 | 2 (0)| 00:00:01 | | 1 | NESTED LOOPS | | | | | | | 2 | NESTED LOOPS | | 1 | 65 | 2 (0)| 00:00:01 | | 3 | TABLE ACCESS FULL | PARENT | 1 | 26 | 2 (0)| 00:00:01 | |* 4 | INDEX RANGE SCAN | CHILD_FK_INDEX | 1 | | 0 (0)| 00:00:01 | | 5 | TABLE ACCESS BY INDEX ROWID| CHILD | 1 | 39 | 0 (0)| 00:00:01 | ----------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - 
access("C"."PID1"="P"."PID1" AND "C"."PID2"="P"."PID2") ``` The plan shows both columns being used for access, and the index being used. Oracle doesn't necessarily join in the order you expect - the order of the tables in the `from` doesn't restrict Oracle's decision on the best plan: ``` select * from parent p join child c on c.pid2 = p.pid2 where c.pid1 = p.pid1 and c.cid = 1; ------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 65 | 1 (0)| 00:00:01 | | 1 | NESTED LOOPS | | 1 | 65 | 1 (0)| 00:00:01 | | 2 | TABLE ACCESS BY INDEX ROWID| CHILD | 1 | 39 | 1 (0)| 00:00:01 | |* 3 | INDEX UNIQUE SCAN | CHILD_PK | 1 | | 1 (0)| 00:00:01 | |* 4 | INDEX UNIQUE SCAN | PARENT_PK | 82 | 2132 | 0 (0)| 00:00:01 | ------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 3 - access("C"."CID"=1) 4 - access("C"."PID1"="P"."PID1" AND "C"."PID2"="P"."PID2") ``` So for inner joins they are equivalent, but it can be useful to separate out the columns that define relationships in the `on` clauses, e.g. the columns in the keys/indexes you *expect* it to use; and anything that is just filtering in the `where`. Oracle might still not do what you expect, but it shows your intent and is somewhat self-documenting. ``` select * from child c join parent p on p.pid1 = c.pid1 and p.pid2 = c.pid2 where c.cid = 1; ``` ... 
which gets the same execution plan as the previous one, despite appearing quite different: ``` ------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 65 | 1 (0)| 00:00:01 | | 1 | NESTED LOOPS | | 1 | 65 | 1 (0)| 00:00:01 | | 2 | TABLE ACCESS BY INDEX ROWID| CHILD | 1 | 39 | 1 (0)| 00:00:01 | |* 3 | INDEX UNIQUE SCAN | CHILD_PK | 1 | | 1 (0)| 00:00:01 | |* 4 | INDEX UNIQUE SCAN | PARENT_PK | 82 | 2132 | 0 (0)| 00:00:01 | ------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 3 - access("C"."CID"=1) 4 - access("P"."PID1"="C"."PID1" AND "P"."PID2"="C"."PID2") ``` From tracing that and looking in the trace file you can see it's transformed into: ``` Final query after transformations:******* UNPARSED QUERY IS ******* SELECT "C"."CID" "CID","C"."PID1" "PID1","C"."PID2" "PID2","P"."PID1" "PID1", "P"."PID2" "PID2" FROM "STACKOVERFLOW"."CHILD" "C","STACKOVERFLOW"."PARENT" "P" WHERE "C"."CID"=1 AND "P"."PID1"="C"."PID1" AND "P"."PID2"="C"."PID2" ``` ... so internally there is no distinction - all the conditions are in the `where` clause. Others have already covered why this doesn't apply for outer joins, but since I mentioned the old format, moving an outer-join condition to the `where` is roughly the same as omitting the `(+)` from that condition in the old syntax. 
Compare the transformation of these queries; an outer join where both conditions are in the `on` clause: ``` select * from parent p left outer join child c on c.pid1 = p.pid1 and c.pid2 = p.pid2; Final query after transformations:******* UNPARSED QUERY IS ******* SELECT "P"."PID1" "PID1","P"."PID2" "PID2","C"."CID" "CID","C"."PID1" "PID1", "C"."PID2" "PID2" FROM "STACKOVERFLOW"."PARENT" "P","STACKOVERFLOW"."CHILD" "C" WHERE "C"."PID2"(+)="P"."PID2" AND "C"."PID1"(+)="P"."PID1" ``` ... and the 'same' query where one of the conditions has been moved to the `where` clause: ``` select * from parent p left outer join child c on c.pid1 = p.pid1 where c.pid2 = p.pid2; Final query after transformations:******* UNPARSED QUERY IS ******* SELECT "P"."PID1" "PID1","P"."PID2" "PID2","C"."CID" "CID","C"."PID1" "PID1", "C"."PID2" "PID2" FROM "STACKOVERFLOW"."PARENT" "P","STACKOVERFLOW"."CHILD" "C" WHERE "C"."PID2"="P"."PID2" AND "C"."PID1"="P"."PID1" ``` Notice that the first query has both conditions marked with `(+)`, while the second has neither. The details in the trace show its decisions about (outer) join elimination: ``` OJE: Begin: find best directive for query block SEL$58A6D7F6 (#0) OJE: Considering outer-join elimination on query block SEL$58A6D7F6 (#0) OJE: considering predicate"C"."PID1"(+)="P"."PID1" rejected OJE: outer-join not eliminated OJE: End: finding best directive for query block SEL$58A6D7F6 (#0) ... OJE: Begin: find best directive for query block SEL$9E43CB6E (#0) OJE: Considering outer-join elimination on query block SEL$9E43CB6E (#0) OJE: considering predicate"C"."PID2"="P"."PID2" OJE: Converting outer join of CHILD and PARENT to inner-join. 
considered OJE: considering predicate"C"."PID1"="P"."PID1" rejected Registered qb: SEL$AE545566 0x2d07c338 (OUTER-JOIN REMOVED FROM QUERY BLOCK SEL$9E43CB6E; SEL$9E43CB6E; "C"@"SEL$1") ``` The outer-join query has become the same as this inner-join: ``` select * from parent p inner join child c on c.pid1 = p.pid1 where c.pid2 = p.pid2; Final query after transformations:******* UNPARSED QUERY IS ******* SELECT "P"."PID1" "PID1","P"."PID2" "PID2","C"."CID" "CID","C"."PID1" "PID1", "C"."PID2" "PID2" FROM "STACKOVERFLOW"."PARENT" "P","STACKOVERFLOW"."CHILD" "C" WHERE "C"."PID2"="P"."PID2" AND "C"."PID1"="P"."PID1" ```
join condition "ON" vs in "WHERE"
[ "", "sql", "join", "oracle11g", "" ]
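The key point, confirmed in the accepted answer, is that the two placements are only equivalent for inner joins. A hypothetical SQLite sketch (via Python's sqlite3, invented rows) makes the outer-join difference concrete:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, state TEXT);
    CREATE TABLE accounts (cust_id INTEGER, status INTEGER);
    INSERT INTO customers VALUES (1, 'NY'), (2, 'NY'), (3, 'CA');
    INSERT INTO accounts VALUES (1, 1);   -- only customer 1 has an active account
""")

# Filter in the ON clause: every customer row is preserved.
in_on = conn.execute("""
    SELECT c.id, a.status FROM customers c
    LEFT JOIN accounts a ON a.cust_id = c.id AND a.status = 1
    ORDER BY c.id
""").fetchall()

# Same filter in WHERE: rows whose account columns are NULL are discarded,
# so the LEFT JOIN silently behaves like an INNER JOIN.
in_where = conn.execute("""
    SELECT c.id, a.status FROM customers c
    LEFT JOIN accounts a ON a.cust_id = c.id
    WHERE a.status = 1
    ORDER BY c.id
""").fetchall()

print(in_on)     # [(1, 1), (2, None), (3, None)]
print(in_where)  # [(1, 1)]
```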
How to convert `123456.78` to `123,456.78` in SQL Server? I am getting the data from a table like `123456.78` and I have to show it as `123,456.78`
``` SELECT CONVERT(varchar, CAST(123456 AS money), 1) ```
Try this, ``` SELECT CAST(CONVERT(varchar, CAST(123456 AS money), 1) AS varchar) ``` replace 123456 with your number
Conversion in sql server
[ "", "sql", "sql-server", "sql-server-2005", "" ]
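When the grouping can be done in the application layer instead of in SQL Server, most languages offer a thousands-separator facility. A minimal Python sketch (not part of the original answers); the ',' option in the format spec plays the role of `CONVERT(varchar, CAST(... AS money), 1)`:

```python
amount = 123456.78

# ',' inserts thousands separators; '.2f' keeps two decimal places.
formatted = f"{amount:,.2f}"
print(formatted)  # 123,456.78
```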
Can you do a "SELECT \* FROM TABLE WHERE attribute has last character which is 'A' or 'B'"
I see that everyone who answered here suggests `LIKE`. However, the following would be faster than `LIKE`, so I suggest you use it. ``` SELECT * FROM t1 WHERE substring(attribute, -1)= 'A' OR substring(attribute, -1)= 'B'; ``` [**SQLFiddle**](http://www.sqlfiddle.com/#!2/1e70c/1)
You can try: ``` SELECT * FROM TABLE WHERE attribute LIKE '%A' OR attribute LIKE '%B' ```
SQL - can you select everything where a certain string ends with a few characters?
[ "", "mysql", "sql", "" ]
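The substring idea ports to other engines. A hypothetical SQLite sketch (via Python's sqlite3, invented rows): SQLite's `substr()` also counts from the end of the string when given a negative start, and `IN` condenses the two OR comparisons:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (attribute TEXT);
    INSERT INTO t1 VALUES ('ALPHA'), ('CLUB'), ('CIGAR'), ('NONE');
""")

# substr(attribute, -1) is the last character of the value.
rows = conn.execute("""
    SELECT attribute FROM t1
    WHERE substr(attribute, -1) IN ('A', 'B')
    ORDER BY attribute
""").fetchall()

print(rows)  # [('ALPHA',), ('CLUB',)]
```

Note that, unlike a leading-wildcard `LIKE '%A'`, applying a function to the column also prevents ordinary index use, so on large tables neither form is index-friendly.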
I need to fetch all the column names from the database tables. If it is a single table, I am able to do it using: ``` SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='nameofschema' AND `TABLE_NAME`='tbl_name' GROUP BY `COLUMN_NAME` ``` Now I need to get the column names from ten tables and show them in one go. How can I achieve this? Please help me in this regard.
Try the [`IN()`](http://dev.mysql.com/doc/refman/5.6/en/comparison-operators.html#function_in) operator, like this ``` SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='nameofschema' AND `TABLE_NAME` IN ('tbl_name','tbl_name2','tbl_name3') GROUP BY `COLUMN_NAME` ``` If the tables are in different schemas, you can use a combination of the AND and OR operators ``` SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE (`TABLE_SCHEMA`='nameofschema' AND `TABLE_NAME` = 'tbl_name') OR (`TABLE_SCHEMA`='nameofschema1' AND `TABLE_NAME` = 'tbl_name1') GROUP BY `COLUMN_NAME` ```
You can use `IN()` ``` SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='dbname' AND `TABLE_NAME` IN ('table1' ,'table2'.....) GROUP BY `COLUMN_NAME` ``` Or to show all the columns from database ``` SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='dbname' GROUP BY `COLUMN_NAME` ```
How to Fetch all the columns from the multiple tables by joining the tables
[ "", "mysql", "sql", "" ]
While trying to initialise a database in MySQL, we have run into the same errors (1064 & 1146) numerous times and are out of ideas on how to correct them. Here is what we have so far; any help would be greatly appreciated. Thank you.
You are using strings for table names; use backticks. You also have foreign keys referencing columns of different types; they must be the same type and size. Also, the referenced column must have a primary or unique key. Here it is, but I don't think this is a correct design. [SQL Fiddle](http://sqlfiddle.com/#!2/6b324)
Just missing commas: ``` CREATE TABLE 'Customer' ( customerCode VARCHAR(5) PRIMARY KEY, firstName VARCHAR(20) NOT NULL, lastName VARCHAR(20)NOT NULL, pointsTotal VARCHAR(5) )ENGINE=INNODB; CREATE TABLE 'GameList' ( gameCode INT PRIMARY KEY AUTO_INCREMENT, gameName VARCHAR(25) NOT NULL, consoleName VARCHAR(25) NOT NULL, pointsValue VARCHAR(25) NOT NULL )ENGINE=INNODB; ``` And last select should be like: ``` SELECT custCode, SUM(points) as pointsTotal from CustomerHistory GROUP BY custCode; ```
MySQL errors 1064 & 1146
[ "", "mysql", "sql", "" ]
I need help creating a query that returns whether the selected period already has a meeting in the room. I have the table `rep_reuniao` with the following design ``` ID --> int ROOM --> int DATE_BEGIN --> DATETIME DATE_END --> DATETIME ``` A row in the table `rep_reuniao` looks like this ``` ID --> 1 ROOM --> 2 DATE_BEGIN --> 2014-02-21 17:00:00 DATE_END --> 2014-02-21 18:00:00 ``` I will create a jQuery Ajax script to return a `true` or `false` JSON response based on the selected datetime. So the query will receive the `ROOM`, a `DATE_BEGIN` in this format `2014-02-21 17:00`, and a `DATE_END` in the same format `2014-02-21 18:00`. I have checked other answers, but most of them are queries that show the rooms available. In my design, a room is always available, except when a matching row exists in the table. ``` Example 1 ``` ROOM `1` is already in use from `2014-02-21 17:00:00` to `2014-02-21 18:00:00` I need to create a query that returns a value (anything) if the user selects the room for: ``` BEGIN = 2012-02-21 16:30:00 END = 2012-02-21 17:30:00 ``` See? The booking in the table begins at 17:00, so today my query easily returns a value when the begin time selected by the user is the same. But here the end time selected by the user falls within the time already in use. The time selected by the user must not overlap the stored begin and end dates. I have no ideas right now; can anyone help me? You need to check if one date range overlaps another date range. ``` New Date Range = |-----------| Test1 = |=====| Test2 = |=====| Test3 = |=====| Test4 = |=====| Test5 = |=====| Test6 = |=================| ``` Only Test1 and Test5 do not overlap. Sorry for any English errors; corrections are appreciated.
Try this; if the query returns any rows, that means the room is already booked. A simple explanation: the user gives the `@UserBegin` and `@UserEnd` times that they need the room for. If @UserBegin falls between a row's begin and end, the room is already booked. Similarly, if @UserEnd falls between a row's begin and end, the room is also booked. ``` SELECT * FROM rep_reuniao r WHERE (@UserBegin BETWEEN r.DATE_BEGIN AND r.DATE_END) OR (@UserEnd BETWEEN r.DATE_BEGIN AND r.DATE_END) OR (@UserBegin <= r.DATE_BEGIN AND @UserEnd >= r.DATE_END) ``` From here you can extend this to return a bool as shown ``` SELECT CASE WHEN EXISTS ( /* The above query */ ) THEN 0 ELSE 1 END ``` This should return 1 or 0, when the room is available or not, respectively.
The correct logic over overlapping timeframes is that two time frames overlap when both these conditions are true: * the first starts before the other ends * the first ends after the other starts In SQL, these can be expressed easily: ``` SELECT r.* FROM rep_reuniao r WHERE @UserBegin <= r.DATE_END AND @UserEnd >= r.DATE_BEGIN; ``` Note that this can return multiple rows, when multiple meetings occur during the specified timeframe. If you want true/false or 0/1, then use aggregation and `case`: ``` SELECT (case when count(*) = 0 then 'false' else 'true' end) as HasOverlappingRooms FROM rep_reuniao r WHERE @UserBegin <= r.DATE_END AND @UserEnd >= r.DATE_BEGIN; ```
SQL - Query for a meeting room
[ "", "mysql", "sql", "sql-server", "t-sql", "" ]
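The overlap rule used in the second answer, that two ranges overlap exactly when each one starts before the other ends, is easy to verify in isolation. A minimal Python sketch using the times from the question:

```python
from datetime import datetime

def overlaps(begin1, end1, begin2, end2):
    # Two ranges overlap exactly when each one starts before the other ends.
    return begin1 <= end2 and end1 >= begin2

booked_begin = datetime(2014, 2, 21, 17, 0)
booked_end = datetime(2014, 2, 21, 18, 0)

# 16:30-17:30 collides with the existing 17:00-18:00 booking ...
conflict = overlaps(datetime(2014, 2, 21, 16, 30),
                    datetime(2014, 2, 21, 17, 30),
                    booked_begin, booked_end)

# ... while 15:00-16:00 leaves the room free.
free = overlaps(datetime(2014, 2, 21, 15, 0),
                datetime(2014, 2, 21, 16, 0),
                booked_begin, booked_end)

print(conflict, free)  # True False
```

This two-comparison form covers all six "Test" cases drawn in the question, whereas a pair of BETWEEN checks alone misses the case where the new range fully contains an existing booking.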
I have two nearly identical tables A and B: their shared columns have the same names and data types. A has 100 columns and B has 101 columns; the only difference between them is that B has one extra column. Now I am trying to insert the rows from A into B. If both tables had the same number of columns, I could use the following insert statement. ``` insert into B select * from A; ``` Since the tables have different numbers of columns, I have to write all the column names in the insert statement. ``` insert into B (col1,........col100) select col1....col100 from A; ``` My question is: is there any way to write the statement in a simpler way, like below, so that time and lines of code can be saved? ``` insert into B (exclude column 101) select * from A; ```
Provided the last column of b is the one that differs from a's columns, both tables' columns are in the same order, and the last column of b can be null: ``` INSERT INTO b SELECT *,NULL FROM a; ``` If it cannot be null, put any valid value in place of NULL.
Since this is tagged sql-server, I'm guessing you're using SSMS to write your queries.. set your options correctly... see below image. ![ssms option screen](https://i.stack.imgur.com/hCoFh.jpg) Then switch to text output mode (CTRL-T) ``` SELECT * FROM TABLENAME WHERE 1=2 ``` Copy and paste the csv column list from here... and mix and match the ordering of the columns as you deem necessary. ![column list as csv](https://i.stack.imgur.com/A1Y4A.jpg)
Inserting rows into table with only one column left and with out using column names
[ "", "sql", "sql-server", "" ]
This is my `SQL Server` table data

```
id    Name
1     Active
2     On-Hold
3     Closed
4     Cancelled
5     Active
6     On-Hold
7     Closed
8     Cancelled
9     Active
10    On-Hold
11    Closed
12    Cancelled
```

How can I query the table so that the results look like:

```
Name       Id
Active     1,5,9
On-Hold    2,6,10
Closed     3,7,11
Cancelled  4,8,12
```
Use `XML PATH`

```
SELECT distinct t1.Name,
       SUBSTRING((
           SELECT ',' + cast(t2.Id as varchar(30))
           FROM Status t2
           WHERE t2.Name = t1.Name
           FOR XML PATH('')), 2, 1000000) as [Id List]
FROM Status t1
```

Just ensure you swap your real table name in.
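`FOR XML PATH` is SQL Server-specific; engines such as MySQL and SQLite get the same comma-separated list with `group_concat`. A sketch of that equivalent, using SQLite with a subset of the question's sample ids:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE status (id INTEGER, name TEXT)")
con.executemany(
    "INSERT INTO status VALUES (?, ?)",
    [(1, "Active"), (2, "On-Hold"), (3, "Closed"), (4, "Cancelled"),
     (5, "Active"), (6, "On-Hold"), (7, "Closed"), (8, "Cancelled")],
)

# group_concat collapses each group's ids into one comma-separated string.
rows = con.execute(
    "SELECT name, group_concat(id, ',') FROM status GROUP BY name"
).fetchall()

# The concatenation order within a group is not guaranteed, so sort for checking.
ids_by_name = {name: sorted(ids.split(","), key=int) for name, ids in rows}
```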
``` drop table #t create table #t(id int,name varchar(20)) insert into #t values(1,'Active'), (2,'On-Hold'), (3 ,'Closed'), (4,'Cancelled'), (5,'Active'), (6,'On-Hold'), (7,'Closed'), (8,'Cancelled'), (9,'Active'), (10,'On-Hold'), (11,'Closed'), (12,'Cancelled') select distinct name, stuff( ( select ',' + cast(id as varchar(10)) from #t t1 where t1.name=t2.name for xml path(''),type).value('.','varchar(max)'),1,1,'') from #t t2 group by t2.name,t2.id ``` `#SEE DEMO`
Format data in sql server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2008-r2", "" ]
I'm working with two tables:

* INVENTORY, which has fields for CODE (primary key), QTY_AVAILABLE and LOCATION
* INVOICE_HISTORY, which has fields for INVOICE_NUM, ITEMCODE (corresponds with CODE above) and QTY_SOLD

I need to generate a report which includes the item's code, number of times the item has been purchased, quantity available in stock and location. So far, I've been trying this

```
SELECT CODE, SALES, QTY_AVAILABLE, LOCATION
FROM INVENTORY
JOIN (
    SELECT ITEMCODE, SUM(QTY_SOLD) AS SALES,
    FROM INVOICE_HISTORY
    GROUP BY ITEMCODE
) ON (CODE = ITEMCODE))
```

But it isn't working. Can anyone help me understand what I'm doing wrong? Thank you.
Try this

```
SELECT i.CODE, SUM(QTY_SOLD) SALES, i.QTY_AVAILABLE, i.LOCATION
FROM INVENTORY i
JOIN INVOICE_HISTORY ih ON i.CODE = ih.ITEMCODE
GROUP BY i.CODE, i.QTY_AVAILABLE, i.LOCATION
```
try this one ``` SELECT CODE, SALES, QTY_AVAILABLE, LOCATION FROM INVENTORY Inv INNER JOIN INVENTORY INVOICE_HISTORY InvHis on Inv.CODE = InvHis.ITEMCODE ```
SQL Server SELECT in JOIN
[ "", "sql", "sql-server", "sql-server-2008", "join", "" ]
I'm having a very hard time framing this query. The error is - *Conversion failed when converting date from character string.* It is a generic query to work for all tables. All was working good, but i had to add a date parameter. **SQL:** ``` DECLARE @qryUpd nvarchar(MAX), @params nvarchar(MAX), @TableName nvarchar(50), @ColName nvarchar(50),@Id nvarchar(50),@ModifiedBy nvarchar(50), @ModifiedDate datetime set @qry = 'update '+@TableName+' set Deleted=1, ModifiedBy='+''''+@Id+''', ModifiedDate='+''''+@ModifiedDate+''''+' where '+@Colname+'='+''''+ @Id+'''' execute sp_executesql @qry ``` **C#:** ``` public void test(int id, int userid) { sqlCommand cmd = new sqlcommand("TestFunc",con); cmd.commandtype = commandtype.storedprocedure; cmd.parameters.addwithvalue("@TableName","tblArea"); cmd.parameters.addwithvalue("@ColName","AreaId"); cmd.parameters.addwithvalue("@Id",id); cmd.parameters.addwithvalue("@ModifiedBy",userid); cmd.parameters.addwithvalue("@ModifiedDate",system.datetime.now.tostring()); } ```
With the exception of the dynamic table name and the column, you can at least partially [parameterize](http://technet.microsoft.com/en-us/library/ms175170.aspx) your query, which will take away the headache of escaping quotes, and also give some protection against Sql Injection (since the table and column are still vulnerable and need to be validated): ``` DECLARE @qry AS NVARCHAR(MAX); set @qry = N'update '+@TableName + ' set Deleted=1, ModifiedBy=@ModifiedBy, ModifiedDate=@ModifiedDate where ' + @Colname + ' = @Id'; exec sp_executesql @qry, N'@Id nvarchar(50), @ModifiedBy nvarchar(50), @ModifiedDate datetime', @Id = @Id, @ModifiedBy = @ModifiedBy, @ModifiedDate = @ModifiedDate; ``` [SqlFiddle here](http://sqlfiddle.com/#!6/e4474/8) You can then bind your .Net parameters for `@Id, @ModifiedBy and @ModifiedDate` directly with the native types
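The same parameterize-the-values idea applies from any client library. Here is a sketch with Python's `sqlite3`; the table and column names are illustrative and, as above, must be validated separately since identifiers cannot be bound:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tblArea (AreaId INTEGER, Deleted INTEGER, "
    "ModifiedBy TEXT, ModifiedDate TEXT)"
)
con.execute("INSERT INTO tblArea VALUES (7, 0, NULL, NULL)")

# Identifiers must come from a whitelist, never from user input.
table, col = "tblArea", "AreaId"
assert table in {"tblArea"} and col in {"AreaId"}

# Only identifiers are interpolated; the values travel as bound parameters.
sql = f"UPDATE {table} SET Deleted = 1, ModifiedBy = ?, ModifiedDate = ? WHERE {col} = ?"
con.execute(sql, ("42", "2014-03-01 10:00:00", 7))

row = con.execute("SELECT Deleted, ModifiedBy FROM tblArea WHERE AreaId = 7").fetchone()
```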
You are passing the parameters in the wrong way. Use this: ``` DECLARE @ParmDefinition nvarchar(500); set @ParmDefinition = '@Id int, @ModifiedDate datetime' set @qry = 'update '+@TableName+' set Deleted=1, ModifiedBy=@Id, ModifiedDate=@ModifiedDate where ' + @Colname + '=@Id' execute sp_executesql @qry, @ParmDefinition, @ModifiedDate, @Id ``` Also pass parameter as `DateTime` instead of `string` by removing the `ToString`
Pass date parameter to dynamic sql
[ "", "sql", "" ]
I have two tables. I want to grab common columns between these two tables for a specific date, and I also want these grouped:

```
SELECT uv.keywordid, uv.competitionid
FROM unique_views as uv
JOIN signups AS s
    ON s.keywordid = uv.keywordid
    AND s.competitionid = uv.competitionid
WHERE uv.dt_created > "2014-02-27"
    AND s.dt_created > "2014-02-27"
GROUP BY uv.keywordid, uv.competitionid
ORDER BY uv.keywordid, uv.competitionid
```

Both keywordid and competitionid are indexed in the two tables. Both tables have around 11k records at this point, and most entries have the same keywordid and competitionid. This query is locking my tables for ~20 seconds. What could I do to improve speed?
So you want distinct pairs of keywordid, competitionid having records in both tables created after 2014-02-27. Use DISTINCT then, not GROUP BY. Here is a method guaranteed to select the distinct tuples from each table before doing the join, so this should be fast:

```
select keywordid, competitionid
from (select distinct keywordid, competitionid
      from unique_views
      where dt_created > "2014-02-27") uv
inner join (select distinct keywordid, competitionid
            from signups
            where dt_created > "2014-02-27") s
    using (keywordid, competitionid)
order by keywordid, competitionid;
```
Twenty seconds seems like a long time. I would suggest rewriting the query as an `exists` query. This eliminates the aggregation and allows for some other optimizations. ``` select s.keywordid, s.competitionid from signups s where s.dt_created > '2014-02-27' and exists (select 1 from unique_views uv where s.keywordid = uv.keywordid and s.competitionid = uv.competitionid and uv.dt_created > '2014-02-27' ); ``` This assumes that there are no duplicate `keywordid`, `competitionid` pairs in signups. If that is possible, then use this version: ``` select s.keywordid, s.competitionid from (select distinct s.keywordid, s.competitionid from signups s where s.dt_created > '2014-02-27' ) s where exists (select 1 from unique_views uv where s.keywordid = uv.keywordid and s.competitionid = uv.competitionid and uv.dt_created > '2014-02-27' ); ``` Next, create the following two indexes: ``` create index signups_dt_created_keywordid_competitionid on signups(dt_created, keywordid, competitionid); create index unique_views_keywordid_competitionid_dt_created on unique_views(keywordid, competitionid, dt_created); ``` These indexes actually "cover" the query, so the indexes can be used for the processing rather than the original data pages.
SELECT with JOIN taking too long
[ "", "mysql", "sql", "" ]
I am creating an app which queries an online database. I am using a mySql db hosted on a free web server. Can I create web services on free servers? Also can I query a database from my app directly without using web services?
as RED\_ has stated, you need Http POST to query your database. For the answer for free web server is Yes you can. You can create some PHP file to execute and query specific functions Such as select and display list on your android App. As far as i know, You are NOT able to query database directly and you need gateway to do this. what i mean gateway is some files that bridging your app and the server's db. This gateway can send and receive JSON or XML objects then your android app can consume it. Some good tutorial to begin with: <http://www.codeproject.com/Articles/267023/Send-and-receive-json-between-android-and-php> or call webservice in android <http://androidexample.com/WEBSERVICE/index.php?view=examplecloud&scat=26> if you need more detailed description, just create another question thread. hope this helps.
From what I understand you need a .php file on the server that you will Http post to. That will talk to the MySQL database and return the information. This may not be 100% accurate but I tried it on a small local server and it worked. You cannot directly talk to a MYSQL database from my understanding. What else have you tried so far?
querying online database of free server via android app
[ "", "android", "sql", "database", "" ]
I am building a query which should list all columns in a table along with a data sample for each column name. For example, if I had a table called 'person' with the columns of 'first', 'last' and 'age', I would need it listed like this: ``` table_name column_name column_value person person_id 443 person first john person last smith person age 48 ``` The goal of this query is to actually list all columns in any given table - I have this working. I want to have a data value listed in the third column (column\_value) so I can check the front end of applications against the backend and ensure I am actually dealing with the correct column. This table may have hundreds or thousands of records, but I am just trying to pull one matching record. The average table columns I am working with in any given table is 200+, so I can't do this by hand. Here is the query I am using to pull all columns from a table: ``` SELECT table_name , COLUMN_NAME , '' AS column_value FROM information_schema.COLUMNS WHERE TABLE_NAME LIKE '%person%' ORDER BY TABLE_NAME ``` This query returns the following results (notice column\_value is empty): ``` table_name column_name column_value person person_id person first person last person age ``` I will use the unique primary key to identify a single person record to pull from - this will ensure I don't have repeated column names (ex. person\_id/first/last/age repeating for each time a person is pulled.) The question is, how do I turn this into a list of columns along with the value of a specific selected record in the 'column\_value' column?? ## UPDATE Just to clarify, I am basically trying to turn this type of query & output: ``` SELECT * FROM person WHERE person_id = 443; ** outputs this: person_id first last age 443 john smith 48 ``` Into this: ``` table_name column_name column_value person person_id 443 person first john person last smith person age 48 ```
You could try something like this ``` DECLARE @SchemaName sysname DECLARE @TableName sysname DECLARE @Colname sysname DECLARE @SQL_Template nvarchar(max) DECLARE @SQL nvarchar(max) SET @SchemaName = 'Sales' SET @TableName = 'Customer' CREATE TABLE ##Results(Colname sysname, colvalue varchar(256)) SET @SQL_Template = 'INSERT INTO ##Results(Colname,colvalue ) SELECT top 1 ''@Colname '' as Colname,' + ' CONVERT(nvarchar(256),@Colname) AS ColValue' + ' FROM ['+@SchemaName+'].['+@TableName+']' DECLARE COL_CURSOR CURSOR STATIC FOR SELECT c.name FROM sys.schemas s JOIN sys.tables t ON t.schema_id = s.schema_id JOIN sys.columns c ON c.object_id = t.object_id WHERE s.name = 'Sales' AND t.name = 'Customer' OPEN COL_CURSOR FETCH NEXT FROM COL_CURSOR INTO @Colname WHILE @@FETCH_STATUS =0 BEGIN SET @SQL = REPLACE(@SQL_Template,'@ColName',@ColName) PRINT @SQL EXEC sp_executesql @SQL FETCH NEXT FROM COL_CURSOR INTO @Colname END CLOSE COL_CURSOR DEALLOCATE COL_CURSOR SELECT * FROM ##Results DROP TABLE ##Results ```
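As an alternative to doing everything in T-SQL, the column-by-column listing can also be driven from client code. A sketch with Python and SQLite, using `PRAGMA table_info` as a stand-in for `sys.columns` and the example `person` table from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (person_id INTEGER, first TEXT, last TEXT, age INTEGER)")
con.execute("INSERT INTO person VALUES (443, 'john', 'smith', 48)")

# PRAGMA table_info lists the columns in order; r[1] is the column name.
cols = [r[1] for r in con.execute("PRAGMA table_info(person)")]

# Pair each column with the corresponding value from one sample row.
sample = con.execute("SELECT * FROM person WHERE person_id = 443").fetchone()
report = [("person", col, val) for col, val in zip(cols, sample)]
```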
I have created this procedure which will take Table Name as Parameter and returns the information you have asked for. ``` CREATE PROCEDURE usp_GetTableFormat_And_SampleDate @TableName NVARCHAR(128) AS BEGIN SET NOCOUNT ON; DECLARE @ColsCasted NVARCHAR(MAX); DECLARE @Cols NVARCHAR(MAX); SELECT @TableName = t.name , @ColsCasted = STUFF(( SELECT ', CAST(' + QUOTENAME(sc.name) + ' AS NVARCHAR(1000)) AS ' + QUOTENAME(sc.name) FROM sys.tables st INNER JOIN sys.columns sc ON st.object_id = sc.object_id WHERE st.name = @TableName AND st.object_id = t.object_id FOR XML PATH(''),TYPE) .value('.','NVARCHAR(MAX)'),1 ,2 ,'') FROM sys.tables t WHERE t.name = @TableName GROUP BY t.name, t.object_id SELECT @TableName = t.name , @Cols = STUFF(( SELECT ', ' + QUOTENAME(sc.name) FROM sys.tables st INNER JOIN sys.columns sc ON st.object_id = sc.object_id WHERE st.name = @TableName AND st.object_id = t.object_id FOR XML PATH(''),TYPE) .value('.','NVARCHAR(MAX)'),1 ,2 ,'') FROM sys.tables t WHERE t.name = @TableName GROUP BY t.name, t.object_id; DECLARE @Sql NVARCHAR(MAX); SET @Sql = N'WITH CTE AS ( SELECT * FROM ( SELECT TOP 1 ' + @ColsCasted + N' FROM ' + QUOTENAME(@TableName) + N') Q UNPIVOT ( VALS for N IN (' + @Cols + '))up ) SELECT TableName , ColName AS ColumnName , VALS AS SampleData FROM CTE C INNER JOIN (SELECT sc.Name AS ColName, st.Name AS TableName FROM sys.tables st INNER JOIN sys.columns sc ON st.object_id = sc.object_id WHERE st.name = '''+ @TableName + N''' ) Q2 ON C.N = Q2.ColName' EXECUTE sp_executesql @Sql END ``` **Test** I have a table called `'Department'` in my database ``` EXECUTE usp_GetTableFormat_And_SampleDate 'Department' ``` **Result Set** ``` ╔════════════╦════════════╦════════════╗ ║ TableName ║ ColumnName ║ SampleData ║ ╠════════════╬════════════╬════════════╣ ║ Department ║ DeptID ║ 1 ║ ║ Department ║ Department ║ HR ║ ╚════════════╩════════════╩════════════╝ ```
List columns in a table with associated values for one record
[ "", "sql", "sql-server", "" ]
Please take a look at my table structure as below:

![enter image description here](https://i.stack.imgur.com/LvtDd.png)

The data in **Area_PrimaryTable** and **Area_SecondaryTable** will be mapped to `Table_TableID`. I'm wondering how to get the status ID for both tables 100 and 111 when I'm querying `AreaID` = A101?

```
Select * from Area where AreaID = A101
```

**UPDATES:**

I've found my way to build such a query, but I'm not sure if there is a better way. Please advise:

```
Select * from Table where TableID in (
    Select PrimaryTable from Area where AreaID = 'A101'
    union
    Select SecondaryTable from Area where AreaID = 'A101'
)
```
That is the kind of example where `JOIN`s are especially useful. Look them up, since `JOIN`s end up being used very, very often when working with SQL. They basically allow you to get information from several related tables as part of the same record.

So there you want to get information from two rows of Table, their relationship being that they are linked to the same row in Area. And I will assume you just want to get both statuses. That can be solved with the following code:

```
SELECT t1.Status, t2.Status
FROM Area AS a
JOIN Table AS t1 ON t1.TableID = a.PrimaryTable
JOIN Table AS t2 ON t2.TableId = a.SecondaryTable
WHERE AreaID = 'A101'
```

Note that while using `SELECT *` is OK in my book when one experiments, I believe it should be avoided in production code in favour of explicitly naming the columns you want to get information from.

*Edit: despite the apparent differences, note that my code proposal has the same behaviour as Xtoxico's. Both proposals are equivalent as far as I know.*
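The double join can be verified with a couple of toy rows. A sketch in SQLite; the schema is guessed from the screenshot description, and `tbl` stands in for the `Table` table since that name is awkward in most engines:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (TableID INTEGER PRIMARY KEY, Status INTEGER)")
con.execute("CREATE TABLE area (AreaID TEXT, PrimaryTable INTEGER, SecondaryTable INTEGER)")
con.executemany("INSERT INTO tbl VALUES (?, ?)", [(100, 1), (111, 2)])
con.execute("INSERT INTO area VALUES ('A101', 100, 111)")

# One JOIN per foreign key, each with its own alias.
row = con.execute(
    """SELECT t1.Status, t2.Status
       FROM area a
       JOIN tbl t1 ON t1.TableID = a.PrimaryTable
       JOIN tbl t2 ON t2.TableID = a.SecondaryTable
       WHERE a.AreaID = 'A101'"""
).fetchone()
```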
Try with this.... ``` SELECT t1.status as StatusPrimary, t2.status as StatusSecondary FROM Area as a, table as t1, table as t2 WHERE a.primaryTable=t1.TableID and a.secondaryTable=t2.TableId and a.AreaID = A101; ``` Your Sample return two rows, and my code return in the same row the information
SQL How to construct a query that two columns from one table depending on another table?
[ "", "mysql", "sql", "sql-server", "" ]
my `MasterSales` table looks like this

```
SalesDate | Category | Total
-----------------------------
1/1/2000    01         100
1/1/2000    02         110
1/2/2000    01         80
1/2/2000    03         20
```

and my `Category` table looks like this

```
ID | Name
----------
01 | A
02 | B
03 | C
04 | D
```

my query looks like this:

```
SELECT m.SalesDate, c.Name, SUM(ISNULL(m.Total,0)) AS TotalSales
FROM MasterSales m
LEFT JOIN Category c ON c.ID = m.Category
WHERE m.SalesDate BETWEEN '1/1/2000' AND '1/2/2000'
```

the result I want is like this:

```
SalesDate | Name | TotalSales
------------------------------
1/1/2000    A      100
1/1/2000    B      110
1/1/2000    C      0
1/1/2000    D      0
1/2/2000    A      80
1/2/2000    B      0
1/2/2000    C      20
1/2/2000    D      0
```

but the result I get looks like this:

```
SalesDate | Name | TotalSales
------------------------------
1/1/2000    A      100
1/1/2000    B      110
1/2/2000    A      80
1/2/2000    C      20
```

I already tried using `RIGHT JOIN` instead of `LEFT JOIN` and switching the tables in the `FROM` clause, but the result is still the same. Can anyone help explain why it won't work properly?

P.S.: I'm using SQL Server 2005 (if it matters)
and here is my answer ``` WITH MasterSales (SalesDate, Category, Total) AS ( SELECT '1/1/2000','01',100 UNION SELECT '1/1/2000','02',110 UNION SELECT '1/2/2000','01',80 UNION SELECT '1/2/2000','03',20 ), Category (ID, Name) AS ( SELECT '01','A' UNION SELECT '02','B' UNION SELECT '03','C' UNION SELECT '04','D' ), getDates AS ( SELECT DISTINCT SalesDate FROM MasterSales WHERE SalesDate BETWEEN '1/1/2000' AND '1/2/2000' ) SELECT gD.SalesDate, C.Name, SUM(ISNULL(MS.Total,0)) AS TotalSales FROM getDates AS gD CROSS JOIN Category AS C LEFT JOIN MasterSales AS MS ON MS.Category = C.ID AND MS.SalesDate = gD.SalesDate GROUP BY gD.SalesDate, C.Name ```
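The core of both answers is the same shape: build every (date, category) pair first, then left-join the sales onto it so missing combinations come out as zero. A runnable sketch of that shape in SQLite, with the question's sample data trimmed down:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master_sales (sales_date TEXT, category TEXT, total INTEGER)")
con.execute("CREATE TABLE category (id TEXT, name TEXT)")
con.executemany("INSERT INTO master_sales VALUES (?, ?, ?)",
                [("1/1/2000", "01", 100), ("1/2/2000", "03", 20)])
con.executemany("INSERT INTO category VALUES (?, ?)",
                [("01", "A"), ("02", "B"), ("03", "C")])

# Every (date, category) pair first, then attach whatever sales exist.
rows = con.execute(
    """SELECT d.sales_date, c.name, COALESCE(SUM(ms.total), 0)
       FROM (SELECT DISTINCT sales_date FROM master_sales) d
       CROSS JOIN category c
       LEFT JOIN master_sales ms
              ON ms.sales_date = d.sales_date AND ms.category = c.id
       GROUP BY d.sales_date, c.name
       ORDER BY d.sales_date, c.name"""
).fetchall()
```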
Try this. This covers all dates where you have **MasterSales**. In other words, this fills in the missing categories for the day. However, you're looking to fill in missing dates as well, you'll need to create a **date** control table.. look into **[recursive cte](https://stackoverflow.com/questions/20677472/sql-find-missing-date-ranges/20678491#20678491)** for the date table. ``` ;with control_table as ( select distinct SalesDate, ID from MasterSales ms cross join Category c where ms.SalesDate between '1/1/2000' AND '1/2/2000' ) select ct.SalesDate, c.Name as Category, coalesce(sum(ms.Total),0) as Total from control_table ct inner join Category c on ct.ID = c.ID left join MasterSales ms on ct.SalesDate = ms.SalesDate and ct.ID = ms.Category group by ct.SalesDate, c.Name order by ct.SalesDate asc, c.Name asc ``` **[FIDDLE](http://sqlfiddle.com/#!3/e5ba4/23)**
why won't this query work properly?
[ "", "sql", "sql-server", "left-join", "right-join", "" ]
I am trying to create a new table in SQL developer. I need to have a CHECK constraint on for Clinic\_City, however it will not work. Can anybody help me? ``` CREATE TABLE TravelClinics (Clinic_Number number(3) PRIMARY KEY, Clinic_Street varchar2(20) NOT NULL, Clinic_City varchar2(10) NOT NULL, Clinic_County varchar2(15) NOT NULL, Clinic_Postcode varchar2(7) NOT NULL, Clinic_Tel varchar2(11) NOT NULL, Clinic_Fax varchar2(11) NOT NULL CONSTRAINT CHECK(Clinic_City IN ('LONDON', 'BIRMINGHAM', 'MANCHESTER', 'LEEDS', 'GLASGOW', 'EDINBURGH'))) ```
In Oracle, you need to attach the constraint to the proper column, and give it a name, like this: ``` CREATE TABLE TravelClinics ( Clinic_Number number(3) PRIMARY KEY , Clinic_Street varchar2(20) NOT NULL , Clinic_City varchar2(10) NOT NULL CONSTRAINT Valid_City CHECK(Clinic_City IN ('LONDON', 'BIRMINGHAM', 'MANCHESTER', 'LEEDS', 'GLASGOW', 'EDINBURGH')) , Clinic_County varchar2(15) NOT NULL , Clinic_Postcode varchar2(7) NOT NULL , Clinic_Tel varchar2(11) NOT NULL , Clinic_Fax varchar2(11) NOT NULL ) ``` [Demo on sqlfiddle.](http://sqlfiddle.com/#!4/cdbaf) A better approach would be creating a separate table for cities, inserting six rows into it, and referencing that table from your `TravelClinics` table. This would reduce the amount of information that needs to be duplicated. It would also immensely simplify adding new cities, because it would be a data operation, rather than a schema operation.
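The column-level CHECK constraint behaves the same way in other engines. A sketch in SQLite showing a conforming insert succeed and a non-conforming one raise an error (the schema is cut down and the city list shortened for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    """CREATE TABLE travel_clinics (
           clinic_number INTEGER PRIMARY KEY,
           clinic_city TEXT NOT NULL
               CONSTRAINT valid_city CHECK (clinic_city IN ('LONDON', 'LEEDS'))
       )"""
)

# A city from the allowed list passes the check.
con.execute("INSERT INTO travel_clinics VALUES (1, 'LONDON')")

# A city outside the list is rejected by the constraint.
try:
    con.execute("INSERT INTO travel_clinics VALUES (2, 'PARIS')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

count = con.execute("SELECT COUNT(*) FROM travel_clinics").fetchone()[0]
```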
Try this. You are missing comma after defining `Clinic_Fax` column ``` CREATE TABLE TravelClinics (Clinic_Number number(3) PRIMARY KEY, Clinic_Street varchar2(20) NOT NULL, Clinic_City varchar2(10) NOT NULL, Clinic_County varchar2(15) NOT NULL, Clinic_Postcode varchar2(7) NOT NULL, Clinic_Tel varchar2(11) NOT NULL, Clinic_Fax varchar2(11) NOT NULL, CONSTRAINT pk CHECK(Clinic_City IN ('LONDON', 'BIRMINGHAM', 'MANCHESTER', 'LEEDS', 'GLASGOW', 'EDINBURGH'))) ``` [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!4/df36c)
Cannot Create Table in SQL with CHECK constraint
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I've a table which has a column named as last\_update and another named as next\_update where both are in the 'date' format. The problem now is how could I make the date value in the next\_update is equal to the date value in the last\_update plus a designed period for the same ID. I planned to program something like ``` value of column 'next_update' = value of column 'last_update' + designed period for the same ID ``` How could I program it in sql language for postgresql. Appreciate any help and thank you in advance.
You can do something like this: ``` UPDATE myTable SET next_update = last_update + someInteger ``` See: <http://wiki.postgresql.org/wiki/Working_with_Dates_and_Times_in_PostgreSQL#Because_the_difference_of_two_DATES_is_an_INTEGER.2C_this_difference_may_be_added.2C_subtracted.2C_divided.2C_multiplied.2C_or_even_modulo_.28.25.29> To do this for a specific row, you would simply do: ``` UPDATE myTable SET next_update = last_update + someInteger WHERE id = desiredID ```
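In PostgreSQL, `date + integer` adds days directly. The same update can be sketched in SQLite, whose analog is the `date()` modifier syntax (table name invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (id INTEGER, last_update TEXT, next_update TEXT)")
con.execute("INSERT INTO my_table VALUES (1, '2014-03-01', NULL)")

# date(x, '+N days') is SQLite's analog of Postgres's last_update + N.
con.execute("UPDATE my_table SET next_update = date(last_update, '+4 days') WHERE id = 1")

next_update = con.execute("SELECT next_update FROM my_table WHERE id = 1").fetchone()[0]
```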
Is this what you mean? ``` update t set next_update = last_update + interval 'x' day; ``` Or if you want something variable: ``` update t set new_update = last_update + interval '1' day * x; ``` Where "x" can be another column or constant or `case` expression.
Column Value based on Referring Column
[ "", "sql", "postgresql", "" ]
Hi I have this piece of code: ``` insert into table1(column1, column2) select distinct id, somevalue from table1 where id in (select id from table2) ``` which basically is adding all the id's from table2 to table1 column1, while other columns in table1 share same values for all rows. Now the problem is that I have to check whether an id from table2 already exists in table1 and only insert if it does not. How can I achieve that?
I think you are trying to do something like this... ``` INSERT INTO table1(column1, column2) SELECT DISTINCT t2.id, t2.somevalue FROM table2 t2 LEFT JOIN table1 t1 ON t2.id = t1.id where t1.id IS NULL ```
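The anti-join insert can be exercised quickly against toy tables. A sketch in SQLite; `somevalue` is filled with a constant here, since the question says the other columns share the same value for all rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER, somevalue TEXT)")
con.execute("CREATE TABLE table2 (id INTEGER)")
con.execute("INSERT INTO table1 VALUES (1, 'a')")          # id 1 already present
con.executemany("INSERT INTO table2 VALUES (?)", [(1,), (2,), (3,)])

# LEFT JOIN back to table1; rows with no match (t1.id IS NULL) are the new ids.
con.execute(
    """INSERT INTO table1 (id, somevalue)
       SELECT DISTINCT t2.id, 'a'
       FROM table2 t2
       LEFT JOIN table1 t1 ON t1.id = t2.id
       WHERE t1.id IS NULL"""
)

ids = [r[0] for r in con.execute("SELECT id FROM table1 ORDER BY id")]
```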
``` INSERT INTO TABLE1(COLUMN1, COLUMN2) SELECT distinct id, value FROM Table2 WHERE id NOT IN (SELECT ID from table1) ```
MSSQL - Insert a row into a table if the row does not already exist in the table
[ "", "sql", "sql-server", "" ]
Why am I having this error? If you looked at the screenshot, you will see the database. It only happens when I am connected to two database engines. It only detects the databases from database engine below not the one highlighted. Is there a way to use my database other than closing the application and opening just one database engine? ![enter image description here](https://i.stack.imgur.com/wMwQV.png)
Just highlight the database engine you want to use first then click ![New Query](https://i.stack.imgur.com/IVkbs.png) and that's it.
You should be able to look at which server you're currently linked to at the bottom right corner of the window. Most likely it's linked to another server. You can select the server you want and then click on New Query and then just copy and paste the code.
Database does not exist. Make sure that the name is entered correctly
[ "", "sql", "sql-server-2008", "" ]
I need to restrict a (support) user from viewing columns in a table (other users should have full access to this table). So I granted access to only the columns I specified via "`GRANT SELECT ON dbo.TestTable (FirstCol, SecondCol, ThirdCol) TO HR_Intern;`"

But when I run "`SELECT * FROM dbo.TestTable;`" I get an Access Denied error for every other column in the table. The user is doing customer support using SQL Server Management Studio directly on the database, and the errors won't allow the user to edit the data.

Is it possible to just display the columns the user has access to and ignore every denied column? Thanks for your help :)
Better to create a `VIEW` and provide the users access to it. In the `VIEW` only those columns these users can see should be part of `SELECT` statement.
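A runnable sketch of the view approach in SQLite; `Salary` stands in for the columns the support user must not see, and the GRANT itself is engine-specific, so it is omitted here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TestTable (FirstCol TEXT, SecondCol TEXT, Salary INTEGER)")
con.execute("INSERT INTO TestTable VALUES ('a', 'b', 50000)")

# The view exposes only the permitted columns; grant SELECT on it instead.
con.execute("CREATE VIEW TestTable_Support AS SELECT FirstCol, SecondCol FROM TestTable")

cur = con.execute("SELECT * FROM TestTable_Support")
cols = [d[0] for d in cur.description]
row = cur.fetchone()
```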
As pointed out by others, you need to replace `*` by an explicit select list. In case you are worried about having to specify things twice, here is a query to retrieve the list of permitted columns from metadata. If you like, you can use its result set to generate (part of) the select list for the query on `TestTable`. ``` SELECT c.name FROM sys.columns c INNER JOIN sys.database_permissions p ON p.class = 1 AND p.major_id = c.object_id AND p.minor_id = c.column_id AND p.state = 'G' AND p.grantee_principal_id = DATABASE_PRINCIPAL_ID('HR_Intern') WHERE c.object_id = OBJECT_ID('dbo.TestTable') ``` Replace `DATABASE_PRINCIPAL_ID('HR_Intern')` by `DATABASE_PRINCIPAL_ID()` to get metadata for the currently active user. The query is still pretty crude; it disregards table-wide grants, and all denies. You may want to experiment with that a bit.
Ignore denied columns in "select * from"
[ "", "sql", "sql-server", "access-denied", "" ]
I am trying to change the `id` (Primary Key, AUTO\_INCREMENT) of every row in a table using this: ``` UPDATE foo SET id = id + 4; ``` But it's giving me: ``` Duplicate row for primary key value 5 ``` Or something along the lines of that. How can I fix this? Is there a query to update the rows in backward order, so this won't happen? Thanks.
This is the solution to do it in a single SQL statement. ``` UPDATE foo F INNER JOIN (SELECT id FROM foo ORDER BY id DESC) F2 ON F.id = F2.id SET F.id = F.id + 4 ```
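If the engine will not apply the update in descending order, another common workaround is to route the ids through a disjoint range so that no intermediate value can collide with an existing key. A sketch in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO foo VALUES (?)", [(1,), (2,), (3,)])

# Step through a disjoint (negative) range so no intermediate value collides,
# then flip back into the target range.
con.execute("UPDATE foo SET id = -(id + 4)")
con.execute("UPDATE foo SET id = -id")

ids = [r[0] for r in con.execute("SELECT id FROM foo ORDER BY id")]
```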
This may not be the most efficient way to do this. But you could try making a select, sort by id from biggest to smallest, and modify it one by one. Once again, it is inefficient but it gets the job done.
How to update MySQL rows in specific order?
[ "", "mysql", "sql", "" ]
Hoping someone can help with the following. I have a table with 3 columns: ID, Application, NextVersion.

This looks like the following:

```
ID1, WindowsXP, ID2
ID2, Windows7, ID3
ID3, Windows8, NULL
```

What is the best way of returning my table of results as follows?

```
ID1, WindowsXP, Windows7
ID2, Windows7, Windows8
ID3, Windows8, NULL
```

Thanks in advance
you can do the simple join, but the second query below, after some changes, gives you the opportunity to limit the result to only the children starting with a certain parent id (and all descendants). I do not know how complex your real query will be. If you only need the join, then shame on you, SQL basics!

```
SELECT m.id, m.name, y.name
FROM mytable AS m
LEFT JOIN mytable AS y ON m.parent = y.id
```

and something more complicated:

```
CREATE TABLE MyTable
(
  id INT,
  name VARCHAR(50),
  parent INT
)
go

INSERT INTO MyTable (id, name, parent)
VALUES (1, 'WindowsXP', 2),
       (2, 'Windows7', 3),
       (3, 'Windows8', NULL)
go

WITH x AS (SELECT * FROM MyTable)
SELECT m.id, m.name, x.name
FROM MyTable as m
LEFT JOIN x ON m.parent = x.id
go
```
replace yourtable with your table name and execute this ``` SELECT a.id, a.[Application], y.[Application] FROM yourtable AS a LEFT JOIN yourtable AS b ON a.NextVersion = b.id ```
How to link result from one row to another on same table
[ "", "mysql", "sql", "" ]
For example, I have the following example SQL document in `SQL[ANSI]` mode: ``` create table title_price as select title, price from frumble group by title, price order by title; select *, count(*) from frumble group by title, price order by title; ``` Any ideas on how to capitalize the keywords, like `select`, `from`, `group`, `by`, `table`, `as`, etc? They are already shown in blue font in my editor.
Here is my attempt at solution (assuming you want to upcase `MySQL` keywords) ``` (defun point-in-comment () (let ((syn (syntax-ppss))) (and (nth 8 syn) (not (nth 3 syn))))) (defun my-capitalize-all-mysql-keywords () (interactive) (require 'sql) (save-excursion (dolist (keywords sql-mode-mysql-font-lock-keywords) (goto-char (point-min)) (while (re-search-forward (car keywords) nil t) (unless (point-in-comment) (goto-char (match-beginning 0)) (upcase-word 1)))))) ``` After evaluating this function, just do `M-x``my-capitalize-all-mysql-keywords``RET`. The advantage of this solution is that it picks up the keywords from Emacs `sql-mode`, you do not need to specify them. Also I assumed you meant you wanted to `upcase` the words
I've written [`sql-upcase.el`](https://www.emacswiki.org/emacs/SqlUpcase) for upper-casing keywords and function names in `sql-mode` and/or `sql-interative-mode`. It provides `sql-upcase-region` and `sql-upcase-buffer` commands for processing pre-existing SQL, but differs significantly from the other solutions in that it also provides a `sql-upcase-mode` minor mode which processes text automatically as it is inserted. This means (a) that SQL keywords are upcased as you type them, and (b) that you can paste SQL into a `sql-mode` buffer, and *all* of the keywords will automatically be upcased. This will hopefully Just Work for all the SQL products supported by Emacs, as it uses the keyword regexps defined for the buffer's `sql-product`. This was initially inspired by the use of font-lock keywords in Douglas La Rocca's function for upper-casing all PostgreSQL keywords in a buffer, which can be found on the [EmacsWiki](https://www.emacswiki.org/emacs/SqlMode) (which also bears similarities to the method used in user2053036's accepted [answer](https://stackoverflow.com/a/22094733/324105) here). Follow the link for more details.
Emacs: How to capitalize all keywords (example in SQL)
[ "", "sql", "emacs", "" ]
I want a SQL query which can search for a certain word, but only in lowercase. I am trying `select * from employees where name like LOWER('%smith%')`, however this does not bring back any results. When I do `select * from employees where name like '%smith%'`, it returns employees with name SMITH and smith... in my case I just want it to return rows where the employee name is lower-cased. Thanks in advance!
Your default collation is likely set to a case-insensitive option. Put simply this causes all character comparison to be performed as if the sides of the comparison are of the same case, regardless of whether this is true or not. You may use a `COLLATE` statement following the column name in the `WHERE` clause: ``` select * from employees where name COLLATE Latin1_General_CS_AS like '%smith%' ``` However, if you have a significant need for case-sensitive comparison you may wish to change the default collation of your database as explicitly marking collation is very verbose, as you can see.
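The collation situation differs per engine. In SQLite, for example, `LIKE` is case-insensitive for ASCII by default, while `GLOB` matches case-sensitively, which gives a quick way to see the difference:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT)")
con.executemany("INSERT INTO employees VALUES (?)",
                [("SMITH",), ("smith",), ("Jones",)])

# LIKE ignores ASCII case; GLOB is SQLite's case-sensitive pattern match.
like_rows = [r[0] for r in
             con.execute("SELECT name FROM employees WHERE name LIKE '%smith%'")]
glob_rows = [r[0] for r in
             con.execute("SELECT name FROM employees WHERE name GLOB '*smith*'")]
```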
Normally, in order to query a table in a case sensitive way, you would designate the column (or the entire database) with a case sensitive collation. For performance reasons, that will be the preferred approach if you will be regularly performing case sensitive queries against this column. However, in a pinch, you can specify the query collation on the fly. This comes at the cost of a more expensive query plan, so regard it as a last resort. For example: ``` SELECT * FROM Employees WHERE EmployeeName COLLATE SQL_Latin1_General_CP1_CS_AS LIKE '%smith%' ``` There are a variety of case sensitive collations, so you will typically want to choose the one that is closest to the standard case insensitive collation you are using. You can check that by running these statements to check the configured collation: ``` --Server level SELECT SERVERPROPERTY('COLLATION') --Database level SELECT DATABASEPROPERTYEX('DatabaseName', 'Collation') --Column level USE DatabaseName GO SELECT name, collation_name FROM sys.columns WHERE OBJECT_ID IN (SELECT OBJECT_ID FROM sys.objects WHERE type = 'U' AND name = 'TableName') AND name = 'ColumnName' ``` Most likely these will all return the same value. For example, SQL\_Latin1\_General\_CP1\_CI\_AS, in which case the collation you would use for case sensitive queries would be SQL\_Latin1\_General\_CP1\_CS\_AS. Depending on the result you get, you will want to select the related case-sensitive collation [from this list](http://msdn.microsoft.com/en-us/library/ms144250%28v=sql.105%29.aspx).
SQL query to search for lowercase strings
[ "", "sql", "sql-server", "" ]
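The collation behaviour described in the answers above can be sketched as a runnable example using Python's `sqlite3` (SQLite stands in for SQL Server here: its default `=` comparison is case-sensitive, and an explicit `COLLATE NOCASE` plays the role of a case-insensitive collation; the table and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [("John Smith",), ("Jane SMITH",), ("Bob Jones",)])

# Case-sensitive comparison: no row is exactly 'john smith'.
cs = conn.execute(
    "SELECT name FROM employees WHERE name = 'john smith'").fetchall()

# Case-insensitive comparison via an explicit collation override,
# analogous to SQL Server's per-expression COLLATE clause.
ci = conn.execute(
    "SELECT name FROM employees "
    "WHERE name = 'john smith' COLLATE NOCASE").fetchall()
```

The same query returns nothing under the binary collation but matches `John Smith` once the comparison is declared case-insensitive, which mirrors the SQL Server `COLLATE Latin1_General_CS_AS` / `..._CI_AS` split.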
``` SELECT "items".* FROM "items" INNER JOIN item_mods ON item_mods.item_id = items.id INNER JOIN mods ON mods.id = item_mods.mod_id AND item_mods.mod_id = 3 WHERE (items.player_id = '1') GROUP BY items.id, item_mods.primary_value ORDER BY item_mods.primary_value DESC NULLS LAST, items.created_at DESC LIMIT 100 ``` This query is currently taking around 7 seconds. I have about 550k records on the items table, ~2.5million records on the `item_mods` table and about 800 records on the mods table. I have quite a few indexes but I am not sure if I am using the right ones. So if you were to optimize this query what would you recommend? Here is the explain analysis. <http://explain.depesz.com/s/aiYH> ``` "Limit (cost=107274.88..107275.13 rows=100 width=554) (actual time=6648.872..6648.888 rows=100 loops=1)" " -> Sort (cost=107274.88..107419.24 rows=57745 width=554) (actual time=6648.870..6648.879 rows=100 loops=1)" " Sort Key: item_mods.primary_value, items.created_at" " Sort Method: top-N heapsort Memory: 103kB" " -> Group (cost=104634.82..105067.91 rows=57745 width=554) (actual time=6358.348..6529.342 rows=57498 loops=1)" " -> Sort (cost=104634.82..104779.18 rows=57745 width=554) (actual time=6358.344..6423.184 rows=57498 loops=1)" " Sort Key: items.id, item_mods.primary_value" " Sort Method: external sort Disk: 25624kB" " -> Nested Loop (cost=23182.35..71248.94 rows=57745 width=554) (actual time=3339.625..6127.659 rows=57498 loops=1)" " -> Index Scan using mods_pkey on mods (cost=0.00..8.27 rows=1 width=4) (actual time=0.323..0.324 rows=1 loops=1)" " Index Cond: (id = 3)" " -> Merge Join (cost=23182.35..70663.22 rows=57745 width=558) (actual time=3339.298..6108.202 rows=57498 loops=1)" " Merge Cond: (items.id = item_mods.item_id)" " -> Index Scan using items_pkey on items (cost=0.00..45112.64 rows=543004 width=550) (actual time=3.190..2575.715 rows=543024 loops=1)" " Filter: (player_id = 1)" " -> Materialize (cost=23182.33..23471.20 rows=57774 width=12) (actual 
time=3336.099..3388.810 rows=57547 loops=1)" " -> Sort (cost=23182.33..23326.76 rows=57774 width=12) (actual time=3336.095..3370.179 rows=57547 loops=1)" " Sort Key: item_mods.item_id" " Sort Method: external sort Disk: 1240kB" " -> Bitmap Heap Scan on item_mods (cost=1084.27..17622.45 rows=57774 width=12) (actual time=31.728..3263.762 rows=57547 loops=1)" " Recheck Cond: (mod_id = 3)" " -> Bitmap Index Scan on primary_value_mod_id_desc (cost=0.00..1069.83 rows=57774 width=0) (actual time=29.565..29.565 rows=57547 loops=1)" " Index Cond: (mod_id = 3)" "Total runtime: 6652.100 ms" ``` **UPDATE** I have modified the query as suggested. I was using the GROUP BY to only select 1 item per item ID but I guess distinct works as well. Here is the new query and explain, it still takes too long. The idea of the query is to find all items player '1' has with the item modifier '3' ordered by the modifier with the highest primary value. ``` SELECT DISTINCT("items".id), "item_mods".primary_value, "items".created_at FROM "items" INNER JOIN item_mods ON item_mods.item_id = items.id INNER JOIN mods ON mods.id = item_mods.mod_id AND item_mods.mod_id = 3 WHERE (items.player_id = '1') ORDER BY item_mods.primary_value DESC NULLS LAST, items.created_at DESC LIMIT 100 ``` The explain <http://explain.depesz.com/s/t4Zq> ``` "Limit (cost=73737.59..73738.59 rows=100 width=16) (actual time=6450.253..6450.344 rows=100 loops=1)" " -> Unique (cost=73737.59..74315.04 rows=57745 width=16) (actual time=6450.248..6450.316 rows=100 loops=1)" " -> Sort (cost=73737.59..73881.95 rows=57745 width=16) (actual time=6450.242..6450.272 rows=100 loops=1)" " Sort Key: item_mods.primary_value, items.created_at, items.id" " Sort Method: external merge Disk: 1456kB" " -> Hash Join (cost=46944.77..68183.71 rows=57745 width=16) (actual time=3018.769..6342.109 rows=57498 loops=1)" " Hash Cond: (item_mods.item_id = items.id)" " -> Nested Loop (cost=1084.27..18208.45 rows=57774 width=8) (actual time=15.911..3219.086 
rows=57547 loops=1)" " -> Index Scan using mods_pkey on mods (cost=0.00..8.27 rows=1 width=4) (actual time=0.486..0.489 rows=1 loops=1)" " Index Cond: (id = 3)" " -> Bitmap Heap Scan on item_mods (cost=1084.27..17622.45 rows=57774 width=12) (actual time=15.416..3197.257 rows=57547 loops=1)" " Recheck Cond: (mod_id = 3)" " -> Bitmap Index Scan on primary_value_mod_id_desc (cost=0.00..1069.83 rows=57774 width=0) (actual time=13.517..13.517 rows=57547 loops=1)" " Index Cond: (mod_id = 3)" " -> Hash (cost=36420.95..36420.95 rows=543004 width=12) (actual time=2987.089..2987.089 rows=543024 loops=1)" " Buckets: 4096 Batches: 32 Memory Usage: 811kB" " -> Seq Scan on items (cost=0.00..36420.95 rows=543004 width=12) (actual time=0.012..2825.650 rows=543024 loops=1)" " Filter: (player_id = 1)" "Total runtime: 6457.586 ms" ``` **UPDATE 2** Ok, I think I'm almost there. This query takes 6 secs and produces what I want ``` SELECT "items".id, item_mods.primary_value FROM "items" INNER JOIN item_mods ON item_mods.item_id = items.id AND item_mods.mod_id = 36 WHERE (items.player_id = '1') ORDER BY item_mods.primary_value DESC, item_mods.id DESC LIMIT 100 ``` But this query takes 9ms! Note the difference in ORDER BY. But I need them ordered by the most recent first. I have an index on (item_mods.primary_value DESC, item_mods.id DESC) but it doesn't seem to be using it? ``` SELECT "items".id, item_mods.primary_value FROM "items" INNER JOIN item_mods ON item_mods.item_id = items.id AND item_mods.mod_id = 36 WHERE (items.player_id = '1') ORDER BY item_mods.primary_value DESC LIMIT 100 ```
I fixed it by adding the index `(mod_id, primary_value DESC, id DESC)` to the `item_mods` table. The query now runs in 10-15 ms.
I am assuming that you are using the Postgres "feature" that you can group by the primary/unique key in a table and then select all columns from that table. Otherwise, `select *` would not make sense in an aggregation query. ``` SELECT "items".* FROM "items" INNER JOIN item_mods ON item_mods.item_id = items.id INNER JOIN mods ON mods.id = item_mods.mod_id AND item_mods.mod_id = 3 WHERE (items.player_id = '1') GROUP BY items.id, item_mods.primary_value ORDER BY item_mods.primary_value DESC NULLS LAST, items.created_at DESC LIMIT 100; ``` The following indexes should help this query: ``` items(player_id, id) item_mods(item_id, mod_id); mods(id); ```
How can I speed up this slow query with the right indexes?
[ "", "sql", "postgresql", "" ]
I have two tables: Table\_1: ``` | ID | numero | apagado | 1 | 23 | 0 | 2 | 56 | 0 | 3 | 156 | 0 | 4 | 48 | 0 ``` Table\_2: ``` | ID_Table_1 | data_inspecao | | 1 | 2014-01-03 | | 2 | 2014-01-08 | | 1 | 2014-02-20 | | 4 | 2014-01-06 | | 2 | 2014-01-23 | ``` I want to get the most recent 'data\_inspecao' for all the 'numero'. In my example I want the following result: ``` | ID_Table_1 | data_inspecao | | 1 | 2014-02-20 | | 2 | 2014-01-23 | | 4 | 2014-01-06 | ``` I have the following SELECT statement, but if I change 'DESC' for 'ASC' the result is the same, I did not get the most recent 'data\_inspecao': ``` SELECT e.numero, i.data_inspecao FROM table_1 e INNER JOIN table_2 i ON i.ID_Table_1 = e.numero WHERE e.numero = 6 AND e.apagado = 0 GROUP BY e.numero ORDER BY i.data_inspecao DESC ``` Any help is appreciated
Using your example with a `MAX` aggregation added (note the join should compare `ID_Table_1` to `e.ID`, since `ID_Table_1` references `Table_1.ID`): ``` SELECT e.ID, MAX(i.data_inspecao) as MostRecentDate FROM table_1 e INNER JOIN table_2 i ON i.ID_Table_1 = e.ID GROUP BY e.ID ORDER BY e.ID ```
``` SELECT i.ID_Table_1, MAX(i.data_inspecao ) FROM table_1 e INNER JOIN table_2 i ON i.ID_Table_1 = e.ID GROUP BY i.ID_Table_1 ```
MySQL SELECT two tables
[ "", "mysql", "sql", "inner-join", "" ]
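The "most recent row per group" pattern from the answers above can be checked end-to-end with Python's `sqlite3`, using the question's own sample data (dates are stored as ISO strings so that `MAX()` on them matches chronological order; the join is on `Table_1.ID`, which `ID_Table_1` references):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (ID INTEGER, numero INTEGER, apagado INTEGER);
CREATE TABLE table_2 (ID_Table_1 INTEGER, data_inspecao TEXT);
INSERT INTO table_1 VALUES (1,23,0),(2,56,0),(3,156,0),(4,48,0);
INSERT INTO table_2 VALUES (1,'2014-01-03'),(2,'2014-01-08'),
 (1,'2014-02-20'),(4,'2014-01-06'),(2,'2014-01-23');
""")

# One row per referenced ID, carrying the latest inspection date.
rows = conn.execute("""
    SELECT e.ID, MAX(i.data_inspecao) AS most_recent
    FROM table_1 e
    JOIN table_2 i ON i.ID_Table_1 = e.ID
    WHERE e.apagado = 0
    GROUP BY e.ID
    ORDER BY e.ID
""").fetchall()
```

This reproduces exactly the result table the question asks for: IDs 1, 2 and 4 with their latest `data_inspecao`.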
I've been struggling with this for a while. Address is visible for: ``` <select id='c_address_rule' name='address_rule'> <option value='2'>Everybody</option> <option value='0'>Nobody</option> <option value='1'>Registered </option> </select> ``` I have this select element and ``` $address_rule=($result[0]['address_rule']); ``` And I cannot think of a way to set `selected="selected"` on the option whose `value == $address_rule`. Any advice will be appreciated.
Try this **jQuery** ``` $(document).ready(function() { $("#c_address_rule").val("<?php echo $result[0]['address_rule']; ?>"); }) ``` **PHP** ``` <select id='c_address_rule' name='address_rule'> <option value='2' <?php echo ($result[0]['address_rule'] == 2) ? 'selected="selected"' : '' ?> >Everybody</option> <option value='0' <?php echo ($result[0]['address_rule'] == 0) ? 'selected="selected"' : '' ?> >Nobody</option> <option value='1' <?php echo ($result[0]['address_rule'] == 1) ? 'selected="selected"' : '' ?> >Registered </option> </select> ```
`selected="selected"` means that option is the selected one. When the form loads, whichever option you have marked with `selected="selected"` will be the value shown by default.
How to set the default value for an HTML select element from an SQL query?
[ "", "sql", "html", "" ]
I have a table with a start date and an end date ``` Table A id | begin | end ------------------------ 1 | 15-03-2014 | 06-05-2014 2 | 03-04-2014 | 31-04-2014 ``` And I need a result that selects the weeks between those dates: ``` Result should return the rows: id | month | begin | end --------------------------------------------- 1 | 03/03/2014 | 15-03-2014 | 06-05-2014 1 | 04/03/2014 | 15-03-2014 | 06-05-2014 1 | 05/03/2014 | 15-03-2014 | 06-05-2014 1 | 06/03/2014 | 15-03-2014 | 06-05-2014 1 | 01/04/2014 | 03-04-2014 | 31-04-2014 ... 2 | 01/04/2014 | 03-04-2014 | 31-04-2014 2 | 02/04/2014 | 03-04-2014 | 31-04-2014 2 | 03/04/2014 | 03-04-2014 | 31-04-2014 2 | 04/04/2014 | 03-04-2014 | 31-04-2014 2 | 05/04/2014 | 03-04-2014 | 31-04-2014 ``` Or for months: ``` id | month | begin | end --------------------------------------------- 1 | 03/2014 | 15-03-2014 | 06-05-2014 1 | 04/2014 | 15-03-2014 | 06-05-2014 1 | 05/2014 | 15-03-2014 | 06-05-2014 2 | 04/2014 | 03-04-2014 | 31-04-2014 ``` The selects for weeks/months can be separate, not all in one!
A recursive CTE generates one row per month between the two dates: ``` with i (id, month, begin_, end_) as ( select id, trunc(begin_, 'mm') month, begin_, end_ from a union all select id, add_months(month, 1) month, begin_, end_ from i where add_months(month, 1) < end_ ) select id, to_char(month, 'mm/yyyy') month, begin_, end_ from i order by id, month; ```
``` SQL> with t (id, begin#, end#) 2 as 3 ( 4 select 1, to_date('15-03-2014','DD-MM-YYYY'), to_date('06-05-2014','DD-MM-YYYY') from dual 5 union all 6 select 2, to_date('03-04-2014','DD-MM-YYYY'), to_date('30-04-2014','DD-MM-YYYY') from dual 7 ) 8 , 9 t1 (id, step#, begin#, end#) as 10 ( 11 select id, begin#, begin#, end# from t 12 union all 13 select id, step#+1, begin#, end# from t1 where step# < end# 14 ) 15 select * from t1 16 order by id, step# 17 / ID STEP# BEGIN# END# ---- -------- -------- -------- 1 15.03.14 15.03.14 06.05.14 1 16.03.14 15.03.14 06.05.14 1 17.03.14 15.03.14 06.05.14 1 18.03.14 15.03.14 06.05.14 ............................ 1 02.05.14 15.03.14 06.05.14 1 03.05.14 15.03.14 06.05.14 1 04.05.14 15.03.14 06.05.14 1 05.05.14 15.03.14 06.05.14 1 06.05.14 15.03.14 06.05.14 2 03.04.14 03.04.14 30.04.14 2 04.04.14 03.04.14 30.04.14 2 05.04.14 03.04.14 30.04.14 2 06.04.14 03.04.14 30.04.14 2 07.04.14 03.04.14 30.04.14 ............................ 2 27.04.14 03.04.14 30.04.14 2 28.04.14 03.04.14 30.04.14 2 29.04.14 03.04.14 30.04.14 2 30.04.14 03.04.14 30.04.14 SQL> with t (id, begin#, end#) 2 as 3 ( 4 select 1, to_date('15-03-2014','DD-MM-YYYY'), to_date('06-05-2014','DD-MM-YYYY') from dual 5 union all 6 select 2, to_date('03-04-2014','DD-MM-YYYY'), to_date('30-04-2014','DD-MM-YYYY') from dual 7 ) 8 , 9 t1 (id, step#, begin#, end#) as 10 ( 11 select id, begin#, begin#, end# from t 12 union all 13 select id, step#+1, begin#, end# from t1 where step# < end# 14 ) 15 select unique id, to_char(trunc(step#,'MM'),'MM/YYYY') step#, begin#, end# from t1 16 order by id, step# 17 / ID STEP# BEGIN# END# ---- ------- -------- -------- 1 03/2014 15.03.14 06.05.14 1 04/2014 15.03.14 06.05.14 1 05/2014 15.03.14 06.05.14 2 04/2014 03.04.14 30.04.14 ```
oracle sql statement get months between two dates
[ "", "sql", "oracle", "date", "" ]
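The recursive-CTE month generator from the accepted answer translates almost mechanically to SQLite, which can be run from Python: `trunc(d, 'mm')` becomes `date(d, 'start of month')` and `add_months(d, 1)` becomes `date(d, '+1 month')`. Dates are ISO strings, and the question's invalid `31-04-2014` is taken as `2014-04-30`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER, begin_ TEXT, end_ TEXT)")
conn.executemany("INSERT INTO a VALUES (?,?,?)",
                 [(1, '2014-03-15', '2014-05-06'),
                  (2, '2014-04-03', '2014-04-30')])

# Anchor: the first day of the begin month; step: +1 month while < end_.
rows = conn.execute("""
    WITH RECURSIVE i(id, month, begin_, end_) AS (
        SELECT id, date(begin_, 'start of month'), begin_, end_ FROM a
        UNION ALL
        SELECT id, date(month, '+1 month'), begin_, end_
        FROM i WHERE date(month, '+1 month') < end_
    )
    SELECT id, strftime('%m/%Y', month) FROM i ORDER BY id, month
""").fetchall()
```

The output matches the "for months" table in the question: three months for id 1 and one month for id 2.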
I have a table with 2 columns, `EXECUTION_TIMESTAMP`, `EFFECTIVE_DATE` as dates. I want the difference between the two columns expressed as a fractional day. e.g. 1.5 days. Is the following the only way to do it? Surely there must be a better way? ``` select EXECUTION_TIMESTAMP - EFFECTIVE_DATE, extract(day from (EXECUTION_TIMESTAMP - EFFECTIVE_DATE)) +(extract(hour from (EXECUTION_TIMESTAMP - EFFECTIVE_DATE)))/24 +extract(minute from (EXECUTION_TIMESTAMP - EFFECTIVE_DATE))/1440 +extract(second from (EXECUTION_TIMESTAMP - EFFECTIVE_DATE))/86400 from a_table ```
In Oracle, subtracting one `DATE` from another already yields the difference as a fractional number of days, so a plain subtraction is enough: ``` SELECT sysdate - (sysdate - dbms_random.value(1,10)) FROM dual; ``` If the columns are of type `TIMESTAMP`, cast them to `DATE` first (e.g. `CAST(EXECUTION_TIMESTAMP AS DATE) - CAST(EFFECTIVE_DATE AS DATE)`), since subtracting two timestamps returns an `INTERVAL` rather than a number.
Take a look at this example: ``` SELECT ROUND(to_date('02/26/2014 05:00:15 PM','MM/DD/YYYY HH:MI:SS AM') - to_date('02/25/2014 06:00:15 AM','MM/DD/YYYY HH:MI:SS AM'),1) Days FROM DUAL; ``` It will give the output `1.5`.
Fractional day difference in Oracle
[ "", "sql", "oracle", "" ]
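The "date subtraction gives fractional days" idea from the answers above has a direct SQLite analogue that can be verified from Python: `julianday()` returns a fractional day number, so subtracting two timestamps yields fractional days, just like Oracle `DATE` arithmetic (the timestamps below are invented and 36 hours apart):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 2014-03-01 06:00 to 2014-03-02 18:00 is exactly 36 hours = 1.5 days.
diff = conn.execute("""
    SELECT julianday('2014-03-02 18:00:00')
         - julianday('2014-03-01 06:00:00')
""").fetchone()[0]
```

No `extract()` arithmetic over hours, minutes and seconds is needed; the subtraction already produces the fractional value.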
Okay so I need to make an SQL query that will perform the same as this MySQL query ``` SELECT DISTINCT userID FROM gallery ORDER BY userID ASC LIMIT 0, 3 ``` The SQL query that I have now looks like this, but it won't do the same job because of the second row still having the `userID` ``` SELECT DISTINCT TOP 3 userID FROM gallery WHERE imageID NOT IN ( SELECT TOP 1 imageID FROM gallery ORDER BY userID ) ORDER BY userID; ``` The table looks something like this ``` imageID userID albumID --------------------------- 1 user1 2008 2 user1 2008 3 user1 2008 4 user1 Wild Fees 5 user1 Wild Fees 6 user1 Wild Fees 7 user1 Wild Fees 8 user1 Billfish 15000 9 user1 Billfish 15000 10 user1 Billfish 15000 11 user1 Billfish 15000 12 user1 2009 13 user1 2009 14 user1 2009 15 user2 New Album Name 001 16 user2 New Album Name 001 17 user2 New Album Name 001 18 user2 New Album Name 001 19 user2 New Album Name 001 20 user2 New Album Name 001 21 user2 New Album Name 001 22 user2 New Album Name 001 23 user2 New Album Name 001 24 user2 New Album Name 001 25 user2 New Album Name 001 26 user3 qweee 27 user3 qweee 28 user3 qweee 29 user3 qweee ``` === EDIT === The expected output of this specific SQL is ``` |userID| -------- |user2 | |user3 | ```
Use the new offset functions in 2012: ``` SELECT DISTINCT userID FROM gallery ORDER BY userID ASC OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY; ```
I have not used MySQL, so I am not sure whether your MySQL query skips the first row or not; please double-check that. And if you really want to exclude the top user id, you can try this one: ``` SELECT DISTINCT TOP 3 userID FROM gallery WHERE userID NOT IN ( SELECT TOP 1 userID FROM gallery ORDER BY userID ) ORDER BY userID; ``` Thanks
SQL Server 2012 LIMITing query
[ "", "sql", "sql-server-2012", "limit", "" ]
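The `OFFSET ... FETCH` construct recommended above is the SQL Server 2012 spelling of what SQLite and MySQL write as `LIMIT n OFFSET m`; a quick Python/SQLite sketch with a trimmed-down version of the question's gallery data (rows invented, shape preserved) shows both a first page and the question's expected two-row output when the first user is skipped:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gallery (imageID INTEGER, userID TEXT)")
conn.executemany("INSERT INTO gallery VALUES (?,?)",
                 [(1, 'user1'), (2, 'user1'), (15, 'user2'),
                  (16, 'user2'), (26, 'user3'), (27, 'user3')])

# Equivalent of OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY.
page1 = [r[0] for r in conn.execute(
    "SELECT DISTINCT userID FROM gallery ORDER BY userID LIMIT 3 OFFSET 0")]

# Skipping the first distinct user reproduces the question's sample output.
page2 = [r[0] for r in conn.execute(
    "SELECT DISTINCT userID FROM gallery ORDER BY userID LIMIT 2 OFFSET 1")]
```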
I am developing a collaborative writing website called Fablelane, where people can create stories. Now, the system (when people create stories) detects the language of the story, and saves that in the database. On the front page (<http://fablelane.com>), there are 3 "Noteworthy stories". Right now, it just picks the 3 highest voted stories, with a simple `ORDER BY` clause. But if there's a story that's written in the user's language (let's just say that's Danish - `da_DK`), then I want the first of those 3 stories to be the one that matches the language of the user, and the rest of the 2 in English. So in other words: * Get the highest voted story that's written in the user's language. + If it exists, display that as the first story. * Then get the 3 (or 2 if the highest voted story matching the language was found) highest voted stories written in English, and display those. Does that make sense? If not, here's a few use cases. **Use case 1: A Danish user visiting the site, but where there are no Danish stories** * Noteworthy story #1: The highest voted story written in English. * Noteworthy story #2: The second highest voted story written in English. * Noteworthy story #3: The third highest voted story written in English. **Use case 2: A Danish user visiting the site, where there's at least one Danish story available** * Noteworthy story #1: The highest voted story written in Danish. * Noteworthy story #2: The highest voted story written in English. * Noteworthy story #3: The second highest voted story written in English. **Use case 3: An English user visiting the site** This is exactly the same as *Use case 1*. --- **Edit 1** Let's just say (to make it easier) that each story's language is stored in a `VARCHAR(MAX)` column called `Language`, and that the story's table is called `Story`. The way I fetch the user's language is through the `Request.UserLanguages` property in ASP .NET MVC. In other words, it's a string. 
To make it easier in the example, let's assume that it matches perfectly with the values in the `Language` column of the `Story`.
Without knowing anything about your schema... I think one way to accomplish this might be to produce two result sets and union them and get the top 3 from that. It might not be the best way though and I'm sure the query can be written better (and probably with better performance). Something like this maybe: ``` -- first some sample data declare @stories table (story varchar(20), lang char(2), votes int) insert @stories values ('Story 1', 'DK', 1),('Story 2', 'DK', 3), ('Story 3', 'EN', 4),('Story 4', 'EN', 3), ('Story 5', 'EN', 2),('Story 6', 'EN', 1) -- and the query select top 3 * from ( (select top 1 *, 1 as ranking from @stories where lang = 'DK' order by votes desc) union (select top 3 *, 2 as ranking from @stories where lang = 'EN' order by votes desc) ) x order by ranking, votes desc ``` Sample [SQL Fiddle](http://www.sqlfiddle.com/#!6/982df/1). This is of course not adapted to your specific situation, but it should give you an idea of a way that could work.
Here is a method for doing what you want: ``` select top 3 s.* from ((select top 1 s.*, 1 as inlang from stories s where s.lang = @UserLang order by votes desc ) union all (select s.*, 0 as inlang from stories s where s.lang = 'English' ) ) s order by inlang desc, votes desc; ``` I suspect, though, that you haven't taken into account that the highest voted stories could all be in the user's language.
Order by some row matching a specific condition, and then another condition for the rest?
[ "", "sql", "sql-server", "" ]
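The union-with-ranking idea from the answers above can be sketched in Python with SQLite (`TOP n` becomes `ORDER BY ... LIMIT n` in a subquery), reusing the accepted answer's sample stories: rank 1 for the visitor's-language winner, rank 2 for English stories, then keep the top 3 overall:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stories (story TEXT, lang TEXT, votes INTEGER)")
conn.executemany("INSERT INTO stories VALUES (?,?,?)",
                 [('Story 1', 'DK', 1), ('Story 2', 'DK', 3),
                  ('Story 3', 'EN', 4), ('Story 4', 'EN', 3),
                  ('Story 5', 'EN', 2), ('Story 6', 'EN', 1)])

rows = conn.execute("""
    SELECT story, votes, ranking FROM (
        SELECT * FROM (
            SELECT story, votes, 1 AS ranking FROM stories
            WHERE lang = 'DK' ORDER BY votes DESC LIMIT 1)
        UNION ALL
        SELECT * FROM (
            SELECT story, votes, 2 AS ranking FROM stories
            WHERE lang = 'EN' ORDER BY votes DESC LIMIT 3)
    )
    ORDER BY ranking, votes DESC
    LIMIT 3
""").fetchall()
top3 = [r[0] for r in rows]
```

The Danish top story comes first, followed by the two best English ones, matching "Use case 2" in the question.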
Could you please help me with one SQL query? Table : Students ``` Id | Name | Date of birth 1 Will 1991-02-10 2 James 1981-01-20 3 Sam 1991-02-10 ``` I need to find pairs of students who have the same date of birth. However, **we are not allowed to use GROUP BY**, so simply grouping and counting records is not a solution. I have been trying to do it with a JOIN, but with no success. Your help is greatly appreciated!
You can use a self join on the table, joining on the `date_of_birth` column: ``` select s1.name, s2.name from students s1 join students s2 on s1.date_of_birth = s2.date_of_birth and s1.name < s2.name; ``` As wildplasser and dasblinkenlight pointed out the `<` operator (or `>`) is better than a `<>` because when using `<>` in the join condition, the combination Will/Sam will be reported twice. Another way of removing duplicate those duplicates is to use a distinct query: ``` select distinct greatest(s1.name, s2.name), least(s1.name, s2.name) from students s1 join students s2 on s1.date_of_birth = s2.date_of_birth and s1.name <> s2.name; ``` (although eliminating the duplicates in the join condition is almost certainly more efficient)
This query reports all students who have a non-unique birthdate. ``` SELECT * FROM students s WHERE EXISTS ( SELECT * FROM students ex WHERE ex.dob = s.dob AND ex.name <> s.name ) ORDER BY dob ; ```
Finding pairs of repeating entries
[ "", "sql", "group-by", "duplicates", "" ]
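The self-join from the accepted answer, including the `s1.name < s2.name` trick that stops each pair from appearing twice, can be run directly against the question's data with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT, dob TEXT)")
conn.executemany("INSERT INTO students VALUES (?,?,?)",
                 [(1, 'Will', '1991-02-10'),
                  (2, 'James', '1981-01-20'),
                  (3, 'Sam', '1991-02-10')])

# '<' instead of '<>' keeps exactly one orientation of each pair.
pairs = conn.execute("""
    SELECT s1.name, s2.name
    FROM students s1
    JOIN students s2
      ON s1.dob = s2.dob AND s1.name < s2.name
""").fetchall()
```

Only the Sam/Will pair is reported, and only once; with `<>` it would have appeared in both orders.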
There are 3 tables. ``` items_table: iid ititle itext tag_list_table tid tname; tag_ref_table iid tid //foreign key for tag_list_table ``` **Each item can have one or more tags.** ``` for example: items_table: iid=1,ititle="this is title";itext="this is full text"; iid=2,ititle="this is title2";itext="this is full text2"; tag_list_table tid=1 tname="red" tid=2 tname="green" tid=3 tname="yellow" tid=4 tname="orange" tag_ref_table iid=1 tid=1 iid=1 tid=2 iid=1 tid=3 iid=2 tid=4 ``` I want to have a result like this: row1: 1- this is title this is full text red,green,yellow row2: 2- this is title2 this is full text2 orange --- I tried these: ``` SELECT i.ititle,i.itext,t.tname FROM tag_ref_table as i LEFT JOIN tag_ref_table t WHERE tid=iid LEFT JOIN tag_list_table r WHERE tid=??????.... ```
If the "red,green,yellow" values are concatenated into a single string, in MySQL you could use GROUP\_CONCAT function like this: ``` SELECT i.*, group_concat(l.tname SEPARATOR ',') as names FROM items_table as i JOIN tag_ref_table r ON r.iid=i.iid JOIN tag_list_table l ON r.tid= l.tid GROUP BY i.iid ``` Documentation: <http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat> Further examples: <http://www.giombetti.com/2013/06/06/mysql-group_concat/>
You can use **[CONCAT()](http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_concat)** and **[GROUP\_CONCAT()](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat)** in your select: ``` SELECT CONCAT(t1.iid,'- ',t1.ititle,' ',t1.itext,' ',t1.tname) FROM (SELECT i.iid,i.ititle,i.itext,GROUP_CONCAT(r.tname) tname FROM items_table i INNER JOIN tag_ref_table t ON i.iid = t.iid INNER JOIN tag_list_table r ON t.tid = r.tid GROUP BY i.ititle,i.itext,i.iid)t1; ``` you can see complete answer in: ## [SQL Fiddle](http://sqlfiddle.com/#!2/5f7847/22)
Nested select to fetch data in SQL
[ "", "mysql", "sql", "" ]
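The `GROUP_CONCAT` approach from the answers above also exists in SQLite (same function name), so the question's three tables can be reproduced and checked from Python; note that SQLite documents the concatenation order as arbitrary, so the test below compares the tag set rather than an exact string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items_table (iid INTEGER, ititle TEXT, itext TEXT);
CREATE TABLE tag_list_table (tid INTEGER, tname TEXT);
CREATE TABLE tag_ref_table (iid INTEGER, tid INTEGER);
INSERT INTO items_table VALUES
 (1,'this is title','this is full text'),
 (2,'this is title2','this is full text2');
INSERT INTO tag_list_table VALUES
 (1,'red'),(2,'green'),(3,'yellow'),(4,'orange');
INSERT INTO tag_ref_table VALUES (1,1),(1,2),(1,3),(2,4);
""")

rows = conn.execute("""
    SELECT i.iid, i.ititle, i.itext, group_concat(l.tname, ',') AS tags
    FROM items_table i
    JOIN tag_ref_table r ON r.iid = i.iid
    JOIN tag_list_table l ON l.tid = r.tid
    GROUP BY i.iid
    ORDER BY i.iid
""").fetchall()
```

Each item comes back as one row with its tags collapsed into a single comma-separated column, exactly the shape the question asks for.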
Let's say I have a table that looks like the following: ``` ID | EntityType | Foo | Bar ---------------------------- 1 | Business | test | test 2 | Family | welp | testing 3 | Individual | hmm | 100 4 | Family | test | test 5 | Business | welp | testing 6 | Individual | hmm | 100 ``` This table is fairly large, and there are random (fairly infrequent) instances of "Business" in the EntityType column. A query like ``` SELECT TOP 500 * FROM Records WHERE EntityType='Business' ORDER BY ID DESC ``` works perfectly for grabbing the first set of Businesses, now how would I page backwards and get the previous set of 500 records which meet my criteria? I understand I could look at records between IDs, but there is no guarantee on what ID that would be, for example it wouldn't just be the last ID of the previous query minus 500 because the Business EntityType is so infrequent. I've also looked at some paging models but I'm not sure how I can integrate them while keeping my WHERE clause just how it is (only accepting EntityType of Business) and guaranteeing 500 records (I've used one that "pages" back 500 records, and only shows about 18 businesses because they're within the 500 total records returned). I appreciate any help on this matter!
``` select * from ( select top 500 * from ( select top 1000 * FROM Records WHERE EntityType='Business' ORDER BY ID DESC ) x order by id ) y order by id desc ``` Innermost query: take the top 1000 rows, to cover both page 1 and page 2. Middle query: take the top 500 of those, ordered ascending, which leaves just the page 2 records. Outermost query: reorder the results back to descending.
I believe what you need is called paging. There is a great article on paging on CodeGuru (I think it was mentioned here before): <http://www.codeguru.com/csharp/.net/net_data/article.php/c19611/Paging-in-SQL-Server-2005.htm> I think you will find everything you need there.
SQL Query to select certain number of records with WHERE clause
[ "", "sql", "sql-server", "" ]
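The nested-`TOP` paging trick from the accepted answer can be mirrored in SQLite from Python (each `TOP n` becomes an `ORDER BY ... LIMIT n` subquery); here a made-up table of 20 rows, half of them 'Business', is paged with a page size of 3 to fetch page 2 while walking backwards by ID:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, entity TEXT)")
conn.executemany("INSERT INTO records VALUES (?,?)",
                 [(i, 'Business' if i % 2 == 0 else 'Family')
                  for i in range(1, 21)])

page2 = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id FROM (
            SELECT id FROM records
            WHERE entity = 'Business'
            ORDER BY id DESC LIMIT 6   -- pages 1 and 2 together
        ) ORDER BY id LIMIT 3          -- keep page 2 only
    ) ORDER BY id DESC                 -- restore display order
""").fetchall()]
```

Business IDs are 2, 4, ..., 20, so page 1 is 20/18/16 and page 2 comes back as 14/12/10, each page containing only rows matching the `WHERE` filter.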
I have a table in my script that logs transactions, and each transaction is given a unique ID number (an auto-increment). I would like to select the row with the highest transaction ID and delete it. I had something like ``` SELECT * FROM trans WHERE (Select MAX(tranID) FROM Trans); ``` to get the information and ``` DELETE FROM Trans WHERE (SELECT MAX(trainID)); ``` to delete it. Now these work but they select everything and delete every entry. Any ideas?
[MySQL supports a `LIMIT` for `DELETE` queries](https://dev.mysql.com/doc/refman/5.0/en/delete.html). Rather than attempting to get the `MAX(trainID)` which you have found leads to errors when done with a subquery, specify the `ORDER BY` and `LIMIT`. ``` DELETE FROM Trans ORDER BY trainID DESC LIMIT 1 ``` It should alternatively work to join against a subquery instead of trying to use it in the `WHERE`: ``` DELETE t.* FROM Trans t INNER JOIN (SELECT MAX(trainID) maxt FROM Trans) tmax ON t.trainID = tmax.maxt ``` [Here are both version in action](http://sqlfiddle.com/#!2/67ea0/1) Yours deletes all rows because `(SELECT MAX(trainID))` evaluates as a boolean `TRUE`, which matches all rows
> Now these work but they select everything and delete every entry. In this query: ``` DELETE FROM Trans WHERE (SELECT MAX(trainID)); ``` the subquery evaluates to a truthy value, so after processing it becomes: ``` DELETE FROM Trans WHERE (true); ``` which is equivalent to `DELETE FROM Trans`, and that's why all your table data is deleted!
MYSQL Select MAX Value and Delete
[ "", "mysql", "sql", "" ]
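The fix discussed above can be demonstrated from Python with SQLite. A stock SQLite build lacks `DELETE ... ORDER BY ... LIMIT` (it needs a non-default compile option), so this sketch uses the subquery form, which has the same effect when the ID is unique; the table and values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans (tranID INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO trans VALUES (?,?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])

# Delete only the row holding the highest ID; the comparison against the
# subquery is what the question's WHERE clause was missing.
conn.execute(
    "DELETE FROM trans WHERE tranID = (SELECT MAX(tranID) FROM trans)")

remaining = [r[0] for r in
             conn.execute("SELECT tranID FROM trans ORDER BY tranID")]
```

Only the row with ID 3 disappears; a bare `WHERE (SELECT MAX(tranID))` would have been truthy for every row and emptied the table.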
I am looking to write a statement that gets all booked towns and counts them, so the town name is displayed alongside the number of times it appears in the table. ``` SELECT COUNT(town) AS Town_Bookings FROM booking b, customer c WHERE b.cid = c.cid; ``` Currently this just displays the number of towns that are in the table, whereas I wanted it to count each town and display the name. I tried ``` SELECT town, COUNT(town) AS Town_Bookings FROM booking b, customer c WHERE b.cid = c.cid; ``` however this results in the error ``` Query failed: ERROR: column "c.town" must appear in the GROUP BY clause or be used in an aggregate function ```
You need to include columns in the `GROUP BY` clause if they are not part of an aggregate function. I would also write this as an explicit join. ``` SELECT town ,COUNT(*) AS Town_Bookings FROM booking b INNER JOIN customer c ON b.cid = c.cid GROUP BY town ```
Add a group by clause, like the error message tells you : ``` SELECT town, COUNT(town) AS Town_Bookings FROM booking b, customer c WHERE b.cid = c.cid GROUP BY town; ``` Also write your join explicitly. Often it makes no difference, but by writing it clearly, you have full control and don't let the query optimiser guess what join you intended. ``` SELECT town, COUNT(town) AS Town_Bookings FROM booking b inner join customer c on b.cid = c.cid Group by town; ```
SQL error with count and displaying name
[ "", "sql", "database", "" ]
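The `GROUP BY` fix described in the answers above can be run end-to-end with Python's `sqlite3`; the customers, towns and bookings below are invented to give one town with three bookings and one with a single booking:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cid INTEGER, town TEXT);
CREATE TABLE booking (bid INTEGER, cid INTEGER);
INSERT INTO customer VALUES (1,'Leeds'),(2,'York'),(3,'Leeds');
INSERT INTO booking VALUES (10,1),(11,1),(12,2),(13,3);
""")

# Every non-aggregated select column (town) appears in GROUP BY.
rows = conn.execute("""
    SELECT c.town, COUNT(*) AS town_bookings
    FROM booking b
    JOIN customer c ON b.cid = c.cid
    GROUP BY c.town
    ORDER BY c.town
""").fetchall()
```

Each town now appears once, paired with its booking count, which is exactly what the question's failing query was trying to express.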
I have 2 many to many tables, and a table to join them. **Officer** * id * name **Report** * id * performanceDate * title **report_officer** * officer_id * report_id **I want to select all officers that haven't ever been associated with a report or who have not been associated with a report within a certain timeframe.** So far I have tried the following (below doesn't work for me!): ``` SELECT * FROM Officer LEFT JOIN report_officer ON Officer.id = report_officer.officer_id LEFT JOIN Report ON Report.id = report_officer.report_id WHERE (performanceDate IS NULL OR performanceDate < "2014-03-23 00:00:00" OR performanceDate > "2014-04-01 00:00:00" ) ``` My left join query works only when the officer has ONLY been associated with a report within a certain timeframe, but fails once they have multiple reports. Result: ``` +------------+-----------------+ | officer_id | performanceDate | +------------+-----------------+ | 130 | NULL | # good | 134 | 2014-03-02 | # bad - officer_id 134 has a performanceDate | 134 | 2014-03-09 | # on 2014-3-30, I do not want this in the results. | 134 | 2014-03-16 | # | 135 | 2014-03-02 | # good +------------+-----------------+ ``` **SQL Fiddle:** <http://sqlfiddle.com/#!2/1bf72/3> <- in the sql fiddle, please refer to the 'name' field for which columns I am looking to have returned. Any ideas on how to make this work? Ideally I would like to make this as simple as possible to work with my ORM. I am using doctrine and would prefer not to start using totally custom code (so if it can be done with only joins, that would be great). Though I have a bad feeling I need a sub-query.
Thanks for everyone's help on the problem. My final solution was to use a `GROUP BY` with a `HAVING` clause. @Iserni, I did not need to `SELECT` an officer with 0 reports in the timeframe, I was able to `SELECT` all the officers with reports outside the timeframe or officers with null reports using `HAVING`. Here is my final code: ``` SELECT Officer.id AS OfficerID, Officer.name, Report.id AS ReportID, max(performanceDate) as maxPerfDate FROM Officer LEFT JOIN report_officer ON Officer.id = report_officer.officer_id LEFT JOIN Report ON Report.id = report_officer.report_id GROUP BY Officer.id HAVING maxPerfDate is null OR maxPerfDate < "2014-03-23 00:00:00" OR maxPerfDate > "2014-04-01 00:00:00"; ``` The benefit of this is that I can use the performanceDate to report the last time the officer filed a report, or report that he never filed one. All the other solutions that were suggested removed the ability to retrieve that valuable information (*the last time the officer created a report*).
``` SELECT Officer.*, Report.performanceDate FROM Officer LEFT JOIN report_officer ON Officer.id = report_officer.officer_id LEFT JOIN Report ON Report.id = report_officer.report_id AND (performanceDate > "2014-03-23 00:00:00" AND performanceDate < "2014-04-01 00:00:00") WHERE Report.id IS NULL ``` You only want to join rows within a specific date range, so you must move the constraint into the `on` clause of the join and inverse the constraint. If you want to remove duplicates, you could try a `group by`: ``` SELECT Officer.id, MAX(Report.performanceDate) FROM Officer LEFT JOIN report_officer ON Officer.id = report_officer.officer_id LEFT JOIN Report ON Report.id = report_officer.report_id AND (performanceDate > "2014-03-23 00:00:00" AND performanceDate < "2014-04-01 00:00:00") WHERE Report.id IS NULL GROUP BY Officer.id ``` but you have to decide on which date you want to get, if there are multiple performance dates in your requested date range (or you could use `GROUP_CONCAT` to gather all dates). ## Update Actually I am relatively sure, that what you want to achieve is not possible with `LEFT JOIN`s at all... What always works are subquery solutions: ``` SELECT Officer.id as OfficerID, Officer.name, Report.id as ReportID, Report.performanceDate FROM Officer LEFT JOIN report_officer ON Officer.id = report_officer.officer_id LEFT JOIN Report ON Report.id = report_officer.report_id WHERE Report.id IS NULL OR NOT EXISTS ( SELECT * FROM report_officer INNER JOIN Report ON report_id = Report.id WHERE officer_id = Officer.id AND performanceDate > "2014-03-23 00:00:00" AND performanceDate < "2014-04-01 00:00:00" ) ``` but these are not that performant... This one looks if there are reports which should prohibit outputting the row.
SQL SELECT data between two related tables NOT within a certain date range
[ "", "mysql", "sql", "doctrine-orm", "" ]
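The accepted `GROUP BY` / `HAVING` approach above can be verified with Python's `sqlite3` on a small invented data set: one officer with no reports at all, one whose latest report falls inside the window (and is therefore excluded), and one whose reports are all before it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE officer (id INTEGER, name TEXT);
CREATE TABLE report (id INTEGER, performanceDate TEXT);
CREATE TABLE report_officer (officer_id INTEGER, report_id INTEGER);
INSERT INTO officer VALUES (130,'Able'),(134,'Baker'),(135,'Charlie');
INSERT INTO report VALUES (1,'2014-03-02'),(2,'2014-03-30'),(3,'2014-03-02');
INSERT INTO report_officer VALUES (134,1),(134,2),(135,3);
""")

# One group per officer; the HAVING clause keeps officers whose latest
# report is NULL or outside the window.
rows = conn.execute("""
    SELECT o.id, MAX(r.performanceDate) AS maxPerfDate
    FROM officer o
    LEFT JOIN report_officer ro ON ro.officer_id = o.id
    LEFT JOIN report r ON r.id = ro.report_id
    GROUP BY o.id
    HAVING MAX(r.performanceDate) IS NULL
        OR MAX(r.performanceDate) < '2014-03-23'
        OR MAX(r.performanceDate) > '2014-04-01'
    ORDER BY o.id
""").fetchall()
```

Officer 134 drops out because his *latest* report (2014-03-30) is in the window, even though he also has earlier reports, which is precisely the case the plain `LEFT JOIN` in the question got wrong.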
Scenario: I am working on user accounts where users add an amount to their account (credit) and withdraw their desired amount from their account (debit). Everything works correctly, but when a user credits or debits on the same date it gives me the wrong result (balance). Here `refno` is the reference of the user. Here is my query ``` declare @startdate date='2013-01-02', @enddate date='2013-01-12'; With summary(id,refno,transdate,cr,dr,balance) as ( select id, RefNo, Cast(TransDate as Varchar), cr, dr, (cr-dr)+( Select ISNULL(Sum(l.Cr-l.Dr) ,0) From Ledger l Where l.TransDate<Ledger.TransDate and refno=001 ) as balance from Ledger ), openingbalance(id,refno,transdate,cr,dr,balance) As ( select top 1 '' id,'OPENING BAL','','','',balance from summary where transdate<@startdate order by transdate desc ) select * from openingbalance union Select * From summary where transdate between @startdate and @enddate and refno=001 order by transdate ``` ![Here is Result of my Query](https://i.stack.imgur.com/mB3bq.png)
The issue is because when you query for the previous balance you are only looking at records that have a `transdate` earlier than the current record, so this means any records that have the same date will not be included. The solution here would be to use a more unique sequential value, in your example you could use the ID value as the sequential identifier instead. However, ID values are not always the best for ensuring sequence. I would recommend extending your `transdate` column to use a more precise value and include the time of the transactions. Seconds would likely be enough precision if you can guarantee that there will never be multiple transactions made within a given second, but whatever you decide you need to be confident there will not be any duplicates. --- In an attempt to provide a code change solution that will work with your existing data you can try the following, which uses the `id` value to determine if a record is prior to the current record: Change the following line: ``` Where l.TransDate<Ledger.TransDate and refno=001 ) as balance ``` to this: ``` Where l.ID<Ledger.ID and refno=001 ) as balance ```
If you are using SQL 2012 or above, then instead of ``` SELECT id, RefNo, TransDate,cr, dr, (cr-dr) + (Select ISNULL(Sum(l.Cr-l.Dr) ,0) FROM Ledger l WHERE Cast(l.TransDate as datetime) < Cast(Ledger.TransDate as datetime) AND refno=001) as balance from Ledger ``` Use: ``` SELECT id, RefNo, TransDate, cr, dr, SUM(cr- dr) OVER(ORDER BY TransDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS balance ```
Debit, Credit not showing correct result on Same Date
[ "", "sql", "sql-server", "database", "sql-server-2008", "sql-server-2008-r2", "" ]
I have to get a specific output format from my tables. Let's say I have a simple table with 2 columns **name** and **value**. table **T1** ```
+---------------+------------------+
|     Name      |      Value       |
+---------------+------------------+
| stuff1        |                1 |
| stuff1        |                1 |
| stuff2        |                2 |
| stuff3        |                1 |
| stuff2        |                4 |
| stuff2        |                2 |
| stuff3        |                4 |
+---------------+------------------+
``` I know the values are in the interval 1-4. I group it by name and value and count the number of identical rows as **Number**, and get the following table: table **T2** ```
+---------------+------------------+--------+
|     Name      |      Value       | Number |
+---------------+------------------+--------+
| stuff1        |                1 |      2 |
| stuff2        |                2 |      2 |
| stuff3        |                1 |      1 |
| stuff3        |                4 |      1 |
+---------------+------------------+--------+
``` **Here is the part where I need your help! What should I do if I want to get this format?** table **T3** ```
+---------------+------------------+--------+
|     Name      |      Value       | Number |
+---------------+------------------+--------+
| stuff1        |                1 |      2 |
| stuff1        |                2 |      0 |
| stuff1        |                3 |      0 |
| stuff1        |                4 |      0 |
| stuff2        |                1 |      0 |
| stuff2        |                2 |      2 |
| stuff2        |                3 |      0 |
| stuff2        |                4 |      0 |
| stuff3        |                1 |      1 |
| stuff3        |                2 |      0 |
| stuff3        |                3 |      0 |
| stuff3        |                4 |      1 |
+---------------+------------------+--------+
``` Thanks for any suggestions!
You start with a `cross join` to generate all possible combinations and then left-join in the results from your existing query: ``` select n.name, v.value, coalesce(nv.cnt, 0) as "Number" from (select distinct name from table t) n cross join (select distinct value from table t) v left outer join (select name, value, count(*) as cnt from table t group by name, value ) nv on nv.name = n.name and nv.value = v.value; ```
Variation on the theme. Differences from the existing answers by Gordon Linoff and Owen: 1. I prefer GROUP BY to get the Names rather than a DISTINCT. This may have better performance in a case like this. (See [Rob Farley's still relevant article](http://msmvps.com/blogs/robfarley/archive/2007/03/24/group-by-v-distinct-group-by-wins.aspx).) 2. I explode the subqueries into a series of CTEs for clarity. 3. I use table T2, as the question now labels the grouped result set, instead of showing it as a subquery. ```
WITH PossibleValue AS (
    SELECT 1 Value
    UNION ALL
    SELECT Value + 1
    FROM PossibleValue
    WHERE Value < 4
),
Name AS (
    SELECT Name
    FROM T1
    GROUP BY Name
),
NameValue AS (
    SELECT Name
          ,Value
    FROM Name
    CROSS JOIN PossibleValue
)
SELECT nv.Name
      ,nv.Value
      ,ISNULL(T2.Number,0) Number
FROM NameValue nv
LEFT JOIN T2 ON nv.Name = T2.Name
    AND nv.Value = T2.Value
```
I need a specific output
[ "", "sql", "sql-server", "t-sql", "" ]
In my Java web application I use PostgreSQL, and some data tables are filled automatically on the server. In the database I have a STATUS table like below: ![enter image description here](https://i.stack.imgur.com/t8nHd.png) I want to select the data related to a vehicle between selected dates, covering the periods where the vehicle stayed connected. Simply put, I want to select the rows that are green in the above table, which means I want exactly the row where connected first becomes true and the connected=false row that follows the last connected=true. I tried to write a SQL statement but I couldn't get the desired data. What I did was: ```
select * from status where vehicleId='redcar' and date >= '2014-02-28 00:00:00' and date <= '2014-02-28 23:59:59' and ...
``` How could I get the requested data?
``` WITH cte AS ( SELECT statusId, vehicleId, connected, date, (connected <> LAG(connected) OVER (PARTITION BY vehicleId ORDER BY date) ) AS status_changed FROM status WHERE vehicleId = 'redcar' AND date >= DATE '2014-02-28' AND date < DATE '2014-02-28' + interval '1 day' ) SELECT statusId, vehicleId, connected, date FROM cte WHERE status_changed OR connected AND status_changed IS NULL ORDER BY date ; ``` Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!15/0072d/24)**
You can do this with a [window function](http://www.postgresql.org/docs/9.3/static/tutorial-window.html). Something like this: ``` with status_changes as( select id, vehicleid, connected, connected != lag(connected) over (partition by vehicleid order by date asc) as has_changed from STATUS ) select id, vehicleid, connected, date from status_changes where has_changed = true and vehicle = 'redcar' and date between '2014-02-28 00:00:00' and '2014-02-28 23:59:59'; ``` * **lag(status)** will return the status that preceded it (within our partition). * **partition by vehicleid** (with an order by date) ensures that our *lag()* returns rows of the same vehicleid. See the list of [window functions](http://www.postgresql.org/docs/9.3/static/functions-window.html) for more on *lag()* and related functions.
Postgresql Select data between selected dates and status
[ "", "sql", "hibernate", "postgresql", "" ]
Basically my select statement returns the below: ```
ID   Status
100  1
100  2
101  1
``` What I'm looking for: if an ID has status 1 and the same ID also has status 2, then exclude both rows. In short, the result should be as below: ```
ID   Status
101  1
``` Thanks in advance!
The following query returns ID values that occur only once. ``` SELECT ID FROM t GROUP BY ID HAVING COUNT(*) = 1 ``` It should be sufficient for the sample data you provided. If there are other cases then let me know. [SQL Fiddle](http://www.sqlfiddle.com/#!2/97a2cf/1)
You're going to need a subquery and `NOT IN` here. The following would work if your `status` column has an `INT` datatype: ```
SELECT *
FROM table
WHERE status = 1
AND ID NOT IN
(
    SELECT ID
    FROM table
    WHERE status = 2
);
```
SQL query exclude rows if having two different values
[ "", "sql", "group-by", "" ]
I have the following SQL: ```
SELECT U.ID AS UserID
      ,U.LoginName AS Name
      ,PV.Time AS [Time]
      ,PV.URL AS [Url]
      ,P.Name AS [Product 1 Name]
      ,P.SKU AS [Product 1 SKU]
      ,P.ModelNumber AS [Product 1 Model Number]
      ,P2.Name AS [Product 2 Name]
      ,P2.SKU AS [Product 2 SKU]
      ,P2.ModelNumber AS [Product 2 Model Number]
      ,ISNULL((select TOP 1 Value from AttributesValues AV INNER JOIN Attributes A ON AV.AttributeID = A.ID WHERE AV.ProductID = P.ID AND A.RetailerID = 23 AND A.CODE = 'metadata-department'),'') AS Department
      ,ISNULL((select TOP 1 Value from AttributesValues AV INNER JOIN Attributes A ON AV.AttributeID = A.ID WHERE AV.ProductID = P.ID AND A.RetailerID = 23 AND A.CODE = 'metadata-group'),'') AS Department
      ,ISNULL((select TOP 1 Value from AttributesValues AV INNER JOIN Attributes A ON AV.AttributeID = A.ID WHERE AV.ProductID = P.ID AND A.RetailerID = 23 AND A.CODE = 'metadata-subgroup'),'') AS SubGroup
      ,C.Browser AS [Browser]
      ,C.BrowserVersion AS [Browser Version]
      ,C.IPAddress AS [IP Address]
      ,C.UserAgent AS [User Agent]
      ,C.UserLanguage AS [User Language]
      ,PV.PageType AS [Page Type]
FROM PageVisits PV
INNER JOIN Users U ON U.ID = PV.UserID
INNER JOIN Clients C ON C.ID = PV.ClientID
INNER JOIN Products P ON P.id = PV.P1ID
LEFT OUTER JOIN Products P2 ON P2.id = PV.P2ID
WHERE P.RetailerID = 23
AND PV.UserID IS NOT NULL
AND PV.PageType IN ('Product','Compare Products')
AND PV.[Time] >= '{0}'
AND PV.[Time] < '{1}'
``` The problem is this sub-query: ```
,ISNULL((select TOP 1 Value from AttributesValues AV INNER JOIN Attributes A ON AV.AttributeID = A.ID WHERE AV.ProductID = P.ID AND A.RetailerID = 23 AND A.CODE = 'metadata-department'),'') AS Department
``` I use this 3 times here. The only difference is the `A.CODE =` value. How can I improve this?
Always use `ORDER BY` with `TOP`. Otherwise you will get indeterminate results, depending on how the SQL engine will fetch the related rows. If the `TOP 1` are not really needed, i.e. if the subqueries would always return 1 (or 0) rows anyway, without the TOP, then you could use `OUTER APPLY` with `MAX()`: ``` SELECT --- ,COALESCE(X.Department, '') AS Department ,COALESCE(X.[Group], '') AS [Group] ,COALESCE(X.SubGroup, '') AS SubGroup --- FROM PageVisits PV INNER JOIN Users U ON U.ID = PV.UserID INNER JOIN Clients C ON C.ID = PV.ClientID INNER JOIN Products P ON P.id = PV.P1ID LEFT OUTER JOIN Products P2 ON P2.id = PV.P2ID OUTER APPLY ( SELECT MAX(CASE WHEN A.CODE = 'metadata-department' THEN AV.Value END) AS Department ,MAX(CASE WHEN A.CODE = 'metadata-group' THEN AV.Value END) AS [Group] ,MAX(CASE WHEN A.CODE = 'metadata-subqroup' THEN AV.Value END) AS SubGroup FROM AttributesValues AV INNER JOIN Attributes A ON AV.AttributeID = A.ID WHERE AV.ProductID = P.ID AND A.RetailerID = 23 ) X WHERE --- ; ```
You should first try to build indexes for the query: ``` create index on Attributes(RetailerId, Code, Id); create index on AttributeValues(AttributeId, ProductId, Value); ``` The processing for the subquery should then just be a couple of index lookups.
Reuse the SUB Query Columns in SQL
[ "", "sql", "sql-server-2012", "" ]
I am facing an error while grouping the following statement. Here is my code ``` DECLARE @avg_volume int SELECT @avg_volume=ISNULL(AVG(Total_Volume),0) FROM (SELECT station_id, DATEPART(YEAR,date_time) AS YEAR, DATEPART(MONTH,date_time) AS MONTH, CONVERT(date,date_time) AS DATE, DATENAME(weekday,date_time) AS weekday, sum(volume) AS Total_volume FROM rvtcs_data_aggregated_hourly GROUP BY station_id, CONVERT(date,date_time), DATEPART(MONTH,date_time), DATEPART(YEAR,date_time), DATENAME(weekday,date_time)) GROUP BY station_id, CONVERT(date,date_time), DATEPART(MONTH,date_time), DATEPART(YEAR,date_time), DATENA ME(weekday,date_time) ORDER BY DATEPART(YEAR,date_time), DATEPART(MONTH,date_time), CONVERT(date,date_time) SELECT @avg_volume ``` My inner query will return ``` station_id YEAR MONTH DATE weekday Total_volume 7 2013 2 2013-02-21 Thursday 192 7 2013 2 2013-02-27 Wednesday 2699 7 2013 2 2013-02-28 Thursday 196 2 2013 3 2013-03-07 Thursday 192 7 2013 3 2013-03-07 Thursday 192 ``` My primary table is : ``` station_id date_time volume 7 2013-02-21 00:00:00.000 96 7 2013-02-21 01:00:00.000 96 7 2013-02-27 00:00:00.000 356 7 2013-02-27 00:00:00.000 410 7 2013-02-27 00:00:00.000 471 7 2013-02-27 00:00:00.000 530 7 2013-02-27 00:00:00.000 338 7 2013-02-27 00:00:00.000 211 7 2013-02-27 00:00:00.000 159 7 2013-02-27 00:00:00.000 128 7 2013-02-27 00:00:00.000 96 7 2013-02-28 00:00:00.000 96 7 2013-02-28 01:00:00.000 100 7 2013-03-07 00:00:00.000 96 2 2013-03-07 00:00:00.000 96 2 2013-03-07 01:00:00.000 96 7 2013-03-07 01:00:00.000 96 ``` **My desired output is:** ``` station id year month weekday average_volume 7 2013 2 Thursday 194 ```
There are 3 issues in the query you have posted: 1. As pointed out by everyone, correct the identifier `DATENA ME` to `DATENAME`. 2. When an inner query is defined in the `FROM` clause of an outer query it is actually a 'Derived Table'. Its scope of existence is the outer query. As soon as the outer query is finished, the derived table is gone. So we need to specify the query that defines the derived table within parentheses, followed by the AS clause and the derived table name. (Below I have written it as `T`.) 3. Now, as we are referring to columns from a derived table in the outer query, only the columns defined in it can be accessed. (So I have corrected the outer query columns.) Write it as: ```
--DECLARE @avg_volume int
SELECT ISNULL(AVG(Total_Volume),0) as average_volume,
       station_id,
       MONTH,
       YEAR,
       weekday
FROM
(SELECT station_id,
        DATEPART(YEAR,date_time) AS YEAR,
        DATEPART(MONTH,date_time) AS MONTH,
        CONVERT(date,date_time) AS DATE,
        DATENAME(weekday,date_time) AS weekday,
        sum(volume) AS Total_volume
 FROM rvtcs_data_aggregated_hourly
 GROUP BY station_id,
          CONVERT(date,date_time),
          DATEPART(MONTH,date_time),
          DATEPART(YEAR,date_time),
          DATENAME(weekday,date_time)) AS T
WHERE WEEKDAY = 'Thursday' AND MONTH=2
GROUP BY station_id,
         MONTH,
         YEAR,
         weekday
ORDER BY YEAR,
         MONTH
``` SQL FIDDLE: <http://sqlfiddle.com/#!3/6217d/10>
There is a space in your code in `DATENAME`: ```
DATENA ME(weekday,date_time)
``` Write it as: ```
DATENAME(weekday,date_time)
``` ------------------------Another point-------------------------- You must use a name for your subquery, like: ```
SELECT t1.yourColumn
(
   select yourColumn FROM tableABC

) t1 <-- you need to assign a name here to the newly created subquery table, e.g. t1
``` Also use column aliases, like: ```
SELECT t1.aliasName
(
   select yourColumn as [aliasName] FROM tableABC

) t1
ORDER BY t1.aliasName
```
Incorrect syntax near the keyword 'GROUP' IN SQL while taking average
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table with ~2 million records in and I need to add a new nonclustered index to a uniqueidentifier to improve query performance. Will adding a nonclustered index lock the table or otherwise degrade performance significantly while it's being applied? There's lots of information out there about the benefits/pitfalls of indexing, but I can't find anything that tells me what happens *during* an indexing operation. I'm running SQL Server 2008 R2 (on Windows Server 2008 if that's important) EDIT: It's the Enterprise Edition
In Enterprise Edition, you gain the ability to do online index operations. It looks something like this: ``` create index MyIDX on MyTable (MyColumn) with (online = on) ``` Note that the operation does still take some locks during the process (at the beginning and end, IIRC), but doesn't lock the table for the duration of the index creation. If you're concerned, fire up an extended events session in a non-production environment and trace what locks are created and how long they exist for while creating your index. Update: The [documentation](http://technet.microsoft.com/en-us/library/ms188783.aspx) has a pretty good exposition about what locks are held when for both online and offline operations.
For those of us **not** running "Expensive Edition" (Enterprise), the answer is thus: > An offline index operation that creates a nonclustered index acquires > a Shared (S) lock on the table. This prevents updates to the > underlying table but allows read operations, such as SELECT > statements. So basically it renders the target table "read only" while the index is built. This may or may not be a problem for your overlaying applications -- check with your dev teams and users! PS: The question of whether or not, or why, to *apply* such an index, is an entirely different conversation. The SQL community and its legion of professional bloggers & SMEs are your friends.
Will adding a nonclustered index lock my table?
[ "", "sql", "sql-server", "sql-server-2008-r2", "indexing", "" ]
I'm trying to do a simple LEFT JOIN with tables with 2 IDs - basically an ID and Sub-ID. Each row has an ID, but not necessarily a Sub-ID. When a Sub-ID exists, I want to join based on that, if not join on the ID. I'd imagine something like ``` SELECT ... FROM tablename a LEFT JOIN tablename b ON CASE WHEN SUB_ID IS NOT NULL THEN a.SUB_ID = b.SUB_ID ELSE a.ID = b.ID END AND a.otherfield = b.otherfield ``` But I couldn't get anything like this to work, so instead I had to do 2 queries with a UNION (one that joined on SUB\_ID WHERE SUB\_ID IS NOT NULL and another that joined on ID WHERE SUB\_ID IS NULL.) It worked but I can't imagine there isn't a way to do it. If it helps, my ID and SUB\_ID values look like this: ``` ID SUB_ID 10000 NULL 10001 NULL 10001 10001-3 10001 10001-5 10014 NULL ``` Any suggestions on how to achieve this without doing a UNION? Thanks in advance!!
We can use `COALESCE` for this purpose: ``` SELECT ... FROM tablename a LEFT JOIN tablename b ON COALESCE(a.SUB_ID,a.ID) = COALESCE(b.SUB_ID,b.ID) ``` `COALESCE` returns value of first not null parameter from left. Here is the code at [SQL Fiddle](http://www.sqlfiddle.com/#!2/ff75a9/2)
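The fallback behaviour of the accepted `COALESCE` join is easy to see in a tiny in-memory sketch (Python's `sqlite3` here, with made-up IDs modeled on the question's data; the self-join matches on the sub-ID when present and falls back to the ID otherwise):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id TEXT, sub_id TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ("10000", None),        # no sub-id: COALESCE falls back to id
    ("10001", "10001-3"),   # sub-id present: join key is the sub-id
])
# COALESCE returns its first non-NULL argument, so each row joins on
# sub_id when it exists and on id otherwise.
rows = con.execute("""
    SELECT a.id, COALESCE(a.sub_id, a.id) AS join_key
    FROM t AS a
    JOIN t AS b ON COALESCE(a.sub_id, a.id) = COALESCE(b.sub_id, b.id)
""").fetchall()
print(sorted(rows))  # [('10000', '10000'), ('10001', '10001-3')]
```

One caveat worth noting: wrapping the join columns in a function like `COALESCE` can prevent the engine from using indexes on those columns, so on large tables the two-query `UNION` approach from the question may actually perform better.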
This should work for you. ``` SELECT ... FROM tablename a LEFT JOIN tablename b ON ((b.SUB_ID IS NOT NULL AND a.SUB_ID = b.SUB_ID) OR (a.ID = b.ID)) AND a.otherfield = b.otherfield ```
JOIN ON clause with CASE statement depending on if field is NULL?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Suppose I have the following table (actually this is the result of a query): ![enter image description here](https://i.stack.imgur.com/aAaGq.jpg) What I want is to come up with the following table: ![enter image description here](https://i.stack.imgur.com/rzVNQ.jpg) It assigns zero to amount for each code value except for the first one. Any strategies to do this? Thanks in advance
Does this help? ``` declare @tab table( code int, amount int) insert into @tab values (1,100), (1,100), (1,100), (2,500), (2,500), (3,1000); select code, case when rn=1 then amount else 0 end amount from (select *, row_number()Over(partition by code Order by amount) RN from @tab)t ```
You could use `row_number()` to identify the first row for each `code`. From there on, it's just a simple `case` expression: ``` SELECT code, CASE rn WHEN 1 THEN amount ELSE 0 END AS amount FROM (SELECT code, amount, ROW_NUMBER() OVER (PARTITION BY code ORDER BY amount) AS rn FROM my_table) ```
Selecting only first part of Amount
[ "", "sql", "sql-server", "" ]
For example, how can I create a one-to-many relationship between tables `InsuranceCo` and `Vehicle`, where the primary keys of each are `InsuranceCo.id` and `Vehicle.licencePlate`? My attempt in creating the one-to-many relationship using a foreign key is this: ``` CREATE TABLE InsuranceCo ( id int PRIMARY KEY, phone int ) CREATE TABLE Vehicle ( licencePlate CHAR(10) PRIMARY KEY REFERENCES InsuranceCo(id), year int ) ``` Will this work? If not, how can I create the one-to-many relationship when keys have different types?
This isn't how one-to-many relationships work at all. You don't just link two ids together, that is how one-to-one relations work (and no, those can't be done with different types, the values actually have to be the same). For one-to-many relationships, you need a separate value to reference the other table with. You have to add a column -- for example insuranceCoId -- into the Vehicle table. Then any vehicle can have the id of the insurance company right there in the table. So data in might look like this: ``` InsuranceCo: id phone 1 800-744-2932 2 488-382-9332 Vehicle LicencePlate insuranceCoId year 435yte 1 1995 328teo 1 2006 fd8tew 2 2008 ``` As you can see, *one* insurance company is associated with *many* vehicles now.
I'm assuming that in the one-to-many relationship, InsuranceCo will have a multiplicity of 1 and Vehicle will have a multiplicity of \* (many). In this case, you'll want to create an additional column on the Vehicle table of type int called InsuranceCoId, which will be a foreign key reference to the InsuranceCo table. You can then create said foreign key constraint on the Vehicle table itself: ``` ALTER TABLE Vehicle ADD CONSTRAINT FK_Vehicle_InsuranceCo FOREIGN KEY (InsuranceCoId) REFERENCES InsuranceCo (id) ``` Now, when you add vehicles to the system, you can add associated insurance company references. The above will address the immediate question you have. However, I believe your database design could be improved by adding an InsurancePolicy table that will create a many-to-many relationship between Vehicle and InsuranceCo, tied together with information specific to a policy (such as premium, deductible, etc).
One-to-many SQL relationship on keys with different types
[ "", "sql", "" ]
I want to check that all values in column c1 of a certain table, say t1, are the same. I do this with: ``` SELECT count(*)=1 FROM ( SELECT c1 FROM t1 GROUP BY c1) AS t1_grouped; ``` is this the "right thing" to do, or is there some aggregate function or some other language construct for achieving the same (possibly with better performance even)?
Well, based on @Bohemian's [answer](https://stackoverflow.com/a/22063644/1593077), it seems I can improve my query like so: ``` SELECT count(DISTINCT c1)=1 FROM t1; ``` Much better!
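As a quick sanity check of the idiom (a minimal in-memory `sqlite3` sketch with made-up values; SQLite returns the comparison as 1/0 rather than a true boolean):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (c1 INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(7,), (7,), (7,)])

def all_same(conn):
    # 1 when every value in c1 is identical, 0 otherwise
    return conn.execute("SELECT COUNT(DISTINCT c1) = 1 FROM t1").fetchone()[0]

uniform = all_same(con)   # column holds only 7s -> 1
con.execute("INSERT INTO t1 VALUES (8)")
mixed = all_same(con)     # now 7s and an 8 -> 0
print(uniform, mixed)     # 1 0
```

Note that `COUNT(DISTINCT ...)` ignores NULLs, so a column containing only NULLs yields 0, not 1; whether that counts as "uniform" is up to you.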
This will tell you how many *different* values you have: ``` select count(distinct c1) from t1 ``` If that number is `1`, then they are all the same.
SQL idiom for checking whether values in a column are uniform
[ "", "sql", "idioms", "" ]
```
SELECT
(CASE
    WHEN ymd BETWEEN CURRENT_DATE -4 AND CURRENT_DATE -1 THEN '3day total'
    WHEN ymd BETWEEN CURRENT_DATE -11 AND CURRENT_DATE -1 THEN '10day total'
    WHEN ymd BETWEEN CURRENT_DATE -31 AND CURRENT_DATE -1 THEN '30day total'
END) AS 'Period',
SUM(cost) cost
FROM table
WHERE ymd BETWEEN CURRENT_DATE -31 AND CURRENT_DATE -1
GROUP BY 1
``` The result actually seems to give me Period buckets of days 1-3, days 4-10 and days 11-30. I believe this is because there are overlapping conditions and SQL stops processing the CASE statement as soon as the first condition is met. What I want are the totals for each bucket (i.e. 3-day sum, 10-day sum and 30-day sum). Is there a way to do this without adding additional fields? PS - the syntax is a bit different from traditional SQL because it's vsql (vertica).
If you want to keep this as three rows with one column, here is a way: ``` SELECT which, sum(CASE WHEN ymd BETWEEN CURRENT_DATE - diff AND CURRENT_DATE -1 THEN cost else 0 end) as cost FROM table t cross join (select '3day total' as which, 4 as diff union all select '10day total', 11 union all select '30day total', 31 ) w WHERE ymd BETWEEN CURRENT_DATE -31 AND CURRENT_DATE -1 GROUP BY which; ``` EDIT: The `cross join` works by creating three rows for each row in the original table. Each of these rows is considered for each group -- so overlaps are not an issue. The groups are defined by their name and the `diff` value. In other words, if the data started with one row that was 7 days ago: ``` ymd cost CURRENT_DATE - 7 $10 ``` Then this gets multiplied into 3 by the `cross join`: ``` which diff ymd cost 3day total 4 CURRENT_DATE - 7 $10 10day total 11 CURRENT_DATE - 7 $10 30day total 31 CURRENT_DATE - 7 $10 ``` Then, when these are aggregated by `which`, the correct values are calculated: ``` which Total 3day total $0 10day total $10 30day total $10 ```
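The cross-join trick above can be illustrated with a small in-memory sketch (Python's `sqlite3`, with the dates simplified to a made-up `days_ago` integer): each fact row is multiplied by the bucket list, and the conditional `SUM` decides which buckets it lands in, so overlapping ranges are no longer a problem.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE costs (days_ago INTEGER, cost INTEGER)")
con.executemany("INSERT INTO costs VALUES (?, ?)",
                [(2, 10), (7, 20), (20, 40)])
rows = con.execute("""
    SELECT b.label,
           SUM(CASE WHEN c.days_ago <= b.max_days THEN c.cost ELSE 0 END) AS total
    FROM costs AS c
    CROSS JOIN (SELECT '3day total' AS label, 3 AS max_days
                UNION ALL SELECT '10day total', 10
                UNION ALL SELECT '30day total', 30) AS b
    GROUP BY b.label
""").fetchall()
totals = dict(rows)
# The 2-days-ago row is counted in all three buckets.
print(totals["3day total"], totals["10day total"], totals["30day total"])  # 10 30 70
```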
Make three totals instead of one total, and use the condition to determine where to count each record instead of trying to determine a period for a single total: ``` select sum(case when ymd between CURRENT_DATE - 4 and CURRENT_DATE - 1 then cost else 0 end) as '3day total', sum(case when ymd BETWEEN CURRENT_DATE - 11 and CURRENT_DATE - 5 then cost else 0 end) as '10day total', sum(case when ymd BETWEEN CURRENT_DATE - 31 and CURRENT_DATE - 12 then cost else 0 end) as '30day total' from table and ymd between CURRENT_DATE -31 and CURRENT_DATE -1 group by 1 ``` Note: I'm not sure which date ranges you want to count where, so I made them so that they don't overlap, as that makes most sense if you have using `between`. If you still want them to overlap then you can just make a comparison instead of using `between` as any values later than `CURRENT_DATE - 1` are already filtered out. ### Edit: To get the result in rows, you can make a union between selects: ``` select '3day total' as period, sum(cost) as cost from table where ymd between CURRENT_DATE - 4 and CURRENT_DATE - 1 union all select '10day total', sum(cost) from table where ymd BETWEEN CURRENT_DATE - 11 and CURRENT_DATE - 5 union all select '30day total', sum(cost) from table where ymd BETWEEN CURRENT_DATE - 31 and CURRENT_DATE - 12 ```
Dealing with overlapping conditions in CASE statement
[ "", "sql", "vertica", "vsql", "" ]
Sorry for the very fuzzy question, but my problem is that I have three different tables: one containing user information, one for the users' posts and one for the users' likes. I want to select data from the tables containing the user data and posts, but only return posts that the user has not liked and that are not posted by the user itself. I have tried different combinations of `JOINS` but with no success. For example I want to select the rows for the user with id = 1. ```
Table users:
+----+----------+
| id | username |
+----+----------+
|  1 | A        |
|  2 | B        |
|  3 | C        |
+----+----------+

Table posts:
+----+---------+
| id | user_id |
+----+---------+
|  1 |       1 |
|  2 |       1 |
|  3 |       2 |
|  4 |       3 |
|  5 |       2 |
|  6 |       3 |
+----+---------+

Table likes:
+----+---------+---------+
| id | post_id | user_id |
+----+---------+---------+
|  1 |       3 |       2 |
|  2 |       3 |       1 |
|  3 |       4 |       1 |
|  4 |       1 |       3 |
+----+---------+---------+

Result wantend:
+---------+----------+
| post_id | username |
+---------+----------+
|       5 | B        |
|       6 | C        |
+---------+----------+
``` A problem I run into is that my query also returns `post_id: 3` because `user_id: 2` has liked the post. I hope that you understand my question. Thanks in advance! /Andreas
Here is an approach to the query that uses `not exists` for the `likes`: ``` select p.id as post_id, u.username from posts p join users u on p.user_id = u.id where not exists (select 1 from likes l where l.post_id = p.id and l.user_id = 1 ) and u.id <> 1; ```
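Replaying the question's sample data through this query in an in-memory `sqlite3` sketch confirms the expected result, posts 5 and 6 (an `ORDER BY` is added only to make the output deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER, username TEXT);
    CREATE TABLE posts (id INTEGER, user_id INTEGER);
    CREATE TABLE likes (id INTEGER, post_id INTEGER, user_id INTEGER);
    INSERT INTO users VALUES (1,'A'),(2,'B'),(3,'C');
    INSERT INTO posts VALUES (1,1),(2,1),(3,2),(4,3),(5,2),(6,3);
    INSERT INTO likes VALUES (1,3,2),(2,3,1),(3,4,1),(4,1,3);
""")
rows = con.execute("""
    SELECT p.id AS post_id, u.username
    FROM posts p
    JOIN users u ON p.user_id = u.id
    WHERE NOT EXISTS (SELECT 1 FROM likes l
                      WHERE l.post_id = p.id AND l.user_id = 1)
      AND u.id <> 1
    ORDER BY p.id
""").fetchall()
print(rows)  # [(5, 'B'), (6, 'C')]
```

Post 3 is correctly excluded here even though user 2 liked it, because the `NOT EXISTS` subquery only looks at likes by the user we are querying for (user 1).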
I think your data model isn't quite right, if "liking" a post adds a user to the "posts" table. However, to answer your original question, you can exclude "liked" posts this way: ```
SELECT p.post_id, p.user_id
FROM post p
LEFT JOIN likes l ON p.post_id = l.post_id
WHERE l.post_id IS NULL;
```
MySQL - Exclude all rows from one table if match on another table
[ "", "mysql", "sql", "" ]
I have already searched SO but found no answer to my question. My question is: if I use the query below I get the correct count, which is 90: ```
select count(distinct account_id)
from FactCustomerAccount f
join DimDate d on f.date_id = d.datekey
-- 90
``` But when I group by CalendarYear as below, I am missing 12 counts. The query and output are below: ```
select CalendarYear,count(distinct account_id) as accountCount
from FactCustomerAccount f
join DimDate d on f.date_id = d.datekey
group by CalendarYear

output:
CalendarYear    accountCount
2005            10
2006            26
2007            49
2008            63
2009            65
2010            78
``` I am not sure why I am missing 12 counts. To debug, I ran the following query to check if I have a missing date_id in FactCustomerAccount, but found no missing keys: ```
select distinct f.date_id
from FactCustomerAccount f
where f.date_id not in (select DateKey from dimdate d)
``` I am using SQL Server 2008 R2. Can anyone please suggest what could be the reason for the missing 12 counts? Thanks in advance. EDIT ONE: I did not quite understand the reason/answer given to my question in the 2 replies, so I would like to add 2 queries below using AdventureWorksDW2008R2 where no count is missing: ```
select count (distinct EmployeeKey)
from FactSalesQuota f
join dimdate d on f.DateKey = d.DateKey
-- out: 17

select d.CalendarYear, count (distinct EmployeeKey) as Employecount
from FactSalesQuota f
join dimdate d on f.DateKey = d.DateKey
group by d.CalendarYear
-- out:
-- CalendarYear Employecount
-- 2005         10
-- 2006         14
-- 2007         17
-- 2008         17
``` So please correct me on what I am missing.
You aren't missing 12 counts. The grouped query only counts an account in the years where it actually has rows, so it could be that some accounts didn't have any activity in the final years: the overall distinct count (90) still includes them, but the 2010 count (78) does not.
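The effect is easy to reproduce with a tiny in-memory example (Python's `sqlite3`, made-up account IDs): an account is counted once in the overall distinct total, but it only appears in the per-year counts for years where it has rows, so the latest year can come up short of the overall figure.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (account_id INTEGER, year INTEGER)")
con.executemany("INSERT INTO facts VALUES (?, ?)",
                [(1, 2009), (1, 2010), (2, 2009), (3, 2010)])
overall = con.execute(
    "SELECT COUNT(DISTINCT account_id) FROM facts").fetchone()[0]
per_year = con.execute("""
    SELECT year, COUNT(DISTINCT account_id)
    FROM facts GROUP BY year ORDER BY year
""").fetchall()
print(overall)   # 3 distinct accounts overall
print(per_year)  # [(2009, 2), (2010, 2)] -- account 2 has no 2010 rows
```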
Your queries are very different. The first: ```
select count(distinct account_id)
from FactCustomerAccount f
join DimDate d on f.date_id = d.datekey
``` returns a count of distinct accounts (over all years), so if you have an account_id present in two years, it is counted only once. The second is grouped by CalendarYear, so if you have an account_id in two different years, it is counted in two different rows: ```
select CalendarYear,count(distinct account_id) as accountCount
from FactCustomerAccount f
join DimDate d on f.date_id = d.datekey
group by CalendarYear
``` **EDIT** Let me try to explain better. Suppose this data set of (year, account_id) pairs: ```
`2008 10`
`2009 10`
`2010 10`
`2010 12`
``` If you run the two queries above you get: ```
`2`
``` and ```
`2008 1`
`2009 1`
`2010 2`
``` because there exist two different account_ids (10 and 12), and only in the last year (2010) do both account_ids have rows. But if you have this data set: ```
`2008 10`
`2009 10`
`2009 12`
`2010 12`
``` you'll get: First query result: `2` Second query result: `2008 1` `2009 2` `2010 1`
distinct count with group by
[ "", "sql", "sql-server", "sql-server-2008", "" ]
The code originates from the following post: [What SQL can I use to retrieve counts from my payments data?](https://stackoverflow.com/questions/21633961/what-sql-can-i-use-to-retrieve-counts-from-my-payments-data) I am working with a large amount of data, and this code (in MS-ACCESS JET SQL) currently takes about 10-15 minutes to query (when it doesn't crash). I want to increase the speed in which my query runs, which is why I am looking to utilize the pass through query approach. Currently, the back end database I use is SQL Server (2012), which I use an ODBC connection to link to. I am under the impression that writing your SQL in the back end database's language allows the pass through query to perform more efficiently. I believe that T-SQL would be the language. I have done the research, and there really is no "JET-SQL to T-SQL converter". Therefore, I kindly ask if anyone has any suggestions on how the following code reads and looks in T-SQL. ```
TRANSFORM Nz(First(CountOfStudents),0) AS n
SELECT YearNumber, MonthNumber, School
FROM
    (
        SELECT YearNumber, MonthNumber, School, [School Service Type], COUNT(*) AS CountOfStudents
        FROM
            (
                SELECT DISTINCT mtr.YearNumber, mtr.MonthNumber, pym.[Student ID], pym.School, pym.[School Service Type]
                FROM MonthsToReport AS mtr
                INNER JOIN PaymentsYearMonth AS pym
                    ON mtr.YYYYMM>=pym.StartYYYYMM
                        AND mtr.YYYYMM<=pym.EndYYYYMM
            ) AS distinctQuery
        GROUP BY YearNumber, MonthNumber, School, [School Service Type]
    ) AS countQuery
GROUP BY YearNumber, MonthNumber, School
PIVOT [School Service Type]
``` As always, thank you for your time.
Converting the Access SQL crosstab query to T-SQL is only part of the process. In order to push all of the processing onto the SQL Server we also need to make sure that all of the dependencies are on the SQL Server, too. (For example, a T-SQL query running on the SQL Server won't be able to pull data from a saved query in Access.) Following the steps from the [previous question](https://stackoverflow.com/q/21633961/2144390), we have our table named [Payments] on the SQL Server: ``` Payment Row Student ID School School Service Type PaymentStartDate PaymentEndDate ----------- ---------- ------ ------------------- ---------------- -------------- 1 001 ABC ED 2010-01-02 2012-02-04 2 001 ABC ED 2010-01-02 2010-01-05 3 001 ABC ED 2010-04-02 2010-05-05 4 001 DEF EZ 2010-01-02 2012-02-04 5 001 RR 2012-02-02 2012-02-03 6 002 ABC ED 2010-02-02 2011-02-03 7 002 ABC EZ 2010-02-02 2010-06-03 8 002 GHI ED 2011-02-04 2012-02-04 9 003 ABC ED 2011-02-02 2012-02-03 10 003 DEF ED 2010-01-02 2010-08-03 11 003 RR 2011-02-02 2011-02-03 12 004 RR 2011-02-02 2011-02-03 13 005 GHI ED 2010-08-02 2011-02-04 14 006 GHI ED 2010-08-02 2010-08-02 ``` We create the [PaymentsYearMonth] view in SQL Server ``` CREATE VIEW PaymentsYearMonth AS SELECT [Student ID], School, [School Service Type], (Year(PaymentStartDate) * 100) + Month(PaymentStartDate) AS StartYYYYMM, (Year(PaymentEndDate) * 100) + Month(PaymentEndDate) AS EndYYYYMM FROM Payments ``` The SQL Server also needs to have copies of our [MonthNumbers] table ``` MonthNumber ----------- 1 2 3 4 5 6 7 8 9 10 11 12 ``` and our [YearNumbers] table ``` YearNumber ---------- 2009 2010 2011 2012 2013 ``` So now we can create the [MonthsToReport] view. 
T-SQL doesn't have `DMin()` and `DMax()` functions, so we need to change the query slightly ``` CREATE VIEW MonthsToReport AS SELECT yn.YearNumber, mn.MonthNumber, (yn.YearNumber * 100) + mn.MonthNumber AS YYYYMM FROM YearNumbers AS yn, MonthNumbers AS mn WHERE ((yn.YearNumber * 100) + mn.MonthNumber)>=(SELECT MIN(StartYYYYMM) FROM PaymentsYearMonth) AND ((yn.YearNumber * 100) + mn.MonthNumber)<=(SELECT MAX(EndYYYYMM) FROM PaymentsYearMonth) ``` The SQL query to count the distinct rows is exactly the same, so let's create a view for that so when we do the PIVOT in the next step it will be easier to see what's going on ``` CREATE VIEW DistinctCountsByMonth AS SELECT YearNumber, MonthNumber, School, [School Service Type], COUNT(*) AS CountOfStudents FROM ( SELECT DISTINCT mtr.YearNumber, mtr.MonthNumber, pym.[Student ID], pym.School, pym.[School Service Type] FROM MonthsToReport AS mtr INNER JOIN PaymentsYearMonth AS pym ON mtr.YYYYMM>=pym.StartYYYYMM AND mtr.YYYYMM<=pym.EndYYYYMM ) AS distinctQuery GROUP BY YearNumber, MonthNumber, School, [School Service Type] ``` Now, if we were doing a crosstab query in Access it would simply be ``` TRANSFORM First(CountOfStudents) AS n SELECT YearNumber, MonthNumber, School FROM DistinctCountsByMonth GROUP BY YearNumber, MonthNumber, School PIVOT [School Service Type] ``` but the PIVOT clause in T-SQL requires that we give it the actual list of column names (as opposed to Access, which can generate the column names automatically). So we'll create a stored procedure on the SQL Server that builds the list of column names, constructs the SQL statement, and executes it: ``` CREATE PROCEDURE DistinctPaymentsCrosstab AS BEGIN SET NOCOUNT ON; DECLARE @ColumnList AS NVARCHAR(MAX), @SQL AS NVARCHAR(MAX) -- build the list of column names based on the current contents of the table -- e.g., '[ED],[EZ],[RR]' -- required by PIVOT ... 
IN below -- ref: https://stackoverflow.com/a/14797796/2144390 SET @ColumnList = STUFF( ( SELECT DISTINCT ',' + QUOTENAME([School Service Type]) FROM [DistinctCountsByMonth] FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)'), 1, 1, '') SET @SQL = ' WITH rollup AS ( SELECT [School Service Type], YearNumber, MonthNumber, School, SUM(CountOfStudents) AS n FROM [DistinctCountsByMonth] GROUP BY [School Service Type], YearNumber, MonthNumber, School ) SELECT * FROM rollup PIVOT (SUM([n]) FOR [School Service Type] IN (' + @ColumnList + ')) AS Results' EXECUTE(@SQL) END ``` Now we can call that stored procedure from Access by using a pass-through query ![PassThroughQuery.png](https://i.stack.imgur.com/3izhC.png) returning ![Results.png](https://i.stack.imgur.com/hpO4a.png)
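The whole pipeline above (expand the months, count distinct students per month/school/service type, pivot the service types into columns) can be sanity-checked on a toy dataset with Python's stdlib `sqlite3`. This is only an illustration, not the answer's SQL Server setup: sqlite has no `PIVOT`, so the final pivot step is done client-side in a dict, the three payment rows are made up, and `COUNT(DISTINCT ...)` stands in for the answer's `SELECT DISTINCT` subquery plus `COUNT(*)` pair.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE PaymentsYearMonth (
    StudentID   TEXT,
    School      TEXT,
    ServiceType TEXT,
    StartYYYYMM INTEGER,
    EndYYYYMM   INTEGER)""")
con.executemany("INSERT INTO PaymentsYearMonth VALUES (?,?,?,?,?)", [
    ("001", "ABC", "ED", 201001, 201003),
    ("002", "ABC", "ED", 201001, 201002),
    ("002", "ABC", "EZ", 201002, 201002),
])

# MonthsToReport: every YYYYMM between the overall min start and max end
con.execute("CREATE TABLE MonthsToReport (YYYYMM INTEGER)")
con.executemany("INSERT INTO MonthsToReport VALUES (?)",
                [(m,) for m in (201001, 201002, 201003)])

# COUNT(DISTINCT ...) replaces the SELECT DISTINCT + COUNT(*) pair
rows = con.execute("""
    SELECT mtr.YYYYMM, pym.School, pym.ServiceType,
           COUNT(DISTINCT pym.StudentID) AS n
    FROM MonthsToReport AS mtr
    INNER JOIN PaymentsYearMonth AS pym
      ON mtr.YYYYMM BETWEEN pym.StartYYYYMM AND pym.EndYYYYMM
    GROUP BY mtr.YYYYMM, pym.School, pym.ServiceType""").fetchall()

# The PIVOT step, done client-side: one key per service type
pivot = {}
for yyyymm, school, svc, n in rows:
    pivot.setdefault((yyyymm, school), {})[svc] = n
print(pivot[(201002, "ABC")])
```

For February 2010 both students are covered by an ED payment and one by an EZ payment, so the pivoted row carries `ED=2, EZ=1` — the same shape the crosstab produces, just without SQL Server doing the column generation.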
[See Here](http://www.codeproject.com/Articles/21371/SQL-Server-Profiler-Step-by-Step) on creating indexes in SSMS to optimize your most-queried data. (This will increase speed) [See Here](http://www.mssqltips.com/sqlservertip/2023/microsoft-access-pass-through-queries-to-sql-server-in-detail/) regarding pass-through queries. Both links are very detailed and should help you get to where you need to be and allow you to learn something along the way.
How does the following code read and look in T-SQL?
[ "", "sql", "database", "t-sql", "ms-access", "jet", "" ]
I've created `table1` with columns `a,b,c,d`, which has data in it. `table2` is basically the same as `table1`, but with the columns in a different order plus one additional column, i.e. `a,e,d,b,c`, and no data. How can I copy the data from `table1` into `table2`? Note that column `a` is an `id`, and I want those values to stay the same. This is what I've already tried: ``` insert into table2 select (a,d,b,c) from table1 ``` this resulted in `column "a" is of type bigint but expression is of type record` ``` insert into table2 (a,d,b,c) values(select a,d,b,c from table1) ``` didn't work either: `syntax error at or near "select"` ``` insert into table2 (a,e,d,b,c) values(select a,NULL,d,b,c from table1) ``` got the error: `INSERT has more target columns than expressions`
Specify the column names you are inserting into, and do not use `VALUES` when the rows come from a `SELECT`: ``` insert into table2(a,d,b,c) select a, d, b, c from table1 ```
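The question is about PostgreSQL, but the pattern is plain SQL and easy to verify with Python's stdlib `sqlite3` (a sketch using the question's table names and made-up row values): naming the target columns lets the select list arrive in any order, and the unlisted column `e` is simply left NULL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (a INTEGER, b TEXT, c TEXT, d TEXT)")
con.execute("CREATE TABLE table2 (a INTEGER, e TEXT, d TEXT, b TEXT, c TEXT)")
con.executemany("INSERT INTO table1 VALUES (?,?,?,?)",
                [(1, "b1", "c1", "d1"), (2, "b2", "c2", "d2")])

# Named target columns: order is free, and e (not listed) stays NULL
con.execute("INSERT INTO table2 (a, d, b, c) SELECT a, d, b, c FROM table1")

rows = con.execute("SELECT a, e, d, b, c FROM table2 ORDER BY a").fetchall()
print(rows)  # [(1, None, 'd1', 'b1', 'c1'), (2, None, 'd2', 'b2', 'c2')]
```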
You can copy data between tables in PostgreSQL by using: `INSERT INTO [tablename] ([columns]) SELECT [columns] FROM [table to copy from];` Which in your case would look like: `INSERT INTO table2(a,b,c,d) SELECT a,b,c,d FROM table1;` You can also easily create an empty table with the same structure, so that copying into it is easy, by using: `CREATE TABLE new_table AS TABLE old_table WITH NO DATA;` And then running the `INSERT` command from before. If you simply want an identical copy you can run: `CREATE TABLE new_table AS TABLE old_table;` You can read more about copying data between tables in this article which I wrote for dataschool: <https://dataschool.com/learn/copying-data-between-tables> You can read more about the insert command in the documentation here: <https://www.postgresql.org/docs/9.2/sql-insert.html>
Postgres copy data between tables
[ "", "sql", "postgresql", "" ]
I have a query which returns a column of values: ``` ?????? ?? ?????? ??????? ?.?..xlsx 1028-13055 Single Patient Focus Wave 3.sav 2.xlsx 2011 BBQ (13Dec2013).sav 2014 Health IT Purchasing Intentions Survey Results.xlsx 2014 Safety Training and Safety Professionals Survey.sav ``` How do I count the number of occurrences of each file extension? In the above example, there are three rows with the xlsx extension and three with the sav extension.
Try this ``` SELECT extension, Count(*) AS ExtensionCount FROM (SELECT RIGHT(name, Charindex('.', Reverse(name)) - 1) AS Extension FROM files) t GROUP BY extension ``` SQL fiddle <http://sqlfiddle.com/#!6/27269/5>
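As a quick sanity check of the grouping logic outside SQL Server, here is the same "everything after the last dot" rule in plain Python, applied to the readable filenames from the question (`rsplit` on the last dot mirrors the `RIGHT`/`CHARINDEX`/`REVERSE` trick):

```python
from collections import Counter

names = [
    "1028-13055 Single Patient Focus Wave 3.sav",
    "2.xlsx",
    "2011 BBQ (13Dec2013).sav",
    "2014 Health IT Purchasing Intentions Survey Results.xlsx",
    "2014 Safety Training and Safety Professionals Survey.sav",
]

# rsplit(".", 1)[-1] == everything after the last dot, the same rule as
# RIGHT(name, CHARINDEX('.', REVERSE(name)) - 1)
counts = Counter(name.rsplit(".", 1)[-1] for name in names)
print(counts)  # Counter({'sav': 3, 'xlsx': 2})
```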
Try this: ``` SELECT RIGHT(yourcolumnname,CHARINDEX('.', Reverse(yourcolumnname)) -1) FROM yourtable ``` to isolate just the extension part, and then count the occurrences with a `GROUP BY`.
SQL: How to count occurrences based on file extension
[ "", "sql", "sql-server", "" ]
Here is the code ``` DECLARE @List TABLE (n nvarchar(50)); INSERT INTO @list (n) VALUES ('A'),('B'),('C'),('D'),('E'); SELECT (t1.n ) + (t2.n) + (t3.n) + (t4.n) +(t5.n) FROM @list AS t1 JOIN @list AS t2 ON 1=1 JOIN @list AS t3 ON 1=1 JOIN @list AS t4 ON 1=1 JOIN @list AS t5 ON 1=1 WHERE not t1.n = t2.n and not t1.n = t3.n and not t1.n = t4.n and not t1.n = t5.n and not t2.n = t3.n and not t2.n = t4.n and not t2.n = t5.n and not t3.n = t4.n and not t3.n = t5.n and not t4.n = t5.n ``` I need to simplify the WHERE clause so that `t1.n`, `t2.n`, `t3.n`, `t4.n`, `t5.n` are all required to be different from each other. Thanks in advance
To create the permutations (the query is building ordered arrangements, not combinations), you can use a recursive CTE, e.g.: ``` ;with cte as ( select n from @list union all select convert(nvarchar(50),L.n+ cte.n) from @List L inner join cte on charindex(l.n,cte.n)=0 where LEN(l.n)<5 ) select * from cte where LEN(n)=5 ```
You can check how many distinct values you have: ``` where ( select count(distinct n) from (values (t1.n), (t2.n), (t3.n), (t4.n), (t5.n)) as ns (n) ) = 5 ``` If you have less than 5 distinct values, some of the values were equal. [SQL Fiddle](http://www.sqlfiddle.com/#!6/c4041/2)
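Either way, the query is enumerating the orderings of five distinct letters, so the expected row count is easy to cross-check with Python's stdlib `itertools` (5 · 4 · 3 · 2 · 1 = 120 rows):

```python
from itertools import permutations

letters = ["A", "B", "C", "D", "E"]
rows = ["".join(p) for p in permutations(letters)]

print(len(rows))          # 120 (= 5!)
print(rows[0], rows[-1])  # ABCDE EDCBA
```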
Is there any short and simple way to write this lengthy WHERE clause?
[ "", "sql", "sql-server", "" ]