I don't know if the title makes any sense, but here is the situation. See the two tables below.

**People:** (BTW, if it helps to know, this table will never have >1000 rows)

```
+----+---------+-------------------+---------+
| id | name    | address           | city_id |
+----+---------+-------------------+---------+
| 1  | person1 | some address      | 123     |
| 2  | person2 | another address   | 542     |
| 3  | person3 | different address | 623     |
+----+---------+-------------------+---------+
```

**Cities:** (this one may contain all cities with states, plus an additional column for country, around the globe)

```
+-----+-------+--------+
| id  | city  | state  |
+-----+-------+--------+
| 123 | city1 | state1 |
| 542 | city2 | state1 |
| 623 | city3 | state2 |
+-----+-------+--------+
```

To start, I know only `people.id`. Using this I need to find all people that belong to the same `state` (**not the same `city`**). For example, if I have `people.id = 1`, I need to get all people from the **state** that person1 (`people.id = 1`) belongs to:

**Output:**

```
+----+---------+-----------------+---------+
| id | name    | address         | city_id |
+----+---------+-----------------+---------+
| 1  | person1 | some address    | 123     |  /* Both people are from state1 */
| 2  | person2 | another address | 542     |
+----+---------+-----------------+---------+
```

I'm able to achieve this in two queries: a variable `$state` storing the output of

```
SELECT c.state FROM people p INNER JOIN cities c ON p.city_id = c.id WHERE p.id = <my input>;
```

and then another query

```
SELECT a.* FROM `people` a INNER JOIN `cities` b ON a.city_id = b.id WHERE b.state = $state
```

Is there a more efficient way to achieve this with a single JOIN? I tried combining the two queries into a `SELECT` with a JOIN within a JOIN (in a subquery), which doesn't feel right somehow.

**P.S.: *I'm not looking for recommendations on normalization or other changes to the schema. All that is already in consideration for another development branch for a later upgrade.***
You can try the query below:

```
SELECT p2.*
FROM people p
JOIN cities c  ON p.city_id = c.id
JOIN cities c2 ON c.state = c2.state
JOIN people p2 ON p2.city_id = c2.id
WHERE p.id = <my input>;
```

Note: for performance, `id` and `city_id` in the `people` table and `id` and `state` in the `cities` table should be indexed. For further optimization you should join on a `state_id` instead of the `state` string; for that you would have to add a `state_id` field to your tables.
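To sanity-check the self-join above, here is a minimal sketch using Python's built-in `sqlite3` with the question's sample data (the query is standard enough to run unchanged outside MySQL):

```python
import sqlite3

# In-memory database seeded with the tables from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cities (id INTEGER PRIMARY KEY, city TEXT, state TEXT);
CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, address TEXT, city_id INTEGER);
INSERT INTO cities VALUES (123,'city1','state1'),(542,'city2','state1'),(623,'city3','state2');
INSERT INTO people VALUES (1,'person1','some address',123),
                          (2,'person2','another address',542),
                          (3,'person3','different address',623);
""")

# Walk person -> their city -> every city in the same state -> people in those cities.
rows = conn.execute("""
    SELECT p2.*
    FROM people p
    JOIN cities c  ON p.city_id = c.id
    JOIN cities c2 ON c.state = c2.state
    JOIN people p2 ON p2.city_id = c2.id
    WHERE p.id = ?
""", (1,)).fetchall()

print(rows)  # person1 and person2, both in state1
```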
You might try this:

```
select *
from people
where city_id in (
    select c.id
    from cities c
    inner join (
        select c2.state
        from people p
        left join cities c2 on c2.id = p.city_id
        where p.id = '1'
    ) s on s.state = c.state
)
```
SQL: select addresses from a state in a single query from 2 tables
[ "", "mysql", "sql", "performance", "join", "" ]
I have a database with several tables:

```
Film     (filmID, title, filmCatagory)
FilmCast (filmID, filmStarID, filmStarName)
FilmStar (filmStarID, filmStarName, birthplace)
```

The Film entity states all information about the films, FilmCast links the two tables together as a way to show which film star stars in which particular film, and FilmStar states information about all of the stars. I need to translate the following question into an SQL query: list the unique numbers and names of all stars that have appeared in at least one comedy. I understand that a join will be used, but I am unsure how this query will work with the three tables.
You could use a SELECT query with EXISTS:

```
SELECT filmStarID, filmStarName, birthplace
FROM FilmStar
WHERE EXISTS (
    SELECT *
    FROM Film
    INNER JOIN FilmCast ON Film.FilmID = FilmCast.FilmID
    WHERE Film.filmCatagory = 'Comedy'
      AND FilmCast.filmStarID = FilmStar.filmStarID
)
```
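A quick way to convince yourself the correlated EXISTS works is to run it against a tiny in-memory SQLite database; the film titles and star names below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Film     (filmID INTEGER, title TEXT, filmCatagory TEXT);
CREATE TABLE FilmCast (filmID INTEGER, filmStarID INTEGER, filmStarName TEXT);
CREATE TABLE FilmStar (filmStarID INTEGER, filmStarName TEXT, birthplace TEXT);
-- Star A appears in a comedy; Star B only in a drama.
INSERT INTO Film     VALUES (1,'Laugh Track','Comedy'),(2,'Grim Tale','Drama');
INSERT INTO FilmCast VALUES (1,10,'Star A'),(2,11,'Star B');
INSERT INTO FilmStar VALUES (10,'Star A','Town A'),(11,'Star B','Town B');
""")

stars = conn.execute("""
    SELECT filmStarID, filmStarName, birthplace
    FROM FilmStar
    WHERE EXISTS (
        SELECT *
        FROM Film
        INNER JOIN FilmCast ON Film.filmID = FilmCast.filmID
        WHERE Film.filmCatagory = 'Comedy'
          AND FilmCast.filmStarID = FilmStar.filmStarID
    )
""").fetchall()

print(stars)  # only Star A appeared in a comedy
```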
Find all of the film stars who have appeared in comedies, and then GROUP BY:

```
SELECT s.filmStarID, s.filmStarName, COUNT(*)
FROM FilmStar s
INNER JOIN FilmCast c ON c.filmStarID = s.filmStarID
INNER JOIN Film f     ON f.filmID = c.filmID
WHERE f.filmCatagory = 'Comedy'
GROUP BY s.filmStarID, s.filmStarName
```
Unique numbers & names of stars that have appeared in at least one comedy
[ "", "sql", "database", "oracle", "performance", "relational-database", "" ]
```
SELECT Lname + ' ' + Ename AS Name,
       DateOfBirth,
       DATEDIFF(hour, DateOfBirth, GETDATE())/8766 AS Age
FROM EmpTBL
WHERE DATEADD(Year, DATEPART(Year, GETDATE()) - DATEPART(Year, DateOfBirth), DateOfBirth)
      BETWEEN CONVERT(DATE, GETDATE()) AND CONVERT(DATE, GETDATE() + 30);
```

The above query is what I used when testing inside SQL, where it seems to work, but when I add it to my ASP.NET project the result is different. I already added `CONVERT(date, GETDATE())` as described in [How to select date without time in SQL](https://stackoverflow.com/questions/5125609/how-to-select-date-without-time-in-sql).

Output from my project [![Output from my project](https://i.stack.imgur.com/MtPIn.png)](https://i.stack.imgur.com/MtPIn.png)

Output from my SQL [![Output from my SQL](https://i.stack.imgur.com/kwAlI.png)](https://i.stack.imgur.com/kwAlI.png)
Change your query to:

```
SELECT CONVERT(VARCHAR(10), dateOfBirth, 101)
FROM EmpTBL
WHERE <>
```
When we fetch a date from the database into ASP.NET it arrives as a DateTime, not a Date, so you can format it as a custom date string however you want:

```
<%# Convert.ToDateTime(Eval("DateOfBirth")).ToString("yyyy/MM/dd") %>
```

You can use any [format specifier](http://www.mikesdotnetting.com/article/23/date-formatting-in-c)/[custom date and time format string](https://msdn.microsoft.com/en-us/library/8kb3ddd4(v=vs.110).aspx) according to your requirement.
SQL SELECT query returns DATE without TIME inside SQL, but when I run my ASP.NET project it keeps adding "12:00:00 AM"
[ "", "sql", "asp.net", "" ]
I'm having some issues completing a SQL statement in SQL Server 2008. My 'query1' is the following:

```
SELECT [Vc_MONTH], [Vc_STATE], [Vc_PRODUCT],
       SUM([TOTAL]) AS Total_Units,
       SUM([OPEN])  AS Open_Units
FROM [test].[dbo].[Tbl_Summary]
GROUP BY [Vc_MONTH], [Vc_STATE], [Vc_PRODUCT]
```

This query selects Month, State, Product, the sum of Total Units and the sum of Open Units, grouped by Month, State and Product. (I have plenty more lines.) This query works. What I need is another 'query2' that aggregates across (ALL) the months listed in the table, and then a union of these two selects. At the end I need something like this:

query1

```
|MONTH | STATE | PRODUCT | TOTAL | OPEN |
|:-----|:------|:--------|:------|:-----|
|JAN   | CA    | PENCIL  | 200   | 160  |
|JAN   | FL    | BOOK    | 300   | 280  |
|FEB   | CA    | PENCIL  | 180   | 150  |
|FEB   | FL    | PENCIL  | 250   | 100  |
|MAR   | CA    | BOOK    | 250   | 100  |
|MAR   | FL    | BOOK    | 100   | 50   |
```

query2 - This is what I need

```
|MONTH | STATE | PRODUCT | TOTAL | OPEN |
|:-----|:------|:--------|:------|:-----|
|JAN   | CA    | PENCIL  | 200   | 160  |
|JAN   | FL    | BOOK    | 300   | 280  |
|FEB   | CA    | PENCIL  | 180   | 150  |
|FEB   | FL    | PENCIL  | 250   | 100  |
|MAR   | CA    | BOOK    | 250   | 100  |
|MAR   | FL    | BOOK    | 100   | 50   |
```

UNION

```
|ALL   | CA    | PENCIL  | 380   | 310  |
|ALL   | CA    | BOOK    | 250   | 100  |
|ALL   | FL    | PENCIL  | 250   | 100  |
|ALL   | FL    | BOOK    | 400   | 330  |
```

Thanks in advance, Luis
So you already have query 1:

```
SELECT [Vc_MONTH], [Vc_STATE], [Vc_PRODUCT],
       SUM([TOTAL]) AS Total_Units,
       SUM([OPEN])  AS Open_Units
FROM [test].[dbo].[Tbl_Summary]
GROUP BY [Vc_MONTH], [Vc_STATE], [Vc_PRODUCT]
```

Next you need to GROUP BY State and Product only, correct? You also need to specify a constant value ('ALL') in the month column so that the result sets from the two queries return the same columns:

```
UNION
SELECT 'ALL', [Vc_STATE], [Vc_PRODUCT],
       SUM([TOTAL]) AS Total_Units,
       SUM([OPEN])  AS Open_Units
FROM [test].[dbo].[Tbl_Summary]
GROUP BY [Vc_STATE], [Vc_PRODUCT]
```
I think you should just use `grouping sets`. It is a much simpler query, with no `union`:

```
SELECT (CASE WHEN GROUPING([Vc_MONTH]) = 1 THEN 'ALL' ELSE [Vc_MONTH] END) AS [Vc_MONTH],
       [Vc_STATE], [Vc_PRODUCT],
       SUM([TOTAL]) AS Total_Units,
       SUM([OPEN])  AS Open_Units
FROM [test].[dbo].[Tbl_Summary]
GROUP BY GROUPING SETS (([Vc_MONTH], [Vc_STATE], [Vc_PRODUCT]),
                        ([Vc_STATE], [Vc_PRODUCT])
         );
```
Select and group by - calculated field
[ "", "sql", "sql-server", "" ]
**Table**: `l_test1`

```
CREATE TABLE l_test1
(
    Cola VARCHAR(10)
);
```

**Table**: `l_test2`

```
CREATE TABLE l_test2
(
    Cola VARCHAR(20)
);
```

**Insertion**:

```
INSERT INTO l_test1 VALUES('1');
INSERT INTO l_test1 VALUES('12');
INSERT INTO l_test1 VALUES('123');
INSERT INTO l_test1 VALUES('1234');

INSERT INTO l_test2 VALUES('991234567890');
INSERT INTO l_test2 VALUES('9912345678901');
INSERT INTO l_test2 VALUES('99123456789012');
INSERT INTO l_test2 VALUES('123991234567890');
INSERT INTO l_test2 VALUES('981234567890');
INSERT INTO l_test2 VALUES('1234991234567890');
INSERT INTO l_test2 VALUES('1981234567890');
```

**Note**: Now I want to remove the starting and ending digits of the values in table `l_test2` that match the numbers present in the table `l_test1`. For example: I have the values `1, 12, 123, 1234` in the table `l_test1`, and I want to trim the values of table `l_test2` that start or end with these numbers. The second record in the table `l_test2` ends with the value `1` from the table `l_test1`, so that trailing `1` should be removed. After updating all the values, the table `l_test2` should look like:

**Expected Result**:

```
Cola
---------------------------
991234567890
991234567890
991234567890
991234567890
981234567890
991234567890
981234567890
```
Use `STUFF`:

**[LiveDemo](https://data.stackexchange.com/stackoverflow/query/372926)**

```
WITH cte1 AS (
    SELECT t2.Cola, MAX(t1.Cola) AS r
    FROM #l_test2 t2
    JOIN #l_test1 t1 ON t2.Cola LIKE t1.Cola + '%'
    GROUP BY t2.Cola
), cte2 AS (
    SELECT t2.Cola, MAX(t1.Cola) AS r
    FROM #l_test2 t2
    JOIN #l_test1 t1 ON t2.Cola LIKE '%' + t1.Cola
    GROUP BY t2.Cola
), cte3 AS (
    SELECT Cola, STUFF(Cola, 1, LEN(r), '') AS sanitized
    FROM cte1
    UNION ALL
    SELECT Cola, STUFF(Cola, LEN(Cola) - LEN(r) + 1, LEN(r), '') AS sanitized
    FROM cte2
)
SELECT sanitized
FROM cte3
UNION ALL
SELECT Cola
FROM #l_test2 t
WHERE NOT EXISTS (SELECT 1 FROM cte3 c3 WHERE c3.Cola = t.Cola);
```

I broke this into parts for readability:

1. cte1 - remove prefixes
2. cte2 - remove suffixes
3. cte3 - combine the sanitized results
4. final query - add the rows that weren't sanitized

Feel free to combine my solution into a more concise form ;)
You can use the following query, which employs [**`PATINDEX`**](https://msdn.microsoft.com/en-us/library/ms188395.aspx):

```
SELECT DISTINCT Cola,
       SUBSTRING(Cola, left_index + 1, right_index - left_index - 1) AS sanitized_Cola
FROM (
    SELECT MAX(CASE WHEN t3.left_match = 1 THEN LEN(t1.Cola) ELSE 0 END)
               OVER (PARTITION BY t2.Cola) AS left_index,
           MIN(CASE WHEN right_match = 0 THEN LEN(t2.Cola) + 1 ELSE right_match END)
               OVER (PARTITION BY t2.Cola) AS right_index,
           t2.Cola
    FROM l_test2 AS t2
    CROSS JOIN l_test1 AS t1
    CROSS APPLY (SELECT PATINDEX(t1.Cola + '%', t2.Cola) AS left_match) AS t3
    CROSS APPLY (SELECT PATINDEX('%' + t1.Cola, t2.Cola) AS right_match) AS t4
) AS q
```

The idea is to find the *biggest matching pattern* (if any) left-wise as well as right-wise, then use the indexes of those matches in `SUBSTRING` to get the sanitized string.

[**Demo here**](http://sqlfiddle.com/#!6/dfb61/1)
SQL Server 2008 R2: update table values that match values in another table
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I am fetching records from two different databases and keeping them in two separate files, say `File1.txt` and `File2.txt`. I want to compare these two files as we do with the SQL MINUS operator. Suppose `File1.txt` contains:

```
COL1|COL2|COL3
A1|A2|A3
B1|B2|B3
```

and `File2.txt` contains:

```
COL1|COL2|COL3
A1|A2|A3
C1|C2|C3
```

I want to do something like File1.txt minus File2.txt, so that I get `B1|B2|B3` as output, and File2.txt minus File1.txt, which should give `C1|C2|C3`. How can I achieve this in shell scripting? Please help.
Try to use `grep` like this:

```
grep -F -x -v -f file2 file1
```

Note that:

```
-F        treat patterns as fixed strings
-x        match whole line
-v        show non-matching lines
-f FILE   take patterns from FILE
```
This is what `comm` is for:

```
comm -23 file1 file2   # B1|B2|B3
comm -13 file1 file2   # C1|C2|C3
```

Check your `comm` man page. The input needs to be sorted, so the final answer might be a little more complicated.
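To make the MINUS semantics concrete (and as a fallback if neither `grep` nor `comm` is at hand), the same set difference can be sketched in a few lines of Python; the file names and contents follow the question:

```python
import tempfile
import pathlib

d = pathlib.Path(tempfile.mkdtemp())
(d / "File1.txt").write_text("COL1|COL2|COL3\nA1|A2|A3\nB1|B2|B3\n")
(d / "File2.txt").write_text("COL1|COL2|COL3\nA1|A2|A3\nC1|C2|C3\n")

def minus(a, b):
    """Lines of file a that do not appear anywhere in file b (order kept)."""
    b_lines = set(b.read_text().splitlines())
    return [line for line in a.read_text().splitlines() if line not in b_lines]

print(minus(d / "File1.txt", d / "File2.txt"))  # the B row
print(minus(d / "File2.txt", d / "File1.txt"))  # the C row
```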
Is there any utility in Unix like the MINUS operator in SQL?
[ "", "sql", "shell", "unix", "" ]
In Oracle, how can I find out which user my table 'A' belongs to?
This query works for me:

```
select table_name, owner
from all_tables
where table_name like '%CONTINENT%'
```
```
SELECT *
FROM All_Objects
WHERE object_name = 'Your_table'
  AND object_type = 'TABLE';
```

You could also query `all_tables`:

```
SELECT *
FROM all_tables
WHERE Table_name = 'Your_table'
```

[ALL_TABLES](http://docs.oracle.com/cd/B28359_01/server.111/b28320/statviews_2105.htm) shows the tables accessible to your user. `DBA_TABLES` shows all the tables in the database, but requires elevated permissions usually associated with the DBA role.
Table belongs to which user
[ "", "sql", "database", "oracle", "" ]
This feels like a stupid question since it should have a simple answer, but I just can't find it. I have a table looking like this:

```
|-----|---|
| a   | b |
|-----|---|
| ALA | 2 |
| ASP | 1 |
| SER | 1 |
| VAL | 2 |
|-----|---|
```

What I need is to get the two rows with the maximum value (I don't know the values in advance), which means that my example above should give:

```
|-----|--------|
| a   | max(b) |
|-----|--------|
| ALA | 2      |
| VAL | 2      |
|-----|--------|
```

I'm trying

```
SELECT a, max(b) FROM table;
```

but it only gives the first maximum row:

```
|-----|--------|
| a   | max(b) |
|-----|--------|
| ALA | 2      |
|-----|--------|
```

What am I missing?
```
SELECT a, b
FROM table
WHERE b = (SELECT MAX(b) FROM table)
```
If you use SQL Server, you can use `TOP WITH TIES`, which should be more efficient than a subquery or `RANK`:

```
SELECT TOP(1) WITH TIES a, b
FROM YourTable
ORDER BY b DESC;
```

Other databases may have a similar option.
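The plain-subquery version from the first answer is portable; here it is against the question's sample data in SQLite (the table is named `t` here because `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a TEXT, b INTEGER);
INSERT INTO t VALUES ('ALA',2),('ASP',1),('SER',1),('VAL',2);
""")

rows = conn.execute(
    "SELECT a, b FROM t WHERE b = (SELECT MAX(b) FROM t)"
).fetchall()

print(rows)  # every row tied for the maximum, not just the first
```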
SQL max() only returns 1 row if column has several max values
[ "", "mysql", "sql", "max", "" ]
I have an Access form that contains a ListBox that gets its data from the following query:

```
SELECT tblFUNDS.MorningsStar_Fund_Name, tblFUNDS.ISIN, tblFUNDS.RDR AS [Retro Frei],
       tblMorningstar_Data.[DEU Tax Transparence], tblMorningstar_Data.[Distribution Status],
       tblISIN_Country_Table.Country
FROM (tblFUNDS
      INNER JOIN tblISIN_Country_Table ON tblFUNDS.ISIN = tblISIN_Country_Table.ISIN)
     INNER JOIN tblMorningstar_Data
        ON (tblFUNDS.Fund_Selection = tblMorningstar_Data.Fund_Selection)
       AND (tblFUNDS.ISIN = tblMorningstar_Data.ISIN)
GROUP BY tblFUNDS.MorningsStar_Fund_Name, tblFUNDS.ISIN, tblFUNDS.RDR,
         tblMorningstar_Data.[DEU Tax Transparence], tblMorningstar_Data.[Distribution Status],
         tblISIN_Country_Table.Country, tblFUNDS.Fund_Selection
HAVING (((tblFUNDS.RDR) Like Nz([Forms]![frmMain]![ddnRDR],'*'))
    AND ((tblMorningstar_Data.[DEU Tax Transparence]) Like Nz([Forms]![frmMain]![ddnTax],'*'))
    AND ((tblMorningstar_Data.[Distribution Status]) Like Nz([Forms]![frmMain]![ddnDistribution],'*'))
    AND ((tblISIN_Country_Table.Country) Like Nz([Forms]![frmMain]![ddnCountry].[Text],'*'))
    AND ((tblFUNDS.Fund_Selection)=0));
```

I have set up the various controls referenced by the query to run the same SQL statement above in the `_AfterUpdate` event of the various dropdown fields. They all execute, which I can tell by a) the list box updating and b) setting break points.

**The issue is this:** when I change the value of the dropdown field for country, for example, it filters by country. If I then set the dropdown field for tax, it filters by tax but ignores the value set in the country dropdown control (and the values in all the other dropdowns as well).

**My question:** why does this happen, and how can I get it to filter on the values of ALL the dropdown fields at once?
Like in the comment, try:

```
SELECT tblFUNDS.MorningsStar_Fund_Name, tblFUNDS.ISIN, tblFUNDS.RDR AS [Retro Frei],
       tblMorningstar_Data.[DEU Tax Transparence], tblMorningstar_Data.[Distribution Status],
       tblISIN_Country_Table.Country
FROM (tblFUNDS
      INNER JOIN tblISIN_Country_Table ON tblFUNDS.ISIN = tblISIN_Country_Table.ISIN)
     INNER JOIN tblMorningstar_Data
        ON (tblFUNDS.Fund_Selection = tblMorningstar_Data.Fund_Selection)
       AND (tblFUNDS.ISIN = tblMorningstar_Data.ISIN)
WHERE (((tblFUNDS.RDR) Like Nz([Forms]![frmMain]![ddnRDR],'*'))
   AND ((tblMorningstar_Data.[DEU Tax Transparence]) Like Nz([Forms]![frmMain]![ddnTax],'*'))
   AND ((tblMorningstar_Data.[Distribution Status]) Like Nz([Forms]![frmMain]![ddnDistribution],'*'))
   AND ((tblISIN_Country_Table.Country) Like Nz([Forms]![frmMain]![ddnCountry].[Text],'*'))
   AND ((tblFUNDS.Fund_Selection)=0))
GROUP BY tblFUNDS.MorningsStar_Fund_Name, tblFUNDS.ISIN, tblFUNDS.RDR,
         tblMorningstar_Data.[DEU Tax Transparence], tblMorningstar_Data.[Distribution Status],
         tblISIN_Country_Table.Country, tblFUNDS.Fund_Selection;
```
Glad to hear you've solved your problem. To elaborate on my comment, I will place the code that changes the ControlSource of the listbox here anyway; maybe someone else will find it useful one day. Any comments are also welcome.

```
Public Function Rapport_query()
    Dim sqlTax As String
    Dim sqlRDR As String
    Dim sql As String
    Dim selectQuery As String
    Dim whereStatement As String
    Dim groupByStatement As String
    Dim i As Integer
    Dim i1 As Integer
    Dim i2 As Integer

    'Set counter (because if the filter is not the first,
    'there should be an "AND" operator before the filter).
    i = 0

    'Check if the combobox is empty; if it's not, use its value
    'as input for your WHERE statement.
    If Not (IsNull(Forms!frmMain!ddnRDR)) Then
        i1 = i + 1
        sqlRDR = " tblFUNDS.RDR LIKE " & Chr(34) & Forms!frmMain!ddnRDR & Chr(34)
        i = i + 1
    End If

    If Not (IsNull(Forms!frmMain!ddnTax)) Then
        i2 = i + 1
        If i2 > i1 And i > 0 Then
            sqlTax = " AND tblMorningstar_Data.[DEU Tax Transparence] LIKE " & Chr(34) & Forms!frmMain!ddnTax & Chr(34)
        Else
            sqlTax = "tblMorningstar_Data.[DEU Tax Transparence] LIKE " & Chr(34) & Forms!frmMain!ddnTax & Chr(34)
        End If
        i = i + 1
    End If

    'If the length is 0, there are no filters.
    'Else fill the WHERE statement string.
    If Len(sqlRDR & sqlTax) = 0 Then
        whereStatement = ""
    Else
        whereStatement = " WHERE " & sqlRDR & sqlTax
    End If

    'Set the select query (the GROUP BY clause is kept separate so the
    'WHERE clause can be placed before it).
    selectQuery = "SELECT tblFUNDS.MorningsStar_Fund_Name, tblFUNDS.ISIN, tblFUNDS.RDR AS [Retro Frei]," & _
        "tblMorningstar_Data.[DEU Tax Transparence], tblMorningstar_Data.[Distribution Status]," & _
        "tblISIN_Country_Table.Country " & _
        "FROM (tblFUNDS INNER JOIN tblISIN_Country_Table ON tblFUNDS.ISIN = tblISIN_Country_Table.ISIN) " & _
        "INNER JOIN tblMorningstar_Data ON (tblFUNDS.Fund_Selection = tblMorningstar_Data.Fund_Selection) " & _
        "AND (tblFUNDS.ISIN = tblMorningstar_Data.ISIN)"

    groupByStatement = " GROUP BY tblFUNDS.MorningsStar_Fund_Name, tblFUNDS.ISIN, tblFUNDS.RDR," & _
        "tblMorningstar_Data.[DEU Tax Transparence], tblMorningstar_Data.[Distribution Status]," & _
        "tblISIN_Country_Table.Country, tblFUNDS.Fund_Selection"

    'Combine the select query with the variable WHERE statement
    '(WHERE must come before GROUP BY).
    sql = selectQuery & whereStatement & groupByStatement

    'Set the listbox ControlSource.
    Forms!frmMain.ListBox.ControlSource = sql
End Function
```
SQL query ignores controls that it references
[ "", "sql", "ms-access", "ms-access-2010", "" ]
Can someone tell me how to use the output of a `CASE` expression as input to the next `CASE` expression in T-SQL?

**exampleTable**

```
id | itemNo | samplesLeft
-------------------------
1  | 001    | 104
2  | 003    | 53
3  | 002    | 193
4  | 001    | 32
```

I would like a query which returns all of this, as well as a boolean telling whether or not it was a factory-born error. An error is considered factory-born if fewer than ten samples have been used. My idea was to first use `CASE` to make a column showing how many samples have been used, like such:

**Query**

```
SELECT id, itemNo, samplesLeft,
       CASE WHEN itemNo = 001 THEN 200 - samplesLeft
            WHEN itemNo = 002 THEN 300 - samplesLeft
            WHEN itemNo = 003 THEN 400 - samplesLeft
            ELSE 100 - samplesLeft
       END AS samplesUsed
FROM exampleTable
```

Then I would like to use `samplesUsed` as input in another `CASE` to make a boolean called `factoryError`, stating TRUE if `samplesUsed < 10`. How do I do this?
You can do this with a subquery, like:

```
select x.*,
       case when samplesUsed < 10 then 'Error' else 'right' end as status
from (
    select t.*,
           CASE WHEN itemNo = 001 THEN 200 - samplesLeft
                WHEN itemNo = 002 THEN 300 - samplesLeft
                WHEN itemNo = 003 THEN 400 - samplesLeft
                ELSE 100 - samplesLeft
           END AS samplesUsed
    from tbl
) x
```
You can use this:

```
SELECT id, itemNo, samplesLeft,
       CASE WHEN (CASE WHEN itemNo = '001' THEN 200 - samplesLeft
                       WHEN itemNo = '002' THEN 300 - samplesLeft
                       WHEN itemNo = '003' THEN 400 - samplesLeft
                       ELSE 100 - samplesLeft
                  END) < 10
            THEN 'factory born error'
            ELSE 'ok'
       END AS samplesUsed
FROM exampleTable
```

test: [![enter image description here](https://i.stack.imgur.com/u4Qi2.jpg)](https://i.stack.imgur.com/u4Qi2.jpg)
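Both answers compute the same thing; the derived-table form can be checked in SQLite like so. One extra hypothetical row (id 5, not in the question) is added so the error branch actually fires, and `itemNo` is stored as text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exampleTable (id INTEGER, itemNo TEXT, samplesLeft INTEGER);
INSERT INTO exampleTable VALUES
    (1,'001',104),(2,'003',53),(3,'002',193),(4,'001',32),
    (5,'001',195);  -- hypothetical row: only 5 samples used
""")

rows = conn.execute("""
    SELECT x.*,
           CASE WHEN samplesUsed < 10 THEN 'Error' ELSE 'right' END AS status
    FROM (
        SELECT t.*,
               CASE WHEN itemNo = '001' THEN 200 - samplesLeft
                    WHEN itemNo = '002' THEN 300 - samplesLeft
                    WHEN itemNo = '003' THEN 400 - samplesLeft
                    ELSE 100 - samplesLeft
               END AS samplesUsed
        FROM exampleTable t
    ) x
""").fetchall()

print(rows)  # (id, itemNo, samplesLeft, samplesUsed, status) per row
```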
Using output from CASE in a new CASE (SQL)
[ "", "sql", "t-sql", "case", "" ]
I'm working with a MySQL database, trying to gather statistics on the number of users *in a department* who have completed a certain task within a timeline. My problem is this: certain users are doing tasks multiple times. I was able to construct a query which returns the number of done tasks and total users per group, but I need to count only one "task" per user. As it stands, I'm getting results like "150% of [department] has completed the task" when one person has completed enough tasks to fill the requirement for their whole department. Here is the existing query:

```
SELECT total.department, total_count, IFNULL(done, 0) AS done_count,
       ROUND((IFNULL(done, 0) / total_count) * 100, 2) AS percent
FROM (SELECT department, COUNT(*) total_count
      FROM agents
      GROUP BY department) total
LEFT JOIN (SELECT a.department AS department, COUNT(*) AS done
           FROM agents a, tasks p
           WHERE p.task_responses_id IS NOT NULL
             AND (p.agent1_id = a.id OR p.agent2_id = a.id)
           GROUP BY a.department) done
       ON done.department = total.department;
```

Which returns a table like this (department names sanitized):

```
+------------------+-------------+------------+---------+
| department       | total_count | done_count | percent |
+------------------+-------------+------------+---------+
| a                | 2           | 0          | 0.00    |
| b                | 10          | 1          | 10.00   |
| c                | 2           | 0          | 0.00    |
| d                | 1           | 0          | 0.00    |
| e                | 2           | 2          | 100.00  |
| f                | 1           | 0          | 0.00    |
| g                | 3           | 6          | 200.00  |
| h                | 4           | 0          | 0.00    |
| i                | 4           | 1          | 25.00   |
+------------------+-------------+------------+---------+
```

As you can see, department "g" has done_count > total_count due to one person in that department doing the task multiple times.
I need to take the task table, which looks like this:

```
+-----+----------------+-----------+-----------+-------------------+---------------------+------+
| id  | reservation_id | agent1_id | agent2_id | task_responses_id | last_contact        | dnc  |
+-----+----------------+-----------+-----------+-------------------+---------------------+------+
| 128 | 6457633        | 9         | NULL      | 24                | 2015-10-06 00:00:00 | 1    |
| 130 | 6799659        | 10        | NULL      | 25                | 2015-10-06 00:00:00 | NULL |
| 145 | 7004981        | 36        | NULL      | 28                | 2015-10-08 00:00:00 | NULL |
| 150 | 7091836        | 36        | NULL      | 29                | 2015-10-08 00:00:00 | NULL |
| 152 | 7128330        | 36        | NULL      | 30                | 2015-10-08 00:00:00 | NULL |
| 155 | 7165876        | 16        | NULL      | 35                | 2015-10-08 00:00:00 | NULL |
| 166 | 7308234        | 36        | NULL      | 31                | 2015-10-08 00:00:00 | NULL |
| 171 | 7333373        | 36        | NULL      | 33                | 2015-10-08 00:00:00 | NULL |
| 173 | 7408857        | 37        | NULL      | 34                | 2015-10-08 00:00:00 | NULL |
+-----+----------------+-----------+-----------+-------------------+---------------------+------+
```

and, once we've retrieved a row for a given agent id, not grab any other rows for that id. Thank you so much for your help! I'm happy to clarify any questions you may have.
I think this can be achieved by replacing the `COUNT(*)` in the third line with `COUNT(DISTINCT a.id)`. This way, if the same agent id exists more than once, it will be counted only once. So the query would look like this:

```
SELECT total.department, total_count, IFNULL(done, 0) AS done_count,
       ROUND((IFNULL(done, 0) / total_count) * 100, 2) AS percent
FROM (SELECT department, COUNT(*) total_count
      FROM agents
      GROUP BY department) total
LEFT JOIN (SELECT a.department AS department, COUNT(DISTINCT a.id) AS done
           FROM agents a, tasks p
           WHERE p.task_responses_id IS NOT NULL
             AND (p.agent1_id = a.id OR p.agent2_id = a.id)
           GROUP BY a.department) done
       ON done.department = total.department;
```
To calculate the number of agents per department in the same query as the number who have completed a task, you could use a sub-query in the select list, but that would not perform as well. Instead I recommend the following, which is more complex but will perform optimally:

```
SELECT d.department,
       COUNT(*) AS dept_count,
       SUM(d.done) AS done_count
FROM (
    SELECT *,
           (CASE WHEN EXISTS(
                     SELECT *
                     FROM tasks
                     WHERE (agents.id = tasks.agent1_id OR agents.id = tasks.agent2_id)
                       AND tasks.task_responses_id IS NOT NULL
                 ) THEN 1 ELSE 0 END) AS done
    FROM agents
) AS d
GROUP BY department;
```

This version uses an inner query over the agents table that adds a "done" column whose value is 1 if that agent qualifies and 0 otherwise. The outer query counts all rows, and also sums up the 1's to get done_count.
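A small SQLite sketch of the `COUNT(DISTINCT ...)` fix, using only the columns the query touches (departments and ids are made up; agent 1 completed the task three times):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE agents (id INTEGER, department TEXT);
CREATE TABLE tasks  (agent1_id INTEGER, agent2_id INTEGER, task_responses_id INTEGER);
INSERT INTO agents VALUES (1,'g'),(2,'g'),(3,'g'),(4,'b');
-- agent 1 completed the task three times, agent 4 once
INSERT INTO tasks VALUES (1,NULL,24),(1,NULL,25),(1,NULL,26),(4,NULL,27);
""")

def done_counts(count_expr):
    # Same inner "done" subquery as the question, with a pluggable COUNT expression.
    return dict(conn.execute(f"""
        SELECT a.department, COUNT({count_expr}) AS done
        FROM agents a, tasks p
        WHERE p.task_responses_id IS NOT NULL
          AND (p.agent1_id = a.id OR p.agent2_id = a.id)
        GROUP BY a.department
    """).fetchall())

print(done_counts("*"))             # over-counts agent 1's repeats
print(done_counts("DISTINCT a.id")) # counts each agent once
```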
MySQL return first row related to a given id
[ "", "mysql", "sql", "" ]
Here is a sample of what I have in my table (SQL Server):

```
Start       End
2014-08-31  2014-09-01
2014-09-09  2014-09-11
2014-09-11  2014-09-26
2014-09-15  2014-09-23
2014-09-16  2014-09-22
2014-09-17  2014-09-19
2014-09-17  2014-09-26
2014-09-17  2014-10-03
2014-09-17  2014-09-26
2014-09-17  2014-09-18
2014-09-18  2014-10-04
2014-09-18  2014-09-19
2014-09-18  2014-09-19
2014-09-19  2014-09-20
```

This is a leave request table. I need to get the total count per day. The output should look like this:

```
2014-08-31 == 1
2014-09-01 == 1
2014-09-09 == 1
2014-09-10 == 1
2014-09-11 == 2
2014-09-12 == 1
2014-09-13 == 1
2014-09-14 == 1
2014-09-15 == 2
2014-09-16 == 2
2014-09-17 == 8
2014-09-18 == 11
2014-09-19 == 11
2014-09-20 == 8
```
I just want to share my resolution. Here is the data in the tb_LeaveRequest table:

```
tb_LeaveRequest
id  StartDate   EndDate
1   09/17/2014  09/18/2014
2   09/18/2014  10/04/2014
3   09/17/2014  10/03/2014
4   09/17/2014  09/26/2014
5   09/18/2014  09/19/2014
6   09/25/2014  09/25/2014
7   09/26/2014  09/26/2014
8   09/19/2014  09/20/2014
9   09/25/2014  09/25/2014
10  09/17/2014  09/19/2014
11  09/09/2014  09/11/2014
12  09/18/2014  09/19/2014
13  09/11/2014  09/26/2014
14  09/22/2014  09/23/2014
15  09/22/2014  09/23/2014
16  09/22/2014  09/22/2014
17  09/23/2014  09/23/2014
18  09/24/2014  09/24/2014
19  09/15/2014  09/23/2014
20  09/25/2014  09/25/2014
22  09/26/2014  09/26/2014
23  09/23/2014  09/23/2014
24  09/17/2014  09/26/2014
26  09/22/2014  09/22/2014
27  09/16/2014  09/22/2014
28  09/22/2014  09/22/2014
```

And here is my query:

```
--Create temp table for tb_LeaveRequest with IsProcessed column
CREATE TABLE #tmpLeaveRequest (id int, StartDate datetime, EndDate datetime, IsProcessed bit)

--Insert tb_LeaveRequest into the temp table
insert INTO #tmpLeaveRequest(id, StartDate, EndDate, IsProcessed)
Select id, StartDate, EndDate, 0 from tb_LeaveRequest Order by StartDate

--Create temp table for output
CREATE TABLE #tmpOutput (Dys datetime, Counts int)

While (Select Count(*) From #tmpLeaveRequest Where IsProcessed = 0) > 0
Begin
    DECLARE @Id as int
    DECLARE @StartDate as datetime
    DECLARE @DateDiff INT = 0;
    DECLARE @cnt INT = 0;
    Declare @NewDate as datetime

    Select Top 1 @Id = Id, @StartDate = StartDate,
                 @DateDiff = DATEDIFF(D, StartDate, EndDate)
    From #tmpLeaveRequest Where IsProcessed = 0

    WHILE (@cnt <= @DateDiff)
    BEGIN
        set @NewDate = DATEADD(day, @cnt, @StartDate)
        if exists(select * from #tmpOutput where Dys = @NewDate)
        Begin
            Update #tmpOutput Set Counts = Counts + 1 Where Dys = @NewDate
        End
        else
        BEGIN
            Insert into #tmpOutput (Dys, Counts) values (@NewDate, 1)
        end
        set @cnt = @cnt + 1
    END --End inner loop

    Update #tmpLeaveRequest Set IsProcessed = 1 Where Id = @Id
End --End outer loop

--Output
Select * From #tmpOutput Order BY Dys

--Drop tmp tables
DROP TABLE #tmpOutput
DROP TABLE #tmpLeaveRequest
```

The output would be:

```
Dys         Counts
09/09/2014  1
09/10/2014  1
09/11/2014  2
09/12/2014  1
09/13/2014  1
09/14/2014  1
09/15/2014  2
09/16/2014  3
09/17/2014  8
09/18/2014  11
09/19/2014  11
09/20/2014  8
09/21/2014  7
09/22/2014  12
09/23/2014  10
09/24/2014  6
09/25/2014  8
09/26/2014  7
09/27/2014  2
09/28/2014  2
09/29/2014  2
09/30/2014  2
10/01/2014  2
10/02/2014  2
10/03/2014  2
10/04/2014  1
```
Using `LEFT JOIN`: [**SQL Fiddle**](http://sqlfiddle.com/#!6/3c800/1/0)

```
SELECT l.dt, COUNT(t.Start) AS cnt
FROM Leave l
LEFT JOIN tbl t ON dt BETWEEN Start AND [End]
GROUP BY l.dt
```

Using `OUTER APPLY`: [**SQL Fiddle**](http://sqlfiddle.com/#!6/3c800/2/0)

```
SELECT l.dt, a.cnt
FROM Leave l
OUTER APPLY (
    SELECT COUNT(*) AS cnt
    FROM tbl
    WHERE dt BETWEEN Start AND [End]
) a
```
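The `LEFT JOIN` needs a calendar table (called `Leave` in the fiddle). Here is a sketch with a hand-built calendar and the first three date ranges from the question; ISO date strings compare correctly as text in SQLite, and the columns are renamed `start_date`/`end_date` because `END` is a keyword:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (start_date TEXT, end_date TEXT);
INSERT INTO tbl VALUES ('2014-09-09','2014-09-11'),
                       ('2014-09-11','2014-09-26'),
                       ('2014-09-15','2014-09-23');
CREATE TABLE Leave (dt TEXT);
""")

# Build a small calendar covering 2014-09-09 .. 2014-09-16.
d = date(2014, 9, 9)
while d <= date(2014, 9, 16):
    conn.execute("INSERT INTO Leave VALUES (?)", (d.isoformat(),))
    d += timedelta(days=1)

# COUNT(t.start_date) counts matched ranges; unmatched days count 0.
counts = dict(conn.execute("""
    SELECT l.dt, COUNT(t.start_date) AS cnt
    FROM Leave l
    LEFT JOIN tbl t ON l.dt BETWEEN t.start_date AND t.end_date
    GROUP BY l.dt
""").fetchall())

print(counts)
```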
Sql server Count Days within Date Range
[ "", "sql", "sql-server", "" ]
I am sorry if this is a duplicate question; if so, please let me know where I can find the original. I have a stored procedure in which I can only make a few minor modifications, and I cannot change/update the data in my Student table. In the statement below, `student.FullName` will sometimes contain `[NEXTLINE]`, and in that case I need to replace it with `''` (an empty string); otherwise the name should be returned as-is. I tried various ways but get an error when I change `+ student.FullName` in the THEN clause. Please let me know how I can do this.

```
CASE WHEN student.ID IS NULL
     THEN CASE WHEN student.Status = 0
               THEN '<BOLDSTART>rejected<BOLDEND>.' + '[LINEBREAK]' + student.FullName
               WHEN student.Status = 1
               THEN ' <BOLDSTART>accepted<BOLDEND>.'
          END
END
```

I want to add logic like the following to the `+ student.FullName` above:

```
IF (student.FullName LIKE '%[NEXTLINE]%')
BEGIN
    SELECT REPLACE(student.FullName, '[NEXTLINE]', '')
END
ELSE
    SELECT student.FullName
```

Thanks in advance

**UPDATE** Thanks to D Stanley. I solved my problem like below:

```
CASE WHEN student.ID IS NULL
     THEN CASE WHEN student.Status = 0
               THEN '<BOLDSTART>rejected<BOLDEND>.' + '[LINEBREAK]' + REPLACE(student.FullName, '[NEXTLINE]', '')
               WHEN student.Status = 1
               THEN ' <BOLDSTART>accepted<BOLDEND>.'
          END
END
```
You don't need to check whether it contains the string. If it does not, `REPLACE` will simply return the original string:

```
CASE WHEN student.ID IS NULL
     THEN CASE WHEN student.Status = 0
               THEN '<BOLDSTART>rejected<BOLDEND>.' + '[LINEBREAK]' + REPLACE(student.FullName, '[NEXTLINE]', '')
               WHEN student.Status = 1
               THEN ' <BOLDSTART>accepted<BOLDEND>.'
          END
END
```
Is this what you want? Replace

```
+ student.FullName
```

with

```
+ CASE WHEN student.FullName LIKE '%[NEXTLINE]%'
       THEN REPLACE(student.FullName, '[NEXTLINE]', '')
       ELSE student.FullName
  END
```
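The key fact the first answer relies on — `REPLACE` is a no-op when the pattern is absent — is easy to verify; SQLite's `REPLACE` behaves the same way here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
hit, miss = conn.execute(
    "SELECT REPLACE('abc[NEXTLINE]def', '[NEXTLINE]', ''),"
    "       REPLACE('abcdef',           '[NEXTLINE]', '')"
).fetchone()

print(hit, miss)  # both 'abcdef'
```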
SQL : IF ELSE with THEN of SELECT CASE
[ "", "sql", "sql-server-2008", "" ]
I have a static dataset that correlates a range of numbers to some metadata, e.g.:

```
+--------+--------+---------+----------+-----------------+
| Min    | Max    | Country | CardType | Issuing Bank    |
+--------+--------+---------+----------+-----------------+
| 400011 | 400051 | USA     | VISA     | Bank of America |
| 400052 | 400062 | UK      | MAESTRO  | HSBC            |
+--------+--------+---------+----------+-----------------+
```

I wish to look up the data for some arbitrary single value:

```
SELECT * FROM SomeTable WHERE Min <= 400030 AND Max >= 400030
```

I have about 200k of these range mappings, and am wondering what the best table structure for SQL Server is. A composite key doesn't seem correct, given that most of the time the value being looked up will fall between the two range values stored on disk. Similarly, indexing only the first column doesn't seem selective enough. I know that 200k rows is fairly insignificant, and I can get by without doing much, but let's assume that the number of rows could be orders of magnitude greater.
If you *usually* search on both `min` and `max` then a compound key on `(min,max)` is appropriate. The engine will find all rows where `min` is less than X, then search within that result set to find the rows where `max` is greater than `Y`. The index would also be useful if you do searches on `min` only, but would not be applicable if you do searches only on `max`.
You can index the first number and then do the lookup like this:

```
select t.*,
       (select top 1 s.country
        from static s
        where t.num >= s.firstnum
        order by s.firstnum desc
       ) country
from sometable t;
```

Or use `outer apply`:

```
select t.*, s.country
from sometable t outer apply
     (select top 1 s.country
      from static s
      where t.num >= s.firstnum
      order by s.firstnum desc
     ) s
```

Note the descending sort: you want the largest `firstnum` that does not exceed `num`. This should take advantage of an index on `static(firstnum)` or `static(firstnum, country)`. This does not check against the second number. If that is important, use `outer apply` and do the check outside the subquery.
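One way to sanity-check the top-1 lookup pattern is with a tiny in-memory table (Python's sqlite3 here; the `lo`/`hi` column names are illustrative). The descending sort picks the largest lower bound that does not exceed the value; like the answer above, this sketch does not verify the upper bound:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ranges (lo INTEGER, hi INTEGER, country TEXT)")
cur.executemany("INSERT INTO ranges VALUES (?, ?, ?)",
                [(400011, 400051, 'USA'), (400052, 400062, 'UK')])

# largest lower bound not exceeding the lookup value 400055
row = cur.execute("""
    SELECT country FROM ranges
    WHERE lo <= 400055
    ORDER BY lo DESC
    LIMIT 1
""").fetchone()
print(row[0])  # UK
```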
SQL Server Primary Key for a range lookup
[ "sql", "sql-server" ]
I have 2 tables as follows:

```
1) passenger - with passenger_id, passenger_name and passenger_city
2) flight    - with flight_id, flight_name and passenger_id
```

The question is: list the passenger details with flight id for passengers who have travelled on more than one flight. (This function will display the passenger details with their flight ids for anyone who has travelled on more than one flight.) I used this query:

```
select *
from passenger_1038299
where passenger_id in (select passenger_id
                       from flight_1038299
                       group by passenger_id
                       having count(passenger_id) > 1);
```

but it doesn't give me the flight_ids. Please tell me how to retrieve the flight id as well. Thanks, and sorry for the basic question; I'm new to SQL.
Join the flight table to get the passenger's flights:

```
select *
from passenger_1038299 p
join flight_1038299 f on f.passenger_id = p.passenger_id
where p.passenger_id in (
    select passenger_id
    from flight_1038299
    group by passenger_id
    having count(passenger_id) > 1
);
```

I like to use `exists` to check for multiples. With an index on `passenger_id` it may run faster than the query above.

```
select *
from passenger_1038299 p
join flight_1038299 f on f.passenger_id = p.passenger_id
where exists (
    select 1 from flight_1038299 f2
    where f2.passenger_id = f.passenger_id
    and f2.flight_id <> f.flight_id
)
```

**Edit** Another way, using the `count` window function:

```
select *
from (
    select p.*, f.flight_id,
           count(*) over (partition by p.passenger_id) cnt
    from passenger_1038299 p
    join flight_1038299 f on f.passenger_id = p.passenger_id
) t
where cnt > 1
```
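A quick way to see the join-plus-HAVING shape in action, with made-up data in Python's sqlite3 (table and column names simplified from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE passenger (passenger_id INTEGER, passenger_name TEXT)")
cur.execute("CREATE TABLE flight (flight_id INTEGER, passenger_id INTEGER)")
cur.executemany("INSERT INTO passenger VALUES (?, ?)", [(1, 'Ann'), (2, 'Bob')])
cur.executemany("INSERT INTO flight VALUES (?, ?)", [(10, 1), (11, 1), (12, 2)])

# Ann flew twice, Bob once: only Ann's rows (with flight ids) come back
rows = cur.execute("""
    SELECT p.passenger_id, p.passenger_name, f.flight_id
    FROM passenger p
    JOIN flight f ON f.passenger_id = p.passenger_id
    WHERE p.passenger_id IN (
        SELECT passenger_id FROM flight
        GROUP BY passenger_id HAVING COUNT(*) > 1)
    ORDER BY f.flight_id
""").fetchall()
print(rows)  # [(1, 'Ann', 10), (1, 'Ann', 11)]
```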
Another way, using analytic functions:

```
SELECT *
FROM (
   SELECT p.*, f.flight_id,
          count(*) OVER (PARTITION BY f.passenger_id) As number_of_flights
   FROM passenger p
   JOIN flight f ON p.passenger_id = f.passenger_id
)
WHERE number_of_flights > 1
```

Demo: <http://sqlfiddle.com/#!4/dab21/11>
sql query- group by and then join
[ "sql", "oracle", "join", "group-by" ]
My table looks like this:

```
CREATE TABLE MyTable(
    Info varchar(50) null,
    Col1 int null,
    Col2 int null,
    Col3 int null,
    Col4 int null,
    Col5 int null
);
```

Is there a way to select all the columns but have any negative values come back as positive?
Without using \* you can use `ABS`: ``` SELECT Info, ABS(Col1) as Col1... FROM MyTable ```
You can't manipulate individual fields when you're querying `*`, you'd have to handle each field individually and apply [`abs`](https://msdn.microsoft.com/en-us/library/ms189800.aspx) to it: ``` SELECT info, ABS(col1), ABS(col2), ABS(col3), ABS(col4), ABS(col5) FROM mytable ```
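A minimal two-column sketch of the wrap-each-column approach (sqlite3 here; `ABS` behaves the same way in SQL Server), aliasing each wrapped column back to its original name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (Info TEXT, Col1 INTEGER, Col2 INTEGER)")
cur.execute("INSERT INTO MyTable VALUES ('a', -3, 7)")

# each numeric column wrapped in ABS individually
row = cur.execute(
    "SELECT Info, ABS(Col1) AS Col1, ABS(Col2) AS Col2 FROM MyTable").fetchone()
print(row)  # ('a', 3, 7)
```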
Select asterisk from table but if the value is negative turn into positive
[ "sql", "sql-server", "select" ]
Case: A table has a field with some XML code. ``` -- Some XML '<DTS:ConnectionManager DTS:refId="Package.ConnectionManagers[MTS]" DTS:CreationName="FLATFILE" DTS:DTSID="{296732CC-7D91-4E49-ACD4-384E03BC032E}" DTS:ObjectName="MTS"> <DTS:PropertyExpression DTS:Name="ConnectionString">@Something</DTS:PropertyExpression> <DTS:ObjectData> <DTS:ConnectionManager DTS:Format="Delimited" DTS:LocaleID="1033" DTS:HeaderRowDelimiter="_x000D__x000A_" DTS:ColumnNamesInFirstDataRow="True" DTS:RowDelimiter="" DTS:TextQualifier="_x0022_" DTS:CodePage="1252" DTS:ConnectionString="C:\Folder\\File.csv"> <DTS:FlatFileColumns> <DTS:FlatFileColumn DTS:ColumnType="Delimited" DTS:ColumnDelimiter="_x002C_" DTS:MaximumWidth="50" DTS:DataType="129" DTS:TextQualified="True" DTS:ObjectName="MC" DTS:DTSID="{E87E7707-B7F7-4EC6-A2CB-98AD637A3985}" DTS:CreationName="" /> <DTS:FlatFileColumn DTS:ColumnType="Delimited" DTS:ColumnDelimiter="_x002C_" DTS:DataType="6" DTS:TextQualified="True" DTS:ObjectName="PP" DTS:DTSID="{C7B97962-3B43-40C5-82B1-F6136906CD84}" DTS:CreationName="" /> </DTS:FlatFileColumns> </DTS:ConnectionManager> </DTS:ObjectData> </DTS:ConnectionManager>' -- Some more XML ``` Would like to pull out some information and store it as a tabular format. Desired output ``` CreationName ObjectName ConnectionString MaximumWidth DataType FieldName FLATFILE MTS C:\Folder\\File.csv 50 129 MC FLATFILE MTS C:\Folder\\File.csv NULL 6 PP ``` Explanation of connecting input with the output ``` CreationName - DTS:CreationName from DTS:ConnectionManager. i.e. FLATFILE ObjectName - DTS:ObjectName from DTS:ConnectionManager. i.e. MTS ConnectionString - DTS:ConnectionString from DTS:ObjectData\DTS:ConnectionManager. i.e. "C:\Folder\\File.csv" MaximumWidth - DTS:MaximumWidth from DTS:FlatFileColumns i.e. 50 -- NOTE: MaximumWidth might not always exist DataType - DTS:DataType from DTS:FlatFileColumns i.e. 129 FieldName - DTS:ObjectName from DTS:FlatFileColumns i.e. 
MC ``` Don't really have much experience with XML in SQL Server. (I'll be doing some of my own playing around and post it here if I get somewhere meaningful. :) ) UPDATED XML Example ``` <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts" DTS:refId="P" DTS:CreationDate="10/01/2015 12:00:00"> <DTS:ConnectionManagers> <DTS:ConnectionManager DTS:refId="Package.ConnectionManagers[FF]" DTS:CreationName="FLATFILE" DTS:DTSID="{123}" DTS:ObjectName="FF"> <DTS:ObjectData> <DTS:ConnectionManager DTS:Format="Delimited" DTS:LocaleID="1033" DTS:HeaderRowDelimiter="_x000D__x000A_" DTS:ColumnNamesInFirstDataRow="True" DTS:RowDelimiter="" DTS:TextQualifier="_x0022_" DTS:CodePage="1252" DTS:ConnectionString="Test.csv"> <DTS:FlatFileColumns> <DTS:FlatFileColumn DTS:ColumnType="Delimited" DTS:ColumnDelimiter="_x002C_" DTS:DataType="11" DTS:TextQualified="True" DTS:ObjectName="TestCN" DTS:DTSID="{012}" DTS:CreationName="" /> </DTS:FlatFileColumns> </DTS:ConnectionManager> </DTS:ObjectData> </DTS:ConnectionManager> <DTS:ConnectionManager DTS:refId="Package.ConnectionManagers[FF2]" DTS:CreationName="FLATFILE" DTS:DTSID="{123}" DTS:ObjectName="FF2"> <DTS:ObjectData> <DTS:ConnectionManager DTS:Format="Delimited" DTS:LocaleID="1033" DTS:HeaderRowDelimiter="_x000D__x000A_" DTS:ColumnNamesInFirstDataRow="True" DTS:RowDelimiter="" DTS:TextQualifier="_x0022_" DTS:CodePage="1252" DTS:ConnectionString="Test2.csv"> <DTS:FlatFileColumns> <DTS:FlatFileColumn DTS:ColumnType="Delimited" DTS:ColumnDelimiter="_x002C_" DTS:DataType="11" DTS:TextQualified="True" DTS:ObjectName="TestCN2" DTS:DTSID="{012}" DTS:CreationName="" /> </DTS:FlatFileColumns> </DTS:ConnectionManager> </DTS:ObjectData> </DTS:ConnectionManager> </DTS:ConnectionManagers> </DTS:Executable> ```
You are not declaring your namespace in your root element so I substituted that. This should be self extracting and run in anything I am guessing 2008 and higher, though I wrote it in 2014. Just pop it into SQL Server Management Studio: UPDATED 1:45 PM PST: Thanks to Shnugo for the simplification of the 'With XMLNamespaces'. ``` DECLARE @XML XML = ' <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts" DTS:refId="P" DTS:CreationDate="10/01/2015 12:00:00"> <DTS:ConnectionManagers> <DTS:ConnectionManager DTS:refId="Package.ConnectionManagers[FF]" DTS:CreationName="FLATFILE" DTS:DTSID="{123}" DTS:ObjectName="FF"> <DTS:ObjectData> <DTS:ConnectionManager DTS:Format="Delimited" DTS:LocaleID="1033" DTS:HeaderRowDelimiter="_x000D__x000A_" DTS:ColumnNamesInFirstDataRow="True" DTS:RowDelimiter="" DTS:TextQualifier="_x0022_" DTS:CodePage="1252" DTS:ConnectionString="Test.csv"> <DTS:FlatFileColumns> <DTS:FlatFileColumn DTS:ColumnType="Delimited" DTS:ColumnDelimiter="_x002C_" DTS:DataType="11" DTS:TextQualified="True" DTS:ObjectName="TestCN" DTS:DTSID="{012}" DTS:CreationName="" /> </DTS:FlatFileColumns> </DTS:ConnectionManager> </DTS:ObjectData> </DTS:ConnectionManager> </DTS:ConnectionManagers> </DTS:Executable>' ; WITH XMLNAMESPACES (N'www.microsoft.com/SqlServer/Dts' as DTS ) SELECT y.vals.query('.') AS NodesAsExtracted , x.vals.value('@DTS:CreationName', 'Varchar(255)') AS CreationName , x.vals.value('@DTS:ObjectName', 'Varchar(255)') AS ObjectName , y.vals.value('@DTS:ConnectionString', 'Varchar(255)') AS ConnectionString , x.vals.value('@DTS:ColumnType', 'Varchar(255)') AS ColumnType , x.vals.value('@DTS:MaximumWidth', 'Varchar(255)') AS MaximumWidth FROM @XML.nodes('/DTS:Executable/DTS:ConnectionManagers/DTS:ConnectionManager/DTS:ObjectData/DTS:ConnectionManager') AS y(vals) CROSS APPLY @XML.nodes('/DTS:Executable/DTS:ConnectionManagers/DTS:ConnectionManager/DTS:ObjectData/DTS:ConnectionManager/DTS:FlatFileColumns/DTS:FlatFileColumn') AS x(vals) /* The key piece 
is that you are extracting data within a namespace, which makes querying harder. You need to repeat certain nodes, so there is a syntax for that (called, originally enough, 'nodes') that breaks a 3D object like XML into multiple rowsets. I do one for the higher level and one for the lower, then CROSS APPLY them (which really is a whole world unto itself that I won't get into here). Each is represented as a parent alias ('x' or 'y') plus the values found ('vals'). The first column shows an example: query('.') returns, in essence, everything under that node. The namespace declaration must match the namespace that actually exists in the XML.

more on nodes https://msdn.microsoft.com/en-us/library/ms188282.aspx
more on query https://msdn.microsoft.com/en-us/library/ms191474.aspx
more on value https://msdn.microsoft.com/en-us/library/ms178030.aspx
*/ 
```
This is an enhancement to djangojazz's answer. Don't accept this, it's just a copy (but you may vote it up if you like it ;-) ... By using WITH XMLNAMESPACES you can avoid declaring the namespace multiple times:

```
WITH XMLNAMESPACES (N'http://DTS' as DTS )
SELECT 
    x.vals.query('.') AS NodesAsExtracted
,   x.vals.value('@DTS:CreationName', 'Varchar(255)') AS CreationName
,   x.vals.value('@DTS:ObjectName', 'Varchar(255)') AS ObjectName
,   y.vals.value('@DTS:ConnectionString', 'Varchar(255)') AS ConnectionString
,   x.vals.value('@DTS:ColumnType', 'Varchar(255)') AS ColumnType
,   x.vals.value('@DTS:MaximumWidth', 'Varchar(255)') AS MaximumWidth
from @XML.nodes('/DTS:ConnectionManager/DTS:ObjectData/DTS:ConnectionManager/DTS:FlatFileColumns/DTS:FlatFileColumn') AS x(vals)
CROSS APPLY @XML.nodes('/DTS:ConnectionManager/DTS:ObjectData/DTS:ConnectionManager') AS y(vals)
```
T-SQL - XML to SQL Server Table
[ "sql", "sql-server", "xml", "t-sql", "sql-server-2012" ]
I am not sure which method is better:

```
con.createStatement().execute("...");
```

OR

```
Statement st = con.createStatement();
st.execute("...");
st.close();
```

When using the first method, will the statement that was created be closed, or will it remain open? I am sure that in the second method the statement will be closed, but what about the first one? Does anybody know?
Both `con.createStatement().execute("...");` and `Statement st = con.createStatement(); st.execute("...");` do the same thing. However, with the first method you have no reference with which to close the statement. Closing the `ResultSet` or `Statement` is good practice; unclosed `Statement`s can cause additional resource problems. You can close the `Connection` instead, but that is not an option when using a connection pool. Additionally, the second method is what you need when using a `PreparedStatement`, because you have to set its parameters before executing it.
Neither is good, although the second alternative is better than the first. You should always close a statement when you are done with it (not sooner, but definitely not never). Your second alternative attempts to close the statement, but will actually fail to do so if `execute` throws an exception. To have a guaranteed close of your statement, you can use [try-with-resource](http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) (introduced in Java 7):

```
try (Statement statement = connection.createStatement()) {
    statement.execute();
    // Other things you want to do with statement...
}
```

If you are unfortunate enough to be stuck on Java 6 or earlier, you need to use a try/finally:

```
Statement statement = connection.createStatement();
try {
    statement.execute();
    // Other things you want to do with statement...
} finally {
    statement.close();
}
```
Java closing Statement
[ "sql", "jdbc", "execute" ]
I have the following table:

```
TableA
 x    y
----------------------
 3    1
 2    5
 5    2
```

Now I want to update the value in column x where the value in column y is lowest, something like this:

```
UPDATE TableA
SET x = 1
WHERE y = min(y);
```

Then the first row should be x = 1 and y = 1. But I get the following error code:

```
Error Code: 1111. Invalid use of group function
```

How can I fix that?
If `y` is unique, then you can use `order by` and `limit`: ``` UPDATE TableA SET x = 1 ORDER BY Y ASC LIMIT 1; ``` If you have multiple rows with the minimum value, you can use `join`: ``` UPDATE TableA a JOIN (SELECT MIN(Y) as miny FROM TableA) m ON a.y = m.miny SET x = 1; ```
You can write your query as follows:

```
UPDATE TableA
SET x = 1
WHERE y IN (select min(y) from TableA T2);
```
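The subquery form is also the most portable. A quick run in Python's sqlite3, which accepts the direct subquery (note that some MySQL versions refuse to select from the table being updated and need the subquery wrapped in a derived table, as in the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TableA (x INTEGER, y INTEGER)")
cur.executemany("INSERT INTO TableA VALUES (?, ?)", [(3, 1), (2, 5), (5, 2)])

# set x = 1 on the row(s) holding the minimum y; ties are all updated
cur.execute("UPDATE TableA SET x = 1 WHERE y = (SELECT MIN(y) FROM TableA)")

rows = cur.execute("SELECT x, y FROM TableA ORDER BY y").fetchall()
print(rows)  # [(1, 1), (5, 2), (2, 5)]
```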
MySQL - Update X where Y is lowest
[ "mysql", "sql", "database" ]
I want to show a list of chat messages. For this I have a `SELECT` statement: ``` SELECT id, `from`, sent, message, recd FROM chat WHERE id IN( SELECT MAX(id) FROM chat WHERE `to` = ? GROUP BY `from` ) ORDER BY id DESC ``` The problem is, I want to select messages that `to` = maria and `from` = maria conditionally. That is, if `to` = maria I want to group by `from`, and if `from` = maria, I want to group by `to`. How can I change this `GROUP BY` dynamically?
I think all the proposed queries are fine - meaning they will return the expected results (one row for every different user that has sent or received a message from `'maria'`) - but efficiency may not be great. MySQL does not optimize well queries with `WHERE column IN (complex subquery)`. The optimizer has some improvements in 5.6 and 5.7 version but still it's much better to help it in getting a better plan and using indexes. Your question and comments are confusing though. it's not clear if you want one row per different user (recipient, sender) or just one row in the results. If you want the second, my suggestion is to add two indexes, on `(to)` and `(from)` if you haven't already (I assume that the table is InnoDB and the primary key is `(id)` so the `(to)` and `(from)` indexes are equivalent to `(to, id)` and `(from, id)` indexes, which are what you really need for the query.) and use the following query: ``` ( SELECT id, `from`, `to`, sent, message, recd FROM chat WHERE `to` = ? ORDER BY id DESC LIMIT 1 ) UNION ALL ( SELECT id, `from`, `to`, sent, message, recd FROM chat WHERE `from` = ? ORDER BY id DESC LIMIT 1 ) ORDER BY id DESC LIMIT 1 ; ``` which is equivalent (thank you @Thorsten Kettner) to: ``` SELECT id, `from`, `to`, sent, message, recd FROM chat WHERE ? IN (`to`, `from`) ORDER BY id DESC LIMIT 1 ; ``` Try this last version, too, and if it is equally efficient, prefer it. There is no reason to complicate the query, unless the efficiency is not good. With the given indexes, this last version will often use both of them, and the **[index merge union](https://dev.mysql.com/doc/refman/5.5/en/index-merge-optimization.html)** algorithm.
I would go for a UNION.

```
(SELECT id, 'from' as direction, m_from AS fromto, sent, message, recd
 FROM chat
 WHERE id IN (SELECT MAX(id) FROM chat WHERE to = 'Maria' GROUP BY m_from))
UNION
(SELECT id, 'to' as direction, m_to AS fromto, sent, message, recd
 FROM chat
 WHERE id IN (SELECT MAX(id) FROM chat WHERE m_from = 'Maria' GROUP BY to))
```

If you select MAX(id), an ORDER BY id statement is not necessary. You can add extra fields in the SELECT clause as I did with 'direction'.
GROUP BY different column conditionally
[ "mysql", "sql" ]
I have two tables: ``` Cust Sales Week 123 4 1/8/2015 123 3 1/22/2015 234 4 1/1/2015 ``` . ``` Week 1/1/2015 1/8/2015 1/15/2015 1/22/2015 ``` I want to combine them so that every Cust has every date and where there are no Sales it is filled with 0. ``` Cust Sales Week 123 4 1/1/2015 123 0 1/8/2015 123 0 1/15/2015 123 3 1/22/2015 234 4 1/1/2015 234 0 1/8/2015 234 0 1/15/2015 234 0 1/22/2015 ``` Is there a way I can 'select distinct(Cust)' and join them somehow?
First, generate the rows you want using a `cross join`. Then bring in the data you want using a `left join`: ``` select c.cust, w.week, coalesce(t.sales, 0) as sales from weeks w cross join (select distinct cust from t) c left join t on t.cust = c.cust and t.week = w.week; ```
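The cross-join / left-join / COALESCE pattern is easy to try out with a couple of customers and weeks; a sketch in Python's sqlite3 with simplified week labels:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE weeks (week TEXT)")
cur.execute("CREATE TABLE sales (cust INTEGER, amount INTEGER, week TEXT)")
cur.executemany("INSERT INTO weeks VALUES (?)", [('w1',), ('w2',)])
cur.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(123, 4, 'w1'), (234, 4, 'w2')])

# every cust x week combination, with missing sales filled as 0
rows = cur.execute("""
    SELECT c.cust, w.week, COALESCE(s.amount, 0) AS amount
    FROM weeks w
    CROSS JOIN (SELECT DISTINCT cust FROM sales) c
    LEFT JOIN sales s ON s.cust = c.cust AND s.week = w.week
    ORDER BY c.cust, w.week
""").fetchall()
print(rows)
# [(123, 'w1', 4), (123, 'w2', 0), (234, 'w1', 0), (234, 'w2', 4)]
```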
You can `left join` on the dates table and use `isnull` on the sales column. Use an equivalent of `isnull` in Netezza. ``` select t1.cust, isnull(t1.sales,0), t2.week from daystable2 t2 left join salestable1 t1 on t1.week = t2.week ```
Zero Fill Data for Missing Dates
[ "sql", "netezza" ]
Here's my table: ``` +----------+--------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+-------+ | CLID | int(11) | NO | MUL | NULL | | | calldate | datetime | YES | MUL | NULL | | | src | varchar(20) | YES | | NULL | | | dst | varchar(20) | YES | | NULL | | | billsec | varchar(10) | YES | | NULL | | | duration | time | YES | | NULL | | | debt | float | YES | MUL | NULL | | | region | varchar(255) | YES | | NULL | | | uniqueid | varchar(20) | NO | PRI | NULL | | +----------+--------------+------+-----+---------+-------+ ``` It has over 490,328 records. I have to update every single record in this table. Here's the query (I'm using mysqli in PHP): ``` UPDATE `All` SET `debt` = '$debt', `region` = '$region' WHERE `uniqueid` = $uid; ``` Unfortunately executing a single query takes ~1.4519529342651 seconds. Is there any way to speed it up?
1.45 seconds to update a single record is very bad performance, and highly unexpected. Your query is intending to update a single row using a primary key. You shouldn't notice the amount of time. However, the engine cannot use the primary key index for this query, because of the way the condition is expressed. Let's look at the query in more detail:

```
UPDATE `All`
    SET `debt` = '$debt', `region` = '$region'
    WHERE `uniqueid` = $uid;
-------------------^
```

Unless `$uid` has quotes *included* in the value, then it is being treated as something other than a string. So, if you are passing in "1", then you have an integer compared to a string. And this can prevent the use of indexes. Oops. My first suggestion is to fix your query by using parameters. Barring that, put single quotes in:

```
UPDATE `All`
    SET `debt` = '$debt', `region` = '$region'
    WHERE `uniqueid` = '$uid';
```

If `uniqueid` actually looks like a number, consider storing it as a number rather than a string. Of course, other things could be going wrong, such as:

* Other queries could be locking the table.
* You could be testing the query from a cold start, and all subsequent queries will be blazing fast.
* The table could have triggers that are painfully slow.
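The parameter suggestion can be sketched with Python's DB-API, the analogue of mysqli prepared statements in PHP (table layout follows the question; values are made up). Bound placeholders keep types intact and quote the value correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# "All" is a keyword in some dialects, hence the quoted identifier
cur.execute('CREATE TABLE "All" (uniqueid TEXT PRIMARY KEY, debt REAL, region TEXT)')
cur.execute('INSERT INTO "All" VALUES (?, ?, ?)', ('u1', 0.0, ''))

# placeholders instead of string interpolation
cur.execute('UPDATE "All" SET debt = ?, region = ? WHERE uniqueid = ?',
            (12.5, 'north', 'u1'))

row = cur.execute('SELECT debt, region FROM "All" WHERE uniqueid = ?',
                  ('u1',)).fetchone()
print(row)  # (12.5, 'north')
```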
Edit: Before doing what I suggest, I would definitely give Gordon Linoff's solution a try, as it makes a lot of sense (missing quotes around the value passed in the where clause). Is your MySQL database distant from your PHP server? That could explain the poor performance. If you get better performance when running the query directly in the phpMyAdmin interface, I would try sending multiple updates at the same time (see <http://php.net/manual/en/mysqli.quickstart.multiple-statement.php>). Like this (this is not actual code, it's just to give you an idea of what I mean here):

```
$i = 0;
$query = '';
while (yourcondition) {
    // set your $debt, $region and $uid variables here, then append the
    // new statement to the accumulated $query string
    $query .= "UPDATE `All` SET `debt` = '$debt', `region` = '$region' WHERE `uniqueid` = '$uid'; ";
    $i++;
    if ($i >= 50) {
        // execute the batch of updates, then reset $query and $i
        $mysqli->multi_query($query);
        $query = '';
        $i = 0;
    }
}
if ($query !== '') {
    // flush any remaining statements
    $mysqli->multi_query($query);
}
```

Start with batches of 50 and see if it improves performance; if it does, increase the batch size.
Speeding up a simple UPDATE query on a huge table
[ "mysql", "sql", "optimization", "query-optimization" ]
Let's say I have 2 tables like this : **Job Offers:** ``` +----+------------+------------+ | ID | Name | Categories | +----+------------+------------+ | 1 | Programmer | 1,2 | | 2 | Analyst | 3 | +----+------------+------------+ ``` **Categories:** ``` +----+-----------------+ | ID | Name | +----+-----------------+ | 1 | Programming | | 2 | Web Programming | | 3 | Analysis | +----+-----------------+ ``` We've got a string split that takes a string, a delimiter and returns a table, my problem is I'm really not sure how to integrate the table in my query to join the job offers and the categories. My thinking is that it would be something like this : ``` SELECT O.[ID] AS OfferID, O.[Name] AS OfferName, CAT.[CategoryName] AS CategoryName, CAT.[CategoryID] AS CategoryID FROM JobOffers AS O LEFT JOIN ( SELECT O.[ID] AS OfferID, C.[CategoryID] AS CategoryID, C.[Name] AS Name FROM ( SELECT * FROM [dbo].[Split](O.[Categories], ',') ) AS CJ LEFT JOIN [Categories] AS C ON C.CategoryID = CJ.items ) AS CAT ON CAT.OfferID = O.[ID] ``` Currently I have two errors saying: * `multi-part identifier O.[ID] cannot be bound` * `multi-part identifier O.[Categories] cannot be bound` * `incorrect syntax near AS` (last line) So clearly the problem is how I construct my subquery.
You can greatly simplify this to something like this. ``` SELECT O.[ID] AS OfferID, O.[Name] AS OfferName, c.[CategoryName] AS CategoryName, c.[CategoryID] AS CategoryID FROM JobOffers AS O outer apply [dbo].[Split](O.[Categories], ',') s left join Categories as C on c.CategoryID = s.Items ``` The concern I have is your splitter. If there is more than a single select statement the performance is going to suffer horribly. For a good explanation of various splitters available you can visit this article. <http://sqlperformance.com/2012/07/t-sql-queries/split-strings>
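On engines without a `Split` function or `STRING_SPLIT`, the same normalization can be sketched with a recursive CTE. Here is that approach in Python's sqlite3, using the question's sample data (the `split` CTE peels one comma-delimited item off per recursion step):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE offers (id INTEGER, name TEXT, categories TEXT)")
cur.execute("CREATE TABLE categories (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO offers VALUES (?, ?, ?)",
                [(1, 'Programmer', '1,2'), (2, 'Analyst', '3')])
cur.executemany("INSERT INTO categories VALUES (?, ?)",
                [(1, 'Programming'), (2, 'Web Programming'), (3, 'Analysis')])

rows = cur.execute("""
    WITH RECURSIVE split(id, item, rest) AS (
        SELECT id, '', categories || ',' FROM offers
        UNION ALL
        SELECT id,
               substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT o.name, c.name
    FROM split s
    JOIN offers o ON o.id = s.id
    JOIN categories c ON c.id = CAST(s.item AS INTEGER)
    WHERE s.item <> ''
    ORDER BY o.id, c.id
""").fetchall()
print(rows)
# [('Programmer', 'Programming'), ('Programmer', 'Web Programming'),
#  ('Analyst', 'Analysis')]
```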
From SQL Server 2016 we can use the built-in function STRING_SPLIT, as below:

```
SELECT * 
FROM JobOffers as j  
outer apply STRING_SPLIT(j.[Categories], ',') s 
left join dbo.Categories as c on c.CategoryID = s.value
```
String split column and join to another table
[ "sql", "sql-server" ]
I have two tables in SQL. The first one is the patient list; the second one holds their reports. All patients' reports are in the report table, and we can join the tables on the id. Each patient has several reports (not every field of a record may be filled). Now I want to make a report that gets the last report of each patient, but if some fields are empty in that patient's last record, I should fill them from the most recent record in which they were filled. I have a date column in the reports table. I want to do this for all patients. Here is a picture for one patient as an example: [![enter image description here](https://i.stack.imgur.com/rruF0.jpg)](https://i.stack.imgur.com/rruF0.jpg) In the example above, I want just the highlighted values for this patient in the report. I have written this query, but it returns null for a field in the last record even when that field has data in previous records.

```
SELECT patient.bartar_id,patient.bartar_enteringthesystem,patient.bartar_proviencename,
patient.bartar_cityname,patient.bartar_coloplastrepname,patient.bartar_consultorname,
patient.bartar_provienceofsurgeryname,patient.bartar_cityofsurgeryname,
patient.bartar_surgeryhospitalname,patient.bartar_doctor,patient.bartar_patientstatusname,
patient.bartar_ostomytypename, patient.bartar_ostomytimename,
r.bartar_date,r.bartar_delay,r.bartar_nextcall,r.new_newcaller,
r.bartar_brandname,r.bartar_pastename,r.bartar_bagname,r.bartar_accname,
r.bartar_pastepermonth,r.bartar_bagepermonth,r.bartar_insuranceinfo,
patient.bartar_deathhealeddate,patient.bartar_dateofseurgery
FROM [Bartar_MSCRM].[dbo].[Filteredbartar_newpaitient] as patient
JOIN (SELECT r.*,
             row_number() over (partition by r.bartar_patientname order by r.bartar_date desc) as seqnum
      FROM [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] as r
      where r.bartar_delay is not null
     ) r
ON r.bartar_patientname = patient.bartar_newpaitientid and seqnum = 1
ORDER BY patient.bartar_id DESC;
```

patient Table [![enter image description here](https://i.stack.imgur.com/8j0uM.jpg)](https://i.stack.imgur.com/8j0uM.jpg) Report Table [![enter image description here](https://i.stack.imgur.com/QoQBE.jpg)](https://i.stack.imgur.com/QoQBE.jpg) Join [![enter image description here](https://i.stack.imgur.com/j1EWm.jpg)](https://i.stack.imgur.com/j1EWm.jpg) Final Report (what I want) [![enter image description here](https://i.stack.imgur.com/TKDo1.jpg)](https://i.stack.imgur.com/TKDo1.jpg)
This is a sample; in your case you have to get the value of each column in a subquery (either in the join statement or in the main select statement). Example:

```
inner join 
(
    select distinct bartar_patientname
        ,(select top 1 bartar_pastePerMonth 
          from [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] c2 
          where c2.bartar_patientname = cte.bartar_patientname and c2.bartar_pastePerMonth is not null 
          order by c2.bartar_date desc) as bartar_pastePerMonth
        ,(select top 1 bartar_acc 
          from [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] c2 
          where c2.bartar_patientname = cte.bartar_patientname and c2.bartar_acc is not null 
          order by c2.bartar_date desc) as bartar_acc
        ,(select top 1 bartar_insuranceinfo 
          from [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] c2 
          where c2.bartar_patientname = cte.bartar_patientname and c2.bartar_insuranceinfo is not null 
          order by c2.bartar_date desc) as bartar_insuranceinfo
        ,(select top 1 bartar_brand 
          from [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] c2 
          where c2.bartar_patientname = cte.bartar_patientname and c2.bartar_brand is not null 
          order by c2.bartar_date desc) as bartar_brand
    from [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] cte
) r
```

Again, this is a sample of the solution.
Your script is fine as it looks, so I'll just place that on a temporary table for now, and do a per sequence query and filter it by "OR" afterwards. Please try the script below. ``` SELECT patient.bartar_id,patient.bartar_enteringthesystem,patient.bartar_proviencename, patient.bartar_cityname,patient.bartar_coloplastrepname,patient.bartar_consultorname, patient.bartar_provienceofsurgeryname,patient.bartar_cityofsurgeryname, patient.bartar_surgeryhospitalname,patient.bartar_doctor,patient.bartar_patientstatusname, patient.bartar_ostomytypename, patient.bartar_ostomytimename, r.bartar_date,r.bartar_delay,r.bartar_nextcall,r.new_newcaller, r.bartar_brandname,r.bartar_pastename,r.bartar_bagname,r.bartar_accname, r.bartar_pastepermonth,r.bartar_bagepermonth,r.bartar_insuranceinfo, patient.bartar_deathhealeddate,patient.bartar_dateofseurgery , ROW_NUMBER() OVER (PARTITION BY r.bartar_newpaitientid, r.bartar_pastepermonth ORDER BY r.bartar_date DESC) AS bartarpaste_sequence , ROW_NUMBER() OVER (PARTITION BY r.bartar_newpaitientid, r.bartar_acc ORDER BY r.bartar_date DESC) AS bartaracc_sequence , ROW_NUMBER() OVER (PARTITION BY r.bartar_newpaitientid, r.bartar_insuranceinfo ORDER BY r.bartar_date DESC) AS bartarins_sequence , ROW_NUMBER() OVER (PARTITION BY r.bartar_newpaitientid, r.bartar_brandname ORDER BY r.bartar_date DESC) AS bartarbrd_sequence INTO #tmpPatientReport FROM [Bartar_MSCRM].[dbo].[Filteredbartar_newpaitient] as patient JOIN (SELECT r.*, row_number() over (partition by r.bartar_patientname order by r.bartar_date desc) as seqnum FROM [Bartar_MSCRM].[dbo].[Filteredbartar_callcenterreport] as r where r.bartar_delay is not null ) r ON r.bartar_patientname = patient.bartar_newpaitientid and seqnum = 1 ORDER BY patient.bartar_id DESC; SELECT * FROM #tmpPatientReport WHERE bartarpaste_sequence = 1 OR bartaracc_sequence = 1 OR bartarins_sequence = 1 OR bartarbrd_sequence = 1 ```
Sql query with join and group by and
[ "sql", "sql-server", "sql-server-2008" ]
I have written a query that gives me a list of data, and I need to select the top 5 pieces of data. For example:

```
Num  Name
5    a
4    b
4    c
2    d
1    e
1    f
1    g
0    h
```

However, if I simply use LIMIT 5, this leaves out the data points with names f and g. How would I be able to select all of a-h? This is just example data; my actual table contains many more rows, so I can't simply exclude the bottom row. EDIT: sorry, when I said top 5 I meant the top 5 Num values, i.e. 5, 4, 2, 1, 0; but 1 has duplicates, and I want to select all of those duplicates.
I think you need a query like this: ``` SELECT * FROM ( SELECT t1.Num, t1.Name, COUNT(DISTINCT t2.Num) AS seq FROM yourTable t1 LEFT JOIN yourTable t2 ON t1.Num <= t2.Num GROUP BY t1.Num, t1.Name) dt WHERE (seq <= 5); ``` [[SQL Fiddle Demo]](http://www.sqlfiddle.com/#!9/eba96/1)
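The self-join computes a dense rank (`seq`) per distinct `Num` without window functions. With the question's data a top-5 cut keeps every row (there are only five distinct values), so this sketch in Python's sqlite3 uses a top-3 cut to make the filtering visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE scores (num INTEGER, name TEXT)")
cur.executemany("INSERT INTO scores VALUES (?, ?)",
                [(5, 'a'), (4, 'b'), (4, 'c'), (2, 'd'),
                 (1, 'e'), (1, 'f'), (1, 'g'), (0, 'h')])

# seq = number of distinct num values >= this row's num (a dense rank)
rows = cur.execute("""
    SELECT num, name FROM (
        SELECT t1.num, t1.name, COUNT(DISTINCT t2.num) AS seq
        FROM scores t1
        LEFT JOIN scores t2 ON t1.num <= t2.num
        GROUP BY t1.num, t1.name
    ) dt
    WHERE seq <= 3
    ORDER BY num DESC, name
""").fetchall()
print(rows)  # [(5, 'a'), (4, 'b'), (4, 'c'), (2, 'd')]
```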
You can do this by adding a field with an incremental rank number within your SQL logic, as follows:

```
Feeds  Num  Name
1      5    a
2      4    b
2      4    c
3      2    d
4      1    e
4      1    f
4      1    g
5      0    h
```

and then limiting the result by the required rank (in your case 5). The following is the SQL for your reference:

```
SELECT num, name from
(
  SELECT @row_number:=CASE WHEN @num=num THEN @row_number ELSE @row_number+1 END AS feeds,@num:=num AS num, name
  FROM table1, (SELECT @row_number:=0,@num:='') AS t
  ORDER BY num desc
)t1
WHERE feeds <= 5
```

[SQL-fiddle link](http://sqlfiddle.com/#!9/5e66c7/20)
Selecting the top 5 in a column with duplicates
[ "mysql", "sql" ]
In a table I have columns A and B. I want to update A using the value of B and then update B to a new value. This has to be done atomically. I am trying something like this:

```
-- Initially A = 1, B = 2
UPDATE T SET A = B, B = 10 WHERE ID = 1;
-- Now A = 2, B = 10
```

Though this works, I am unable to find documentation which guarantees that A = B is evaluated first and B = 10 is evaluated later. I looked through the [Oracle SQL reference for the UPDATE statement](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10008.htm).
You can find this in the SQL standard, which defines the general rules. Oracle certainly conforms to this standard. See here - SQL 92: <http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt> Page 393, chapter "13.9 <update statement: positioned>", point 6)

> 6) **The <value expression>s are effectively evaluated before updating the object row.** If a <value expression> contains a reference to a column of T, then the reference is to the value of that column in the object row before any value of the object row is updated.

Consider the general update syntax:

```
UPDATE ....
SET <object column 1> = <value expression 1>,
    <object column 2> = <value expression 2>,
    ......
    <object column N> = <value expression N>;
```

Rule #6 says that all expressions on the right side are evaluated first, before updating any column in the row. Only the old row's values (before the update) are considered while evaluating the expressions.
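The rule is easy to observe in a conforming engine (sqlite follows it too; MySQL notably does not): a simultaneous swap works without a temporary column because both right-hand sides see the pre-update row. A minimal sketch in Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
cur.execute("INSERT INTO t VALUES (1, 2)")

# both expressions are evaluated against the old row, so this swaps a and b
cur.execute("UPDATE t SET a = b, b = a")

row = cur.execute("SELECT a, b FROM t").fetchone()
print(row)  # (2, 1)
```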
In an RDBMS (unlike in a programming language) there's no order of evaluation; it's all done at once. It's as if you first set variables to the previous values and then use those variables:

```
SET a=b, b=a
```

simply switches `a` and `b`. Warning: only MySQL gets this totally wrong, leaving both set to the same `b` value; there you'll need a temp variable, like:

```
SET temp=b, b=a, a = temp
```
Order of evaluation of expression in the update-set-clause in oracle database
[ "sql", "oracle" ]
I have a stored procedure that creates quite a few temp tables in memory. I have the following query which takes an extremely long time to run (7 minutes). ``` select a.DEPT, a.DIV, a.PART, convert(datetime,convert(varchar(2),datepart("mm",a.Release_Date))+'/1/'+ convert(varchar(4),datepart("yyyy",a.Release_Date)),101) as rptng_mnth from @tmpReportData3 a where not exists (select distinct DEPT,DIV,PART from @tmpReportData4 b where a.DEPT = b.DEPT and a.DIV = b.DIV and a.PART = b.PART) order by rptng_mnth ``` Is there a way to speed this up?
This is your query, with the unnecessary `select distinct` removed from the subquery: ``` select a.DEPT, a.DIV, a.PART, convert(datetime,convert(varchar(2),datepart("mm",a.Release_Date))+'/1/'+ convert(varchar(4),datepart("yyyy",a.Release_Date)),101) as rptng_mnth from @tmpReportData3 a where not exists (select DEPT, DIV, PART from @tmpReportData4 b where a.DEPT = b.DEPT and a.DIV = b.DIV and a.PART = b.PART ) order by rptng_mnth; ``` Your performance problem is probably caused by the `not exists`. Writing the query using `left join` might provide some benefit. But, the easiest approach is to switch from using a table variable to a temporary table, `#tmpReportData4`. Then add an index on the temporary table: `#tmpReportData4(dept, div, part)`.
A good start would be to change the `not exists` to a left join. You might also consider using "#" (rather than "@") temp tables, because you can index #-tables. Can you include the complete stored procedure? ``` select a.DEPT ,a.DIV ,a.PART ,convert(datetime,convert(varchar(2),datepart("mm",a.Release_Date))+'/1/'+ convert(varchar(4),datepart("yyyy",a.Release_Date)),101) as rptng_mnth from @tmpReportData3 a left join @tmpReportData4 b on b.dept = a.dept and a.div = b.div and a.part = b.part where b.dept is null order by rptng_mnth ```
Need Help Speeding Up a SQL Server Query
[ "sql", "sql-server", "stored-procedures" ]
I am using `SQL Server 2014`. I have a table with a column `BookNo` of `datatype int`. This column contains the data below ``` |BookNo| 1 2 3 4 5 10 12 13 25 26 27 28 ``` I want to get the consecutive number ranges with a SQL query. From the above data my output should be like ``` 1 to 5 10 to 13 25 to 28 ``` Any help...
I think you can use a query like this: ``` SELECT BookNo, ISNULL(LEAD(prev) OVER (ORDER BY BookNo) , (SELECT MAX(BookNo) FROM yourTable)) As toCon FROM ( SELECT *, LAG(BookNo) OVER (ORDER BY BookNo) prev, BookNo - LAG(BookNo) OVER (ORDER BY BookNo) diff FROM yourTable) dt WHERE (ISNULL(diff, 0) <> 1); ``` [[SQL Fiddle Demo]](http://www.sqlfiddle.com/#!18/f4a29/1)
Another solution based on *Windowed Aggregate Functions* which also runs on versions below SS2014 (and should perform better than the `LAG`/`LEAD`): ``` SELECT MIN(BookNo) AS BookNoFrom, MAX(BookNo) AS BookNoTo FROM ( SELECT BookNo, BookNo - ROW_NUMBER() OVER (ORDER BY BookNo) AS dummy FROM yourTable ) dt GROUP BY dummy ``` See [Fiddle](http://sqlfiddle.com/#!6/bf321/1) The `dummy` calculation is based on the fact that both `BookNo` and `ROW_NUMBER` are sequential numbers, but there might be gaps in the `BookNo`. For consecutive `BookNo` the difference is always the same, when there's a gap it increases (the `dummy` value has no actual meaning, but it's the same value for consecutive rows).
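The row-number difference trick can be checked directly with SQLite from Python. Note that in the sample data as posted, 11 is missing, so 10 and 12 to 13 come out as separate runs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (BookNo INTEGER)")
conn.executemany("INSERT INTO yourTable VALUES (?)",
                 [(n,) for n in (1, 2, 3, 4, 5, 10, 12, 13, 25, 26, 27, 28)])

# Consecutive numbers share the same BookNo - ROW_NUMBER() value,
# so grouping by that difference yields one row per run
ranges = conn.execute("""
    SELECT MIN(BookNo), MAX(BookNo) FROM (
        SELECT BookNo,
               BookNo - ROW_NUMBER() OVER (ORDER BY BookNo) AS grp
        FROM yourTable
    )
    GROUP BY grp
    ORDER BY 1
""").fetchall()
```
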
Get consecutive numbers Range from SQL Server Table
[ "sql", "sql-server" ]
I have * A list of primary keys (around 10). * A table with 100 rows. Let's say that, among the 10 keys I have, 8 are found in the table. I need as output the remaining 2 keys which are not present in the table. E.g.: I have 10 empl ids which I need to look up in the Empl table. The Empl table has 100 or even more rows. Among the 10 empl ids I have, only 8 are there in the Empl table. I need to get the remaining 2 empl ids which are not there in the Empl table. NOTE: if you reverse the logic and use `not in`, it will give all the other empl ids from the Empl table, but I need only those two which are not present.
To keep the query a little shorter, here is an example for 3 keys that you have to check against a table ``` select k.* from ( select 1 as pk union all select 3 union all select 7 ) k left join your_table t on t.id = k.pk where t.id is null ```
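The same pattern, checked with SQLite from Python; the `empl` table contents and the key list are made up for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE empl (empl_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO empl VALUES (?)", [(i,) for i in range(1, 101)])
conn.execute("DELETE FROM empl WHERE empl_id IN (3, 7)")  # two ids are absent

# Left-join the key list against the table; unmatched keys come back NULL
missing = conn.execute("""
    SELECT k.pk FROM (
        SELECT 1 AS pk UNION ALL SELECT 3 UNION ALL SELECT 5
        UNION ALL SELECT 7 UNION ALL SELECT 9
    ) k
    LEFT JOIN empl t ON t.empl_id = k.pk
    WHERE t.empl_id IS NULL
    ORDER BY k.pk
""").fetchall()
```
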
``` select * from table where my_key not in (select distinct my_key from the_other_table) ```
Query which should return the non existing values in table
[ "mysql", "sql", "database", "oracle" ]
Hi, I have a dataset that looks like this ``` Brand Category ---------------------- A 1 A 1 A 1 B 1 B 1 C 1 A 2 C 2 C 2 C 2 ``` and I want to get the market share for each brand in each category. Say, the market share for A in category 1 is 3/6 = 50%. I used the SQL code ``` proc sql; select Brand, count(brand) / (select count(category) from dataset group by category) as percent from dataset group by brand, category; ``` but SAS reports the error ``` ERROR: Subquery evaluated to more than one row. ``` Please help. Thank you so much!
You need to merge the category total counts back onto the brand\*category combinations. PROC SQL will do that for you automatically if you want. ``` data have ; input Brand $ Category $ @@; cards; A 1 A 1 A 1 B 1 B 1 C 1 A 2 C 2 C 2 C 2 ; proc sql; select brand , category , nobs , sum(nobs) as cat_total , nobs/calculated cat_total as percent from (select category,brand,count(*) as nobs from have group by 1,2 ) group by category order by 1,2 ; ``` NOTE: The query requires remerging summary statistics back with the original data.
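Outside SAS, the same "remerge the category totals" idea can be written as a derived table plus a windowed total. A sketch checked with SQLite from Python, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE have (brand TEXT, category TEXT)")
data = ([("A", "1")] * 3 + [("B", "1")] * 2 + [("C", "1")]
        + [("A", "2")] + [("C", "2")] * 3)
conn.executemany("INSERT INTO have VALUES (?, ?)", data)

# Count per brand*category first, then divide by the category total
shares = conn.execute("""
    SELECT brand, category,
           cnt * 1.0 / SUM(cnt) OVER (PARTITION BY category) AS pct
    FROM (SELECT brand, category, COUNT(*) AS cnt
          FROM have GROUP BY brand, category)
    ORDER BY category, brand
""").fetchall()
```
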
``` select count(category) from dataset group by category ``` This subquery returns more than 1 row. It returns the count for each category. But you want the count of a specific category, so replace it with ``` select count(category) from dataset where category = d.category ``` and make sure you give `dataset` an alias i.e. `from dataset d` Here's another way using derived tables where one derived table contains the count for each brand/category and the second table contains the total count per category. ``` select cnt/total, t1.brand, t1.category from ( select count(*) cnt, brand , category from dataset group by brand, category ) t1 join ( select count(*) total, category from dataset group category ) t2 on t2.category = t1.category ```
Group by in Subquery SAS
[ "sql", "group-by", "sas", "subquery" ]
I am trying to convert EPOCH to a HUMAN DATETIME. * When I test this on their website, it returns the right value. * When I do it within SQL Developer, it's wrong. This led me to check other basic SQL to see how they get returned, ``` select sysdate from dual = 12-OCT-2015 ``` It's missing the time? ``` SELECT TO_CHAR(TO_DATE('2015/05/15 8:30:25', 'YYYY/MM/DD HH:MI:SS')) FROM dual; = 15-MAY-15 ``` Again it's missing the time? ``` SELECT TO_DATE('2015/05/15 8:30:25', 'YYYY/MM/DD HH:MI:SS') FROM dual; = 15-MAY-15 ``` Without TO\_CHAR, the time is still missing. Then my EPOCH SQL, ``` SELECT TO_DATE('01/01/1970 00:00:00','DD/MM/YYYY HH24:MI:SS')+(1325289600000/60/60/24) AS EPOCH FROM dual; ``` > This should return: Sat, 31 Dec 2011 00:00:00 GMT but it returns: > 04-OCT-30 Again, TIME is missing. The server is on EST, so the time is out by 5 hours. Oracle Server 11g Using SQL Developer 4.0.2.15 Thanks for any help, Ben
Your *epoch* value is in milliseconds, so you need to divide it by 1000: ``` SELECT TO_DATE('01/01/1970 00:00:00','DD/MM/YYYY HH24:MI:SS') +(1325289600000/1000/60/60/24) AS EPOCH FROM dual; ```
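The units mix-up is easy to demonstrate from Python: 1325289600000 is milliseconds since the epoch, so dividing by 1000 first yields the expected GMT date:

```python
from datetime import datetime, timezone

epoch_ms = 1325289600000

# Treating the value as seconds lands centuries in the future;
# it is milliseconds, so divide by 1000 first
dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
# Sat, 31 Dec 2011 00:00:00 GMT
```
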
The format mask should be given to the TO\_CHAR function as well to output the time. ``` SELECT TO_CHAR(TO_DATE('2015/05/15 8:30:25', 'YYYY/MM/DD HH:MI:SS'), 'YYYY/MM/DD HH:MI:SS') FROM dual; ``` But it will be a character string. If you want to do something with this date (add something and so on), you should apply TO\_CHAR last (first do everything with the date, then apply TO\_CHAR with the mask).
Why won't TO_DATE return the TIME section of my query?
[ "sql", "oracle", "epoch", "to-date", "to-char" ]
I want to get previous Tuesday (or any given day of week) for specified date. Here is the sample input and expected output for Tuesday: ``` CREATE TABLE #temp(testdate DATETIME); INSERT INTO #temp(testdate) VALUES ('2015-10-06 01:15'), -- Tue -> Tue 2015-10-06 00:00 ('2015-10-07 04:30'), -- Wed -> Tue 2015-10-06 00:00 ('2015-10-08 00:30'), -- Thu -> Tue 2015-10-06 00:00 ('2015-10-09 21:00'), -- Fri -> Tue 2015-10-06 00:00 ('2015-10-10 19:00'), -- Sat -> Tue 2015-10-06 00:00 ('2015-10-11 01:15'), -- Sun -> Tue 2015-10-06 00:00 ('2015-10-12 13:00'), -- Mon -> Tue 2015-10-06 00:00 ('2015-10-13 18:45'), -- Tue -> Tue 2015-10-13 00:00 ('2015-10-14 12:15'), -- Wed -> Tue 2015-10-13 00:00 ('2015-10-15 10:45'), -- Thu -> Tue 2015-10-13 00:00 ('2015-10-16 04:30'), -- Fri -> Tue 2015-10-13 00:00 ('2015-10-17 12:15'), -- Sat -> Tue 2015-10-13 00:00 ('2015-10-18 00:30'), -- Sun -> Tue 2015-10-13 00:00 ('2015-10-19 10:45'), -- Mon -> Tue 2015-10-13 00:00 ('2015-10-20 01:15'), -- Tue -> Tue 2015-10-20 00:00 ('2015-10-21 23:45'), -- Wed -> Tue 2015-10-20 00:00 ('2015-10-22 21:00'), -- Thu -> Tue 2015-10-20 00:00 ('2015-10-23 18:45'), -- Fri -> Tue 2015-10-20 00:00 ('2015-10-24 06:45'), -- Sat -> Tue 2015-10-20 00:00 ('2015-10-25 06:45'), -- Sun -> Tue 2015-10-20 00:00 ('2015-10-26 04:30'); -- Mon -> Tue 2015-10-20 00:00 DECLARE @clampday AS INT = 3; -- Tuesday SELECT -- DATEADD/DATEPART/@clampday/??? ``` What is the most appropriate way to get the previous Tuesday (or any day of week) using T-SQL?
I hope this will help you ``` SELECT DATEADD(day,- (DATEPART(dw, testdate) + @@DATEFIRST - 3) % 7,testdate) AS Tuesday from #temp ``` OR ``` SELECT DATENAME(weekday,DATEADD(day,- (DATEPART(dw, testdate) + @@DATEFIRST - 3) % 7,testdate)) +' '+ CONVERT(nvarchar,DATEADD(day,- (DATEPART(dw, testdate) + @@DATEFIRST - 3) % 7,testdate),101) AS Tuesday from #temp ``` The output would be in the format below ``` Tuesday Tuesday 10/06/2015 ``` Remark: the whole query is just a combination of [DATEFIRST](https://msdn.microsoft.com/en-IN/library/ms187766.aspx), [DATEPART](https://msdn.microsoft.com/en-us/library/ms174420.aspx) and [DATEADD](https://msdn.microsoft.com/en-us/library/ms186819.aspx) to manipulate the date
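The same clamp-to-previous-Tuesday arithmetic written out in Python (`weekday()` counts Monday as 0, so Tuesday is 1), checked against a few of the expected results from the question:

```python
from datetime import date, timedelta

def previous_weekday(d, target):
    """Most recent date on the given weekday (Monday=0 .. Sunday=6),
    returning d itself when it already falls on that weekday."""
    return d - timedelta(days=(d.weekday() - target) % 7)

tue = 1
r1 = previous_weekday(date(2015, 10, 7), tue)   # Wed -> Tue 2015-10-06
r2 = previous_weekday(date(2015, 10, 12), tue)  # Mon -> Tue 2015-10-06
r3 = previous_weekday(date(2015, 10, 13), tue)  # Tue -> itself
```
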
You can get the day-of-week number by using `DATEPART` and then use a `CASE` statement as in the following: ``` SELECT testdate, CASE DATEPART(dw,testdate) WHEN 1 THEN DATEADD(dd,-5,testdate) WHEN 2 THEN DATEADD(dd,-6,testdate) WHEN 3 THEN DATEADD(dd, 0,testdate) WHEN 4 THEN DATEADD(dd,-1,testdate) WHEN 5 THEN DATEADD(dd,-2,testdate) WHEN 6 THEN DATEADD(dd,-3,testdate) WHEN 7 THEN DATEADD(dd,-4,testdate) END FROM #temp ``` According to @jpw's comment, you have to set [**DATEFIRST**](https://msdn.microsoft.com/en-us/library/ms181598.aspx) to 7 (the default) as in the following: ``` SET DATEFIRST 7 ```
Get previous Tuesday (or any given day of week) for specified date
[ "sql", "sql-server", "sql-server-2008", "t-sql", "datetime" ]
We have an hour table in our application which stores the working hours for each associate. It has hour values as follows ``` 0.30 , 0.30 , 1.10 ``` `0.30` indicates 30 minutes and `1.10` indicates 1 hour 10 minutes. So when I calculate the sum of hours I get 1.7, but I need to get 1.3 (I need to convert `1.10` to `0.70`). How can I achieve this?
You can use some math to achieve what you want: ``` SELECT workHours - FLOOR(workHours) + FLOOR(workHours)*0.60 ``` --- sample value: ``` SELECT 1.10 - FLOOR(1.10) + FLOOR(1.10)* 0.60 ``` > result: 0.70
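The formula is easier to trust with exact decimals. Here it is in Python using `Decimal` (a sketch; it assumes the minute part of the h.mm values never reaches 60):

```python
from decimal import Decimal

def hmm_to_share(work_hours):
    """1.10 (1 h 10 min) -> 0.70; mirrors x - FLOOR(x) + FLOOR(x) * 0.60."""
    whole = int(work_hours)  # FLOOR for non-negative values
    return work_hours - whole + whole * Decimal("0.60")

total = sum(hmm_to_share(Decimal(v)) for v in ("0.30", "0.30", "1.10"))
```

The `* 0.60` term turns each whole hour into 60 hundredths, so the sum comes out as the 1.30 the question asks for.
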
A simple solution would be to store the time in minutes making arithmetic simple. Then in the presentation layer convert it to the desired format.
Add the time values in SQL Server
[ "sql", "sql-server", "t-sql" ]
I'm trying to split a column from a result set into 2 columns based on the values from the column. So a user can subscribe to multiple items and the user can have 2 email addresses which can receive this subscription. The result set gives a list of subscriptions and their corresponding entries for subscribed email ids. DB details ``` Table 1 - user_subscriptions user_id email_id - 1 for email id 1 and 2 for email id 2 subscription_id Table 2 - subscriptions subscription_id subscription_name ``` Now I need all the subscriptions for the user whether subscribed by either of the email ids or not. So I get a result set something like this ``` +----------------------+----------+ | subscription_name | email_id | +----------------------+----------+ | item1 | 1 | | item1 | 2 | | item2 | null | | item3 | 1 | | item4 | null | | item5 | 2 | +----------------------+----------+ ``` So I'm looking to split the above result set into something like below ``` +-------------------+---------+---------+ | subscription_name | email_1 | email_2 | +-------------------+---------+---------+ | item1 | 1 or Y | 1 or Y | | item2 | 0 or N | 0 | | item3 | 1 | 0 | | item4 | 0 | 0 | | item5 | 0 | 1 | +-------------------+---------+---------+ ``` Hope this question makes sense. Any help would be appreciated! 
Updated ----------- Sample data: subscriptions ``` +-----------------+-------------------+ | subscription_id | subscription_name | +-----------------+-------------------+ | 1 | item1 | | 2 | item2 | | 3 | item3 | | 4 | item4 | | 5 | item5 | +-----------------+-------------------+ ``` user_subscriptions ``` +---------+----------+-----------------+ | user_id | email_id | subscription_id | +---------+----------+-----------------+ | 101 | 1 | 1 | | 101 | 2 | 1 | | 101 | 1 | 3 | | 101 | 2 | 5 | | 102 | 1 | 1 | | 102 | 2 | 1 | +---------+----------+-----------------+ ``` Expected result for user_id = 101: ``` +-----------------+-------------------+--------+--------+ | subscription_id | subscription_name | mail_1 | mail_2 | +-----------------+-------------------+--------+--------+ | 1 | item1 | Y | Y | | 2 | item2 | N | N | | 3 | item3 | Y | N | | 4 | item4 | N | N | | 5 | item5 | N | Y | +-----------------+-------------------+--------+--------+ ```
``` SELECT S.subscription_id, S.subscription_name, CASE WHEN US1.email_id IS NULL THEN 'N' ELSE 'Y' END mail_1, CASE WHEN US2.email_id IS NULL THEN 'N' ELSE 'Y' END mail_2 FROM subscriptions S LEFT JOIN user_subscriptions US1 ON S.subscription_id = US1.subscription_id AND US1.email_id = 1 AND US1.user_id = 101 -- or use a variable @user_id LEFT JOIN user_subscriptions US2 ON S.subscription_id = US2.subscription_id AND US2.email_id = 2 AND US2.user_id = 101 ``` Keep the user filter inside the join conditions rather than in a WHERE clause; filtering a left-joined table in WHERE turns the outer join back into an inner join, and subscriptions with no matching row at all would disappear instead of showing 'N'/'N'.
You need a conditional aggregate: ``` select us.subscription_name, -- there's at least one email CASE WHEN MIN(us.email_id) IS NOT NULL THEN 'Y' ELSE 'N' END as email_1, -- there's more than one email CASE WHEN MIN(us.email_id) <> MAX(us.email_id) THEN 'Y' ELSE 'N' END as email_2 from subscriptions as s left join user_subscriptions as us on s.subscription_id = us.subscription_id where us.user_id = ... group by us.subscription_name ```
SQL - Splitting a column based on the values
[ "sql", "sybase" ]
I have a table ``` User | Phone | Value Peter | 0 | 1 Peter | 456 | 2 Peter | 456 | 3 Paul | 456 | 7 Paul | 789 | 10 ``` I want to select the `MAX` value for every user that is also lower than a threshold. For threshold 8, I want the result to be ``` Peter | 456 | 3 Paul | 456 | 7 ``` I have tried GROUP BY with HAVING, but I am getting ``` column "phone" must appear in the GROUP BY clause or be used in an aggregate function ``` Similar query logic works in MySQL, but I am not quite sure how to operate with GROUP BY in PostgreSQL. I don't want to GROUP BY phone.
After I have results from "juergen d" solution, I came up with this which gives me the same results faster ``` SELECT DISTINCT ON(user) user, phone, value FROM table WHERE value < 8 ORDER BY user, value DESC; ```
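`DISTINCT ON` is PostgreSQL-specific; the same per-user top-1 can be phrased with a window function, which also lets the logic be sanity-checked with SQLite from Python on the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t ("user" TEXT, phone INTEGER, value INTEGER)')
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("Peter", 0, 1), ("Peter", 456, 2), ("Peter", 456, 3),
                  ("Paul", 456, 7), ("Paul", 789, 10)])

# Highest value per user among the rows below the threshold
rows = conn.execute("""
    SELECT "user", phone, value FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY "user"
                                     ORDER BY value DESC) AS rn
        FROM t
        WHERE value < 8
    )
    WHERE rn = 1
    ORDER BY "user"
""").fetchall()
```
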
``` select t1.* from your_table t1 join ( select user, max(value) as max_value from your_table where value < 8 group by user ) t2 on t1.user = t2.user and t1.value = t2.max_value ```
PostgreSQL - MAX value for every user
[ "sql", "postgresql", "greatest-n-per-group" ]
I have a table called `MEDECIN` with 2 columns as follows: ``` SQL> DESC MEDECIN; Name Null? Type ----------------------------------------- -------- ---------------------------- NUM_MED NOT NULL NUMBER(4) SPECIALITE NOT NULL NVARCHAR2(13) ``` And it contains 32 rows, here is its content: ``` SQL> SELECT * FROM MEDECIN; NUM_MED SPECIALITE ---------- ---------------------------------------------------- 4 Orthopédiste 7 Cardiologue 8 Cardiologue 10 Cardiologue 19 Traumatologue 24 Orthopédiste 26 Orthopédiste 27 Orthopédiste 31 Anesthésiste 34 Pneumologue 50 Pneumologue 53 Traumatologue 54 Pneumologue 64 Radiologue 80 Cardiologue 82 Orthopédiste 85 Anesthésiste 88 Cardiologue 89 Radiologue 99 Anesthésiste 113 Pneumologue 114 Traumatologue 122 Pneumologue 126 Radiologue 135 Anesthésiste 140 Cardiologue 141 Traumatologue 144 Radiologue 152 Cardiologue 179 Anesthésiste 180 Cardiologue 196 Traumatologue 32 rows selected. ``` The problem is that when I execute the request `SELECT * FROM MEDECIN WHERE SPECIALITE = 'Cardiologue';` I get `no rows selected`! How can this happen? As you can see, there are many rows where `SPECIALITE = 'Cardiologue'`.
Should work, unless the filter is failing to match any rows. **Setup** ``` SQL> CREATE TABLE MEDECIN 2 ( 3 NUM_MED NUMBER(4) NOT NULL, 4 SPECIALITE NVARCHAR2(13) NOT NULL 5 ); Table created. SQL> INSERT INTO MEDECIN VALUES 2 (4, 'Orthopédiste' 3 ); 1 row created. SQL> COMMIT; Commit complete. SQL> SELECT * FROM medecin; NUM_MED SPECIALITE ---------- ------------- 4 Orthopédiste ``` **Query** ``` SQL> SELECT * FROM MEDECIN WHERE SPECIALITE = 'Orthopédiste'; NUM_MED SPECIALITE ---------- ------------- 4 Orthopédiste ``` You could also try **TRIM**/**LIKE** to remove any trailing spaces. For example, ``` SQL> INSERT INTO MEDECIN VALUES 2 (5, 'Orthopédis ' 3 ); 1 row created. SQL> SELECT * FROM MEDECIN WHERE SPECIALITE = 'Orthopédis'; no rows selected SQL> SELECT * FROM MEDECIN WHERE SPECIALITE LIKE 'Orthopédis%'; NUM_MED SPECIALITE ---------- ------------- 4 Orthopédiste 5 Orthopédis SQL> SELECT * FROM MEDECIN WHERE TRIM(SPECIALITE) = 'Orthopédiste'; NUM_MED SPECIALITE ---------- ------------- 4 Orthopédiste ```
Since it is an nvarchar2 try to put a capital N in front of the string like so = N'Cardiologue'.
Why doesn't the WHERE clause in an Oracle XE database work when comparing with a string?
[ "sql", "oracle", "oracle11g", "oracle-xe" ]
I have written SQL that creates 4 columns: `Tier0`, `Tier1`, `Tier2` and `Weight`. Weight is associated with the tiers such that > when Tier0 = 'X' then weight = 0, > when Tier1 = 'X' then weight = 0.7, and > when Tier2 = 'X' then weight = 1. Right now my code for this portion of my SELECT statement is ``` CASE WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) > (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 'X' Else '' END AS Tier0, CASE WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) = (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 'X' Else '' END AS Tier1, CASE WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) < (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 'X' Else '' END AS Tier2, CASE WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) > (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 0 WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) = (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 0.7 WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) < (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 1 END AS Weight ``` Is there a way to contain that final long case statement in the first three case statements? I would like to have one case statement for each Tier if possible, that both puts an X in the correct tier column and also updates the weight column correctly. Any help would be appreciated.
You could use a derived table: ``` SELECT Tier0, Tier1, Tier2, CASE WHEN Tier0 = 'X' THEN 0 WHEN Tier1 = 'X' THEN 0.7 WHEN Tier2 = 'X' THEN 1 END [Weight] FROM (Your current query here) AS T; ``` Or a CTE: ``` ;WITH CTE AS ( Your current query here ) SELECT Tier0, Tier1, Tier2, CASE WHEN Tier0 = 'X' THEN 0 WHEN Tier1 = 'X' THEN 0.7 WHEN Tier2 = 'X' THEN 1 END [Weight] FROM CTE; ```
What about this way? ``` CASE WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) > (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 0 WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) = (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 0.7 WHEN (SUM(vst.len_of_stay) / COUNT(vst.len_of_stay)) < (SUM(drg.drg_std_days_stay) / COUNT(drg.drg_std_days_stay)) THEN 1 END AS Weight ```
Update Multiple Columns using single case statement
[ "sql", "sql-server", "case" ]
How can I make a valid request like: ``` UPDATE b2c SET tranche = '18 - 25' WHERE (dateofbirth::date BETWEEN NOW()::date - 18 'year' AND NOW()::date - 25 'year') ``` Thanks for help
``` dateofbirth::date BETWEEN (NOW() - interval '25 year')::date AND (NOW() - interval '18 year')::date ```
You can use Postgres cast syntax: ``` UPDATE b2c SET tranche = '18 - 25' WHERE dateofbirth::date BETWEEN NOW()::date - '25y'::interval AND NOW()::date - '18y'::interval ```
Postgresql date subtract
[ "sql", "postgresql", "datetime" ]
I need help. I have table Employees and table Departments. And I need to get the department, which has the biggest sum of salaries. I try this: ``` SELECT department_name,MAX(sum_salary) as sum_salary FROM (SELECT department_name,SUM(salary) AS sum_salary FROM EMPLOYEES,DEPARTMENTS WHERE DEPARTMENTS.DEPARTMENT_ID = EMPLOYEES.DEPARTMENT_ID GROUP BY DEPARTMENT_NAME) GROUP BY DEPARTMENT_NAME; ``` But the result is the list of departments, not only one value. Thank you for your help!
If, in the case of ties, you only want one, then `order by` and some form of getting one row works. The ANSI standard method is: ``` SELECT d.DEPARTMENT_NAME, SUM(e.salary) AS sum_salary FROM EMPLOYEES e JOIN DEPARTMENTS d ON d.DEPARTMENT_ID = e.DEPARTMENT_ID GROUP BY d.DEPARTMENT_NAME ORDER BY SUM(e.salary) DESC FETCH FIRST 1 ROW ONLY; ``` Note the use of proper `JOIN` syntax and table aliases. EDIT: Oracle 12c supports the above syntax. You can do it in earlier versions as: ``` SELECT t.* FROM (SELECT d.DEPARTMENT_NAME, SUM(e.salary) AS sum_salary FROM EMPLOYEES e JOIN DEPARTMENTS d ON d.DEPARTMENT_ID = e.DEPARTMENT_ID GROUP BY d.DEPARTMENT_NAME ORDER BY SUM(e.salary) DESC ) t WHERE rownum = 1; ``` [Here's](http://sqlfiddle.com/#!4/faede/6) a SQL Fiddle.
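The order-the-sums-and-keep-one-row idea is easy to verify with SQLite from Python (sample data invented for the sketch; `LIMIT 1` plays the role of `FETCH FIRST 1 ROW ONLY` / `ROWNUM = 1` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (department_id INTEGER PRIMARY KEY,
                              department_name TEXT);
    CREATE TABLE employees (department_id INTEGER, salary INTEGER);
    INSERT INTO departments VALUES (1, 'IT'), (2, 'Sales');
    INSERT INTO employees VALUES (1, 1000), (1, 2000), (2, 2500);
""")

# Sum per department, order descending, keep the first row
top = conn.execute("""
    SELECT d.department_name, SUM(e.salary) AS sum_salary
    FROM employees e
    JOIN departments d ON d.department_id = e.department_id
    GROUP BY d.department_name
    ORDER BY sum_salary DESC
    LIMIT 1
""").fetchone()
```
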
``` select min(max(department_name)) keep (dense_rank last order by sum(salary)) as department_name, min(sum(salary)) keep (dense_rank last order by sum(salary)) as sum_salary from EMPLOYEES join DEPARTMENTS using(department_id) group by department_id ``` [fiddle](http://sqlfiddle.com/#!4/d1d42/1)
SQL - How to get MAX of SUM?
[ "sql", "oracle", "sum", "max" ]
The database is running on Oracle Database 11g using SQL \*Plus 11.2. Are aggregate methods not allowed in a WITH clause or is WITH doing something magical? This code tells me "most\_expensive" is an invalid identifier. Yet, a sub-query works with no issue. ``` WITH most_expensive AS (SELECT MAX (enrollment_cost) FROM Enrollments) SELECT e.member_id FROM Enrollments e WHERE e.enrollment_cost = most_expensive; ```
Query factoring (with clauses) allows you to define temporary table aliases. In your example most\_expensive is going to reference a table object containing a single row with a single column. You can use it anywhere in the query where you can use a table. Now, if you create a table called t1 (with a create table statement), give it one column and insert 1 row, you still won't be able to do "WHERE x = t1". In other words, a subquery is not always the same as a table, and WITH clauses give you something that behaves like tables, not like subqueries. The following works though: ``` WITH most_expensive AS (SELECT MAX (enrollment_cost) FROM Enrollments) SELECT member_id FROM Enrollments e WHERE e.enrollment_cost = (select * from most_expensive); ``` <http://sqlfiddle.com/#!4/9eecb7/6340>
I don't see any benefit of using **sub-query factoring** (WITH clause) here. The query could simply be written as: ``` SELECT member_id FROM Enrollments e WHERE e.enrollment_cost = (SELECT MAX (enrollment_cost) FROM Enrollments ); ``` Compare the explain plans: **Without sub-query factoring:** ``` SQL> set autot on explain SQL> SELECT empno FROM emp e WHERE e.sal = 2 (SELECT MAX (sal) FROM emp 3 ); EMPNO ---------- 7839 Execution Plan ---------------------------------------------------------- Plan hash value: 1876299339 ---------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ---------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 8 | 8 (0)| 00:00:01 | |* 1 | TABLE ACCESS FULL | EMP | 1 | 8 | 4 (0)| 00:00:01 | | 2 | SORT AGGREGATE | | 1 | 4 | | | | 3 | TABLE ACCESS FULL| EMP | 14 | 56 | 4 (0)| 00:00:01 | ---------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("E"."SAL"= (SELECT MAX("SAL") FROM "EMP" "EMP")) ``` **With sub-query factoring:** ``` SQL> WITH max_sal AS 2 ( SELECT MAX (sal) sal FROM emp 3 ) 4 SELECT empno FROM emp e WHERE e.sal = 5 (SELECT sal FROM max_sal 6 ); EMPNO ---------- 7839 Execution Plan ---------------------------------------------------------- Plan hash value: 73843676 ----------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 8 | 8 (0)| 00:00:01 | |* 1 | TABLE ACCESS FULL | EMP | 1 | 8 | 4 (0)| 00:00:01 | | 2 | VIEW | | 1 | 13 | 4 (0)| 00:00:01 | | 3 | SORT AGGREGATE | | 1 | 4 | | | | 4 | TABLE ACCESS FULL| EMP | 14 | 56 | 4 (0)| 00:00:01 | ----------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("E"."SAL"= (SELECT "SAL" FROM (SELECT MAX("SAL") "SAL" FROM "EMP" "EMP") "MAX_SAL")) ``` See the filter applied: all you are doing is making it a nested query and going one level deeper, without actually adding any benefit.
Oracle SQL Plus WITH Clause "invalid identifier"
[ "sql", "oracle" ]
I am having trouble getting the correct result with `CHARINDEX`. I have a column with the following dummy samples: ``` File Path C/Desktop/Folder1/FileName1 C/Folder3/Filename2 C/Folder4/Folder5/Folder6/Filename3 ``` And I would like it to return the following: ``` Filename1 Filename2 Filename3 ``` Is it possible to cut off the string in such a way?
Let's assume your full paths are stored in table Fileinfo in column FullFilePath. Note that your sample paths use `/` as the separator, so that is the character to search for: ``` SELECT LTRIM( RTRIM( REVERSE( SUBSTRING( REVERSE(FullFilePath), 0, CHARINDEX('/', REVERSE(FullFilePath),0) ) ) ) ) FROM Fileinfo ```
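The REVERSE/CHARINDEX dance is just "everything after the last separator". The equivalent in Python makes the intent clear (the question's sample paths use `/` as the separator):

```python
def file_name(path, sep="/"):
    # Same effect as REVERSE + CHARINDEX + SUBSTRING: take the part after
    # the last separator, or the whole string if no separator is present
    return path[path.rfind(sep) + 1:]

names = [file_name(p) for p in (
    "C/Desktop/Folder1/FileName1",
    "C/Folder3/Filename2",
    "C/Folder4/Folder5/Folder6/Filename3",
)]
```
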
Try `PATINDEX` (Transact-SQL) ``` SELECT RIGHT(FilePath, PATINDEX('%/%', REVERSE(FilePath))-1) FROM FileTable ``` Ref: <https://msdn.microsoft.com/en-us/library/ms188395.aspx>
How to cut off string with various lengths based on character?
[ "sql", "sql-server", "string" ]
I have a sales table with 3 columns ``` OrderId, CustomersId, OrderDateTime ``` How to write a T-SQL select query to find number of `orderId`, January2015orders and April2015orders in the results? Thanks! Should I use union or a case statement or ???
What you are looking for is the [datepart function](https://msdn.microsoft.com/de-de/library/ms174420(v=sql.120).aspx) If you want orders from the January, April of the year 2015 you could write your query as follows: ``` SELECT count(t.OrderId) as Orders, DatePart(month, t.OrderDateTime) as Month FROM SalesTable t WHERE datepart(year, t.OrderDateTime) = 2015 AND (datepart(month, t.OrderDateTime) = 1 OR datepart(month, t.OrderDateTime) = 4) GROUP BY datepart(month, t.OrderDateTime) ``` See this [fiddle](http://sqlfiddle.com/#!6/00b7f/25) for a working example **EDIT:** If you want to full month name, instead of the number, you could apply one of these solutions from [here](https://stackoverflow.com/questions/185520/convert-month-number-to-month-name-function-in-sql). The query would then look like this: ``` SELECT count(t.OrderId) as Orders, DateName(month , DateAdd( month , DatePart(month, t.OrderDateTime), -1 )) FROM SalesTable t WHERE datepart(year, t.OrderDateTime) = 2015 and (datepart(month, t.OrderDateTime) = 1 or datepart(month, t.OrderDateTime) = 4) group by datepart(month,t.OrderDateTime) ``` **EDIT2:** As per your comment, the columns January2015Orders and April2015Orders are mandatory. In this case, you could go with this solution: ``` SELECT count(t.OrderId) as Orders, DatePart(month, t.OrderDateTime) as January2015Orders, null as April2015Orders FROM SalesTable t WHERE datepart(year, t.OrderDateTime) = 2015 and datepart(month, t.OrderDateTime) = 1 group by datepart(month,t.OrderDateTime) UNION SELECT count(t.OrderId) as Orders, null as January2015Orders, DatePart(month, t.OrderDateTime) as April2015Orders FROM SalesTable t WHERE datepart(year, t.OrderDateTime) = 2015 and datepart(month, t.OrderDateTime) = 4 group by datepart(month,t.OrderDateTime) ``` The first query selects January2015Orders with its value and April as null. This is followed by a second query, which selects January as null and April2015Orders with its value. 
Not pretty, but it (hopefully) renders the correct results. [Here's the fiddle](http://sqlfiddle.com/#!6/00b7f/38) to play around with
If I understand you correctly: ``` select month(OrderDateTime), count(OrderId) from your_table group by month(OrderDateTime) ``` It would be good to know if you mean: > number of orderId as a count?
Comparing orders by months in SQL Server
[ "sql", "sql-server" ]
Let's assume I have the following two tables: ``` CREATE TABLE users ( id MEDIUMINT NOT NULL AUTO_INCREMENT, name CHAR(30) NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM; CREATE TABLE logins ( user_id MEDIUMINT NOT NULL, day DATE NOT NULL, PRIMARY KEY (`user_id`, `day`) ) ENGINE=MyISAM; ``` What I'm trying to do here is get a query for all users with the first day they logged in and the last day they logged in. The query I was executing to achieve this looks like the following: ``` SELECT u.id AS id, u.name AS name, MIN(l.day) AS first_login, MAX(l.day) AS last_login FROM users u LEFT JOIN logins l ON u.id = l.user_id ``` The problem is that because of the use of `MIN()` and `MAX()`, I'm only receiving one row back in the entire result. I'm sure it's my use of those functions that's causing this. I should have one row per user, even if they do not have any login entries. This is the reason for me using a `LEFT JOIN` vs an `INNER JOIN`.
In order to use aggregate functions (MIN, MAX, ...) with one row per user, you need grouping. Try something like this: ``` SELECT u.id AS id, u.name AS name, MIN(l.day) AS first_login, MAX(l.day) AS last_login FROM users u LEFT JOIN logins l ON u.id = l.user_id GROUP BY u.id ```
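With the GROUP BY in place, the LEFT JOIN still keeps users that have no logins; their MIN/MAX simply come back as NULL. A quick check with SQLite from Python (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE logins (user_id INTEGER, day DATE);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO logins VALUES (1, '2015-01-03'), (1, '2015-02-07');
""")

rows = conn.execute("""
    SELECT u.id, u.name, MIN(l.day) AS first_login, MAX(l.day) AS last_login
    FROM users u
    LEFT JOIN logins l ON u.id = l.user_id
    GROUP BY u.id, u.name
    ORDER BY u.id
""").fetchall()
```
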
Any sensible database except MySQL would have given you an error on mixing row-terms and aggregate terms, making the error clearer. MySQL, unfortunately allows this behavior, making it harder to notice that you forgot the `group by` clause needed to generate a row per user: ``` SELECT u.id AS id, u.name AS name, MIN(l.day) AS first_login, MAX(l.day) AS last_login FROM users u LEFT JOIN logins l ON u.id = l.user_id GROUP BY u.id, u.name -- missing in the original query ```
Using an SQL LEFT JOIN with the MAX() and MIN() functions
[ "mysql", "sql", "select" ]
I have two models: User and Rsvp. User has many Rsvps. I want to get all Users that have no Rsvp created at a specified date. So I wrote this: ``` scope :free, -> (date) { joins(:rsvps).where.not(rsvps: {created_at: date.beginning_of_day..date.end_of_day}) } ``` But it only works if the User has at least one Rsvp (just not for 'date'). It doesn't work when a User has no Rsvp at all. I tried with a left outer join, but it doesn't work. I must be missing something: ``` scope :free, -> (date) { joins("LEFT OUTER JOIN rsvps ON rsvps.user_id = users.id").where.not(rsvps: {created_at: date.beginning_of_day..date.end_of_day}) } ``` Here's my unit test: ``` it 'should fetch user if he has no rsvp at all' do # Given user = User.create!(email: 'm@m.com', password: '12345678', password_confirmation: '12345678') # Then expect(User.free(Date.new(2014, 1, 1)).count).to eq 1 end ```
This should work: `scope :free, -> (date) { includes(:rsvps).where("rsvps.id IS NULL OR rsvps.created_at NOT BETWEEN ? AND ?", date.beginning_of_day, date.end_of_day).references(:rsvps) }` Hope it helps!
It's normal that `joins` returns Users having at least one Rsvp. That's why I suggest using `includes`. Try this one (you will probably have to modify the condition, because I'm not sure it's exactly what you want): ``` scope :free, -> (date) { includes(:rsvps).where('rsvps.created_at != ?', date ).references(:rsvps) } ```
Left outer join and where clause with rails activerecord
[ "", "sql", "ruby-on-rails-4", "activerecord", "left-join", "" ]
Sorry for the bad topic, but I wasn't sure what to call it.. I have a table looking like this: ``` +-----++-----+ | Id ||Count| +-----++-----+ | 1 || 1 | +-----++-----+ | 2 || 5 | +-----++-----+ | 3 || 8 | +-----++-----+ | 4 || 3 | +-----++-----+ | 5 || 6 | +-----++-----+ | 6 || 8 | +-----++-----+ | 7 || 3 | +-----++-----+ | 8 || 1 | +-----++-----+ ``` I'm trying to make a select from this table where every time the SUM of row1 + row2 + row3 (etc.) reaches 10, it's a "HIT", and the count starts over again. Requested output: ``` +-----++-----++-----+ | Id ||Count|| HIT | +-----++-----++-----+ | 1 || 1 || N | Count = 1 +-----++-----++-----+ | 2 || 5 || N | Count = 6 +-----++-----++-----+ | 3 || 8 || Y | Count = 14 (over 10) +-----++-----++-----+ | 4 || 3 || N | Count = 3 +-----++-----++-----+ | 5 || 6 || N | Count = 9 +-----++-----++-----+ | 6 || 8 || Y | Count = 17 (over 10..) +-----++-----++-----+ | 7 || 3 || N | Count = 3 +-----++-----++-----+ | 8 || 1 || N | Count = 4 +-----++-----++-----+ ``` How do I do this, and with the best performance? I have no idea..
You could use [Recursive Queries](https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx). Please note the following query assumes the id values are all in sequence; otherwise, please use `ROW_NUMBER()` to create a new id. ``` WITH cte AS ( SELECT id, [Count], [Count] AS total_count FROM Table1 WHERE id = 1 UNION ALL SELECT t2.id,t2.[Count], CASE WHEN t1.total_count >= 10 THEN t2.[Count] ELSE t1.total_count + t2.[Count] END FROM Table1 t2 INNER JOIN cte t1 ON t2.id = t1.id + 1 ) SELECT * FROM cte ORDER BY id ``` [SQL Fiddle](http://sqlfiddle.com/#!6/81614/7)
You can't do this using window/analytic functions, because the breakpoints are not known in advance. Sometimes, it is possible to calculate the breakpoints. However, in this case, the breakpoints depend on a non-linear function of the previous values (I can't think of a better word than "non-linear" right now). That is, sometimes adding "1" to an earlier value has zero effect on the calculation for the current row. Sometimes it has a big effect. The implication is that the calculation has to start at the beginning and iterate through the data. A minor modification to the problem *would* be solvable using such functions. If the problem were, instead, to carry over the excess amount for each group (instead of restarting the sum), the problem would be solvable using cumulative sums (and some other trickery). Recursive queries (which others have provided) or a sequential operation is the best way to approach this problem. Unfortunately, it doesn't have a set-based method for solving it.
Running SUM in T-SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I have two tables heading into this: a table acting\_gigs with columns actor\_name and movie\_title and a table movies with columns movie\_title and release\_year. I would like to write a SQL query that lists the names of all the actors that have participated in every single movie in a given release\_year, and display two columns: the actors' names (actor\_name) and the year in which they participated in every movie (release\_year). For example: ``` movie_title | release_year ------------------------------------------ 'The Green Mile' | 2000 'Titanic' | 1997 'Cast Aways' | 2000 'Independence Day' | 1997 actor_name | movie_title ------------------------------------------------- 'Leonardo DiCaprio' | 'Titanic' 'Tom Hanks' | 'The Green Mile' 'Will Smith' | 'Independence Day' 'Tom Hanks' | 'Cast Aways' ``` Which means that the table I would like to return is ``` actor_name | release_year --------------------------- 'Tom Hanks' | 2000 ``` I have been trying to use subqueries and outer joins, but I have not been able to quite arrive at a solution. I know that I have to use count, but I'm unsure how to apply it multiple times in a manner such as this.
Here's one way: ``` SELECT y.actor_name, y.release_year FROM (SELECT release_year, COUNT(*) AS cnt FROM movies GROUP BY release_year) AS x INNER JOIN (SELECT actor_name, release_year, COUNT(*) AS cnt FROM acting_gigs AS t1 INNER JOIN movies AS t2 ON t1.movie_title = t2.movie_title GROUP BY actor_name, release_year) AS y ON x.release_year = y.release_year AND x.cnt = y.cnt ``` Derived table `x` contains the count of movies per year, whereas derived table `y` contains the count of movies per year / per actor. The `JOIN` predicates: * `x.release_year = y.release_year` and * `x.cnt = y.cnt` guarantee that, for a specific year, only actors that participated in *all movies of that year* are returned. [**Demo here**](http://sqlfiddle.com/#!15/efeae/5) Here's another, probably more efficient, way using window functions: ``` SELECT DISTINCT actor_name, release_year FROM ( SELECT actor_name, release_year, COUNT(*) OVER (PARTITION BY actor_name, release_year) AS cntPerActorYear, COUNT(*) OVER (PARTITION BY release_year) AS cntPerYear FROM acting_gigs AS t1 INNER JOIN movies AS t2 ON t1.movie_title = t2.movie_title ) AS t WHERE cntPerActorYear = cntPerYear ``` [**Demo here**](http://sqlfiddle.com/#!15/efeae/8)
This should do the trick: ``` select m.release_year , a.actor_name , count(1) total_movies from movies m join actors a on a.movie_title = m.movie_title group by m.release_year, a.actor_name order by m.release_year, a.actor_name -- or however you want to order it ```
How to list movie actors who acted in every single film released in any given year
[ "", "sql", "postgresql", "" ]
I have a legacy column in my table which stores name and surname values together. For Example: `Mike Huge Muyra` or `John Kutgira`. I need to split column by whitespace. Result must be like that (**Last word is surname, others are name**) ``` Name : Mike Huge, Surname : Muyra Name : John, Surname : Kutgira ``` I have been researching for two hours. (**REGEXP\_SUBSTR** is looking cool) But I can not find the solution.
You can use regular expressions for this. However, because the rule involves the *last* space, perhaps regular string operations are fine: ``` select trim(substr(legacy, 1 - instr(reverse(legacy), ' '))) as surname, trim(substr(legacy, 1, length(legacy) - instr(reverse(legacy), ' '))) as restname ``` You might want to wrap these in a `case when legacy like '% %'` check, just to be sure the values contain a space.
Try finding out the last index of space (' ') character and then using substring (twice I guess for first and last part) to get what you want. Here is an [answer](https://stackoverflow.com/questions/856817/finding-last-index-of-a-string-in-oracle) for the last index of part.
Split column which have two or three word by space in oracle sql
[ "", "sql", "regex", "oracle", "split", "" ]
OP is completely new to databases and SQL, so this question may be answered somewhere else but I don't quite have the vocabulary to find what I'm looking for; a push in the right direction would be great. I'm trying to make a view of two tables inner joined. This is currently how they look: [![enter image description here](https://i.stack.imgur.com/vbu61.png)](https://i.stack.imgur.com/vbu61.png) and this is how I would like them to look: [![enter image description here](https://i.stack.imgur.com/iHIbo.png)](https://i.stack.imgur.com/iHIbo.png) The issue with this is that the view is empty because c4 and c5 can be null values. I essentially want these two later inner joins on c4 and c5 to happen if one of them has a value. Just to be thorough: 1. If c4 exists, inner join on it. 2. If c5 exists, inner join on it. 3. If neither exist, don't inner join on it. Each of these are preceded by the inner joins between UTC and colNum. By this I mean that the UTC and colNum joins always happen. I know that sql is a query language, so it does not do computation, but there must be a filter that will allow for this logic to be applied to these two tables. It is useful to note that if c4 exists, c5 is null and if c5 exists, c4 is null, and that I still want a row (joined based on the previous two inner joins) if both are null. Again, I don't really know the language surrounding SQL, so my effort to find an answer before asking one was hampered. If something like this has already been answered, just point me to it.
It is a bit too big a statement to do in a comment, so I will post it as an answer. If my understanding of the problem is correct, then it will be like: ``` select * from sizeconditionstable t1 join specalloytable t2 on (t1.c4 is not null and t2.c4 is not null and t1.c4 = t2.c4) or (t1.c5 is not null and t2.c5 is not null and t1.c5 = t2.c5) ``` Edit: ``` select * from sizeconditionstable t1 join specalloytable t2 on (t1.utc = t2.utc and t1.colnum = t2.colnum) and ((t1.c4 = t2.c4) or (t1.c4 is null and t2.c4 is null)) and ((t1.c5 = t2.c5) or (t1.c5 is null and t2.c5 is null)) ``` This is the version which will always join on `utc` and `colnum`, and also on c4 and c5 if they are filled in both tables.
I think a UNION ALL query is your best bet here. ``` SELECT field1, field2, field3 FROM table1 t1 JOIN table2 t2 ON t2.field1 = t1.field1 AND t2.field2 = t1.field2 UNION ALL SELECT field1, field2, field3 FROM table1 t1 JOIN table2 t2 ON t2.field1 = t1.field1 AND t2.field3 = t1.field3 ``` In this case, if field 2 is null and field 3 is not, then there are no results from the first query, but the second one brings in the results.
SQL Inner join on one field when second is null and on second when first is null
[ "", "sql", "sql-server", "sql-server-2008", "join", "inner-join", "" ]
The problem is that I want to list the orders that haven't been paid by the end of the selected month (for example October: 31.10.2015). That means: list all orders from the beginning of time to 31.10 AND date\_payed BETWEEN (1.11 - today) AND date\_payed = NULL or before 1970. $sel\_date = 10-2015 (the selected date), $s3 = 11-2015 //in this case ``` from octt_order WHERE order_status_id >= 1 AND (date_format(date_added,'%m-%Y') BETWEEN '11-2010' AND '$sel_date') AND (date_payed BETWEEN STR_TO_DATE('$s3','%m-%Y') AND STR_TO_DATE('11-2300','%m-%Y') OR date_payed is NULL) ORDER BY date_format(date_added,'%Y/%m/%d') ASC"); ```
This was the solution for me ``` octt_order WHERE order_status_id >= 1 AND (date_added BETWEEN STR_TO_DATE('01-2001','%m-%Y') AND STR_TO_DATE('$s3','%m-%Y')) AND (date_payed BETWEEN STR_TO_DATE('$s3','%m-%Y') AND STR_TO_DATE('11-2300','%m-%Y') OR date_payed is NULL) ORDER BY date_format(date_added,'%Y/%m/%d') ASC"); ```
The first `BETWEEN` doesn't compare dates but strings. So '02-2000' will come before '03-2000', but after '02-2020', which is probably not what you want. Compare dates (like in the second `BETWEEN`) to fix that. The second `BETWEEN` looks like it could do with some more brackets, though you might get away with your version if the operator precedence works in your favour. Your order by does not have to convert the date to a string. `ORDER BY date_added` would be fine. It would help to know what you want, what you get, and what the problem is.
SQL multiple AND statements
[ "", "mysql", "sql", "statements", "" ]
I need big help from you. I am using SQL Server 2008 and I want to produce the output below using a SQL query. I have the following data in the table. ``` Id Code ----------------- 1 01012 2 01012 3 01012 4 01012 5 01013 6 01013 7 01014 ``` I need the following output ``` Id Code ----------------- 1 01012 2 01012A 3 01012B 4 01012C 5 01013 6 01013A 7 01014 ```
**SQL Server 2012+ Solution** *Can be adapted to 2008 by replacing `CONCAT` with `+` and `CHOOSE` with `CASE`.* Data: ``` CREATE TABLE #tab(ID INT, Code VARCHAR(100)); INSERT INTO #tab SELECT 1, '01012' UNION ALL SELECT 2, '01012' UNION ALL SELECT 3, '01012' UNION ALL SELECT 4, '01012' UNION ALL SELECT 5, '01013' UNION ALL SELECT 6, '01013' UNION ALL SELECT 7, '01014'; ``` Query: ``` WITH cte AS ( SELECT ID, Code, [rn] = ROW_NUMBER() OVER(PARTITION BY Code ORDER BY id) FROM #tab ) SELECT ID, Code = CONCAT(Code, CHOOSE(rn, '', 'A', 'B', 'C', 'D', 'E', 'F')) -- next letters FROM cte; ``` **[LiveDemo](https://data.stackexchange.com/stackoverflow/query/372339)**
You can use `ROW_NUMBER`. When Rn = 1, retain the original `Code`; otherwise, append `A`, `B`, and so on. To determine which letter to append, the formula is `CHAR(65 + Rn - 2)`. ``` WITH CTE AS( SELECT *, Rn = ROW_NUMBER() OVER(PARTITION BY Code ORDER BY Id) FROM tbl ) SELECT Id, Code = CASE WHEN Rn = 1 THEN Code ELSE Code + CHAR(65 + Rn - 2) END FROM CTE ```
Add A,B,C letters to Duplicate values
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "sql-server-2012", "" ]
I am using a statement as below and get this error: > SELECT Failed. 3771: Illegal expression in WHEN clause of CASE > expression. I had better hopes from Teradata. SQL Server can do it but Teradata can't. How can I work around this? Any solution? ``` sel ( CASE WHEN EXISTS ( sel '1' from VolatileTable Dtb1 where Dtb1.c1=FACT_Table_5MillionRows.C1) THEN "FACTTablew5MillionRows"."CustomColumName" ELSE 'ALL OTHER' END ) (NAMED "CustomColumName" ) from "Db"."FACTTablew5MillionRows" ```
Teradata doesn't like EXISTS in Correlated Scalar Subqueries within a CASE, but you can rewrite it like this: ``` select ( CASE WHEN C1 = ( select MIN(C1) from VolatileTable Dtb1 where Dtb1.c1=ft.C1) THEN ft."CustomColumName" ELSE 'ALL OTHER' END ) (NAMED "CustomColumName" ) from "Db"."FACTTablew5MillionRows" as ft ``` If `VolatileTable.C1` is unique you can remove the `MIN`. But in 95% of cases, logic like this can be replaced by a LEFT JOIN: ``` select ft.*, CASE WHEN Dtb1.c1 IS NOT NULL THEN ft."CustomColumName" ELSE 'ALL OTHER' end as "CustomColumName" from "Db"."FACTTablew5MillionRows" as ft left join VolatileTable Dtb1 on Dtb1.c1=ft.C1 ``` This will return duplicated rows if `VolatileTable.C1` is not unique, in which case you need to change it to: ``` from "Db"."FACTTablew5MillionRows" as ft left join (select distinct C1 from VolatileTable) Dtb1 on Dtb1.c1=ft.C1 ```
Replace ``` WHEN EXISTS (...) ``` By ``` WHEN 1 = (SELECT 1 WHERE EXISTS (...)) ```
teradata SQL tuning pundits - SELECT Failed. 3771: Illegal expression in WHEN clause of CASE expression
[ "", "sql", "case", "teradata", "exists", "sql-tuning", "" ]
I want to write a query which retrieves the number of rows in each table present under a particular owner. I am not very good at writing queries and so I could not figure out the solution on the internet. The query I am writing is: ``` SELECT TABLE_NAME, COUNT(*) FROM DBA_TABLES WHERE TABLE_NAME IN (SELECT TABLE_NAME FROM DBA_TABLES WHERE OWNER='ABC'); ``` I know the query is completely wrong but I have just given it to let you know what exactly I want. I hope it helps. The error which I am getting is: `ORA-00937: not a single-group group function` Please help me in writing the exact query for the same. Thanks!
> Extracting number of records in each table Then you are completely in a wrong direction while querying **DBA\_TABLES**. It will give you the count of tables owned by a user, but not the count of rows in each table. There are three ways: * In pure SQL, the table name must be static. i.e. you will have to list down the table names explicitly and get the count. ``` SELECT COUNT(*) FROM table_1; SELECT COUNT(*) FROM table_2; SELECT COUNT(*) FROM table_3; ... and so on... ``` Using a good text editor, you could do it quickly. * If the statistics are collected up to date, then you could rely on **NUM\_ROWS** in **DBA\_TABLES**. In any case, better gather the statistics first using **DBMS\_STATS**. ``` SQL> SELECT table_name, num_rows FROM dba_tables WHERE owner='SCOTT'; TABLE_NAME NUM_ROWS ---------- ---------- CLASS 0 SALGRADE 5 BONUS 0 DEPT 4 EMP 14 ``` * In PL/SQL, loop through all the table names, create dynamic sql and execute (ab)using **EXECUTE IMMEDIATE**.
When you use a group function (count), you must place all columns which appear in the SELECT and not in a group function into the GROUP BY section: ``` SELECT TABLE_NAME, COUNT(*) FROM DBA_TABLES WHERE OWNER='ABC' GROUP BY TABLE_NAME; ``` BUT this query will return nothing interesting to you. It is not the number of rows; it is the number of tables in that schema.
Extracting number of records in each table
[ "", "sql", "oracle", "ora-00937", "" ]
I want to display the number of reminders in queue as a statistic on my website. That statistic is the number of rows in my table. What I have been doing in the past is having a separate table that caches the number. The cached table has just one row and one column containing the number of rows in the reminder table. Every time a new reminder is added, I have another query that increments the value by one. I have recently started incorporating the `count()` function into my project. My question is: is it better performance-wise to perform a `count()` operation on the table to display the current number of reminders every time someone loads a page (the statistic is displayed in the footer), or is it faster to have that number already stored in another table so that I am just reading from that table? At what point would it be faster to use a cached value rather than the count function?
As with most optimization-related questions, the answer is: well, it depends. If your table uses the MyISAM table type, then the number of rows is already cached in the table itself and count(\*) without a where clause will read that number. If you use the InnoDB table engine and you have lots of inserts and fewer selects, then maintaining the count number will be more expensive than counting the rows. If you do not have too many inserts, then using a cached number will probably be faster, since InnoDB is relatively slow on count(\*) without a where clause. See the [mysql performance blog on count(\*)](https://www.percona.com/blog/2006/12/01/count-for-innodb-tables/) for a more detailed explanation on InnoDB.
If you are thinking of caching row counts, you probably shouldn't and you probably don't need it. There is a built in mechanism [**SHOW TABLE STATUS**](http://dev.mysql.com/doc/refman/5.5/en/show-table-status.html) Part of the output of that query includes: > The number of rows. Some storage engines, such as MyISAM, store the > exact count. For other storage engines, such as InnoDB, this value is > an approximation, and may vary from the actual value by as much as 40 > to 50%. In such cases, use SELECT COUNT(\*) to obtain an accurate > count. > > The Rows value is NULL for tables in the INFORMATION\_SCHEMA database. This paragraph also answers your question about the efficiency of SELECT COUNT(\*) - on MyISAM tables it's fast, it does not depend on the number of rows in the table because the internal counter is used. How does [innodb](http://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html) differ? > Innodb does not keep an internal count of rows in a table because concurrent > transactions might “see” different numbers of rows at the same time. > To process a SELECT COUNT(\*) FROM t statement, InnoDB scans an index > of the table, which takes some time if the index is not entirely in > the buffer pool. If your table does not change often, using the MySQL > query cache is a good solution. To get a fast count, you have to use a > counter table you create yourself and let your application update it > according to the inserts and deletes it does. If an approximate row > count is sufficient, SHOW TABLE STATUS can be used. Notice that this part of the documentation does speak of caching the count. But you will notice that if there is an index that covers the table completely the count(\*) query is still fast. Since you naturally have a primary key and that primary key is likely to be in a buffer at least partially, the performance impact will be negligible. Note that the story is completely different in the other popular open source database, Postgresql. There count(\*) slows down in proportion to the table size. Thankfully in mysql it isn't so bad. **In conclusion: Since a cached row count is only approximate, you can just use show table status instead.**
Recount or cache count of mysql table
[ "", "mysql", "sql", "caching", "" ]
For simplicity, say I have a column 'Hour' which is either the value 10 or 12. I wish to update a new column, Hour\_type, which will be 'A' if the Hour value is 10, or 'B' if 12, and so on. I can output a column, 'Hour\_type', by using CASE, as follows ``` SELECT CASE WHEN Hour = 10 then 'A' WHEN hour = 12 then 'B' else 'c' end as Hour_type from Traffic_Counts ``` This outputs the correct answer but does not insert the values into the table. I wish to set a column which exists in the table with these values. ``` SELECT CASE WHEN Hour = 10 then 'A' WHEN hour = 12 then 'B' end as Hour_type from Traffic_Counts set Hour_type = Hour_type ``` This results in a syntax error. In pseudocode, I am trying to add an 'if' to this simple column update ``` update table set Hour_type = 'a' if Hour = 10, 'b' if Hour = 12; ```
The `case [...] end` block is an expression. `update` is used to `set` columns from expressions. So, just take the entire `case [...] end` block and make that the right-hand-side of an `update` to your desired column. This would work for any other `select`able expression, too (barring conflicting types, etc.)
Use this, but I am not sure what you are doing. This returns 'a', 'b', or 'X' when the HOUR is not 10 or 12: ``` update table set Hour_type = IF(HOUR = 10 , 'a', IF(HOUR = 12, 'b', 'X')); ```
Using CASE to update column value depending on other column values
[ "", "mysql", "sql", "" ]
I have one table like ``` product_id tag_id value 1 1 10 1 2 51 1 3 47 2 1 15 2 2 59 2 3 44 3 1 10 3 2 51 3 3 47 4 1 10 4 2 12 4 3 55 ``` I want to create a query that returns the distinct product ids that meet specific criteria from ALL three tag ids. For example, I want the product ids that have tag\_id 1 = 10 and tag\_id 2 = 51 and tag\_id 3 = 47. Thanks
Either do a `GROUP BY`, use `HAVING` to make sure all 3 different tag\_id/value combos are included. ``` SELECT product_id FROM tablename WHERE (tag_id = 1 AND value = 10) OR (tag_id = 2 AND value = 51) OR (tag_id = 3 AND value = 47) group by product_id having count(distinct tag_id) >= 3 ``` Or, do a double self join: ``` select distinct t1.product_id from (select product_id from tablename where tag_id = 1 AND value = 10) t1 join (select product_id from tablename where tag_id = 2 AND value = 51) t2 on t1.product_id = t2.product_id join (select product_id from tablename where tag_id = 3 AND value = 47) t3 on t2.product_id = t3.product_id ```
This is usually done using `HAVING`: ``` SELECT product_id FROM tablename WHERE (tag_id = 1 AND value = 10) OR (tag_id = 2 AND value = 51) OR (tag_id = 3 AND value = 47) GROUP BY product_id HAVING COUNT(*) = 3 -- number of searched combinations ``` This assumes that the combination`tag_id`/`value` is unique, otherwise you have to change the count to add a distinct like `COUNT(DISTINCT tag_id)`
Sql return distinct rows with specific values
[ "", "sql", "distinct-values", "" ]
Please help me solve this problem: you are given a table containing two columns, Name and Occupation, where Occupation is one of the following: ``` Doctor Professor Singer Actor ``` Write a query to output the names underneath the corresponding occupation in the following format: ``` +--------+-----------+--------+------+ | Doctor | Professor | Singer | Actor| +--------+-----------+--------+------+ ``` Names must be listed in alphabetically sorted order. Sample Input ``` Name Occupation Meera Singer Ashely Professor Ketty Professor Christeen Professor Jane Actor Jenny Doctor Priya Singer ``` Sample Output ``` Jenny Ashley Meera Jane Samantha Christeen Priya Julia NULL Ketty NULL Maria ``` Note: print "NULL" when there are no more names corresponding to an occupation. I tried using: ``` SELECT * FROM ( SELECT [Name], [Occupation] FROM occupations ) AS source PIVOT ( max([Name]) FOR [occupation] IN ([Doctor], [Professor], [Singer], [Actor]) ) as pvt; ``` which gives the following output: ``` Priya Priyanka Kristeen Samantha ``` How can I fix it?
You just need to give each name a row number based on its occupation, ordered alphabetically, then include that row number in your pivot query. ``` CREATE TABLE Occupations ( NAME VARCHAR(MAX), Occupation VARCHAR(MAX) ) INSERT INTO Occupations VALUES ('Samantha','Doctor'), ('Julia','Actor'), ('Maria','Actor'), ('Meera','Singer'), ('Ashley','Professor'), ('Ketty','Professor'), ('Christeen','Professor'), ('Jane','Actor'), ('Jenny','Doctor'), ('Priya','Singer'); SELECT [Doctor], [Professor], [Singer], [Actor] FROM (SELECT ROW_NUMBER() OVER (PARTITION BY Occupation ORDER BY Name) rn, [Name], [Occupation] FROM Occupations ) AS source PIVOT (MAX(Name) FOR [occupation] IN ([Doctor],[Professor],[Singer],[Actor])) as pvt ORDER BY rn DROP TABLE Occupations ``` -- Edit: We need to enclose the subquery after PIVOT within parentheses "()" to make it work on SQL Server.
I tried this in **Oracle**, seemed easier to comprehend: ``` SELECT min(Doctor), min(Professor), min(Singer), min(Actor) FROM ( Select ROW_NUMBER() OVER (PARTITION BY Occupation order by Name) rn, CASE WHEN Occupation = 'Doctor' then Name end as Doctor, CASE WHEN Occupation = 'Professor' then Name end as Professor, CASE WHEN Occupation = 'Singer' then Name end as Singer, CASE WHEN Occupation = 'Actor' then Name end as Actor from OCCUPATIONS order by Name) a group by rn order by rn; ```
How to use pivot in sql server (without aggregates )?
[ "", "sql", "sql-server", "database", "" ]
I have this string `1111-2222-3-4-55-12345678901234567 1` in a MySQL table field. What I need to do is separate the first 5 values, which are delimited with a `-`. That is, I need to extract: ``` 1111 2222 3 4 55 ```
Even though I did not test it, this should solve your problem. ``` SELECT `mystring`, SUBSTRING_INDEX(mystring,'-',1) AS part1, SUBSTRING_INDEX(SUBSTRING_INDEX(mystring,'-',2),'-',-1) AS part2, SUBSTRING_INDEX(SUBSTRING_INDEX(mystring,'-',3),'-',-1) AS part3, SUBSTRING_INDEX(SUBSTRING_INDEX(mystring,'-',4),'-',-1) AS part4, SUBSTRING_INDEX(SUBSTRING_INDEX(mystring,'-',5),'-',-1) AS part5 FROM my_table; ------------------------------------------------------------------------------ | mystring | part1 | part2 | part3 | part4 | part5 | ------------------------------------------------------------------------------ | 1111-2222-3-4-55-12345678901234567 | 1111 | 2222 | 3 | 4 | 55 | ``` This splits the text at an increasing index count and re-splits from the last index for each of part2 through part5.
``` $stringElements = explode('-', $string); echo $stringElements[0];// 1111 echo $stringElements[1];// 2222 echo $stringElements[2];// 3 echo $stringElements[3];// 4 echo $stringElements[4];// 55 $stringElements[5];// 12345678901234567 1 ```
Need to Separate the String with a separator of - in MYSQL
[ "", "mysql", "sql", "" ]
There is a column `name` which I want to use to make a new column. Example: ``` name asd_abceur1mz_a asd_fxasdrasdusd3mz_a asd_abceur10yz_a asd_fxasdrasdusd15yz_a ``` The length of the column is not fixed, so I assumed I have to use charindex to have a reference point from which I could trim. **What I want:** at the end there is always `z_a`, and I need to place in a separate column the left part from `z_a`, like this: ``` nameNew eur1m usd3m eur10y usd15y ``` The problem is that the number (in this example 1, 3, 10, 15) has 1 or 2 digits. I need to extract the information from `name` to `nameNew`. After that I was thinking to make it easier to read and to output it like this: ``` eur_1m usd_3m eur_10y usd_15y ``` I tried using a combination of substring and charindex, but so far without success. ``` SELECT * , SUBSTRING(name, 1, ( CHARINDEX('z_a', NAME) - 1 )) AS nameNew FROM myTable ``` This is for the first step, trimming the string; for the 2nd step (making it easier to read) I don't know how to target the digit and place an `_`. Any help would be appreciated. Using SQL Server 2012. ## edit: First of all, thank you for your time and solutions. But your queries, more or less, even if they are working for 1 or 2 digits, have the same problem. Consider this situation: ``` name ab_dertEUR03EUR10YZ_A ``` If `eur` appears twice in the string, then how can I eliminate this? Sorry for not including this in my original post, but I forgot that situation is possible and now that's a problem. ## edit: test your queries here, on this example: <http://www.sqlfiddle.com/#!3/21610/1> **Please note** that at the end it can be any combination of 1 or 2 digits and the letter y or m. Ex: `ab_rtgtEUR03EUR2YZ_A` , `ab_rtgtEUR03EUR2mZ_A`, `ab_rtgtEUR03EUR20YZ_A`, `ab_rtgtEUR03EUR20mZ_A` Some values for testing: `('ex_CHFCHF01CHF10YZ_A'), ('ab_rtgtEUR03EUR2YZ_A'), ('RON_asdRON2MZ_A'), ('tg_USDUSD04USD5YZ_A');` My understanding of your queries is that they perform something similar to this (or at least they should): ``` ex_CHFCHF01CHF10YZ_A -> ex_CHFCHF01CHF10Y -> Y01FHC10FHCFHC -> Y01FHC -> CHF01Y -> CHF_01Y RON_asdRON2MZ_A -> RON_asdRON2M -> M2NORdsa_ron -> M2NOR -> RON2M -> RON_2M ```
This works for one or two digits: ``` stuff(case when name like '%[0-9][0-9]_z[_]a' then left(right(name, 9), 6) when name like '%[0-9]_z[_]a' then left(right(name, 8), 5) end, 4, 0, '_') ```
You can use a combination of `substring`, `reverse` and `charindex`. [SQL Fiddle](http://www.sqlfiddle.com/#!6/6f3dc/1) ``` select substring(namenew,1,3) + '_' + substring(namenew, 4, len(namenew)) from ( select case when name like '%[0-9][0-9]_z[_]a' then reverse(substring(reverse(name), charindex('a_z',reverse(name)) + 3, 6)) when name like '%[0-9]_z[_]a' then reverse(substring(reverse(name), charindex('a_z',reverse(name)) + 3, 5)) end as namenew from myTable ) t ```
using charindex in a substring to trim a string
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have a report written out of SQL that I am pulling into an Excel workbook (hence the multiple tags) with patient information that I want to filter into other values. For instance, my workbook looks something like this: ``` Name Acct # Date Shmo,Joe 12345 1/1/15 Shmo,Joe 49738 1/2/15 Shmo,Joe 19725 2/1/15 Smith,Jane 59785 1/5/15 Smith,Jane 36740 3/2/15 ``` This is all well and good until now, when they want this data filtered to "remove" the patient name but retain the fact that multiple accounts are associated with a patient. For example, I would like a piece of code that will change the above to: ``` Name Acct # Date Patient 1 12345 1/1/15 Patient 1 49738 1/2/15 Patient 1 19725 2/1/15 Patient 2 59785 1/5/15 Patient 2 36740 3/2/15 ``` I'm hoping it can be done either in SQL or in Excel. Even if it is just a unique number (1, 2, etc.) without the "Patient" in front, that would be great.
Answer for Excel. This adds a new column in D. Change to a different column if necessary. ``` Sub Zach() Dim lastRow As Long, i As Long, n As Long, s As Long lastRow = Worksheets("Sheet6").Cells(Worksheets("Sheet6").Rows.Count, "A").End(xlUp).Row 'change sheet name as needed n = 1 s = 2 For i = 2 To lastRow If Cells(i + 1, 1) <> Cells(i, 1) Then Range(Cells(s, 4), Cells(i, 4)).Value = "Patient " & n n = n + 1 s = i + 1 End If Next End Sub ```
If you are using SQL Server, then the following will give the alias you want: ``` concat('Patient ',DENSE_RANK() over(order by name)) ```
Substitute codes for names
[ "", "sql", "excel", "vba", "" ]
I have a table `company` with a column `customerID`, and I want every customer id from 80000 to 80999. But the column `customerID` is varchar and some of the values contain letters. So I must put the `customerID` values into quotes (' '), and when I try this query: ``` select * from company where customerID between '80000' and '80999' ``` it also returns `customerID` values such as 800. Any ideas how this issue can be solved?
``` where customerID between '80000' and '80999' and character_length(customerID) = 5 -- only five digit IDs ```
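Since `between` on a varchar compares character by character, a small sketch with hypothetical sample data shows why the length guard matters (`character_length` is the standard/MySQL spelling; SQL Server would use `LEN` instead):

```sql
-- Hypothetical sample data
CREATE TABLE company_demo (customerID VARCHAR(20));
INSERT INTO company_demo VALUES ('801'), ('80000'), ('80500'), ('80999'), ('81000');

-- Plain string BETWEEN also matches the three-character id '801',
-- because '801' sorts between '80000' and '80999' as text.
SELECT customerID FROM company_demo
WHERE customerID BETWEEN '80000' AND '80999';

-- With the length guard, only the real five-digit ids should remain:
-- 80000, 80500, 80999.
SELECT customerID FROM company_demo
WHERE customerID BETWEEN '80000' AND '80999'
  AND CHARACTER_LENGTH(customerID) = 5;
```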
You could use regular expressions to format. Something like this should get you started [Query to get only numbers from a string](https://stackoverflow.com/questions/16667251/query-to-get-only-numbers-from-a-string)
SQL - WHERE between on varchar column
[ "sql", "where-clause", "between" ]
I have a table with a column of delimiter-separated values, and I want to take them as separate values and place them inside the IN clause of another select statement. I've tried with CROSS APPLY but didn't get it working properly. The table (T1) looks like: ``` Empcode Eid Unitcodes 007645 164 UNT111$UNT112$UNT113$ 000645 162 UNT100$UNT102$UNT20$UNT97$UNT98$UNT99$UNT136$ 002585 163 UNT25$UNT39$ 003059 180 UNT76$ 000559 165 UNT109$UNT114$UNT166$UNT27$UNT60$UNT103$UNT58$ 003049 175 UNT106$UNT54$UNT86$UNT87$UNT130$UNT131$UNT132$ 003049 177 UNT51$UNT56$UNT91$UNT92$ ``` and I need a query something like: ``` select * from T2 where empcode='abcd' and unitcode in ('UNT111','UNT112','UNT113') -- only that particular emp's Unitcodes from table T1 ```
You can do it as follows: **QUERY** ``` SELECT * FROM #test2 WHERE ID IN ( SELECT LTRIM(RTRIM(m.n.value('.[1]','varchar(8000)'))) AS Unitcodes FROM ( SELECT CAST('<XMLRoot><RowData>' + REPLACE(Unitcodes,'$','</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x FROM #test )t CROSS APPLY x.nodes('/XMLRoot/RowData')m(n) ) ``` **SAMPLE DATA** ``` CREATE TABLE #test ( Empcode INT, Eid INT, Unitcodes NVARCHAR(MAX) ) INSERT INTO #test VALUES (000559, 165, 'UNT109$UNT114$UNT166$UNT27$UNT60$UNT103$UNT58$'), (003049, 175, 'UNT106$UNT54$UNT86$UNT87$UNT130$UNT131$UNT132$') CREATE TABLE #test2 ( ID NVARCHAR(MAX) ) INSERT INTO #test2 VALUES ('UNT54'),('UNT130'),('UNT999') ``` **OUTPUT** ``` ID UNT54 UNT130 ```
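On SQL Server 2016 and later, the built-in `STRING_SPLIT` function avoids the XML trick entirely. A sketch against the same temporary tables as above (note the trailing `$` in `Unitcodes` produces an empty element that must be filtered out):

```sql
-- SQL Server 2016+ alternative, using the same hypothetical #test/#test2 tables
SELECT t2.*
FROM #test2 t2
WHERE t2.ID IN (
    SELECT s.value
    FROM #test t
    CROSS APPLY STRING_SPLIT(t.Unitcodes, '$') s
    WHERE s.value <> ''   -- drop the empty element after the trailing '$'
);
```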
Create a function: this function will split your values. ``` Create FUNCTION [dbo].[fn_Split](@text varchar(8000), @delimiter varchar(20)) RETURNS @Strings TABLE ( position int IDENTITY PRIMARY KEY, value varchar(8000) ) AS BEGIN DECLARE @index int SET @index = -1 WHILE (LEN(@text) > 0) BEGIN SET @index = CHARINDEX(@delimiter , @text) IF (@index = 0) AND (LEN(@text) > 0) BEGIN INSERT INTO @Strings VALUES (@text) BREAK END IF (@index > 1) BEGIN INSERT INTO @Strings VALUES (LEFT(@text, @index - 1)) SET @text = RIGHT(@text, (LEN(@text) - @index)) END ELSE SET @text = RIGHT(@text, (LEN(@text) - @index)) END RETURN END ``` then write your query (the sample data uses '$' as the delimiter, and the table-valued function is queried via a subselect): ``` select * from T2 where empcode = 'abcd' and unitcode in (select value from dbo.fn_Split(@unitcodes, '$')) -- @unitcodes holds that employee's delimited Unitcodes string from T1 ```
Comma separated values In An IN statement (SQL)
[ "sql", "sql-server", "cross-apply" ]
I need to inactivate a bunch of users. I have a list like this (sample): N0120454 N0219746 N0074342 N0203867 N0155928 N0025471 N0017467 N0239158 N0191759 N0007671 ``` UPDATE dbo.Users set IsActive = 0, IsLocked = 1 where UserName = 'N0007671' ``` Is there a way to perform this without doing it UserId by UserId?
If you have a hard-coded list the easiest way is to do: ``` UPDATE dbo.Users set IsActive = 0, IsLocked = 1 where UserName IN ('N0120454', 'N0219746', 'N0074342', 'N0203867', 'N0155928', 'N0025471', 'N0017467', 'N0239158', 'N0191759', 'N0007671') ``` If you expect to pass in a single string value containing a bunch of them, then it is simpler to loop through the values in the app. Otherwise you'll need to use dynamic SQL, or write a function that parses the string into multiple values and uses a temp table to join to. Neither is standard or pretty.
You can use the [in operator](http://www.w3schools.com/sql/sql_in.asp): ``` UPDATE dbo.Users set IsActive = 0, IsLocked = 1 where UserName in ('N0007671', 'N0219746', 'N0074342' ...) ``` If your values are in a database table, you have 2 ways to do it: 1 - Using [IN](http://www.w3schools.com/sql/sql_in.asp) ``` UPDATE dbo.Users SET IsActive = 0, IsLocked = 1 WHERE UserName IN ( SELECT username FROM your_table WHERE conditions ) ``` 2 - Using [EXISTS](http://www.w3resource.com/sql/special-operators/sql_exists.php) ``` UPDATE dbo.Users SET IsActive = 0, IsLocked = 1 WHERE EXISTS ( SELECT NULL FROM your_table yt WHERE yt.username = dbo.Users.UserName and other_conditions ) ```
Updating Based on List
[ "sql", "sql-update" ]
Okay, so I have 3 tables: users ``` CREATE TABLE IF NOT EXISTS `users` ( `user_id` int(11) NOT NULL, `user_username` varchar(25) NOT NULL, `user_email` varchar(100) NOT NULL, `user_password` varchar(255) NOT NULL, `user_enabled` int(1) NOT NULL DEFAULT '1', `user_staff` varchar(15) NOT NULL DEFAULT '', `user_account_type` varchar(20) NOT NULL DEFAULT '0', `user_registerdate` date NOT NULL, `user_twofactor` int(11) NOT NULL DEFAULT '0', `user_twofackey` varchar(255) NOT NULL, `user_forgot_email_code` varchar(255) NOT NULL, `user_emailverified` varchar(25) NOT NULL DEFAULT 'unverified', `user_banned` varchar(25) NOT NULL DEFAULT 'unbanned', `user_has_avatar` int(11) NOT NULL DEFAULT '0', `user_has_banner` int(11) NOT NULL DEFAULT '0' ) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=latin1; -- -- Dumping data for table `users` -- INSERT INTO `users` (`user_id`, `user_username`, `user_email`, `user_password`, `user_enabled`, `user_staff`, `user_account_type`, `user_registerdate`, `user_twofactor`, `user_twofackey`, `user_forgot_email_code`, `user_emailverified`, `user_banned`, `user_has_avatar`, `user_has_banner`) VALUES (1, 'fhfhfhf', 'lol@gmail.com', 'removed', 1, 'admin', 'Business', '2015-07-21', 0, '0', '0', 'unverified', 'unbanned', 1, 0); ``` company ``` CREATE TABLE IF NOT EXISTS `company` ( `company_id` int(11) NOT NULL, `company_name` varchar(100) NOT NULL, `company_user` int(11) NOT NULL, `company_enabled` varchar(50) NOT NULL DEFAULT 'enabled', `company_has_avatar` int(5) NOT NULL DEFAULT '0', `company_has_banner` int(5) NOT NULL DEFAULT '0' ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1; -- -- Dumping data for table `company` -- INSERT INTO `company` (`company_id`, `company_name`, `company_user`, `company_enabled`, `company_has_avatar`, `company_has_banner`) VALUES (1, 'Rad', 3, 'enabled', 0, 0); ``` training\_company ``` CREATE TABLE IF NOT EXISTS `training_company` ( `training_company_id` int(11) NOT NULL, `training_company_name` varchar(100) 
NOT NULL, `training_company_user` int(11) NOT NULL, `training_company_enabled` varchar(50) NOT NULL DEFAULT 'enabled', `training_company_has_avatar` int(5) NOT NULL DEFAULT '0', `training_company_has_banner` int(5) NOT NULL DEFAULT '0' ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1; -- -- Dumping data for table `training_company` -- INSERT INTO `training_company` (`training_company_id`, `training_company_name`, `training_company_user`, `training_company_enabled`, `training_company_has_avatar`, `training_company_has_banner`) VALUES (1, '123', 3, 'enabled', 0, 0), (2, '123', 3, 'enabled', 0, 0), (3, '123', 3, 'enabled', 0, 0); ``` Each has a profile with an auto-incrementing id, so rows in different tables can share the same id; I distinguish them via type, so a user would be 'user', a training company 'training' and a company 'company', and I am allowing a user to follow any of them. SQL: ``` SELECT * FROM timeline_status LEFT JOIN users ON timeline_status.timeline_status_user = users.user_id LEFT JOIN timeline_likes ON timeline_status.timeline_status_id = timeline_likes.timeline_likes_main_status LEFT JOIN friends ON timeline_status.timeline_status_user = friends.friends_friend LEFT JOIN user_personal_information ON timeline_status.timeline_status_user = user_personal_information.user_personal_information_user LEFT JOIN following ON timeline_status.timeline_status_user = following.following WHERE timeline_status_enabled = 'enabled' AND timeline_status.timeline_status_type = 'user' AND (timeline_status.timeline_status_user = :status_user OR friends.friends_user = :friend_user) AND (timeline_status_privacy = 'onlyme' AND timeline_status_user = :status_user2 OR timeline_status_privacy = 'public' OR timeline_status_privacy = 'private') GROUP BY timeline_status_id ORDER BY timeline_status_date DESC LIMIT :start, :end ``` So I'd want to select from users if type = user and a row exists in followers and/or friends, and select from companies or training via followers if type = company or
training. My status row has the company/user/training id and the type, so I know which table to select the 'user' from. My following table: ``` CREATE TABLE IF NOT EXISTS `following` ( `following_id` int(11) NOT NULL, `following_user` int(11) NOT NULL, `following_type` varchar(50) NOT NULL, `following` int(11) NOT NULL ) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1; -- -- Dumping data for table `following` -- INSERT INTO `following` (`following_id`, `following_user`, `following_type`, `following`) VALUES (5, 3, 'company', 14), (8, 3, 'training', 1); ``` timeline_status: ``` CREATE TABLE IF NOT EXISTS `timeline_status` ( `timeline_status_id` int(11) NOT NULL, `timeline_status_user` int(11) NOT NULL, `timeline_status_privacy` varchar(25) NOT NULL DEFAULT 'public', `timeline_status_type` varchar(25) NOT NULL DEFAULT 'user', `timeline_status_post` text NOT NULL, `timeline_status_date` datetime NOT NULL, `timeline_status_enabled` varchar(25) NOT NULL DEFAULT 'enabled' ) ENGINE=InnoDB AUTO_INCREMENT=123 DEFAULT CHARSET=latin1; -- -- Dumping data for table `timeline_status` -- INSERT INTO `timeline_status` (`timeline_status_id`, `timeline_status_user`, `timeline_status_privacy`, `timeline_status_type`, `timeline_status_post`, `timeline_status_date`, `timeline_status_enabled`) VALUES (98, 3, 'private', 'user', 'hello', '2015-10-02 16:29:48', 'enabled'), (99, 3, 'onlyme', 'user', 'yo', '2015-10-02 16:29:56', 'enabled'), (100, 3, 'public', 'user', 'fghyjt', '2015-10-02 17:51:28', 'enabled'), (101, 1, 'private', 'training', 'teest..', '2015-10-03 14:26:45', 'enabled'), (102, 15, 'public', 'company', 'hello', '2015-10-06 13:32:30', 'enabled'); ``` So how can I do it so that if the following type = company it selects from company, if the following type = training it selects from training, and if the following type = user, the SQL stays as it is at the moment? Because at the moment, I am following a company with the id of 1, but there's a user with an id of 1 too, so I am getting their statuses.
Your best bet would be to use the `UNION` operator to mix them all in 1 table, and then query based on type. For instance, you could do something like this: ``` SELECT f.*, t.training_company_name as name, null as staff, t.training_company_enabled as enabled, t.training_company_has_banner as banner, t.training_company_has_avatar as avatar FROM following f INNER JOIN training_company t on f.following_user = t.training_company_user AND f.following_type='training' UNION ALL SELECT f.*, c.company_name as name, null as staff, c.company_enabled as enabled, c.company_has_banner as banner, c.company_has_avatar as avatar FROM following f INNER JOIN company c on f.following_user = c.company_user AND f.following_type='company' UNION ALL SELECT f.*, u.user_username as name, u.user_staff as staff, u.user_enabled as enabled, u.user_has_banner as banner, u.user_has_avatar as avatar FROM following f INNER JOIN users u on f.following_user = u.user_id AND f.following_type='user' ``` And from there you will have a derived table/view that will look like `V_followers(following_id, following_user, following_type, following, name, staff, enabled, banner, avatar)`. I'm not 100% certain the syntax is MySql-correct, but the idea remains the same.
I think you will need another variable to tell you the type in addition to the user id. `CASE` is an expression, not a predicate, so rather than wrapping the conditions in a `CASE` you can combine them with `AND`/`OR` to the same effect: ``` WHERE (type = 'USER' AND timeline_status.timeline_status_user = id) OR (type = 'FRIENDS' AND friends.friends_user = id) OR (type = 'FOLLOWING' AND following.following_user = id) ```
Trying to left join 3 tables where the id could be the same, I am defining it via the type
[ "mysql", "sql" ]
I'm trying to split a column, in SQL, into multiple columns. My data looks like this: ``` Column1 | Column2 | Column3 ABC | 123 | User7;User9 nbm | qre | User1;User2;User3 POI | kjh | User1;User4;User5;User9 ``` I need to split the Column3 into 4 new columns - each column containing the first "User". Each value within this column is separated by a semi-colon. One of the problems I have is that Column3 can have any number of users listed (all separated by semi-colons), so I don't know how many "new" columns I would need added. The final output would need to look like: ``` Column1 | Column2 | Column3 | NewColumn1 | NewColumn2 | ETC. ```
Besides the fact that this is bad design here is a solution: Just paste this into an empty query window and execute. Adapt to your needs... ``` declare @tbl TABLE(Column1 VARCHAR(15),Column2 VARCHAR(15),Column3 VARCHAR(150)); INSERT INTO @tbl VALUES ('ABC','123','User7;User9') ,('nbm','qre','User1;User2;User3') ,('POI','kjh','User1;User4;User5;User9'); WITH Splitted AS ( SELECT Column1,Column2,CAST('<x>'+REPLACE(Column3,';','</x><x>')+'</x>' AS XML) AS Col3Splitted FROM @tbl ) SELECT Column1,Column2,Col3Splitted ,Col3Splitted.value('x[1]','varchar(max)') AS Column4 ,Col3Splitted.value('x[2]','varchar(max)') AS Column5 ,Col3Splitted.value('x[3]','varchar(max)') AS Column6 ,Col3Splitted.value('x[4]','varchar(max)') AS Column7 /*Add as many as you need*/ FROM Splitted ``` Following the discussion with @SeanLang I add this dynamic approach. It will count the highest number of semicolons in Column3 and build the statement above dynamically. ``` CREATE TABLE #tbl (Column1 VARCHAR(15),Column2 VARCHAR(15),Column3 VARCHAR(150)); INSERT INTO #tbl VALUES ('ABC','123','User7;User9') ,('nbm','qre','User1;User2;User3') ,('POI','kjh','User1;User4;User5;User9'); DECLARE @sql VARCHAR(MAX)= 'WITH Splitted AS ( SELECT Column1,Column2,CAST(''<x>''+REPLACE(Column3,'';'',''</x><x>'')+''</x>'' AS XML) AS Col3Splitted FROM #tbl ) SELECT Column1,Column2'; DECLARE @counter INT = 0; WHILE @counter<=(SELECT MAX(LEN(Column3) - LEN(REPLACE(Column3, ';', ''))) from #tbl) BEGIN SET @counter=@counter+1; SET @sql=@sql+',Col3Splitted.value(''x[' + CAST(@counter AS VARCHAR(10)) + ']'',''varchar(max)'') AS Column' + CAST(@counter+3 AS VARCHAR(10)); END SET @sql=@sql+ ' FROM Splitted;'; EXEC (@sql); DROP TABLE #tbl; ```
Here is a method that will be 100% dynamic. It will produce any number of columns based solely on the data it finds. The prevailing method for this around SO is a dynamic pivot. I prefer a dynamic crosstab as I find the syntax less obtuse and it has a slight benefit from a performance standpoint too. :) Here is an article which explains this methodology very well. <http://www.sqlservercentral.com/articles/Crosstab/65048/> Also, I am using the DelimitedSplit8K function originally penned by Jeff Moden and improved over time through the community. You can read about it and find the code for it here. <http://www.sqlservercentral.com/articles/Tally+Table/72993/> ``` if OBJECT_ID('tempdb..#Something') is not null drop table #Something; create table #something ( Column1 varchar(5) , Column2 varchar(5) , Column3 varchar(50) ); insert #something select 'ABC', '123', 'User7;User9' union all select 'nbm', 'qre', 'User1;User2;User3' union all select 'POI', 'kjh', 'User1;User4;User5;User9'; declare @StaticPortion nvarchar(2000) = 'with orderedResults as ( select s.Column1 , s.Column2 , x.Item , x.ItemNumber from #something s cross apply dbo.DelimitedSplit8K(Column3, '';'') x ) select Column1 , Column2 '; declare @DynamicPortion nvarchar(max) = ''; WITH E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)), E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max cteTally(N) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4 ) select @DynamicPortion = @DynamicPortion + ', MAX(Case when ItemNumber = ' + CAST(N as varchar(6)) + ' then Item end) as Subject' + CAST(N as varchar(6)) + CHAR(10) from cteTally t where t.N <= ( select top 1 COUNT(*) from #something s cross apply dbo.DelimitedSplit8K(Column3, ';') x group by Column1 Order by COUNT(*) desc ); declare @FinalStaticPortion nvarchar(2000) = ' from orderedResults group by Column1, Column2 order by Column1, Column2'; declare 
@SqlToExecute nvarchar(max) = @StaticPortion + @DynamicPortion + @FinalStaticPortion; select @SqlToExecute; --uncomment the following line when you are sure this is what you want to execute. --exec sp_executesql @SqlToExecute; ```
Split semicolon-delimited column into multiple columns
[ "sql", "sql-server", "sql-server-2008" ]
I have the following table: ``` crit_id | criterium | val1 | val2 ----------+------------+-------+-------- 1 | T01 | 9 | 9 2 | T02 | 3 | 5 3 | T03 | 4 | 9 4 | T01 | 2 | 3 5 | T02 | 5 | 1 6 | T03 | 6 | 1 ``` I need to convert the values in 'criterium' into columns as 'cross product' with val1 and val2. So the result has to lool like: ``` T01_val1 |T01_val2 |T02_val1 |T02_val2 | T03_val1 | T03_val2 ---------+---------+---------+---------+----------+--------- 9 | 9 | 3 | 5 | 4 | 9 2 | 3 | 5 | 1 | 6 | 1 ``` Or to say differently: I need every value for all criteria to be in one row. This is my current approach: ``` select case when criterium = 'T01' then val1 else null end as T01_val1, case when criterium = 'T01' then val2 else null end as T01_val2, case when criterium = 'T02' then val1 else null end as T02_val1, case when criterium = 'T02' then val2 else null end as T02_val2, case when criterium = 'T03' then val1 else null end as T03_val1, case when criterium = 'T03' then val2 else null end as T04_val2, from crit_table; ``` But the result looks not how I want it to look like: ``` T01_val1 |T01_val2 |T02_val1 |T02_val2 | T03_val1 | T03_val2 ---------+---------+---------+---------+----------+--------- 9 | 9 | null | null | null | null null | null | 3 | 5 | null | null null | null | null | null | 4 | 9 ``` What's the fastest way to achieve my goal? *Bonus question:* I have 77 criteria and seven different kinds of values for every criterium. So I have to write 539 `case` statements. Whats the best way to create them dynamically? I'm working with PostgreSql 9.4
## Prepare for crosstab In order to use [`crosstab()`](http://www.postgresql.org/docs/9.4/static/tablefunc.html) function, the data must be reorganized. You need a dataset with three columns (`row number`, `criterium`, `value`). To have all values in one column you must *unpivot* two last columns, changing at the same time the names of criteria. As a row number you can use `rank()` function over partitions by new criteria. ``` select rank() over (partition by criterium order by crit_id), criterium, val from ( select crit_id, criterium || '_v1' criterium, val1 val from crit union select crit_id, criterium || '_v2' criterium, val2 val from crit ) sub order by 1, 2 rank | criterium | val ------+-----------+----- 1 | T01_v1 | 9 1 | T01_v2 | 9 1 | T02_v1 | 3 1 | T02_v2 | 5 1 | T03_v1 | 4 1 | T03_v2 | 9 2 | T01_v1 | 2 2 | T01_v2 | 3 2 | T02_v1 | 5 2 | T02_v2 | 1 2 | T03_v1 | 6 2 | T03_v2 | 1 (12 rows) ``` This dataset can be used in `crosstab()`: ``` create extension if not exists tablefunc; select * from crosstab($ct$ select rank() over (partition by criterium order by crit_id), criterium, val from ( select crit_id, criterium || '_v1' criterium, val1 val from crit union select crit_id, criterium || '_v2' criterium, val2 val from crit ) sub order by 1, 2 $ct$) as ct (rank bigint, "T01_v1" int, "T01_v2" int, "T02_v1" int, "T02_v2" int, "T03_v1" int, "T03_v2" int); rank | T01_v1 | T01_v2 | T02_v1 | T02_v2 | T03_v1 | T03_v2 ------+--------+--------+--------+--------+--------+-------- 1 | 9 | 9 | 3 | 5 | 4 | 9 2 | 2 | 3 | 5 | 1 | 6 | 1 (2 rows) ``` ## Alternative solution For 77 criteria \* 7 parameters the above query may be troublesome. If you can accept a bit different way of presenting the data, the issue becomes much easier. 
``` select * from crosstab($ct$ select rank() over (partition by criterium order by crit_id), criterium, concat_ws(' | ', val1, val2) vals from crit order by 1, 2 $ct$) as ct (rank bigint, "T01" text, "T02" text, "T03" text); rank | T01 | T02 | T03 ------+-------+-------+------- 1 | 9 | 9 | 3 | 5 | 4 | 9 2 | 2 | 3 | 5 | 1 | 6 | 1 (2 rows) ```
I agree with Michael's comment that this requirement looks a bit weird, but if you really need it that way, you were on the right track with your solution. It just needs a little bit of additional code (and small corrections wherever val\_1 and val\_2 where mixed up): ``` select sum(case when criterium = 'T01' then val_1 else null end) as T01_val1, sum(case when criterium = 'T01' then val_2 else null end) as T01_val2, sum(case when criterium = 'T02' then val_1 else null end) as T02_val1, sum(case when criterium = 'T02' then val_2 else null end) as T02_val2, sum(case when criterium = 'T03' then val_1 else null end) as T03_val1, sum(case when criterium = 'T03' then val_2 else null end) as T03_val2 from crit_table group by trunc((crit_id-1)/3.0) order by trunc((crit_id-1)/3.0); ``` This works as follows. To aggregate the result you posted into the result you would like to have, the first helpful observation is that the desired result has less rows than your preliminary one. So there's some kind of grouping necessary, and the key question is: "What's the grouping criterion?" In this case, it's rather non-obvious: It's criterion ID (minus 1, to start counting with 0) divided by 3, and truncated. The three comes from the number of different criteria. After that puzzle is solved, it is easy to see that for among the input rows that are aggregated into the same result row, there is only one non-null value per column. That means that the choice of aggregate function is not so important, as it is only needed to return the only non-null value. I used the sum in my code snippet, but you could as well use min or max. As for the bonus question: Use a code generator query that generates the query you need. 
The code looks like this (with only three types of values to keep it brief): ``` with value_table as /* possible kinds of values, add the remaining ones here */ (select 'val_1' value_type union select 'val_2' value_type union select 'val_3' value_type ) select contents from ( select 0 order_id, 'select' contents union select row_number() over () order_id, 'max(case when criterium = '''||criterium||''' then '||value_type||' else null end) '||criterium||'_'||value_type||',' contents from crit_table cross join value_table union select 9999999 order_id, ' from crit_table group by trunc((crit_id-1)/3.0) order by trunc((crit_id-1)/3.0);' contents ) v order by order_id; ``` This basically only uses a string template of your query and then inserts the appropriate combinations of values for the criteria and the val-columns. You could even get rid of the with-clause by reading column names from information\_schema.columns, but I think the basic idea is clearer in the version above. Note that the code generated contains one comma too much directly after the last column (before the from clause). It's easier to delete that by hand afterwards than correcting it in the generator.
How to pivot or 'merge' rows with column names?
[ "sql", "postgresql", "pivot" ]
Excel's datetime values look like 42291.60493, which means MySQL sees them as strings and not as dates. Is there MySQL code that can convert them to a MySQL datetime? (i.e. [like in MS SQL](https://stackoverflow.com/questions/13850605/t-sql-to-convert-excel-date-serial-number-to-regular-date))
I can think of 2 solutions: 1. Convert the dates within Excel to a formatted date string that conforms to MySQL's date and time format, using Excel's text() function. 2. Convert the number to a date by calculation within MySQL (the expression below may be simplified; note the base date 1899-12-30 rather than 1899-12-31, because Excel's serial numbers include a phantom 1900-02-29): ``` select date_add(date_add(date('1899-12-30'), interval floor(@datefromexcel) day), interval floor(86400*(@datefromexcel-floor(@datefromexcel))) second) ```
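As a sanity check, here is the question's sample serial run through that expression (assuming the 1899-12-30 base, which compensates for Excel wrongly treating 1900 as a leap year):

```sql
SET @datefromexcel = 42291.60493;

SELECT DATE_ADD(
         DATE_ADD(DATE('1899-12-30'), INTERVAL FLOOR(@datefromexcel) DAY),
         INTERVAL FLOOR(86400 * (@datefromexcel - FLOOR(@datefromexcel))) SECOND
       ) AS converted;
-- 42291 days after 1899-12-30 is 2015-10-14,
-- and the fractional part adds floor(0.60493 * 86400) = 52265 seconds (14:31:05)
```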
Excel stores date times as a serial number of days; for modern dates the effective epoch is 1899-12-30, because Excel wrongly treats 1900 as a leap year. Unix stores the number of seconds since 1970-01-01, but only non-negative values are allowed. So, for a *date*, you can do ``` select date_add(date('1899-12-30'), interval $Exceldate day ) ``` This doesn't work for fractional days. But, for a unix date time, this would be nice: ``` select $ExcelDate*24*60*60 + unix_timestamp('1899-12-30') ``` But negative values are problematic. So, this requires something like this: ``` select ($ExcelDate - datediff('1970-01-01', '1899-12-30')) * 24*60*60 ``` That is, just count the number of seconds since the Unix cutoff date. Note: this assumes that the date is after 1970-01-01, because MySQL doesn't understand unix dates before the cutoff.
MySQL code to convert Excel datetime
[ "mysql", "sql", "excel", "type-conversion" ]
``` DELETE FROM BIZ WHERE [Orgnl_Cmpltn_Date] BETWEEN '2014-02-31' AND '2014-04-01' ``` This is the `DELETE` statement I wrote. There is an error that says: > Conversion failed when converting date and/or time from character string. I know I have to write the correct date format, but I am not sure how that goes. This question has not been answered elsewhere because the answers I saw did not specify date format (in the context that I am asking for)
You wrote 31st of February... that date doesn't exist. ``` DELETE FROM BIZ WHERE [Orgnl_Cmpltn_Date] BETWEEN '2014-02-28' AND '2014-04-01' ``` For a general idea of how to convert dates: ``` DELETE FROM BIZ WHERE [Orgnl_Cmpltn_Date] BETWEEN CONVERT(date,'2014.02.28',102) and CONVERT(date,'2014.04.01',102) ``` Here you can find the complete list of values for the third parameter of `CONVERT`: <https://msdn.microsoft.com/en-us/library/ms187928.aspx>
Use this instead: ``` DELETE FROM BIZ WHERE [Orgnl_Cmpltn_Date] >= '2014-02-28' AND [Orgnl_Cmpltn_Date] <= '2014-04-01' ``` I don't know if this matters, but February has only 28 or 29 days.
SQL Server - Deleting rows between a date range using SQL. Date conversion fails
[ "sql", "sql-server", "sql-server-2008", "datetime", "sql-delete" ]
I need to write a query that will result with a "fail" condition when one test fails. Testing results are listed in the Fail and Pass columns. One manufacturer product can have many tests. My Data is like this: ``` MFG Name | Code | Breaker | Fail | Pass ----------------------------------------- ABC R 2 0 1 ABC R 1 1 0 ``` Regardless of the Breaker value if one test fails, the entire batch fails. The result I was looking for is as follows: ``` MFG Name | Code | Result ------------------------------------------- ABC R FAIL ``` Any help or insight you can give would be appreciated.
Try: ``` select mfg_name, code, case when sum(fail) > 0 then 'FAIL' else 'PASS' end as result from tbl group by mfg_name, code ``` This assumes you want to show combinations of (`mfg_name` and `code` pairs with no fails as a 'PASS', for only fails you would add `having sum(fail) > 0`)
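If you only want the failing batches in the output (as in the expected result), the same aggregate can move into a `HAVING` clause:

```sql
-- Sketch: list only the mfg_name/code batches with at least one failed test
SELECT mfg_name, code, 'FAIL' AS result
FROM tbl
GROUP BY mfg_name, code
HAVING SUM(fail) > 0;
```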
This is an aggregation query. Assuming `fail` only takes on two values (0 and 1): ``` select MFGName, Code, (case when max(fail) = 1 then 'FAIL' else 'PASS' end) as Result from tests group by MFGName, Code; ```
Sql query to compare the result between multiple records and provide a result
[ "sql", "coalesce", "aggregates" ]
I have a Select query: ``` Select left([Foundation_Account_Name],4) From [dbo].[tbl_Foundation_Account] Where [Foundation_Account_Name] ``` That returns ``` MAZA SUMI APLH ``` I have a second Select query that selects the same fields from a temp table ``` Select left([Foundation Account Name],4) From [dbo].[Import_tbl_RDO] Where [Foundation Account Name] IS NOT NULL)) ``` That returns ``` SUMI ``` I want to use these select queries in a where clause that will return just `SUMI` when I run the select on the `tbl_Foundation_Account`. My idea was this: ``` Select left([Foundation_Account_Name],4) From [dbo].[tbl_Foundation_Account] Where [Foundation_Account_Name] = (Select left([Foundation_Account_Name],4) From [dbo].[tbl_Foundation_Account] Where [Foundation_Account_Name] = (Select left([Foundation Account Name],4) From [dbo].[Import_tbl_RDO] Where [Foundation Account Name] IS NOT NULL)) ``` In theory (at least in my head anyway) the main select query is only going to select records from `tbl_Foundation_Account` where the second and third select query equal each other, which should be `SUMI`. Right now this just returns nothing, and the deeper down the rabbit hole I get, the more confused I'm making myself. Is this query that messed up, or am I close to achieving my desired result?
You can use `intersect` to get the common results from both the queries. ``` Select left([Foundation_Account_Name],4) From [dbo].[tbl_Foundation_Account] INTERSECT Select left([Foundation Account Name],4) From [dbo].[Import_tbl_RDO] Where [Foundation Account Name] IS NOT NULL ```
Just building on what you have already, and JOINing the tables together... though INTERSECT seems cleaner: ``` with A as (SELECT left([Foundation_Account_Name],4) ID FROM [dbo].[tbl_Foundation_Account]), B as (SELECT left([Foundation Account Name],4) ID FROM [dbo].[Import_tbl_RDO]) SELECT left(C.[Foundation_Account_Name],4) FROM [dbo].[tbl_Foundation_Account] C INNER JOIN A on A.ID = left(C.[Foundation_Account_Name],4) INNER JOIN B on A.ID = B.ID ```
Issue with Subquery in Where Clause
[ "sql", "sql-server" ]
I have an controller method like this: ``` def index @categories = Category.all end ``` How do I order `@categories` names alphabetically?
You can use `order`: ``` @categories = Category.order(:name) ```
In your `CategoriesController`: ``` class CategoriesController < ApplicationController def index @categories = Category.order(:name) end end ``` This orders by the `:name` column, which sorts alphabetically (ascending) by default.
Order ActiveRecord results alphabetically
[ "sql", "ruby-on-rails", "activerecord", "sql-order-by", "rails-activerecord" ]
I have joined one table twice in the same query, and I keep getting error messages that objects in the FROM clause have the same exposed names. Even using AS does not seem to work; any ideas or suggestions? Here is the query I am using: ``` select Contact.*, PERSON.*, address.* from address full join Contact on address.uprn = Contact.uprn full join PERSON on Contact.contactno = PERSON.contact full join address on address.uprn = PERSON.driveruprn ```
``` select Contact.*, PERSON.*, a1.*, a2.* from address a1 full join Contact on a1.uprn = Contact.uprn full join PERSON on Contact.contactno = PERSON.contact full join address a2 on a2.uprn = PERSON.driveruprn ``` However, there is no FULL JOIN in MySQL; a workaround is to union a left and a right join: ``` select * from t1 left join t2 ON t1.id = t2.id union select * from t1 right join t2 ON t1.id = t2.id ```
You have to alias the second and subsequent usages of a table: ``` select ... from address <---first usage join contact ... join person ... join address AS other_address ... <---second usage ^^^^^^^^^^^^^^^^ ``` Doesn't really matter exactly where you do the aliases, but if you use a single table multiple times, all but ONE of those usages have to have unique aliases.
Joining 1 table twice in the same SQL query
[ "mysql", "sql" ]
Hey, I have a large database where customers request data that is specific to them. They usually send me the requests in a text or csv file. I was wondering if there is a way to get SQL to read that file and put its content into an SQL query. This way I don't have to open up the file and copy and paste everything into a query.
Steve already answered it; let me add a few words. > You cannot use a csv, text, Excel or any other file format directly in a query for DML/DDL; you can use a file directly only for export/import.
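For the import direction, MySQL's `LOAD DATA INFILE` can read a CSV straight into a table, which you can then join or filter like any other data. A sketch with a hypothetical file path and staging table (delimiters and the header line depend on the actual file):

```sql
-- Hypothetical staging table for the IDs the customer sent
CREATE TABLE request_ids (customer_id INT);

LOAD DATA LOCAL INFILE '/tmp/request.csv'
INTO TABLE request_ids
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip a header row, if the file has one

-- Then use it in a normal query
SELECT d.*
FROM customer_data d
JOIN request_ids r ON r.customer_id = d.customer_id;
```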
No. MySQL is not designed to do this. You need an intermediate script that can interpret the files and generate the queries you require.
Putting a file content into an sql query?
[ "", "mysql", "sql", "" ]
I have some strings like ``` 2) Some text 34) Some text more 5 Some other text ``` and I need to find whether certain text starts with a pattern of one or more digits (not a fixed count) followed by a closing bracket ')'. I tried: ``` If (PATINDEX('[^a-zA-Z]%', @myString) > 0) begin print @myString end ``` but it printed all the strings, not just the first two. How do I resolve this issue? Thanks!
SQL Server doesn't have great pattern matching, but if you know that the number prefixing the `)` will always have at most three digits, you can do this: ``` DECLARE @myString VARCHAR(100) = '266) Some text' If (PATINDEX('[0-9])%', @myString) > 0) OR --Check for one digit (PATINDEX('[0-9][0-9])%', @myString) > 0) OR --Check for two digits (PATINDEX('[0-9][0-9][0-9])%', @myString) > 0) --Check for three digits begin print @myString end ```
This does not cover all combinations, but you can check. For example, it will certainly fail if there are several ')' symbols in the string: ``` select * from t where s like '[0-9]%)%' and s not like '[0-9]%[^0-9]%)%' ``` Fiddle here <http://sqlfiddle.com/#!3/3b5fd/3>
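For comparison, a different engine sketch: SQLite's `REGEXP` operator is a stub until you register a `regexp()` user function, which Python's `sqlite3` makes easy via `create_function`. This is not the SQL Server from the question, but it illustrates the real regular expression the OP wants, `^[0-9]+\)` (digits then a closing bracket):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite evaluates "X REGEXP Y" as regexp(Y, X), so the function
# receives (pattern, string) in that order.
conn.create_function("regexp", 2,
                     lambda pat, s: re.search(pat, s) is not None)

conn.execute("CREATE TABLE t (s TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2) Some text",),
                  ("34) Some text more",),
                  ("5 Some other text",)])

# Only strings that START with one or more digits followed by ')' match.
rows = conn.execute(
    r"SELECT s FROM t WHERE s REGEXP '^[0-9]+\)' ORDER BY rowid"
).fetchall()
```

The third sample string has a leading digit but no bracket, so it is excluded, matching the OP's expected output.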
How to find if a string starts with a number and a bracket in SQL server?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
``` SELECT LTRIM(RTRIM(HOST0149.LOGINID)) AS LOGINID, LTRIM(RTRIM(HOST0140.EMAIL)) AS EMAIL, LTRIM(RTRIM(HOST0149.USERKEY)) AS ROLE FROM HOST0149 LEFT JOIN HOST0140 ON HOST0149.PERSONKEY = HOST0140.PERSONKEY ``` Hi, I am trying to apply a rule on the select for email column. Sometimes the login ID will be the users email address, how do I apply a rule that if the email appears in the Login ID, to return nothing for the email column? Any help would be much appreciated, thanks.
`CASE...WHEN` should do. Basically, you handle both cases explicitely. ``` SELECT LTRIM(RTRIM(HOST0149.LOGINID)) AS LOGINID, CASE WHEN LTRIM(RTRIM(HOST0140.EMAIL)) = LTRIM(RTRIM(HOST0149.LOGINID)) THEN NULL ELSE LTRIM(RTRIM(HOST0140.EMAIL)) END AS EMAIL, LTRIM(RTRIM(HOST0149.USERKEY)) AS ROLE FROM HOST0149 LEFT JOIN HOST0140 ON HOST0149.PERSONKEY = HOST0140.PERSONKEY ```
I think a `case` expression does what you want: ``` SELECT LOGINID, (CASE WHEN LOGINID <> EMAIL THEN EMAIL END) as EMAIL, ROLE FROM (SELECT LTRIM(RTRIM(HOST0149.LOGINID)) AS LOGINID, LTRIM(RTRIM(HOST0140.EMAIL)) AS EMAIL, LTRIM(RTRIM(HOST0149.USERKEY)) AS ROLE FROM HOST0149 LEFT JOIN HOST0140 ON HOST0149.PERSONKEY = HOST0140.PERSONKEY ) hh ```
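The `CASE ... THEN NULL` technique is easy to verify in a scratch database. Below is a hedged sketch in Python with SQLite (SQLite's `TRIM` stands in for SQL Server's `LTRIM(RTRIM(...))`; table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (loginid TEXT, email TEXT)")
# First row: login ID equals the email (with trailing whitespace to show TRIM).
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("alice@example.com ", "alice@example.com"),
                  ("bob", "bob@example.com")])

# Blank out the email whenever it duplicates the login ID.
rows = conn.execute("""
SELECT TRIM(loginid),
       CASE WHEN TRIM(email) = TRIM(loginid) THEN NULL
            ELSE TRIM(email) END
FROM t
""").fetchall()
```

The duplicate email comes back as NULL (Python `None`), while the distinct one is returned untouched.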
Applying a rule
[ "", "sql", "sql-server", "sql-server-2008", "if-statement", "constraints", "" ]
I wrote a stored procedure that takes a reference date and adds a time-of-day to it. Here is the line doing the operation: ``` DATEADD(day, DATEDIFF(DAY, 0, @conductor_date), [HOUR]) ``` For example, with `@conductor_date = '2015-10-15'` and `[HOUR] = 23:00` it works and generates a date like this: `'2015-10-15:23:00:00'` I face a logical issue when the value of [HOUR] is 24 or more. In fact, to solve my problem I need to generate `'2015-10-16:00:40:00'` when `[HOUR] = 24:40`. Actually, with these values, I get the following exception: > The conversion of a varchar data type to a datetime data type resulted > in an out-of-range value. To sum up, I need to handle hours that are more than '23:59' and switch to the next day: ``` DECLARE @conductor_date datetime DECLARE @hour varchar(5) SET @conductor_date = '2015-10-15' SET @hour = '24:40' SELECT DATEADD(day, DATEDIFF(DAY, 0, @conductor_date), @hour) ``` Expected: `2015-10-16:00:40:00`
You can split your `@hour` field into hours and minutes and add them separately: ``` DECLARE @conductor_date datetime DECLARE @hour varchar(5) DECLARE @hours int DECLARE @minutes int DECLARE @offset datetime SET @conductor_date = '2015-10-15' SET @hour = '24:40' SET @hours = cast(left(@hour, 2) as int) SET @minutes = cast(right(@hour, 2) as int) SET @offset = dateadd(day, datediff(day, 0,@conductor_date), 0) -- the begin of the day SELECT DATEADD(hour, @hours, dateadd(minute, @minutes, @offset)) ``` Of course all can be done in one line but for sake of visualization I have put it into separate statements.
According to the [documentation](https://msdn.microsoft.com/en-us/library/ms186724.aspx), date / time types don't support times larger than 23:59:59.9999999. You have to do manual string parsing for this. First you need to extract the total hours, divide that by 24 to get total days. Then calculate leftover hours, and with that reconstruct your time offset. With these in hand, you can build your required output value: ``` DECLARE @v VARCHAR(20) = '24:40' DECLARE @start VARCHAR(20) = '2015-10-15' DECLARE @days INT DECLARE @leftover INT SET @leftover = CAST(LEFT(@v, 2) AS INT) SET @days = @leftover / 24 SET @leftover = @leftover - @days * 24 SET @v = CAST(@leftover AS VARCHAR(2)) + SUBSTRING(@v, 3, 20) SELECT DATEADD(DAY, @days + DATEDIFF(DAY, 0, @start), @v) ``` Here's a working [SQLFiddle](http://sqlfiddle.com/#!6/9eecb/5944). This supports time strings that start with HH (leading zeros) with any valid accuracy (HH:mm:ss.fffffff).
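The same carry-over arithmetic can be sketched outside SQL. Here is a hypothetical Python helper (names are my own, not from the question) using `timedelta`, which rolls hours ≥ 24 into the next day automatically, so no manual days/leftover split is needed:

```python
from datetime import datetime, timedelta

def add_clock(base_date: str, clock: str) -> datetime:
    """Add an 'HH:mm' offset to a date; hours may exceed 23."""
    hours, minutes = (int(part) for part in clock.split(":"))
    # timedelta happily accepts hours >= 24 and carries them into extra days.
    return datetime.strptime(base_date, "%Y-%m-%d") \
        + timedelta(hours=hours, minutes=minutes)

result = add_clock("2015-10-15", "24:40")
```

This reproduces the questioner's expected value: '24:40' added to 2015-10-15 lands on 2015-10-16 00:40.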
Data operations
[ "", "sql", "sql-server", "t-sql", "" ]
I have a dataset of 3 columns - Name, date-time, value For ex: ``` Name DateTime value ---------------------------- A 1/1/2011 1:00:00 626 A 1/1/2011 2:00:00 2311 B 1/1/2011 3:00:00 775 A 1/1/2011 4:00:00 6621 A 1/1/2011 5:00:00 8491 B 1/1/2011 6:00:00 9061 B 1/1/2011 7:00:00 3611 B 1/2/2011 4:00:00 5491 A 1/2/2011 5:00:00 21 ``` and I want to sum the values grouped by Name and 24-hour windows, but without repetition (once a row has been used, I don't want to reuse it). Expected value: ``` Name Value A 2368 B 2042 B 549 A 2 ``` I used a `while` loop to mark the occurrences within 24 hours with a unique index, and it works, but it takes forever to run because I have a big data table.
I found a solution: find the max date for each Name within 24h, and then check which of the dates is the first one whose 24h window includes the current occurrence: ``` SET NOCOUNT ON; CREATE TABLE #Temp ( Name varchar(128) ,[Date] DATETIME ,[Value] INT ); INSERT INTO #Temp SELECT 'A','2011-01-01 1:00:00',626 UNION ALL SELECT 'A','2011-01-01 2:00:00',231 UNION ALL SELECT 'B','2011-01-01 3:00:00',775 UNION ALL SELECT 'A','2011-01-01 4:00:00',662 UNION ALL SELECT 'A','2011-01-01 5:00:00',849 UNION ALL SELECT 'B','2011-01-01 6:00:00',906 UNION ALL SELECT 'B','2011-01-01 7:00:00',361 UNION ALL SELECT 'B','2011-01-02 4:00:00',549 UNION ALL SELECT 'A','2011-01-02 5:00:00',2 UNION ALL SELECT 'A','2011-01-03 4:00:00',1 SELECT * FROM #Temp order by [Date]; select Name,[Date],[Value],(select max([Date]) from #temp t2 where t2.Name=t1.Name and t2.[Date] between t1.[Date] and dateadd(hour,24,t1.[Date])) MaxVal into #temp2 from #temp t1 select *,(select min(date) from #temp2 t2 where t1.[Date] between t2.[Date] and t2.MaxVal and t1.Name=t2.Name ) MinOfMax into #temp3 from #temp2 t1 select Name,sum(value) from #temp3 group by Name,MinOfMax ```
Try this way ``` select Name,SUM(Value) from Tblname group by Name,CONVERT(date,[Datetime],106) ```
Sum values within 24 hours without repetition
[ "", "sql", "sql-server", "" ]
I'm working on a huge DB in PostgreSQL. (Sorry if this is not worded properly; I've been trying this for hours and am still working on it.) This is part of the structure of the table used for my query (table user\_activities), with some sample data. ``` +---------------------+---------------------+---------------------+ | user_id | activity | operation | +---------------------+---------------------+---------------------+ | 1 | 1 | 1 | | 1 | 1 | 1 | | 1 | 1 | 1 | | 2 | 1 | 2 | | 2 | 1 | 3 | | 3 | 1 | 3 | | 4 | 1 | 4 | | 4 | 1 | 4 | | 5 | 1 | 4 | | 5 | 1 | 5 | | 6 | 3 | 1 | | 6 | 3 | 1 | | 6 | 3 | 2 | | 7 | 3 | 3 | | 8 | 3 | 4 | | 8 | 3 | 5 | +---------------------+---------------------+---------------------+ ``` And this is my desired output: ``` +---------------------+---------------------+---------------------+ | count(user_id) | activity | operation | +---------------------+---------------------+---------------------+ | 4 | 1 | 1,2 | | 6 | 1 | 3,4,5 | | 6 | 3 | 1,2,3,4,5 | +---------------------+---------------------+---------------------+ ``` I need to count user\_id for each activity and group of operation values. So I need to group by activity when activity is 1 or 3 (already done with `WHERE activity IN (1,3)`). But I also need to group by operation. The problem is that every group of operations will have more than 1 value. Operation can be 1, 2, 3, 4 or 5, and I want to aggregate the groups 1,2 and also the groups 3,4,5. But that's not all... If I group by operation, then I'll have 5 groups for each activity. I need to have 2 groups for activity 1 (the groups already specified) and only one group with all operation values if activity is 3. Is this possible? **Edit:** I won't be able to check the answers now; I hope to be able to tomorrow. I will give my votes and replies to the answers then. Thanks for helping.
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!15/46e1f/8) Just use a CASE to put together the groups you want. ``` WITH cte as ( SELECT "user_id", "activity", "operation", CASE WHEN "activity" = 1 THEN CASE WHEN "operation" IN (1,2) THEN '1_first' ELSE '1_second' END WHEN "activity" = 3 THEN '3_first' END as "op_group" FROM user_activities ) SELECT "activity", "op_group", count("user_id"), array_agg(distinct "operation") as "operation" FROM cte GROUP BY "activity", "op_group" ``` **OUTPUT** ``` | activity | op_group | count | operation | |----------|----------|-------|-----------| | 1 | 1_first | 4 | 1,2 | | 1 | 1_second | 6 | 3,4,5 | | 3 | 3_first | 6 | 1,2,3,4,5 | ```
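The CTE + CASE bucketing above translates almost directly to other engines. Here is a runnable SQLite sketch in Python using the question's sample data; note that SQLite has no `array_agg`, so `GROUP_CONCAT` stands in for it (and, unlike `array_agg(distinct ...)`, it does not guarantee element order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_activities (user_id INT, activity INT, operation INT)")
data = [(1, 1, 1), (1, 1, 1), (1, 1, 1), (2, 1, 2), (2, 1, 3), (3, 1, 3),
        (4, 1, 4), (4, 1, 4), (5, 1, 4), (5, 1, 5), (6, 3, 1), (6, 3, 1),
        (6, 3, 2), (7, 3, 3), (8, 3, 4), (8, 3, 5)]
conn.executemany("INSERT INTO user_activities VALUES (?,?,?)", data)

rows = conn.execute("""
WITH cte AS (
  SELECT user_id, activity, operation,
         CASE WHEN activity = 1
              THEN CASE WHEN operation IN (1,2) THEN '1_first' ELSE '1_second' END
              WHEN activity = 3 THEN '3_first'
         END AS op_group
  FROM cte_source
), cte_source AS (SELECT * FROM user_activities)
SELECT activity, op_group, COUNT(user_id),
       GROUP_CONCAT(DISTINCT operation)
FROM cte
GROUP BY activity, op_group
ORDER BY activity, op_group
""").fetchall()
```

The counts match the accepted answer's output: 4 for activity 1's (1,2) bucket, 6 for its (3,4,5) bucket, and 6 for all of activity 3.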
Updated for your detailed specification: ``` SELECT COUNT(*) as cnt, ua.activity, array_agg(distinct ua.operation) FROM users ua JOIN ( SELECT 1 AS activity, 1 as operation, 1 as GROUP_CODE UNION ALL SELECT 1 AS activity, 2 as operation, 1 as GROUP_CODE UNION ALL SELECT 1 AS activity, 3 as operation, 2 as GROUP_CODE UNION ALL SELECT 1 AS activity, 4 as operation, 2 as GROUP_CODE UNION ALL SELECT 1 AS activity, 5 as operation, 2 as GROUP_CODE UNION ALL SELECT 3 AS activity, 1 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 2 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 3 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 4 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 5 as operation, 3 as GROUP_CODE ) c ON ua.activity = c.activity and ua.operation = c.operation GROUP BY c.GROUP_CODE, ua.activity ``` <http://sqlfiddle.com/#!15/46e1f/15> --- **original answer** This is how I would do it, below I create the logic table dynamically but you can also have the table in your database and join to it. ``` SELECT GROUP_CODE, COUNT(*) as cnt FROM user_activities ua JOIN ( SELECT 1 AS activity, 1 as operation, 1 as GROUP_CODE UNION ALL SELECT 1 AS activity, 2 as operation, 1 as GROUP_CODE UNION ALL SELECT 1 AS activity, 3 as operation, 2 as GROUP_CODE UNION ALL SELECT 1 AS activity, 4 as operation, 2 as GROUP_CODE UNION ALL SELECT 1 AS activity, 5 as operation, 2 as GROUP_CODE UNION ALL SELECT 3 AS activity, 1 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 2 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 3 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 4 as operation, 3 as GROUP_CODE UNION ALL SELECT 3 AS activity, 5 as operation, 3 as GROUP_CODE ) c ON ua.activity = c.activity and ua.operation = c.operation GROUP BY GROUP_CODE ``` This should be quite fast -- remember SQL is designed to work with sets (tables) and joins -- this uses joins to perform the logic. 
It is also nice because if you make it a table you can change the logic just by changing the table OR have multiple "logics" stored in the table if you add another column to select on and then pick which one to use as the query runs. I've used similar methods to do weighted and personalized ordering in dynamic UIs.
Group by group of values
[ "", "sql", "postgresql", "" ]
I have two tables `Service` and `Status`. The service table only holds a `name` and an `id` ``` | id | name | |----|-------| | 1 | Test1 | | 2 | Test2 | ``` And a Status table like this ``` | id | status | service_id | timestamp | |----|--------|------------|---------------------------| | 1 | OK | 1 | October, 15 2015 09:03:07 | | 2 | OK | 1 | October, 15 2015 09:08:07 | | 3 | OK | 2 | October, 15 2015 10:05:23 | | 4 | OK | 2 | October, 15 2015 10:15:23 | ``` I want to get the data like this ``` | id | name | status | timestamp | |----|-------|--------|---------------------------| | 1 | Test1 | OK | October, 15 2015 09:08:07 | | 2 | Test2 | OK | October, 15 2015 10:15:23 | ``` The latest Status with the service data. I have tried this statement ``` SELECT ser.id, ser.name, a.status, a.timestamp from Service ser inner join (select * from status order by Status.timestamp DESC limit 1) as a on a.service_id = ser.id ``` But I only get ``` | id | name | status | timestamp | |----|-------|--------|---------------------------| | 2 | Test2 | OK | October, 15 2015 10:15:23 | ``` How can I change the statement to get what I want? For testing [SQL Fiddle](http://www.sqlfiddle.com/#!9/46eef/9)
You can do this: ``` SELECT ser.id, ser.name, s.status, s.timestamp FROM Service ser INNER JOIN status as s ON s.service_id = ser.id INNER JOIN ( SELECT service_id, MAX(timestamp) AS MaxDate FROM status GROUP BY service_id ) AS a ON a.service_id = s.service_id AND a.MaxDate = s.timestamp; ``` The join with the subquery: ``` SELECT service_id, MAX(timestamp) AS MaxDate FROM status GROUP BY service_id ``` Will eliminate all the statuses except the one with the latest date.
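This "greatest row per group" join can be checked end-to-end in a scratch SQLite database (sketch only; the `timestamp` column is renamed `ts` here to sidestep any keyword quirks, and timestamps are stored as sortable ISO strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE service (id INT, name TEXT);
CREATE TABLE status  (id INT, status TEXT, service_id INT, ts TEXT);
INSERT INTO service VALUES (1, 'Test1'), (2, 'Test2');
INSERT INTO status VALUES
 (1, 'OK', 1, '2015-10-15 09:03:07'),
 (2, 'OK', 1, '2015-10-15 09:08:07'),
 (3, 'OK', 2, '2015-10-15 10:05:23'),
 (4, 'OK', 2, '2015-10-15 10:15:23');
""")

# Join each service to its statuses, then keep only the row whose
# timestamp equals the per-service MAX from the derived table.
rows = conn.execute("""
SELECT ser.id, ser.name, s.status, s.ts
FROM service ser
JOIN status s ON s.service_id = ser.id
JOIN (SELECT service_id, MAX(ts) AS maxdate
      FROM status GROUP BY service_id) a
  ON a.service_id = s.service_id AND a.maxdate = s.ts
ORDER BY ser.id
""").fetchall()
```

Each service comes back exactly once, paired with its latest status row, which is the output the questioner asked for.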
For each service, use `NOT EXISTS` to return status only when no later one exists: ``` select ser.id, ser.name, st.status, st.timestamp from service ser left join status st1 on ser.id = st1.service_id where not exists (select 1 from status st2 where st2.service_id = st1.service_id and st2.timestamp > st1.timestamp) ``` Optionally doing `LEFT JOIN` to also return services without any status. Switch to `JOIN` if that's not wanted.
Selecting from two tables with inner join and limit
[ "", "mysql", "sql", "" ]
I get the error `ORA-00907: missing right parenthesis`. SQL Developer shows a syntax error (`syntax error expected: (`) on this line: > BUDGET\_MANAGEMENT\_SCENARIO.property\_id = > u.property\_id,BUDGET\_MANAGEMENT\_SCENARIO.agency\_id = u.agency\_id Here is my SQL statement: ``` SELECT PROPERTY_NAME, AGENCY_NAME, ADVERTISER_NAME, BUYING_AE_NAME, CLIENT_AE_NAME FROM "BUDGET_MANAGEMENT_SCENARIO" WHERE "BUDGET_MANAGEMENT_SCENARIO"."PROPERTY_ID" = 2 AND "BUDGET_MANAGEMENT_SCENARIO"."QUARTER_ID" IN (17, 18, 19, 20) AND ((BUDGET_MANAGEMENT_SCENARIO.property_id = u.property_id,BUDGET_MANAGEMENT_SCENARIO.agency_id = u.agency_id) # ERROR IN (SELECT property_id, agency_id FROM "PACING_BUDGET_DETAIL" WHERE (bae_app_user_id = 1 OR pae_app_user_id = 1 OR cae_app_user_id = 1) UNION SELECT property_id, agency_id FROM "USER_ASSIGNMENT" WHERE "USER_ASSIGNMENT"."APP_USER_ID" = 1 AND "USER_ASSIGNMENT"."PACING_BUDGET" = 1) u); ```
The correct select is: ``` SELECT PROPERTY_NAME, AGENCY_NAME, ADVERTISER_NAME, BUYING_AE_NAME, CLIENT_AE_NAME FROM "BUDGET_MANAGEMENT_SCENARIO" WHERE "BUDGET_MANAGEMENT_SCENARIO"."PROPERTY_ID" = 2 AND "BUDGET_MANAGEMENT_SCENARIO"."QUARTER_ID" IN (17, 18, 19, 20) AND (BUDGET_MANAGEMENT_SCENARIO.property_id, BUDGET_MANAGEMENT_SCENARIO.agency_id) IN (SELECT property_id, agency_id FROM "PACING_BUDGET_DETAIL" WHERE (bae_app_user_id = 1 OR pae_app_user_id = 1 OR cae_app_user_id = 1) UNION SELECT property_id, agency_id FROM "USER_ASSIGNMENT" WHERE "USER_ASSIGNMENT"."APP_USER_ID" = 1 AND "USER_ASSIGNMENT"."PACING_BUDGET" = 1); ``` The problem was `(x = x1, y = y1) IN (SELECT x1, y1)`; it should be `(x, y) IN (SELECT x1, y1)`, which matches exactly those rows whose (x, y) pair appears among the (x1, y1) pairs.
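The `(col1, col2) IN (SELECT ...)` row-value form also exists outside Oracle and can be checked quickly in SQLite (row values require SQLite 3.15+; table names and data below are illustrative stand-ins for the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scenario (property_id INT, agency_id INT, name TEXT);
CREATE TABLE detail   (property_id INT, agency_id INT);
INSERT INTO scenario VALUES (2, 10, 'keep'), (2, 99, 'drop');
INSERT INTO detail   VALUES (2, 10);
""")

# The pair (property_id, agency_id) must appear as a pair in the subquery;
# matching each column independently would wrongly keep the 'drop' row too.
rows = conn.execute("""
SELECT name FROM scenario
WHERE (property_id, agency_id) IN (SELECT property_id, agency_id FROM detail)
""").fetchall()
```

Only the row whose full pair matches survives, which is the semantics the corrected Oracle query relies on.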
The following isn't valid SQL: ``` (BUDGET_MANAGEMENT_SCENARIO.property_id = u.property_id,BUDGET_MANAGEMENT_SCENARIO.agency_id = u.agency_id) IN (SELECT ... ``` I'm not exactly sure what you're attempting to do, but you cannot compare columns and use the result of that comparison in an `IN (...)` clause.
I get error ORA-00907: missing right parenthesis
[ "", "sql", "oracle", "" ]
Hopefully this is a quickie: ``` SELECT * FROM T left JOIN J ON CASE WHEN condition1 THEN 1 --prefer this option even if CASE2 has a value WHEN condition2 THEN 2 ELSE 0 END = 1 (edit: but if 1 does not satisfy, then join on 2) ``` Both cases return results, but I want THEN 1 to supersede THEN 2 and be the lookup priority. Can I have SQL do something like join on `max(CASE)`? Basically I am trying to duplicate a nested INDEX/MATCH from Excel. Edit: what I am hearing is that the CASE should stop at the first WHEN that evaluates to TRUE, but it doesn't behave like that when I test: ``` SELECT * FROM T left JOIN J ON CASE WHEN condition1 THEN 1 --prefer this option even if CASE2 has a value WHEN condition2 THEN 1 ELSE 0 END = 1 ``` It seems to prefer the 2nd THEN 1 sometimes, but not always... Is there anything I am missing that would cause this behavior?
It doesn't matter which of the conditions causes the rows to match in a join. There are legitimate reasons to use a `case` expression in a join but I think you just want to `or` your conditions and then use the `case` expression to output a ranked reason for the match. ``` SELECT *, CASE WHEN <condition1> THEN 1 WHEN <condition2> THEN 2 END as match_code FROM T LEFT OUTER JOIN J ON <condition1> or <condition2> ``` I don't know what to picture regarding the "nested INDEX/MATCH" from Excel. If I'm on the wrong track above then perhaps you're looking for a nested case expression? Now if your conditions will have matches across *different rows* and you only want to keep one then... ``` WITH matches AS ( SELECT *, CASE WHEN <condition1> THEN 1 WHEN <condition2> THEN 2 END AS match_code FROM T LEFT OUTER JOIN J ON <condition1> OR <condition2> ), ranked as ( SELECT *, MIN(match_code) OVER (PARTITION BY ???) AS keeper FROM matches ) SELECT ... FROM ranked WHERE match_code = keeper ```
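The "OR the conditions, then rank the reason" idea can be demonstrated concretely. A hedged SQLite sketch in Python (tables, columns, and the two "conditions" — equality on `a` vs. equality on `b` — are invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, a INT, b INT);
CREATE TABLE j (jid INT, a INT, b INT);
INSERT INTO t VALUES (1, 10, 20);
INSERT INTO j VALUES (100, 10, 99),  -- matches via condition1 (column a)
                     (200, 99, 20);  -- matches via condition2 (column b)
""")

# Join on condition1 OR condition2; the CASE merely labels WHY each row matched.
rows = conn.execute("""
SELECT t.id, j.jid,
       CASE WHEN t.a = j.a THEN 1
            WHEN t.b = j.b THEN 2 END AS match_code
FROM t LEFT JOIN j ON t.a = j.a OR t.b = j.b
ORDER BY match_code
""").fetchall()
```

Both matching rows come back, each tagged with its priority; a follow-up filter (or the windowed `MIN(match_code)` from the answer above) can then keep only the preferred one per key.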
Well, you can always have several conditions in your CASE statements: ``` SELECT * FROM T left JOIN J ON CASE WHEN condition1 THEN 1 --prefer this option even if CASE2 has a value WHEN condition2 AND NOT condition1 THEN 2 ELSE 0 END = 1 ``` --UPDATED-- If both of your conditions are required to match, but condition1 is optional, then you can try this statement too: ``` SELECT * FROM T left JOIN J ON CASE WHEN condition1 AND condition2 THEN 1 --Both conditions match WHEN condition2 THEN 2 -- condition1 has no match ELSE 0 END = 1 ```
SQL LEFT JOIN with conditional CASE statements
[ "", "sql", "sql-server", "t-sql", "join", "case", "" ]
I have the following table: ``` CREATE TABLE dbo.TestSort ( Id int NOT NULL IDENTITY (1, 1), Value int NOT NULL ) ``` The `Value` column could (and is expected to) contain duplicates. Let's also assume there are already 1000 rows in the table. I am trying to prove a point about unstable sorting. Given this query that returns a 'page' of 10 results from the first 1000 inserted results: ``` SELECT TOP 10 * FROM TestSort WHERE Id <= 1000 ORDER BY Value ``` My intuition tells me that two runs of this query could return different rows if the `Value` column contains repeated values. I'm basing this on the facts that: * the sort is not stable * if new rows are inserted in the table between the two runs of the query, it could possibly create a re-balancing of B-trees (the `Value` column may be indexed or not) EDIT: For completeness: I assume rows never change once inserted, and are never deleted. In contrast, a query with stable sort (*ordering also by Id*) should always return the same results, since `ID`s are unique: ``` SELECT TOP 10 * FROM TestSort WHERE Id <= 1000 ORDER BY Value, Id ``` The question is: Is my intuition correct? If yes, can you provide an actual example of operations that would produce different results (at least "on your machine")? You could modify the query, add indexes on the `Values` column etc. I don't care about the exact query, but about the principle. I am using MS SQL Server (2014), but am equally satisfied with answers for any SQL database. If not, then why?
If there are ties in the ordering, the ORDER BY is not stable. **[LiveDemo](https://data.stackexchange.com/stackoverflow/query/375976)** ``` CREATE TABLE #TestSort ( Id INT NOT NULL IDENTITY (1, 1) PRIMARY KEY, Value INT NOT NULL ) ; DECLARE @c INT = 0; WHILE @c < 100000 BEGIN INSERT INTO #TestSort(Value) VALUES (2); SET @c += 1; END ``` Example: ``` SELECT TOP 10 * FROM #TestSort ORDER BY Value OPTION (MAXDOP 4); DBCC DROPCLEANBUFFERS; -- run to clear cache SELECT TOP 10 * FROM #TestSort ORDER BY Value OPTION (MAXDOP 4); ``` The point is that I force the query optimizer to use a parallel plan, so there is no guarantee that it will read the data sequentially, as a clustered index scan probably would when no parallelism is involved. You cannot be sure how the query optimizer will read the data unless you explicitly force a specific sort order using `ORDER BY Value, Id`. For more info read **[`No Seatbelt - Expecting Order without ORDER BY`](http://blogs.msdn.com/b/conor_cunningham_msft/archive/2008/08/27/no-seatbelt-expecting-order-without-order-by.aspx).**
Your intuition is correct. In SQL, the sort for `order by` is not stable. So, if you have ties, they can be returned in any order. And, the order can change from one run to another. The [documentation](https://msdn.microsoft.com/en-us/library/ms188385.aspx) sort of explains this: > Using OFFSET and FETCH as a paging solution requires running the query > one time for each "page" of data returned to the client application. > For example, to return the results of a query in 10-row increments, > you must execute the query one time to return rows 1 to 10 and then > run the query again to return rows 11 to 20 and so on. Each query is > independent and not related to each other in any way. This means that, > unlike using a cursor in which the query is executed once and state is > maintained on the server, the client application is responsible for > tracking state. To achieve stable results between query requests using > OFFSET and FETCH, the following conditions must be met: > > * The underlying data that is used by the query must not change. That is, either the rows touched by the query are not updated or all > requests for pages from the query are executed in a single transaction > using either snapshot or serializable transaction isolation. For more > information about these transaction isolation levels, see SET > TRANSACTION ISOLATION LEVEL (Transact-SQL). > * The ORDER BY clause contains a column or combination of columns that are guaranteed to be unique. Although this specifically refers to `offset`/`fetch`, it clearly applies to running the query multiple times without those clauses.
Can SQL return different results for two runs of the same query using ORDER BY?
[ "", "sql", "sql-server", "sql-order-by", "" ]
I want to model a database to store data for several types of tournaments (with different modes: single rounds, double rounds, league, league + playoffs, losers' brackets, ...). This project would be something like Challonge: www.challonge.com My question is: how do I create a relational model in a SQL database to store all these types of tournaments? I can't figure out how to approach this. There are a lot of different tables, but all of them are related through one attribute: tournamentType... Can I store a tournamentType field and use it to select the appropriate table in the query? Thank you
I can understand why you're struggling with modeling this. One of the key reasons why this is difficult is because of the [object-relational impedance mismatch](https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch). While I am a huge fan of SQL and it is an incredibly powerful way of being able to organize data, one of its downfalls - and why NoSQL exists - is because SQL is different from Object Oriented Programming. When you describe leagues, with different matches, it's pretty easy to picture this in object form: A `Match` object is extended by `League_Match`, `Round_Match`, `Knockout_Match`, etc. Each of these `Match` objects contains two `Team` objects. `Team` can be extended to `Winner` and `Loser`... But this is not how SQL databases work. So let's translate this into relationships: > I want to model a database to store data of several types of tournaments (whith different types of modes: single rounds, double rounds, league, league + playoffs, losers, ...). * Tournaments and "modes" are a one to many (1:n) relationship. * Each tournament has many teams, and each team can be part of many tournaments (n:n). * Each team has many matches, and each match has two teams (n:n). * Each tournament has multiple matches but each match only belongs to one tournament (1:n). The missing piece here, the part that is hard to define as a universal relationship: - In rounds, each future match has two teams. - In knockout matches, each future match has an exponential but shrinking number of choices depending on the number of initial teams. You could define this in the database layer or you could define this in your application layer. If your goal is to keep referential integrity in mind (which is one of the key reasons I use SQL databases) then you'll want to keep it in the database. 
Another way of looking at this: I find that it is easiest for me to design a database when I think about the end result by thinking of it as JSON (or an array, if you prefer) that I can interact with. Let's look at some sample objects: Tournament: ``` [ { name: "team A", schedule: [ { date: "11/1/15", vs: "team B", score1: 2, score2: 4 }, { date: "11/15/15", vs: "team C", } ] } ], [ //more teams ] ``` As I see it, this works well for everything except for knockout, where you don't actually know which team is going to play which other team until an elimination takes place. This confirms my feeling that we're going to create descendants of a `Tournament` class to handle specific types of tournaments. Therefore I'd recommend the following tables and columns: ``` Tournament - id (int, PK) - tournament_name - tournament_type Team - id (int, PK) - team_name (varchar, not null) # Any other team columns you want. Match - id (int, PK, autoincrement) - date (int) - team_a_score (int, null) - team_b_score (int, null) - status (either future, past, or live) - tournament_id (int, Foreign Key) Match_Round - match_id (int, not null, foreign key to match.id) - team_a_id (int, not null, foreign key to team.id) - team_b_id (int, not null, foreign key to team.id) Match_Knockout - match_id (int, not null, foreign key to match.id) - winner_a_of (match_id, not null, foreign key to match.id) - winner_b_of (match_id, not null, foreign key to match.id) ``` You have utilized sub-tables in this model. The benefit to this is that knockout matches and round/league matches are very different and you are treating them differently. The downside is that you're adding additional complexity which you're going to have to handle. It may be a bit annoying, but in my experience trying to avoid it only adds more headaches and makes it far less scalable. Now I'll go back to referential integrity. 
The challenge with this setup is that theoretically you could have values in both `Match_Round` and `Match_Knockout` when they only belong in one. To prevent this, I'd utilize `TRIGGER`s. Basically, stick a trigger on both the `Match_Round` and `Match_Knockout` tables, which prevents an `INSERT` if the `tournament_type` is not acceptable. Although this is a bit of a hassle to set up, it does have the happy benefit of being easy to translate into objects while still maintaining referential integrity.
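The `TRIGGER` guard described above can be sketched concretely. Below is a hedged SQLite/Python illustration, not the answer's exact SQL Server or MySQL syntax; `Match` is renamed `game` here because `MATCH` is a reserved operator keyword in SQLite, and the tournament-type names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tournament (id INTEGER PRIMARY KEY, tournament_type TEXT);
CREATE TABLE game (id INTEGER PRIMARY KEY, tournament_id INT);
CREATE TABLE game_round (game_id INT);

-- Reject a round-subtable row whose parent game belongs to the wrong
-- kind of tournament.
CREATE TRIGGER game_round_type_check
BEFORE INSERT ON game_round
WHEN (SELECT t.tournament_type
      FROM game g JOIN tournament t ON t.id = g.tournament_id
      WHERE g.id = NEW.game_id) NOT IN ('round', 'league')
BEGIN
  SELECT RAISE(ABORT, 'game does not belong to a round/league tournament');
END;

INSERT INTO tournament VALUES (1, 'round'), (2, 'knockout');
INSERT INTO game VALUES (10, 1), (20, 2);
""")

conn.execute("INSERT INTO game_round VALUES (10)")   # round tournament: allowed
try:
    conn.execute("INSERT INTO game_round VALUES (20)")  # knockout: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

count = conn.execute("SELECT COUNT(*) FROM game_round").fetchone()[0]
```

The trigger lets the valid row through and aborts the invalid one, so the sub-table can never hold a row for the wrong tournament type even if the application code forgets to check.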
It's easy to make data models far more complex than they need to be. A lot of what you describe is business logic that can't actually be answered by a perfect data model. Most of the tournament logic should be captured outside the data model in a programming language, such as MySQL functions, Java, Python, C# etc. Really your data model should be all "static" data you need, and none of the moving parts. I would suggest the following data model: ## METADATA TABLES League\_Type: * Id * Description * Playoff\_Rounds * Resolve\_Losing\_Teams * Max\_Number\_of\_Teams * Min\_Number\_of\_Teams * Number\_Of\_Games\_In\_Season * any other "settings" you want... Game\_Type: * Id * League\_Type\_Id (fk to League\_Type) * Game\_Type\_Name (e.g. regular season, playoff, championship) ## DATA TABLES League: * Id * League\_Type\_Id (fk to League\_Type) * League\_Name Team: * Id * League\_Id (fk to League) * Team\_Name Game: * Id * League\_Id (fk to League) * Game\_Type\_Id (fk to Game\_Type) * Home\_Team\_Id (fk to Team) * Visiting\_Team\_Id (fk to Team) * Week\_of\_season * Home\_Team\_Score * Visiting\_Team\_Score * Winning\_Team (Home or Visitor) --- From a data model perspective that should really be all you need. The procedural code should handle things like: * Creating games based on a randomized schedule * Updating scores and winning team in the Game table * Creating playoff games based on when the number of games in the season is up per the league settings table. * Setting matchups in the playoffs based on how many games each team has won. * Forcing the number of teams in a league to be between Min\_Number\_of\_Teams and Max\_Number\_of\_Teams prior to the season beginning. * Etc. 
You'll also likely want to create some views based on these tables to create some other meaningful information for end users: * Wins/Losses for a team (based on the Team table joined to the Game table) * Current team standings based on the previous wins/losses view for all teams * Home wins for a team * Road wins for a team * Anything else your heart desires! **Final thoughts** You do not want to do anything that would repeat data stored in the database. A great example of this would be creating a separate table for playoff games vs. regular season games. Most of the columns would be duplicated because almost *all* of the functionality and data stored between the two tables is the same. To create both tables would break the rules of normalization. The more compact and simple your data structure can be, the less procedural code you will have to write, and the easier it will be to maintain your database.
How to model several tournaments / brackets types into a SQL database?
[ "", "mysql", "sql", "database-design", "" ]
I have stored the fiscal month as an NVARCHAR column in a table and want to sort the table by fiscal month: ``` CREATE TABLE sample ( id INTEGER, FiscalMonth NVARCHAR(MAX) ); ``` For example, the table contains this data: ``` Id FiscalMonth ----------------- 1 FY15-Oct 2 FY15-Sep 3 FY15-Jul 4 FY15-Aug ``` Now if I sort this table by the FiscalMonth column: `SELECT * FROM sample ORDER BY FiscalMonth;` the result is: ``` Id FiscalMonth ----------------- 4 FY15-Aug 3 FY15-Jul 1 FY15-Oct 2 FY15-Sep ``` But I want the result to be: ``` Id FiscalMonth ----------------- 3 FY15-Jul 4 FY15-Aug 2 FY15-Sep 1 FY15-Oct ``` Can somebody suggest how to approach this problem without changing the table schema?
To sort by year first and then by month, the `ORDER BY` below should work in MS SQL Server: ``` SELECT [Id],[FiscalMonth] From Sample order by convert(date,Substring([FiscalMonth],6,3)+ ' 01 ' + Substring([FiscalMonth],3,2),103) ``` This assumes that your `FiscalMonth` column is always in the same format.
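The same idea — substring out the year and the month name, then sort on values derived from them — can be checked in SQLite, which has no `CONVERT(date, ...)`, so the month abbreviation is mapped to a number with an explicit `CASE` (a sketch, assuming the fixed `FYyy-Mon` format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (id INT, fiscalmonth TEXT)")
conn.executemany("INSERT INTO sample VALUES (?, ?)",
                 [(1, "FY15-Oct"), (2, "FY15-Sep"),
                  (3, "FY15-Jul"), (4, "FY15-Aug")])

# Year digits sort first (as text), then the month abbreviation mapped
# to its calendar number.
rows = conn.execute("""
SELECT fiscalmonth FROM sample
ORDER BY SUBSTR(fiscalmonth, 3, 2),
         CASE SUBSTR(fiscalmonth, 6, 3)
           WHEN 'Jan' THEN 1 WHEN 'Feb' THEN 2  WHEN 'Mar' THEN 3
           WHEN 'Apr' THEN 4 WHEN 'May' THEN 5  WHEN 'Jun' THEN 6
           WHEN 'Jul' THEN 7 WHEN 'Aug' THEN 8  WHEN 'Sep' THEN 9
           WHEN 'Oct' THEN 10 WHEN 'Nov' THEN 11 WHEN 'Dec' THEN 12
         END
""").fetchall()
```

The rows come back Jul, Aug, Sep, Oct, matching the questioner's expected output.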
You could assign a number to each month; you can distinguish the months by their last three letters: ``` SUBSTRING(FiscalMonth, 6, 3) ``` and then sort by those numbers.
How to sort Fiscal month that are stored as Nvarchar in SQL server 2008
[ "", "mysql", "sql", "sql-server", "sql-server-2008", "" ]
I am trying to figure out the best way to determine, for a specific ID within an Oracle 11g table that has 5 columns and say 100 rows against this ID, if all the column values are the same for these five columns. For example: Table Name: **TABLE\_DATA** Columns: ``` TD_ID ID COL1 COL2 COL3 COL4 COL5 ----------------------------------------------------------------------- 1 1 1 0 3 2 0 2 1 1 0 3 2 0 3 1 1 0 3 2 0 4 1 1 0 3 2 0 5 1 1 0 3 2 0 6 1 1 0 3 2 0 ``` So based on the above example which is just showing 6 rows for now against the ID:1, I want to check that for all COL1, COL2, COL3, COL4 and COL5 values where ID = 1, tell me if all the values are the same from the very first row right down to the last – **if so, then return ‘Y’ else return ‘N’**. Given the above example, the result would be ‘Y’ but for instance, if TD\_ID = 5 and COL3 = 4 then the result would be ‘N’, as all the column values are not the same, i.e.: ``` TD_ID ID COL1 COL2 COL3 COL4 COL5 ----------------------------------------------------------------------- 1 1 1 0 3 2 0 2 1 1 0 3 2 0 3 1 1 0 3 2 0 4 1 1 0 3 2 0 5 1 1 0 4 2 0 6 1 1 0 3 2 0 ``` I’m just not sure what the fastest approach to determine this is, as the table I am looking at may have more than 2000 rows within the table for a specific ID.
See if the following is fast enough for you: ``` SELECT ID, CASE WHEN COUNT(*) > 1 THEN 'No' ELSE 'Yes' END As "Result" FROM (SELECT DISTINCT ID, COL1, COL2, COL3, COL4, COL5 FROM Table_Data) dist GROUP BY ID ```
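The distinct-then-count pattern above is portable standard SQL, so it can be sanity-checked with an in-memory SQLite database from Python. This is only a sketch - the question targets Oracle, and the table shape here is a simplified stand-in:

```python
import sqlite3

# Minimal sqlite3 sketch of the DISTINCT-then-count idea: id 1 has a single
# distinct column combination, id 2 has two of them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_data (td_id, id, col1, col2, col3, col4, col5)")
con.executemany("INSERT INTO table_data VALUES (?,?,?,?,?,?,?)", [
    (1, 1, 1, 0, 3, 2, 0),
    (2, 1, 1, 0, 3, 2, 0),
    (3, 2, 1, 0, 4, 2, 0),  # id 2: col3 differs between its rows
    (4, 2, 1, 0, 3, 2, 0),
])
rows = con.execute("""
    SELECT id, CASE WHEN COUNT(*) > 1 THEN 'N' ELSE 'Y' END AS result
    FROM (SELECT DISTINCT id, col1, col2, col3, col4, col5 FROM table_data)
    GROUP BY id
    ORDER BY id
""").fetchall()
```

Any id whose distinct combination count exceeds 1 comes back as 'N'.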
You may also try this : ``` Select ID , case when count(distinct COL1 || COL2 || COL3 || COL4 || COL5) > 1 then 'N' else 'Y' end RESULT From TABLE_DATA Group by id; ``` This groups by id and counts how many distinct combinations there are. If there is only 1, all the rows have the same set of values; otherwise they don't. (Concatenating with a separator, e.g. `COL1 || '|' || COL2 || ...`, avoids false matches between different value sets that happen to concatenate to the same string.)
How to check Oracle column values are all the same for a specific ID?
[ "", "sql", "oracle", "plsql", "oracle11g", "" ]
I use a view to generate affected rows. A table `t1` has five columns: Mo, Tu, We, Th and Fr. Each column has 0 if the day is not relevant or 1 if the day is relevant (e.g. 1,0,1,0,1 - Mo, We and Fr are relevant; Tu and Th are not). Now I take the actual date + 1 as the day and want to receive all rows which have a 1 in the corresponding column. ``` ... WHERE CASE SUBSTRING(DATEPART(WEEKDAY, GETDATE() + 1), 1, 2) WHEN 'Mo' THEN t1.Mo = 1 WHEN 'Tu' THEN t1.Tu = 1 ... ``` but this doesn't work. How can I solve this problem?
`DATEPART` returns a number, in the case of `WEEKDAY` ranging (by default) from 1 (Sunday) to 7 (Saturday) ``` WHERE -- choose the relevant column CASE DATEPART(WEEKDAY, GETDATE() + 1) WHEN 2 THEN t1.Mo WHEN 3 THEN t1.Tu WHEN 4 THEN t1.We WHEN 5 THEN t1.Th WHEN 6 THEN t1.Fr -- compare with the expected value END = 1 ```
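The same "CASE picks the column, then compare the whole CASE with 1" shape can be sketched with sqlite3. SQLite has no `DATEPART`, so a plain integer stands in for the computed weekday, and the table contents are made-up:

```python
import sqlite3

# Sketch: the CASE expression selects one flag column per weekday number,
# and the whole CASE is then compared with 1 in the WHERE clause.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (name, mo, tu, we, th, fr)")
con.executemany("INSERT INTO t1 VALUES (?,?,?,?,?,?)", [
    ("row-a", 1, 0, 1, 0, 1),
    ("row-b", 0, 1, 0, 1, 0),
])
weekday = 2  # pretend "tomorrow" is Tuesday (hypothetical fixed value)
rows = con.execute("""
    SELECT name FROM t1
    WHERE CASE :wd
            WHEN 1 THEN mo WHEN 2 THEN tu WHEN 3 THEN we
            WHEN 4 THEN th WHEN 5 THEN fr
          END = 1
""", {"wd": weekday}).fetchall()
```

Only the row whose Tuesday flag is 1 survives the filter.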
You need to separate the value the `case` returns from the comparison, which is a different thing (note that `DATEPART` returns a number, so `DATENAME` is needed to get an abbreviation like 'Mo'): ``` WHERE 1 = CASE SUBSTRING(DATENAME(WEEKDAY, GETDATE() + 1), 1, 2) WHEN 'Mo' THEN t1.Mo WHEN 'Tu' THEN t1.Tu -- ... END ```
Case in Where Clause using datepart
[ "", "sql", "sql-server", "select", "case", "" ]
Is it possible to replace a row's value with an empty string if a duplicate value is found? For example, turn ``` SELECT ProductCode, Color FROM Product -------------------- ProductCode | Color -------------------- 00A0B | Red 00A0B | Blue 00A0C | Red 00A0C | Black 00A0C | White -------------------- ``` into ``` -------------------- ProductCode | Color -------------------- 00A0B | Red | Blue 00A0C | Red | Black | White -------------------- ``` I'm using SQL Server 2012.
Often, this type of transformation is better done at the application layer, because the result set is not "SQL-ish". That is, the ordering is important for understanding the rows. But, you can do this as: ``` select (case when row_number() over (partition by ProductCode order by (select NULL)) = 1 then ProductCode end) as ProductCode, Color from Product order by ProductCode; ```
Use `ROW_NUMBER` and `CASE`: ``` WITH Cte AS( SELECT *, Rn = ROW_NUMBER() OVER(PARTITION BY ProductCode ORDER BY (SELECT NULL)) FROM Product ) SELECT ProductCode = CASE WHEN Rn = 1 THEN c.ProductCode ELSE '' END, Color FROM Cte c ORDER BY c.ProductCode ```
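A runnable sketch of this ROW_NUMBER/CASE idea, using an in-memory SQLite database (window functions need SQLite 3.25+; the question itself targets SQL Server 2012, and `rowid` stands in here for SQL Server's `ORDER BY (SELECT NULL)`):

```python
import sqlite3

# ROW_NUMBER per ProductCode; only the first row of each partition keeps
# its code, later rows get an empty string.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (product_code TEXT, color TEXT)")
con.executemany("INSERT INTO product VALUES (?, ?)", [
    ("00A0B", "Red"), ("00A0B", "Blue"),
    ("00A0C", "Red"), ("00A0C", "Black"), ("00A0C", "White"),
])
rows = con.execute("""
    WITH cte AS (
        SELECT product_code, color,
               ROW_NUMBER() OVER (PARTITION BY product_code
                                  ORDER BY rowid) AS rn
        FROM product
    )
    SELECT CASE WHEN rn = 1 THEN product_code ELSE '' END, color
    FROM cte
    ORDER BY product_code, rn
""").fetchall()
```

The output matches the question's expected shape: the duplicate codes are blanked out.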
Replace row value with empty string if duplicate
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I need to count the number of changing values in a column sequentially. Please see the image for illustration (correct or expected output). Here, the column `Area` is changing; the counter column should display a sequential counter based on the changing values in area. [![enter image description here](https://i.stack.imgur.com/esNnD.jpg)](https://i.stack.imgur.com/esNnD.jpg) I have started with this code ``` SELECT a.tenant, a.area, a.date , a.gsc, f.counter FROM TENANT a inner join (SELECT tenant, COUNT(DISTINCT Area) AS counter FROM TENANT GROUP BY tenant ) AS f ON a.tenant = f.tenant order by a.tenant, a.date ``` which gives me this output, counting the number of distinct values found in the `Area` column across ALL rows. [![enter image description here](https://i.stack.imgur.com/MrWsU.png)](https://i.stack.imgur.com/MrWsU.png)
Here's one way to do it using window functions: ``` SELECT tenant, area, [date], sales, DENSE_RANK() OVER (ORDER BY grpOrder) AS counter FROM ( SELECT tenant, area, date, sales, MIN([date]) OVER (PARTITION BY area, grp) AS grpOrder FROM ( SELECT tenant, area, [date], sales, ROW_NUMBER() OVER (ORDER BY date) - ROW_NUMBER() OVER (PARTITION BY area ORDER BY [date]) AS grp FROM tenant ) AS t ) AS s ``` The inner query identifies islands of consecutive `area` values. See `grp` value in below partial output from this sub-query: ``` area date grp -------------------- 18 2015-01-01 0 18 2015-01-02 0 18 2015-01-05 2 18 2015-01-06 2 20 2015-01-03 2 20 2015-01-04 2 ``` Using window version of `MIN` we can calculate `grp` order: field `grpOrder` holds the minimum date per group. Using `DENSE_RANK()` in the outer query we can now easily calculate `counter` values: first group gets a value of 1, next group a value of 2, etc. [**Demo here**](http://sqlfiddle.com/#!6/abc73/2)
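The gaps-and-islands trick above can be reproduced in miniature with sqlite3 (window functions, SQLite 3.25+). The difference of the two ROW_NUMBERs is constant within each run of equal `area` values, and DENSE_RANK over each island's start date numbers the runs; the sample rows are a trimmed stand-in for the question's data:

```python
import sqlite3

# Gaps-and-islands sketch: grp = row_number over all rows minus row_number
# per area; MIN(d) per (area, grp) is the island start; DENSE_RANK numbers
# the islands in date order.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tenant (area INT, d TEXT)")
con.executemany("INSERT INTO tenant VALUES (?, ?)", [
    (18, "2015-01-01"), (18, "2015-01-02"),
    (20, "2015-01-03"), (20, "2015-01-04"),
    (18, "2015-01-05"), (18, "2015-01-06"),
])
rows = con.execute("""
    WITH g AS (
        SELECT area, d,
               ROW_NUMBER() OVER (ORDER BY d)
             - ROW_NUMBER() OVER (PARTITION BY area ORDER BY d) AS grp
        FROM tenant
    ), m AS (
        SELECT area, d, MIN(d) OVER (PARTITION BY area, grp) AS island_start
        FROM g
    )
    SELECT area, d, DENSE_RANK() OVER (ORDER BY island_start) AS counter
    FROM m
    ORDER BY d
""").fetchall()
```

The second run of area 18 gets counter 3, not 1 - exactly the behavior the question asks for.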
You can do it like this with window functions: ``` declare @data table(name varchar(10), area int, dates datetime, sales int) insert into @data(name, area, dates, sales) values ('Little Asia', 18, '20150101', 10) , ('Little Asia', 18, '20150102', 20) , ('Little Asia', 20, '20150103', 30) , ('Little Asia', 20, '20150104', 10) , ('Little Asia', 18, '20150105', 20) , ('Little Asia', 18, '20150106', 30) Select name, area, dates, sales , [counter] = DENSE_RANK() over(order by c) , [count] = Count(*) over(partition by n ,c) From ( Select name, area, dates, sales, n , c = ROW_NUMBER() over(order by n, dates) - ROW_NUMBER() over(partition by area, n order by dates) From ( Select name, area, dates, sales , n = ROW_NUMBER() over(order by dates) - ROW_NUMBER() over(partition by area order by dates) From @data ) as x ) as v order by dates ``` Output: ``` name area dates sales counter count Little Asia 18 2015-01-01 10 1 2 Little Asia 18 2015-01-02 20 1 2 Little Asia 20 2015-01-03 30 2 2 Little Asia 20 2015-01-04 10 2 2 Little Asia 18 2015-01-05 20 3 2 Little Asia 18 2015-01-06 30 3 2 ```
SQL Server Query to Count Number of Changing Values in a Column Sequentially
[ "", "sql", "sql-server", "sql-server-2008", "database-partitioning", "" ]
I am trying to achieve **Case 3**: **Case 1:** Customer is registered but has not placed an order yet. This query gives the result for case 1. ``` SELECT c.customers_id, c.customers_firstname, c.customers_lastname, c.customers_email_address, c.customers_telephone, c.customers_fax, ab.entry_street_address, ab.entry_city, ab.entry_state, ab.entry_country_id, ab.entry_postcode FROM customers c JOIN address_book ab ON c.customers_id = ab.customers_id JOIN customers_info ci ON c.customers_id = ci.customers_info_id WHERE ci.customers_info_date_account_created BETWEEN '2014-10-25' AND '2015-10-10' ``` **Case 2:** Customer is registered and has also placed an order. This query gets additional delivery information from the orders table. The following query works for case 2. ``` SELECT c.customers_id, c.customers_firstname, c.customers_lastname, c.customers_email_address, c.customers_telephone, c.customers_fax, ab.entry_street_address, ab.entry_city, ab.entry_state, ab.entry_country_id, ab.entry_postcode, o.delivery_street_address, o.delivery_city, o.delivery_state, o.delivery_country, o.delivery_postcode FROM customers c JOIN address_book ab ON c.customers_id = ab.customers_id JOIN orders o ON c.customers_id = o.customers_id WHERE o.date_purchased BETWEEN '2014-10-25' AND '2015-10-10' ``` **Case 3 (what I am trying to achieve):** I want to get all customers registered within a certain time period using ci.customers\_info\_date\_account\_created BETWEEN '2014-10-25' and '2015-10-10'. Also, I want to check if an order was placed by the customer using o.date\_purchased BETWEEN '2014-10-25' and '2015-10-10'. If an order was placed, then get all delivery information from the orders table. If no order was placed, then leave these columns empty: o.delivery\_street\_address, o.delivery\_city, o.delivery\_state, o.delivery\_country, o.delivery\_postcode
``` SELECT c.customers_id, c.customers_firstname, c.customers_lastname, c.customers_email_address, c.customers_telephone, c.customers_fax, ab.entry_street_address, ab.entry_city, ab.entry_state, ab.entry_country_id, ab.entry_postcode, o.delivery_street_address, o.delivery_city, o.delivery_state, o.delivery_country, o.delivery_postcode FROM customers c INNER JOIN address_book ab ON c.customers_id = ab.customers_id INNER JOIN customers_info ci ON c.customers_id = ci.customers_info_id LEFT JOIN (select * from orders where date_purchased BETWEEN '2014-10-25' AND '2015-10-10') o ON c.customers_id = o.customers_id WHERE ci.customers_info_date_account_created BETWEEN '2014-10-25' AND '2015-10-10' ``` You can list the field names instead of \* in the subquery.
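The key move in the answer above - filtering orders in a derived table *before* the LEFT JOIN, so customers without a qualifying order still appear with NULL delivery columns - can be sketched with sqlite3 (the table shapes here are simplified stand-ins for the question's schema):

```python
import sqlite3

# "ann" has an order in the window, "bob" has none: the pre-filtered
# derived table plus LEFT JOIN keeps both customers.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INT, name TEXT)")
con.execute("CREATE TABLE orders (customers_id INT, date_purchased TEXT, city TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "ann"), (2, "bob")])
con.execute("INSERT INTO orders VALUES (1, '2015-01-15', 'Berlin')")
rows = con.execute("""
    SELECT c.name, o.city
    FROM customers c
    LEFT JOIN (SELECT * FROM orders
               WHERE date_purchased BETWEEN '2014-10-25' AND '2015-10-10') o
      ON c.id = o.customers_id
    ORDER BY c.id
""").fetchall()
```

Had the date filter been placed in the outer WHERE instead, "bob"'s NULL row would have been filtered away - which is exactly the trap the subquery avoids.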
Change `JOIN orders o` to `LEFT JOIN orders o` (which will fill a NULL row), then change your condition to ``` WHERE (o.date_purchased BETWEEN '2014-10-25' AND '2015-10-10' OR o.customers_id IS NULL) ``` That will find anyone with an order in that timeframe, or customers that have not yet placed an order.
Select query with optional condition
[ "", "mysql", "sql", "" ]
How can I count pair-wise occurrences in a SQL Server table? Please note that the order of the given sequence has to be accounted for and shouldn't be changed. Original table: ``` 1 2 3 4 -------- 1 | A A A B 2 | A # don't count 3 | B A A 4 | B # don't count ``` Result: ``` 1 | AA = 3 2 | AB = 1 3 | BB = 0 4 | BA = 1 ``` In addition, the code has to work for large datasets. **Edit**: A pair in this context is a set of two values {x[ij], x[(i+1)j]}, where i=1,...,4 and j=1,...,4. Further, pairs that have the form `A null` or `B null` shouldn't be counted. Moreover, `null A` or `null B` can't happen, therefore they don't have to be accounted for.
**[LiveDemo](https://data.stackexchange.com/stackoverflow/query/374429)** ``` CREATE TABLE #tab([1] NVARCHAR(100), [2] NVARCHAR(100), [3] NVARCHAR(100), [4] NVARCHAR(100)); INSERT INTO #tab VALUES ('A', 'A', 'A', 'B') ,('A' , NULL ,NULL ,NULL ) ,('B' ,'A' ,'A', NULL),('B', NULL, NULL, NULL); WITH cte AS ( SELECT pair = [1] + [2] FROM #tab UNION ALL SELECT pair = [2] + [3] FROM #tab UNION ALL SELECT pair = [3] + [4] FROM #tab ), cte2 AS ( SELECT [1] AS val FROM #tab UNION ALL SELECT [2] FROM #tab UNION ALL SELECT [3] FROM #tab UNION ALL SELECT [4] FROM #tab ), all_pairs AS ( SELECT DISTINCT a.val + b.val AS pair FROM cte2 a CROSS JOIN cte2 b WHERE a.val IS NOT NULL and b.val IS NOT NULL ) SELECT a.pair, result = COUNT(c.pair) FROM all_pairs a LEFT JOIN cte c ON a.pair = c.pair GROUP BY a.pair; ``` How it works: 1. `cte` create all pairs (1,2), (2,3), (3,4) 2. `cte2` get all values from column 3. `all_pairs` create all possible pairs of values `AA, AB, BA, BB` 4. Final use grouping and `COUNT` to get number of occurences. **EDIT:** You can concatenate result as below: **[LiveDemo2](https://data.stackexchange.com/stackoverflow/query/374436)** ``` ... , final AS ( SELECT a.pair, result = COUNT(c.pair), rn = ROW_NUMBER() OVER(ORDER BY a.pair) FROM all_pairs a LEFT JOIN cte c ON a.pair = c.pair GROUP BY a.pair ) SELECT rn, [result] = pair + ' = ' + CAST(result AS NVARCHAR(100)) FROM final ```
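As a cross-check of the expected counts, the same pair logic is easy to state directly in Python: enumerate the three adjacent (column i, column i+1) pairs per row, skip any pair that touches a NULL, and count:

```python
from collections import Counter
from itertools import product

# Cross-check of the question's expected result on the same sample grid;
# None stands in for SQL NULL.
table = [
    ["A", "A", "A", "B"],
    ["A", None, None, None],
    ["B", "A", "A", None],
    ["B", None, None, None],
]
counts = Counter(
    row[i] + row[i + 1]
    for row in table
    for i in range(3)  # 4 columns -> 3 adjacent pairs per row
    if row[i] is not None and row[i + 1] is not None
)
values = {v for row in table for v in row if v is not None}
# Include zero-count pairs (BB) just like the SQL's LEFT JOIN does.
result = {a + b: counts[a + b] for a, b in product(sorted(values), repeat=2)}
```

This reproduces AA = 3, AB = 1, BA = 1, BB = 0 from the question.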
I just want to point out a pretty easy way to *express* this logic: ``` with vals as ( select 'A' as val union all select 'B' ), pairs as ( select t1.val as val1, t2.val as val2 from vals t1 cross join vals t2 ) select p.*, (select count(*) from original where [1] = val1 and [2] = val2 or [2] = val1 and [3] = val2 or [3] = val1 and [4] = val2 ) as cnt from pairs p order by cnt desc; ``` This doesn't have great performance characteristics, but that is easily fixed by using three subqueries and indexes on the data columns.
Count pair-wise occurrences in a T-SQL table
[ "", "sql", "sql-server", "t-sql", "" ]
For a quick check I used a query ``` select COUNT(*) LargeTable ``` and was surprised to see ``` LargeTable ----------- 1 ``` Seconds later I realized my mistake and made it ``` select COUNT(*) from LargeTable ``` and got the expected result ``` (No column name) ----------- 1.000.000+ ``` but now I don't understand why `COUNT(*)` returned `1`. It happens if I do `select COUNT(*)` or `declare @x int = COUNT(*); select @x`. Another case ``` declare @EmptyTable table ( Value int ) select COUNT(*) from @EmptyTable ``` returns ``` (No column name) ----------- 0 ``` I didn't find an explanation in the SQL standard (<http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt>, an online source is given here <https://stackoverflow.com/a/8949764/1506454>). Why does `COUNT(*)` return 1?
In SQL Server a `SELECT` without a `FROM` clause works as though it operates against a single row table. This is not standard SQL. Other RDBMSs provide a utility `DUAL` table with a single row. So this would be treated effectively the same as ``` SELECT COUNT(*) AS LargeTable FROM DUAL ``` A related Connect Item discussing ``` SELECT 'test' WHERE EXISTS (SELECT *) ``` is <https://connect.microsoft.com/SQLServer/feedback/details/671475/select-test-where-exists-select>
Because without the **FROM** clause the DBMS cannot know [LargeTable] is a table. You tricked it into guessing it's a **COLUMN NAME** alias. You can try it and see ``` select count(*) 'eklmnjdklfgm' select count(*) eklmnjdklfgm select count(*) [eklmnjdklfgm] select count(*) ``` The first 3 examples return **eklmnjdklfgm** as the column name
Why COUNT(*) is equal to 1 without FROM clause?
[ "", "sql", "sql-server-2008", "count", "sql-server-2012", "" ]
``` CREATE VIEW dbo.myview1 As SELECT a.Id , a.Name , a.Age , CASE WHEN b.PmtSched ='Monthly' THEN 12 WHEN b.PmtSched ='Quarterly' THEN 4 WHEN b.PmtSched ='Semi-Annual' THEN 2 WHEN b.PmtSched ='Annually' THEN 1 ELSE 12 END AS ABC, SUM(a.Amount) *50 as TotalAmount , (a.AmtSpent - TotalAmount) * ABC as TOTALSPENDS FROM dbo.User a join dbo.Details b on a.Id = b.Id ``` Here ABC and TotalAmount are alias columns which need to be used in computations in the view, and I am not able to use them. How can I achieve this? Is there any way we could do this, or can't we? Please help.
Yes, you can use it and you don't need neither subqueries, nor CTEs. It's a simple [CROSS APPLY](https://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396). It's quite elegant and doesn't hurt readability. If you need more information, read [here](http://sqlmag.com/blog/tip-apply-and-reuse-column-aliases). Please see this example: ``` CREATE VIEW dbo.myview1 AS SELECT A.Id , A.Name , A.Age , SUM(A.Amount) * 50 AS TotalAmount , (A.AmtSpent - TotalAmount) * T.ABC AS TotalSpends FROM dbo.[User] AS A CROSS APPLY ( SELECT CASE B.PmtSched WHEN 'Monthly' THEN 12 WHEN 'Quarterly' THEN 4 WHEN 'Semi-Annual' THEN 2 WHEN 'Annually' THEN 1 ELSE 12 END) AS T(ABC) INNER JOIN dbo.Details AS B ON A.Id = B.Id; ```
The simple solution to your problem is to repeat the expression, use a subquery, or use a CTE. However, the more intelligent method is to add a reference table for payment schedules. This would look like: ``` create table PaymentSchedules ( PaymentScheduleId int identity(1, 1) primary key, ScheduleName varchar(255), FrequencyPerYear float -- this could be less often than once per year ); ``` Then the view would look like: ``` CREATE VIEW dbo.myview1 As SELECT a.Id, a.Name, a.Age, ps.FrequencyPerYear, SUM(a.Amount) * 50 as TotalAmount, (a.AmtSpent - SUM(a.Amount) * 50) * ps.FrequencyPerYear as TOTALSPENDS FROM dbo.User a join dbo.Details b on a.Id = b.Id join dbo.PaymentSchedules ps on ps.PaymentScheduleId = a.PamentScheduleId; ```
Can an alias column be used in a view for calculation in some other column?
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I'm pretty good at SQL but I haven't done this in a while. I have a simple table with a common string: ``` TableName = ExampleTable1 ColumnName = ExampleColumn1 ``` I have a string like this: ``` MYSTRING_10_TB_EXAMPLE1 MYSTRING_120_TB_EXAMPLE2 ``` I have this query: ``` select ExampleColumn1, replace(ExampleColumn1,'MYSTRING_', '') from ExampleTable1 ``` This is returning just the number at the beginning of the string: ``` "10_TB_EXAMPLE1" ``` I now need to remove the string after the first dash after the integer. The integer could be either one digit or four or five but I need everything including the first "\_" removed or anything that starts with "\_TB" to return only the integer. I know you can use STUFF and replace. I think I need to replace my third parameter in the query of '' with another replace or right maybe? I have tried many things here and I can't trim the whole string after the first "\_" sign to just leave the integer.
Use `REPLACE` to trim away the first part of the string as you've already done. Use `CHARINDEX` to find (in the resulting string) the position of the first `_`. Then use `LEFT` to keep only the left part of the string up to (but excluding) the first `_`. ``` SELECT ExampleColumn1, LEFT( REPLACE(ExampleColumn1,'MYSTRING_', ''), CHARINDEX( '_', REPLACE(ExampleColumn1,'MYSTRING_', '') ) - 1 ) FROM ExampleTable1 ``` This would return ``` MYSTRING_10_TB_EXAMPLE1 10 MYSTRING_120_TB_EXAMPLE2 120 ``` If you're using MySQL, instead of `CHARINDEX` use `LOCATE`: ``` LOCATE( '_', REPLACE(ExampleColumn1,'MYSTRING_', '') ) - 1 ```
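The same strip-prefix-then-take-up-to-the-first-underscore idea can be sketched with SQLite's `REPLACE`/`INSTR`/`SUBSTR` as stand-ins for T-SQL's `REPLACE`/`CHARINDEX`/`LEFT` (table and column names mirror the question):

```python
import sqlite3

# Strip the "MYSTRING_" prefix, then keep everything before the first '_'
# of what remains - leaving just the integer.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE example_table1 (example_column1 TEXT)")
con.executemany("INSERT INTO example_table1 VALUES (?)",
                [("MYSTRING_10_TB_EXAMPLE1",), ("MYSTRING_120_TB_EXAMPLE2",)])
rows = con.execute("""
    SELECT substr(trimmed, 1, instr(trimmed, '_') - 1)
    FROM (SELECT replace(example_column1, 'MYSTRING_', '') AS trimmed
          FROM example_table1)
""").fetchall()
```

Both the one-digit and the three-digit case come out as just the number, regardless of integer width.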
Use `SUBSTRING_INDEX()` function. This query: ``` SELECT SUBSTRING_INDEX( SUBSTRING_INDEX('MYSTRING_120_TB_EXAMPLE2','_',2), '_', -1); ``` will return `120`
Replace or trim string in SQL
[ "", "sql", "string", "" ]
My table `myTab` has the column `startDate`, which has the datatype DATE. The data in this column are stored like `dd.mm.yyyy`. Now I'm trying to get data with this query: ``` SELECT * FROM myTab WHERE startDate like '%01.2015' ``` Somehow it doesn't work and I don't know why. I hope someone can help.
To make a text search on the date you would have to convert the date to text. It's more efficient if you calculate the first and last date for what you want to find and get everything between them. That way it's done as numeric comparisons instead of a text pattern match, and it can make use of an index if there is one: ``` SELECT * FROM myTab WHERE startDate >= DATE '2015-01-01' AND startDate < DATE '2015-02-01' ```
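A small sqlite3 sketch of the half-open range comparison (SQLite stores dates as ISO-8601 text, so plain `>=` / `<` comparisons work; in Oracle you would use the `DATE` literals shown above):

```python
import sqlite3

# Everything in January 2015, expressed as a half-open range instead of a
# text pattern - this keeps the comparison sargable/index-friendly.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytab (startdate TEXT)")
con.executemany("INSERT INTO mytab VALUES (?)",
                [("2015-01-08",), ("2015-01-31",), ("2015-02-01",)])
rows = con.execute("""
    SELECT startdate FROM mytab
    WHERE startdate >= '2015-01-01' AND startdate < '2015-02-01'
    ORDER BY startdate
""").fetchall()
```

Note the `< '2015-02-01'` upper bound: it includes the whole last day of January without listing it explicitly.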
``` SELECT * FROM myTab WHERE TO_CHAR(startDate,'dd.mm.yyyy') LIKE '%01.2015' ```
Oracle use LIKE '%' on DATE
[ "", "sql", "oracle", "date-comparison", "" ]
I have a string which contains data as follows: ``` 'ID,MASTER_ID,DATA_SOURCE_ID,DATA_SOURCE_NAME,CHILD_COUNT,CHILD_COUNT_TEXT,PARENT_OR_CHILD,RECORD_TYPE,FULL_NAME_LNF,FULL_NAME_FNF,FIRST_NAME,LAST_NAME,PREFIX,SUFFIX,MIDDLE,TITLE,NAME_OF_ORGANIZATION,NAME_OF_BUSINESS,TYPE_OF_ENTITY,ADDRESS,CITY,STATE,PROVINCE,POSTAL_CODE,COUNTRY,POSTAL_ADDRESS_TYPE,PHONE_AREA_CODE,PHONE_NUMBER,PHONE_COUNTRY_CODE,PHONE,PHONE_TYPE,EMAIL_ADDRESS,URL,HCP_SPECIALTY,HCP_TYPE,HCP_SUBTYPE,RECIPIENT_STATUS,COVERED_RECIPIENT_FLAG,RELATIONSHIP_TO_CR,LAST_MODIFIED_BY,LAST_MODIFIED_DATE,PRIMARY_LICENSE_STATE_AR,PRIMARY_LICENSE_NUM_AR,DEA_REG_NUM_AR,NPI_NUM_AR,UPIN_AR,TAX_PAYER_ID_AR,PRIMARY_LICENSE_STATE_CR,PRIMARY_LICENSE_NUM_CR,DEA_REG_NUM_CR,NPI_NUM_CR,UPIN_CR,DEA_NUMBER,NPI,UPIN,TIN,TAX_PAYER_ID_CR,ATTRIBUTE13,ATTRIBUTE14,ATTRIBUTE15,ATTRIBUTE16,ATTRIBUTE17,ATTRIBUTE18,ATTRIBUTE19,ATTRIBUTE20,ATTRIBUTE21,ATTRIBUTE22,ATTRIBUTE23,ATTRIBUTE24,ATTRIBUTE25,ATTRIBUTE26,ATTRIBUTE27,ATTRIBUTE28,ATTRIBUTE29,ATTRIBUTE30,SOURCE_REGION_CODE,SOURCE_SYSTEM_CODE,REC_INVALID_FLAG,REVISION_FLAG,IS_ACTIVE,PROCESS_STATE,RECIPIENT_CATEGORY01,RECIPIENT_CATEGORY02,RECIPIENT_CATEGORY03,RECIPIENT_CATEGORY04,RECIPIENT_CATEGORY05,RECIPIENT_CATEGORY06,RECIPIENT_CATEGORY07,RECIPIENT_CATEGORY08,RECIPIENT_CATEGORY09,RECIPIENT_CATEGORY10,RECIPIENT_CATEGORY11,RECIPIENT_CATEGORY12,RECIPIENT_CATEGORY13,RECIPIENT_CATEGORY14,RECIPIENT_CATEGORY15,RECIPIENT_CATEGORY16,RECIPIENT_CATEGORY17,RECIPIENT_CATEGORY18,RECIPIENT_CATEGORY19,RECIPIENT_CATEGORY20,RECIPIENT_CATEGORY21,RECIPIENT_CATEGORY22,RECIPIENT_CATEGORY23,RECIPIENT_CATEGORY24,RECIPIENT_CATEGORY25,RECIPIENT_CATEGORY26,RECIPIENT_CATEGORY27,RECIPIENT_CATEGORY28,RECIPIENT_CATEGORY29,RECIPIENT_CATEGORY30,IS_PICKABLE,IS_GOLDEN,PRIMARY_LICENSE_NUM,PRIMARY_LICENSE_EFFECTIVE,PRIMARY_LICENSE_EXPIRES,TERTIARY_LICENSE_EFFECTIVE,TERTIARY_LICENSE_EXPIRES,SECONDARY_LICENSE_EFFECTIVE,SECONDARY_LICENSE_EXPIRES,SECONDARY_LICENSE_NUM,TERTIARY_LICENSE_NUM,ADDRESS2,PHONE_AREA_CODE2,PHONE_NUMBER2,PHONE_COUNTRY_CODE2,PHONE_TYPE2,TERRITORY,PRIMARY_AFFILIATION,PRIMARY_AFFILIATION_STATE,REQUEST_WF_STATE,IS_EDIT_LOCKED,SOURCE_SYSTEM_RECIPIENT_ID,CREATED_BY,CREATION_DATE,APPROVER_COMMENTS,SECONDARY_LICENSE_STATE,PRIMARY_LICENSE_STATE,NPI_DATA,STATE_DATA,DEA_DATA,RPPS,FINESS,SIREN_NUMBER,DPC' ``` Can anybody tell me how to find only the following values ``` DATA_SOURCE_ID, LAST_MODIFIED_BY, LAST_MODIFIED_DATE, ATTRIBUTE13, ATTRIBUTE14, ATTRIBUTE15, ATTRIBUTE16, ATTRIBUTE17, ATTRIBUTE18, ATTRIBUTE19, ATTRIBUTE20, ATTRIBUTE21, ATTRIBUTE22, ATTRIBUTE23, ATTRIBUTE24, ATTRIBUTE25, ATTRIBUTE26, ATTRIBUTE27, ATTRIBUTE28, ATTRIBUTE29, ATTRIBUTE30, ``` And then replace them with the following strings ``` 'L.DATA_SOURCE_ID,L.LAST_MODIFIED_BY,L.LAST_MODIFIED_DATE,L.ATTRIBUTE13,L.ATTRIBUTE14,L.ATTRIBUTE15,L.ATTRIBUTE16,L.ATTRIBUTE17,L.ATTRIBUTE18,L.ATTRIBUTE19,L.ATTRIBUTE20,L.ATTRIBUTE21,L.ATTRIBUTE22,L.ATTRIBUTE23,L.ATTRIBUTE24,L.ATTRIBUTE25,L.ATTRIBUTE26,L.ATTRIBUTE27,L.ATTRIBUTE28,L.ATTRIBUTE29,L.ATTRIBUTE30,' ```
You are looking for this. It gives you your desired result ``` SELECT mycol, CONCAT('L.', mycol) AS newCol FROM(SELECT * FROM test WHERE mycol REGEXP 'DATA_SOURCE_ID|LAST_MODIFIED_BY|LAST_MODIFIED_DATE|ATTRIBUTE[0-9]+') as temp ``` **EXPLANATION** ``` SELECT * FROM test WHERE mycol REGEXP 'DATA_SOURCE_ID|LAST_MODIFIED_BY|LAST_MODIFIED_DATE|ATTRIBUTE[0-9]+') ``` This finds all rows that either have `DATA_SOURCE_ID` or `LAST_MODIFIED_BY` or `LAST_MODIFIED_DATE` or `ATTRIBUTE` followed by any number ``` SELECT mycol, CONCAT('L.', mycol) AS newCol ``` This adds `L.` to all the rows that has been found by the subquery. **OUTPUT** ``` L.DATA_SOURCE_ID, L.LAST_MODIFIED_BY, L.LAST_MODIFIED_DATE, L.ATTRIBUTE13, L.ATTRIBUTE14, L.ATTRIBUTE15, L.ATTRIBUTE16, L.ATTRIBUTE17, L.ATTRIBUTE18, L.ATTRIBUTE19, L.ATTRIBUTE20, L.ATTRIBUTE21, L.ATTRIBUTE22, L.ATTRIBUTE23, L.ATTRIBUTE24, L.ATTRIBUTE25, L.ATTRIBUTE26, L.ATTRIBUTE27, L.ATTRIBUTE28, L.ATTRIBUTE29, L.ATTRIBUTE30 ``` Hope this helps
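If the goal is to rewrite the column list itself (prefixing each matched name with `L.`), the whole substitution can also be done in one pass with a regular expression. Here is a Python sketch on a shortened sample string; the pattern is an assumption covering the three fixed names plus ATTRIBUTE13 through ATTRIBUTE30:

```python
import re

# One alternation for the fixed column names plus ATTRIBUTE13..ATTRIBUTE30;
# each match is prefixed with "L." in a single sub() pass.
pattern = re.compile(
    r"\b(DATA_SOURCE_ID|LAST_MODIFIED_BY|LAST_MODIFIED_DATE"
    r"|ATTRIBUTE(?:1[3-9]|2[0-9]|30))\b"
)
columns = "ID,DATA_SOURCE_ID,LAST_MODIFIED_BY,LAST_MODIFIED_DATE,ATTRIBUTE13,ATTRIBUTE30,NPI"
renamed = pattern.sub(r"L.\1", columns)
```

The `\b` word boundaries keep names such as `ID` or `NPI` (which are not in the target list) untouched.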
You can try it like this: ``` replace( replace(myString, 'DATA_SOURCE_ID', 'yourReplaceValue1'), 'LAST_MODIFIED_BY', 'yourReplaceValue2')..... ``` The `.....` is where you continue nesting similar REPLACE calls for the rest of your replacement strings :)
Find and replace strings in Oracle
[ "", "sql", "oracle", "replace", "oracle10g", "" ]
Below is the requirement; can you help me get the desired query? ``` create table sales(Order_ID number, item varchar2(20)); insert into sales values (10, 'RICE'); insert into sales values (10, 'WATER'); insert into sales values (10, 'SALT'); insert into sales values (20, 'TOMATO'); insert into sales values (20, 'ONION'); insert into sales values (30, 'OIL'); insert into sales values (30, 'EGG'); insert into sales values (40, 'CHICKEN'); insert into sales values (50, 'FISH'); ``` I need the output in the below format. ``` Order_ID ITEM 10 RICE WATER SALT 20 TOMATO ONION 30 OIL EGG 40 CHICKEN 50 FISH ```
Use the `row_number()` analytic function plus a case statement in order to only populate the order\_id for the first row: ``` select case when rn = 1 then order_id end order_id, item from (select order_id, item, row_number() over (partition by order_id order by item) rn from sales); ORDER_ID ITEM ---------- -------------------- 10 RICE SALT WATER 20 ONION TOMATO 30 EGG OIL 40 CHICKEN 50 FISH ```
I usually do it in **SQL\*Plus** and it is all about **formatting your output**. You could use **BREAK ON *column\_name***. For example, ``` SQL> break on deptno SQL> SELECT deptno, ename FROM emp order by deptno; DEPTNO ENAME ---------- ---------- 10 CLARK KING MILLER 20 JONES FORD ADAMS SMITH SCOTT 30 WARD TURNER ALLEN JAMES BLAKE MARTIN 14 rows selected. ``` Most of the GUI based client tools now support most of the **`SQL*Plus` commands**. For example, in **SQL Developer**, you could run as script i.e. **F5**. In **PL/SQL Developer**, there is a different window for SQL\*Plus like environment. However, if you want pure SQL approach, I suggest @Boneist's solution.
Oracle Query with null values
[ "", "sql", "oracle", "" ]
I am trying to access outputs from a database. > Question: how many songs were released during each decade with a classification of Pop music? > How exactly can I output all the songs in different decades in one single query? The desired output should be different columns with all the songs from different decades. My try for the first two decades: ``` Select count(*) as before1970 from Songs and classification = 'Pop' Where ReleaseYear < 1970 and (Select count(*) as 'before1980' from Songs Where ReleaseYear BETWEEN 1970 and 1980 and classification = 'Pop' ); ``` I checked: the oldest song in the database is from 1969, so I used 1970 as a starter and then went up by each decade. But the query returns nothing.
Another alternative: ``` Select sum(if(ReleaseYear < 1970, 1, 0)) as before1970, sum(if(ReleaseYear >= 1970 AND ReleaseYear <= 1980, 1, 0)) as before1980 from Songs where classification = 'Pop' ```
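Both answers use conditional aggregation - one scan of the table, one `SUM(CASE ...)` per decade. A minimal sqlite3 sketch with made-up sample rows (the `SUM(IF(...))` form above is MySQL-specific, so the portable `CASE` spelling is used here):

```python
import sqlite3

# One pass over the table; each SUM(CASE...) counts one decade, and the
# classification filter is applied once in the WHERE clause.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE songs (release_year INT, classification TEXT)")
con.executemany("INSERT INTO songs VALUES (?, ?)", [
    (1969, "Pop"), (1972, "Pop"), (1975, "Rock"), (1981, "Pop"),
])
row = con.execute("""
    SELECT SUM(CASE WHEN release_year < 1970 THEN 1 ELSE 0 END),
           SUM(CASE WHEN release_year BETWEEN 1970 AND 1979 THEN 1 ELSE 0 END),
           SUM(CASE WHEN release_year BETWEEN 1980 AND 1989 THEN 1 ELSE 0 END)
    FROM songs
    WHERE classification = 'Pop'
""").fetchone()
```

The 1975 Rock row is excluded by the WHERE clause, so each decade column counts only Pop songs.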
You can try something like this ``` SELECT SUM(CASE WHEN ReleaseYear < 1970 THEN 1 ELSE 0 END) AS BEFORE1970, SUM(CASE WHEN ReleaseYear BETWEEN 1970 AND 1980 THEN 1 ELSE 0 END) AS BEFORE1980 FROM Songs WHERE Classification = 'Pop' ```
How to return Multiple outputs from a database as different columns in SQL
[ "", "mysql", "sql", "" ]
I am building an Oracle APEX application in which journalists can bring in articles, which a director selects for a broadcast, along with a newsreader (who can read the articles if the broadcast is created). I am kind of stuck with the following: I would like to create an Oracle SQL query which selects the first and last name of a record in the "user" table, when its availability (which is stored in another table, "Beschikbaarheid") meets the given circumstances (user begin time at broadcast day >= broadcast begin time AND user end time at broadcast day <= broadcast end time). I myself have written the following query. ``` SELECT voornaam || ' ' || achternaam AS display_value, id AS return_value FROM Gebruiker WHERE id NOT IN (SELECT Nieuwslezerselectie.nieuwslezerid FROM Nieuwslezerselectie WHERE -- other conditions AND id IN (SELECT gebruikerid FROM Beschikbaarheid WHERE (dag = to_char(to_date(:P91_DATUM,'DD-MON-YYYY'),'Day')) AND (to_date(:P91_BEGINTIJD,'HH24:MI') >= to_date(begintijd,'HH24:MI')) AND (to_date(:P91_EINDTIJD,'HH24:MI') <= to_date(eindtijd,'HH24:MI')) ); ``` (Excuse me if you are having trouble reading this, I use Dutch naming. Feel free to ask if something is unclear.) However, the APEX select list to which this query is linked is not showing anything if the newsreader is available at the time of the broadcast. I think the problem is "Begintijd" and "Eindtijd" are stored as time stamps, as the values should be date-independent (which time stamps are not), but I have no clue how to store them otherwise. As a varchar maybe? I sincerely hope you have any ideas. Thank you in advance! 
Luc **update 1** With help I have rewritten the query; it now looks like this: ``` AND id IN (SELECT gebruikerid FROM Beschikbaarheid WHERE (dag = to_char(to_date(:P91_DATUM,'DD-MON-YYYY'),'Day','NLS_DATE_LANGUAGE=Dutch')) AND ( to_date('01-01-2000' || ' ' || to_char(to_timestamp(:P91_BEGINTIJD),'HH24:MI'),'DD-MM-YYYY HH24:MI') BETWEEN to_date('01-01-2000' || ' ' || to_char(begintijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') AND to_date('01-01-2000' || ' ' || to_char(eindtijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') ) AND ( to_date('01-01-2000' || ' ' || to_char(to_timestamp(:P91_EINDTIJD),'HH24:MI'),'DD-MM-YYYY HH24:MI') BETWEEN to_date('01-01-2000' || ' ' || to_char(begintijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') AND to_date('01-01-2000' || ' ' || to_char(eindtijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') ) ); ``` However, I am now getting the *ORA-01840: input value not long enough for date format* error when trying to access the page. I haven't discovered what causes that error yet. Does anyone have any clues? **update 2** Now I am confused. It turned out I didn't need to_char etc. for my page variables, so I removed those (to_char etc.). Now I am normally sent to the correct page, but the last value I entered in the "set these items / with these values" text fields on the previous page isn't sent. This is what I have in these text fields right now: [![set items with values](https://i.stack.imgur.com/JYI3G.png)](https://i.stack.imgur.com/JYI3G.png) In this image, P85_BEGINTIJD isn't sent. I also switched the values around, but that didn't make a difference. Is there a limit to the number of values you can send with a request, or something?
Okay, so eventually, I was able to solve it. In my last update, the only lasting problem was that some variables weren't sent to the next page. I wasn't able to find the root cause of the problem, so I did a workaround. The only variable that was sent normally to the next page is P91_UITZENDING (the PK of Uitzending). Luckily, the other variables I needed (begintijd, eindtijd and datum) are attributes of Uitzending, so I was able to derive them with statements like this in the source of the corresponding page item: ``` SELECT to_char(begintijd,'HH24:MI') -- replace with eindtijd for eindtijd, and for date: to_char(datum,'DD-MON-YYYY') FROM Uitzending WHERE id = :P91_UITZENDING ``` And the final SQL query for the select box: ``` SELECT voornaam || ' ' || achternaam AS display_value, id AS return_value FROM Gebruiker WHERE -- Other statements AND id IN (SELECT gebruikerid FROM Beschikbaarheid WHERE (dag = to_char(to_date(:P91_DATUM,'DD-MON-YYYY'),'fmDay','NLS_DATE_LANGUAGE=Dutch')) AND ( to_date('01-01-2000' || :P91_BEGINTIJD,'DD-MM-YYYY HH24:MI') BETWEEN to_date('01-01-2000' || to_char(begintijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') AND to_date('01-01-2000' || to_char(eindtijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') ) AND ( to_date('01-01-2000' || :P91_EINDTIJD,'DD-MM-YYYY HH24:MI') BETWEEN to_date('01-01-2000' || to_char(begintijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') AND to_date('01-01-2000' || to_char(eindtijd,'HH24:MI'),'DD-MM-YYYY HH24:MI') ) ); ``` I still don't know why begintijd, eindtijd and datum weren't sent the normal way, but at least it is working now. Thank you all for your thinking and help!
Do not use timestamps. Use a date field, which includes both date and time. More info about date fields in Oracle: [Oracle Data types, see paragraph Overview of DATE Datatype](http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i1847)
Oracle SQL: check if time is between two instances (excluding dd/mm/yyyy)
[ "", "sql", "oracle", "time", "oracle-apex", "" ]
There is a popular class of problems where you need to select the row with the max value (or min, or some other aggregate function result) from a table. My case is more complex - it adds a `JOIN` into the picture. Here is an example: ``` CREATE TABLE spacecraft ( id serial PRIMARY KEY, name text NOT NULL ); CREATE TABLE launch ( id serial PRIMARY KEY, spacecraft_id int REFERENCES spacecraft(id) NOT NULL, started timestamptz NOT NULL, success bool NOT NULL ); INSERT INTO spacecraft (id, name) VALUES (1, 'Bebop'), (2, 'Serenity'), (3, 'Death Start'); INSERT INTO launch (spacecraft_id, started, success) VALUES (1, 'January 8 04:05:06 2999 UTC', true), (2, 'December 1 01:00:00 3432 UTC', true), (3, 'February 15 00:00:00 4521 UTC', false), (3, 'July 10 12:05:00 4525 UTC', true); ``` <http://sqlfiddle.com/#!15/a1921> How to select the last launch result (launch.success) for each spacecraft? ## UPDATE #1 That's my current solution: ``` SELECT DISTINCT S.*, last_launch, L2.success FROM spacecraft AS S LEFT OUTER JOIN (SELECT *, MAX(started) OVER (PARTITION BY spacecraft_id) AS last_launch FROM launch) AS L1 ON L1.spacecraft_id=S.id LEFT OUTER JOIN launch AS L2 ON L2.started=L1.last_launch; ``` <http://sqlfiddle.com/#!15/45618/38>
Assuming you want all columns from `spacecraft` in the result, plus the latest `launch` and the `success` of the same row. Faster for few launches per spacecraft: ``` SELECT s.*, l.last_launch, l.success FROM spacecraft s LEFT JOIN ( SELECT DISTINCT ON (spacecraft_id) spacecraft_id, started AS last_launch, success FROM launch ORDER BY spacecraft_id, started DESC ) l ON l.spacecraft_id = s.id; ``` Faster for many launches per spacecraft: ``` SELECT s.*, l.last_launch, l.success FROM spacecraft s LEFT JOIN LATERAL ( SELECT started AS last_launch, success FROM launch WHERE spacecraft_id = s.id ORDER BY started DESC LIMIT 1 ) l ON true; ``` `LATERAL` requires Postgres 9.3+. `LEFT JOIN` to include spacecraft without any launches. Else use `JOIN`. The essential ingredient to make either of these queries ***fast*** is a multicolumn index on `(spacecraft_id, started)`. For this particular query it would be beneficial to append `success` to the index to allow index-only scans: ``` CREATE INDEX launch_foo_idx ON launch (spacecraft_id, started DESC, success); ``` [**SQL Fiddle.**](http://sqlfiddle.com/#!15/f0e81/1) Detailed explanation: * [Optimize GROUP BY query to retrieve latest record per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748) * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
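Neither `DISTINCT ON` nor `LATERAL` is portable outside Postgres, but the same greatest-n-per-group result can be expressed with a plain correlated subquery. Here is a runnable sketch using SQLite, with the question's schema and sample data (timestamps rewritten as ISO strings so they sort chronologically; the correlated subquery stands in for the Postgres-specific forms):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE spacecraft (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE launch (
    id INTEGER PRIMARY KEY,
    spacecraft_id INT NOT NULL REFERENCES spacecraft(id),
    started TEXT NOT NULL,   -- ISO strings sort chronologically
    success BOOL NOT NULL
);
INSERT INTO spacecraft VALUES (1,'Bebop'),(2,'Serenity'),(3,'Death Start');
INSERT INTO launch (spacecraft_id, started, success) VALUES
    (1,'2999-01-08 04:05:06',1),
    (2,'3432-12-01 01:00:00',1),
    (3,'4521-02-15 00:00:00',0),
    (3,'4525-07-10 12:05:00',1);
""")

# Correlated subquery picks each spacecraft's latest launch; the LEFT JOIN
# keeps spacecraft with no launches at all.
rows = conn.execute("""
SELECT s.id, s.name, l.started AS last_launch, l.success
FROM spacecraft s
LEFT JOIN launch l
       ON l.spacecraft_id = s.id
      AND l.started = (SELECT MAX(started)
                       FROM launch
                       WHERE spacecraft_id = s.id)
ORDER BY s.id
""").fetchall()
for r in rows:
    print(r)
```

Unlike `DISTINCT ON`, this form returns multiple rows for a spacecraft if two launches share the exact same `started`; the sample data has no such ties.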
If you only want the latest *successful* launch, add a condition to filter on success, using [Window Functions](http://www.postgresql.org/docs/9.4/static/tutorial-window.html) as you did: ``` select spacecraft.* , max(started) over (partition by spacecraft.id) from spacecraft left outer join launch on spacecraft.id = spacecraft_id where success ``` [SQL FIDDLE](http://sqlfiddle.com/#!15/45618/48)
How to query row with max value in a column in a referencing table?
[ "sql", "postgresql", "join", "greatest-n-per-group" ]
I am trying to automate a process of detaching and dropping a database (via a VBS objshell.run) If I manually use SSMS to detach and drop I can then copy to database files to another location... however if I use: ``` sqlcmd -U sa -P MyPassword -S (local) -Q "ALTER DATABASE MyDB set single_user With rollback IMMEDIATE" ``` then ``` sqlcmd -U sa -P MyPassword -S (local) -Q "DROP DATABASE MyDB" ``` It detaches/drops and then deletes the files. How do I get the detach and drop without the delete?
The MSDN Documentation on [DROP DATABASE](https://msdn.microsoft.com/en-us/library/ms178613.aspx) has this to say about dropping the database without deleting the files (under General Remarks): > Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files used by the database. If the database or any one of its files is offline when it is dropped, the disk files are not deleted. These files can be deleted manually by using Windows Explorer. **To remove a database from the current server without deleting the files from the file system, use [sp\_detach\_db](https://msdn.microsoft.com/en-us/library/ms188031.aspx)**. So in order for you to drop the database without having the files get deleted with sqlcmd you can change it to do this: ``` sqlcmd -U sa -P MyPassword -S (local) -Q "EXEC sp_detach_db 'MyDB', 'true'" ``` DISCLAIMER: I have honestly never used sqlcmd before but assuming from the syntax of how it's used I believe this should help you with your problem.
Use SET OFFLINE instead of SET SINGLE\_USER ``` ALTER DATABASE [DonaldTrump] SET OFFLINE WITH ROLLBACK IMMEDIATE; DROP DATABASE [DonaldTrump]; ```
Drop DB but don't delete *.mdf / *.ldf
[ "sql", "sql-server-2008", "sqlcmd" ]
I can't seem to find out how to get the functionality I want. Here is an example of what my table looks like: ``` EmpID | ProjectID | hours_worked | 3 1 8 3 1 8 4 2 8 4 2 8 4 3 8 5 4 8 ``` I want to group by EmpID and ProjectID and then sum up the hours worked. I then want to inner join the Employee and Project table rows that are associated with EmpID and ProjectID. However, when I do this I get an aggregate-function error. I understand the error from research, but I didn't expect it here, since there will be one row per EmpID and ProjectID. **Real SQL:** ``` SELECT WorkHours.EmpID, WorkHours.ProjectID, Employees.FirstName FROM WorkHours INNER JOIN Projects ON WorkHours.ProjectID = Projects.ProjectID INNER JOIN Employees ON WorkHours.EmpID = Employees.EmpID GROUP BY WorkHours.ProjectID, WorkHours.EmpID ``` This gives the error: > Column 'Employees.FirstName' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
You can do a basic query to get the grouped hours and use that as a basis for the rest, either in a CTE or as a subquery. For example, as a subquery: ``` SELECT * FROM (SELECT EmpID, ProjectID, SUM(hours_worked) as HoursWorked FROM WorkHours GROUP BY EmpID, ProjectID) AS ProjectHours JOIN Projects ON Projects.ProjectID = ProjectHours.ProjectID JOIN Employees ON Employees.EmpID = ProjectHours.EmpID ```
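Here is a runnable sketch of this aggregate-then-join pattern using SQLite, with the sample rows from the question (the employee and project names are made up for illustration, and the join columns assume the `EmpID`/`ProjectID` names from the question's query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employees (EmpID INTEGER PRIMARY KEY, FirstName TEXT);
CREATE TABLE Projects  (ProjectID INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE WorkHours (EmpID INT, ProjectID INT, hours_worked INT);
INSERT INTO Employees VALUES (3,'Alice'),(4,'Bob'),(5,'Carol');
INSERT INTO Projects  VALUES (1,'Alpha'),(2,'Beta'),(3,'Gamma'),(4,'Delta');
INSERT INTO WorkHours VALUES (3,1,8),(3,1,8),(4,2,8),(4,2,8),(4,3,8),(5,4,8);
""")

# Aggregate first in a subquery, then join the one-row-per-group result;
# no column outside the GROUP BY ever reaches the grouped query.
rows = conn.execute("""
SELECT e.EmpID, e.FirstName, p.ProjectID, h.HoursWorked
FROM (SELECT EmpID, ProjectID, SUM(hours_worked) AS HoursWorked
      FROM WorkHours
      GROUP BY EmpID, ProjectID) AS h
JOIN Employees e ON e.EmpID     = h.EmpID
JOIN Projects  p ON p.ProjectID = h.ProjectID
ORDER BY e.EmpID, p.ProjectID
""").fetchall()
print(rows)
# [(3, 'Alice', 1, 16), (4, 'Bob', 2, 16), (4, 'Bob', 3, 8), (5, 'Carol', 4, 8)]
```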
You might want to use `OVER (PARTITION BY)` so you won't have to use `GROUP BY` (note the `DISTINCT`, since a window function keeps one row per input row): ``` SELECT DISTINCT W.EmpID ,W.ProjectID ,SUM(W.hours_worked) OVER (PARTITION BY W.EmpID, W.ProjectID) AS HoursWorked ,E.FirstName FROM WorkHours W INNER JOIN Projects P ON W.ProjectID = P.ProjectID INNER JOIN Employees E ON W.EmpID = E.EmpID ```
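A runnable sketch of the window-function variant, again using SQLite (which needs version 3.25+ for window functions) and the question's sample rows. The `DISTINCT` collapses the duplicate rows the window function leaves behind, which is exactly the difference from the `GROUP BY` form:

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE WorkHours (EmpID INT, ProjectID INT, hours_worked INT);
INSERT INTO WorkHours VALUES (3,1,8),(3,1,8),(4,2,8),(4,2,8),(4,3,8),(5,4,8);
""")

# SUM ... OVER computes the per-group total on every row; DISTINCT then
# reduces the result to one row per (EmpID, ProjectID) pair.
rows = conn.execute("""
SELECT DISTINCT
       w.EmpID,
       w.ProjectID,
       SUM(w.hours_worked) OVER (PARTITION BY w.EmpID, w.ProjectID) AS total
FROM WorkHours w
ORDER BY w.EmpID, w.ProjectID
""").fetchall()
print(rows)  # [(3, 1, 16), (4, 2, 16), (4, 3, 8), (5, 4, 8)]
```

For a plain per-group sum like this one, `GROUP BY` is the simpler tool; the window form earns its keep when you also need non-grouped columns (like `FirstName`) alongside the totals.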
Multiple grouped items
[ "sql", "sql-server" ]