Columns: Prompt, Chosen, Rejected, Title, Tags
Say that I define: ``` USE tempdb; GO CREATE TABLE [dbo].[Test1] ( [ID] [INT] IDENTITY(1,1) NOT NULL PRIMARY KEY, [DateString] [VARCHAR](max) ) GO INSERT INTO [dbo].[Test1](DateString) VALUES ('2014-10-20'),('BadDate'); GO CREATE VIEW [dbo].[vw_Test1] AS SELECT Id, CONVERT(DATE,DateString) ConvertedDate FROM dbo.Test1 WHERE isdate(DateString)=1 GO ``` The following query works fine: ``` SELECT * FROM [dbo].[vw_Test1] ``` The following query throws "**Conversion failed when converting date and/or time from character string**". ``` SELECT * FROM [dbo].[vw_Test1] WHERE ConvertedDate='2014-10-20' ``` Obviously this happens because the condition `ConvertedDate='2014-10-20'` gets executed before `isdate(DateString)=1`. How would you go about fixing vw\_Test1 so that it always works?
One way to handle these types of situations is a **CASE** expression; I think your view should work rewritten like this: ``` CREATE VIEW [dbo].[vw_Test1] AS SELECT Id, case ISDATE(DateString) when 1 then CONVERT(DATE,DateString) else 0 -- or anything end ConvertedDate FROM dbo.Test1 WHERE isdate(DateString)=1 ```
This depends on the version of SQL Server that you are using. SQL Server 2012+ has `try_convert()`: ``` CREATE VIEW [dbo].[vw_Test1] as SELECT Id, TRY_CONVERT(DATE, DateString) as ConvertedDate FROM dbo.Test1 WHERE isdate(DateString) = 1; ``` This will return `NULL` if the conversion fails, rather than an error. In general, SQL Server does not guarantee the order of evaluation of components of a `select`. The only exception is the `case` statement -- and this is a partial exception. In any case, using a `case` also solves the problem: ``` CREATE VIEW [dbo].[vw_Test1] as SELECT Id, (CASE WHEN isdate(DateString) = 1 THEN CONVERT(DATE, DateString) END) as ConvertedDate FROM dbo.Test1 WHERE isdate(DateString) = 1; ``` The lack of `ELSE` clause forces a `NULL` if the conversion does not take place.
How to force "where" precedence when expanding views
[ "sql", "sql-server", "t-sql" ]
Can someone help me with this problem? I have 2 tables: ``` TASKS: (id, once) SAVED_tasks: (id, task_id) ``` * the table where I save tasks. I need to show all tasks, but NOT a task that has a `once` value of 1 and is already inserted in the SAVED\_tasks table. EXAMPLE: tasks: ``` id, name, once 1, task1, 0 2, task2, 1 3, task3, 0 4, task4, 1 ``` saved\_tasks: ``` id, task_id 1, 1 2, 2 3, 3 4, 4 ``` I need this result: ``` 1, task1 3, task3 ```
Try this: ``` SELECT t.id, t.once FROM TASKS t LEFT JOIN SAVED_tasks st ON t.id = st.task_id WHERE (t.once != 1 OR (t.once = 1 AND st.id IS NULL)); ``` **OR** ``` SELECT t.id, t.once FROM TASKS t LEFT JOIN SAVED_tasks st ON t.id = st.task_id WHERE NOT (t.once = 1 AND st.id IS NOT NULL); ```
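The left-join logic can be sanity-checked with an in-memory SQLite database in Python (an illustrative sketch; the simplified condition `t.once != 1 OR st.id IS NULL` is logically equivalent to the first query above):

```python
import sqlite3

# In-memory database mirroring the sample data from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tasks (id INTEGER, name TEXT, once INTEGER);
CREATE TABLE saved_tasks (id INTEGER, task_id INTEGER);
INSERT INTO tasks VALUES (1,'task1',0),(2,'task2',1),(3,'task3',0),(4,'task4',1);
INSERT INTO saved_tasks VALUES (1,1),(2,2),(3,3),(4,4);
""")

# Hide a task only when it is "once only" (once = 1) AND already saved.
rows = conn.execute("""
    SELECT t.id, t.name
    FROM tasks t
    LEFT JOIN saved_tasks st ON t.id = st.task_id
    WHERE t.once != 1 OR st.id IS NULL
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'task1'), (3, 'task3')]
```

The result matches the expected output from the question.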
``` SELECT TASK.id, SAVED_tasks.task_id FROM TASKS Inner join SAVED_tasks ON TASK.id = SAVED_tasks.id AND TASK.once > 1 ```
Select rows with show flag but based on another table
[ "mysql", "sql", "select", "join" ]
I have a table like the one given below. ``` View : Cat : Name abcView abcView abcCategory2 abcView abcCategory2 abcFilter abcView2 abcView2 abcCategory abcView2 abcView3 ``` View is the parent of Cat and Cat is the parent of Name. View can never be empty if Cat exists. Similarly, Cat can never be empty if Name exists. I want to fetch data in such a way that I don't get any empty spaces or replications in my result. If there are two entries, one with a child and one without a child, then I only want to show the entry with the child. But if there is no child then I just want to return the parent(s) name. ``` View : Cat : Name abcView abcCategory2 abcFilter abcView2 abcCategory abcView3 ```
You could select all entries where a name exists with ``` SELECT View, Cat, Name FROM yourTable WHERE Name IS NOT NULL ``` then union it with all entries where just a cat exists (and no name): ``` SELECT View, Cat, Name FROM yourTable WHERE Name IS NOT NULL UNION SELECT View, Cat, NULL FROM yourTable WHERE Cat IS NOT NULL AND Name IS NULL AND Cat NOT IN (SELECT Cat FROM yourTable WHERE Name IS NOT NULL) ``` and finally union it with all entries where just a view exists (and no name or cat): ``` SELECT View, Cat, Name FROM yourTable WHERE Name IS NOT NULL UNION SELECT View, Cat, NULL FROM yourTable WHERE Cat IS NOT NULL AND Name IS NULL AND Cat NOT IN (SELECT Cat FROM yourTable WHERE Name IS NOT NULL) UNION SELECT View, NULL, NULL FROM yourTable WHERE View IS NOT NULL AND Cat IS NULL AND View NOT IN (SELECT View FROM yourTable WHERE Cat IS NOT NULL) ```
Try this: ``` SELECT a.View, MAX(a.Cat) Cat, MAX(a.Name) AS `name` FROM tableA a GROUP BY a.View ```
Problems while selecting data from the table?
[ "mysql", "sql", "python-2.7", "select", "group-by" ]
I'm trying to create a SQL statement which calculates how many days the delivery of undelivered products is delayed relative to the current date. The result should show the order number, order date, product number and the number of delay days for the order lines where the delay exceeds 10 days. Here is my SQL statement so far: ``` SELECT Orderhuvuden.ordernr, orderdatum, Orderrader.produktnr, datediff(day, orderdatum, isnull(utdatum, getdate())) as 'Delay days' FROM Orderhuvuden JOIN Orderrader ON Orderhuvuden.ordernr = Orderrader.ordernr AND utdatum IS NULL ``` What I have a problem with is how to show only the delays that exceed 10 days. I've tried to add something like: ``` WHERE (getdate() - orderdatum) > 10 ``` But it doesn't work. Does anyone know how to solve this last step?
Add this to your where clause: ``` AND DATEDIFF(day, orderdatum, getdate()) > 10 ```
If the condition that you want is: ``` WHERE (getdate()-orderdatum) > 10 ``` Simply rewrite this as: ``` WHERE orderdatum < getdate() - 10 ``` Or: ``` WHERE orderdatum < dateadd(day, -10, getdate()) ``` These are also "sargable", meaning that an index on `orderdatum` can be used for the query.
Trying to show datediff greater than ten days
[ "sql", "sql-server" ]
Sorry if the title isn't really understandable, feel free to edit it. I have a table named CLIENT, here is a sample of the data: ``` ID_CLIENT CLIENT_NAME OTHER_ID ---------------------------------------- 1 'COMPANY A' 1 2 'COMPANY B' 4 3 'COMPANY C' 3 4 'COMPANY D' 1 ``` I would like to create a query which gets the CLIENT\_NAME instead of the OTHER\_ID. It is really hard to explain, so here is the result I would like to see from my query: ``` ID_CLIENT CLIENT_NAME CLIENT_BRANCH -------------------------------------------- 1 'COMPANY A' 'COMPANY A' 2 'COMPANY B' 'COMPANY D' 3 'COMPANY C' 'COMPANY C' 4 'COMPANY D' 'COMPANY A' ``` I would like to "link" OTHER\_ID to the related CLIENT\_NAME... Feel free to edit the question if you know how to explain it better than me. Thank you in advance.
**SELF JOIN** will resolve your issue. Try this: ``` SELECT C1.ID_CLIENT, C1.CLIENT_NAME, C2.CLIENT_NAME CLIENT_BRANCH FROM CLIENT C1 INNER JOIN CLIENT C2 ON C1.OTHER_ID = C2.ID_CLIENT; ``` **EDIT** If you have `NULL` values in the `OTHER_ID` column, then use **LEFT JOIN** instead of **INNER JOIN**: ``` SELECT C1.ID_CLIENT, C1.CLIENT_NAME, C2.CLIENT_NAME CLIENT_BRANCH FROM CLIENT C1 LEFT JOIN CLIENT C2 ON C1.OTHER_ID = C2.ID_CLIENT; ```
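The self join can be verified with SQLite in Python (identifiers lowercased for this illustrative sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE client (id_client INTEGER, client_name TEXT, other_id INTEGER);
INSERT INTO client VALUES (1,'COMPANY A',1),(2,'COMPANY B',4),
                          (3,'COMPANY C',3),(4,'COMPANY D',1);
""")

# Join the table to itself: the second alias resolves other_id to a name.
rows = conn.execute("""
    SELECT c1.id_client, c1.client_name, c2.client_name AS client_branch
    FROM client c1
    JOIN client c2 ON c1.other_id = c2.id_client
    ORDER BY c1.id_client
""").fetchall()
print(rows)
# [(1, 'COMPANY A', 'COMPANY A'), (2, 'COMPANY B', 'COMPANY D'),
#  (3, 'COMPANY C', 'COMPANY C'), (4, 'COMPANY D', 'COMPANY A')]
```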
Try this: ``` select a.client_id, a.client_name, b.client_name as client_branch from client a join client b on (a.other_id = b.client_id) ``` (seems that I have been too late, sorry)
How to select rows based on join on two columns of same table
[ "sql", "oracle", "select", "join", "self-join" ]
Here is my problem: my database has tables Book and Post. Each book has many posts. The *posts* table has a field '*book\_id*' that is a foreign key referencing the Book table's primary key (id). This is my index page. The idea is to get the latest post from each book and order by published date. When I run the code on localhost, everything is OK: I get the latest post from each book, ordered by publish date. But when I deploy it on a VPS, it doesn't get the latest post; it gets the first post from each book. I don't have any experience with this. Please help, thanks. On localhost, I use: Apache 2.2, PHP 5.3, MySQL 5.5, ENGINE type for tables is InnoDB. On the VPS, I use: Nginx 1.7.6, PHP-FPM 5.5.18, MariaDB, ENGINE type for tables is MyISAM. I guess the problem is InnoDB vs MyISAM, and I am trying to fix it. But if you have free time, please give me some good advice. Thanks a lot. P.S.: Sorry about my poor English. ``` SELECT * FROM `my_book_store`.`books` AS `Book` INNER JOIN ( SELECT * FROM posts WHERE posts.published = 1 AND posts.published_date <= NOW() ORDER BY posts.published_date DESC ) AS `Post` ON (`Post`.`book_id` = `Book`.`id`) WHERE 1 = 1 GROUP BY `Book`.`id` ORDER BY `Post`.`published_date` desc LIMIT 100 ```
Try this: ``` SELECT b.*, p.* FROM my_book_store.books AS b INNER JOIN posts p ON b.id = p.book_id INNER JOIN (SELECT p.book_id, MAX(p.published_date) published_date FROM posts p WHERE p.published = 1 AND p.published_date <= NOW() GROUP BY p.book_id ) AS p1 ON p.book_id = p1.book_id AND p.published_date = p1.published_date GROUP BY b.id ORDER BY p.published_date DESC LIMIT 100 ```
You can try the below queries which does the job of getting the last post from each book ``` select b.id, b.name, p.content, p.published_date from book b join post p on p.book_id = b.id left join post p1 on p1.book_id = p.book_id and p1.published_date > p.published_date where p1.id is null; ``` OR ``` select b.id, b.name, p.content, p.published_date from book b join post p on p.book_id = b.id where not exists( select 1 from post p1 where p.book_id = p1.book_id and p1.published_date > p.published_date ) ``` **[DEMO](http://sqlfiddle.com/#!2/b935a4/2)**
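The "no later post exists" pattern from the first query can be verified with SQLite (invented sample data, illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (id INTEGER, name TEXT);
CREATE TABLE post (id INTEGER, book_id INTEGER, published_date TEXT);
INSERT INTO book VALUES (1,'Book A'),(2,'Book B');
INSERT INTO post VALUES (1,1,'2014-01-01'),(2,1,'2014-06-01'),
                        (3,2,'2014-03-01'),(4,2,'2014-02-01');
""")

# Keep a post only if no later post exists for the same book.
rows = conn.execute("""
    SELECT b.id, b.name, p.published_date
    FROM book b
    JOIN post p ON p.book_id = b.id
    LEFT JOIN post p1 ON p1.book_id = p.book_id
                     AND p1.published_date > p.published_date
    WHERE p1.id IS NULL
    ORDER BY b.id
""").fetchall()
print(rows)  # [(1, 'Book A', '2014-06-01'), (2, 'Book B', '2014-03-01')]
```

Because this pattern relies only on standard join semantics, it behaves the same on MySQL, MariaDB and SQLite, regardless of storage engine.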
Different result query when use mysql and mariadb
[ "mysql", "sql", "select", "group-by", "greatest-n-per-group" ]
I am running a procedure where I use yesterday's date through `DATEADD(day,-1,getdate())`. The challenge is that my procedure runs at 1:30 at night, and it then returns the data like ``` 2014-12-09 01 30 ``` I need it to return something like `2014-12-09 23 59`, because I have a `DATETIME` column. Does anyone have an idea how to do that?
I have used something like the following in an SP: ``` SELECT Cast(Getdate()-1 AS DATE) + Cast('23:59:00' AS DATETIME) ```
Just for fun - but this would work, but only with a datetime datatype in SQL Server: ``` SELECT CAST((FLOOR(CAST(GETDATE() AS FLOAT))) -0.0006944444 AS DATETIME) ``` This gives you yesterday 23:59:00. In SQL Server the DATETIME datatype is internally handled like a float, so by casting it to float and flooring it, it returns the datetime value which just contains the date portion. Then I subtract the float equivalent of a minute and cast it back as a datetime. A more correct way would probably be to use the appropriate functions, so: ``` SELECT DATEADD(MINUTE, - 1, DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)) ``` This calculates the current date without time portion and then subtracts one minute.
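The same "start of today minus one minute" computation, sketched in Python for comparison (illustrative only):

```python
from datetime import datetime, timedelta

def yesterday_2359(now: datetime) -> datetime:
    """Midnight of today minus one minute -> yesterday 23:59:00."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight - timedelta(minutes=1)

# The procedure runs at 1:30 at night:
print(yesterday_2359(datetime(2014, 12, 10, 1, 30)))  # 2014-12-09 23:59:00
```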
getdate with time 2359
[ "sql", "sql-server", "sql-server-2008" ]
This is a bit of a weird question, but that's what the customer wants :) They want to control what is selected via a control table. The reason for that is that they don't know what the columns will be called in the end. So when someone wants to run the code, they check how the columns are named and write it down in this control table. There can be up to 4 columns. So the control table looks like this ``` Level1 Level2 Level3 Level4 -------------------------------------- columnname1 columnname2 Null Null ``` So the code should select this ``` SELECT columnname1 ,columnname2 FROM table ``` The null values in the control table should be ignored. I've already tried to define dynamic parameters and write the select with them, but this of course only works when all 4 columns are filled. Any ideas? Thanks
You can write a dynamic query as: ``` DECLARE @SQLString nvarchar(500); SELECT @SQLString = 'SELECT ' + CASE WHEN Level1 IS NULL THEN '' ELSE Level1 END + CASE WHEN Level2 IS NULL THEN '' ELSE ', ' + Level2 END + CASE WHEN Level3 IS NULL THEN '' ELSE ', ' + Level3 END + CASE WHEN Level4 IS NULL THEN '' ELSE ', ' + Level4 END + ' FROM controltbl' FROM controltbl --SELECT @SQLString EXECUTE sp_executesql @SQLString ```
Here is one solution. Select the values into a in-memory table and then select the columns. This isn't that flexible as it requires some manual rework as you progress. ``` DECLARE @TempTable TABLE ( Column1 varchar(50), Column2 varchar(50), Column3 varchar(50), Column4 varchar(50) ) INSERT INTO @TempTable SELECT Level1, Level2, Level3, Level4 FROM Table IF EXISTS (SELECT Column1 FROM @TempTable WHERE Column4 = NULL) BEGIN IF EXISTS (SELECT Column1 FROM @TempTable WHERE Column3 = NULL) BEGIN IF EXISTS (SELECT Column1 FROM @TempTable WHERE Column2 = NULL) BEGIN SELECT Column1 FROM @TempTable END ELSE BEGIN SELECT Column1, Column2 FROM @TempTable END END ELSE BEGIN SELECT Column1, Column2, Column3 FROM @TempTable END END ELSE BEGIN SELECT Column1, Column2, Column3, Column4 FROM @TempTable END ```
Define a Select via a control table, SQL Server 2008
[ "sql", "sql-server", "t-sql" ]
**Scenario:** I am trying to build a query that will return the SUM of a COUNT. **Purpose of the Query:** I need to identify all of our Participants that are currently listed on 2 or more Positions. **Current Query:** ``` select pa.firstname, pa.lastname, pos.name as "POSITION", count(pos.name) as "POSITION COUNT" from cs_participant pa, cs_position pos where pa.payeeseq = pos.payeeseq and pa.firstname like '%DEV%' and pa.lastname like '%TEST%' group by pa.firstname, pa.lastname, pos.name ``` **Results:** ![enter image description here](https://i.stack.imgur.com/SrpKX.png) I need to SUM the POSITION COUNT and somehow return only FIRSTNAME, LASTNAME, and SUM. So, if a Participant is currently on 2 or more Positions, the query would return the Participant's FIRSTNAME, LASTNAME, and SUM of POSITIONS. If I could also return the POSITION in the far column without creating duplicate rows for each unique POSITION, that would be a huge help! For example, a Participant with 2 POSITIONS would return the Participant's FIRSTNAME, LASTNAME, SUM OF POSITIONS, and POSITION\_1 on the first row, while the second row would have blank values for the Participant's FIRSTNAME, LASTNAME, and SUM of POSITIONS, but return POSITION\_2 in the 4th column (Similar to a "break" function in Web Intelligence). Thanks!
You would just use aggregation with a `having` clause: ``` select pa.firstname, pa.lastname, count(*) as "POSITION COUNT" from cs_participant pa join cs_position pos on pa.payeeseq = pos.payeeseq group by pa.firstname, pa.lastname having count(*) >= 2; ``` I removed the `where` conditions, because it is not clear what those are for. You can, of course, put them back in if they are relevant to your query.
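The HAVING approach can be checked with SQLite in Python; the data below is invented for illustration (one participant with two positions, one with a single position):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cs_participant (payeeseq INTEGER, firstname TEXT, lastname TEXT);
CREATE TABLE cs_position (payeeseq INTEGER, name TEXT);
INSERT INTO cs_participant VALUES (1,'DEV','TEST'),(2,'ANN','SMITH');
INSERT INTO cs_position VALUES (1,'Clerk'),(1,'Driver'),(2,'Clerk');
""")

# HAVING filters the groups after aggregation, so only participants
# holding two or more positions survive.
rows = conn.execute("""
    SELECT pa.firstname, pa.lastname, COUNT(*) AS position_count
    FROM cs_participant pa
    JOIN cs_position pos ON pa.payeeseq = pos.payeeseq
    GROUP BY pa.firstname, pa.lastname
    HAVING COUNT(*) >= 2
""").fetchall()
print(rows)  # [('DEV', 'TEST', 2)]
```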
This query uses analytic functions to return the first name, last name, position and (total) position count for each participant. It uses row\_number to only print the first and last name once per participant (and leaves it blank for rows after the 1st row). ``` select case when rn = 1 then firstname else '' end firstname, case when rn = 1 then lastname else '' end lastname, name as "POSITION", position_count as "POSITION COUNT" from ( select pa.firstname, pa.lastname, pos.name, count(*) over (partition by pa.firstname, pa.lastname) position_count, row_number() over (partition by pa.firstname, pa.lastname order by pos.name) rn from cs_participant pa join cs_position pos on pa.payeeseq = pos.payeeseq ) t1 where position_count > 1 order by firstname, lastname, rn ```
How to return the SUM of a COUNT in PL/SQL
[ "sql", "oracle", "plsql", "oracle-sqldeveloper" ]
I have this sample table: Table1 ``` Column1 ------- Hi&Hello Hello & Hi Snacks &amp; Drinks &nbsp;&nbsp;&nbsp;Hello World&nbsp;&nbsp; ``` **Question:** in SQL Server, how can I replace all the `&` characters with character entities (`&amp;`) without affecting the existing character entities? Thanks!
1. If you really want to replace it in your database, you can try running ``` UPDATE Table1 SET Column1 = REPLACE(Column1, '&', '&amp;'); ``` 2. I suppose you want to do this because you want to display the data on the site exactly as it is stored in your database. So I suggest you escape it when you display it (on the application side), since it is not easy to maintain when you have lots of `&amp;amp;` or `&amp;nbsp;` in your database. For example: in PHP, you can use `htmlspecialchars()`; in Java, you can `import static org.apache.commons.lang.StringEscapeUtils.escapeHtml;` and then use `escapeHtml()`; in Ruby on Rails, you can use `HTMLEntities.new.encode()` (if you use Rails 3 or a newer version, escaping should be done by default).
I am assuming that the reason you are asking is because it's not just `&nbsp;` character entities you have - I'm assuming it's more than 5 or 6, because if it's that small a number, the easy answer is to run an update statement for each character entity, replacing the ampersand at the beginning of it with some combination of characters that doesn't exist in the column (e.g. |#|#|#). This will require substring among other things. Then update all the remaining ampersands to `&amp;`. Then run another update statement replacing your special combination of characters with ampersands. You're done. Assuming it's more character entities than that, is this a one-time cleanup? If not, if it's something you expect to be ongoing, why is the system storing ampersands as just ampersands but other things as character entities? I'd attack that problem first. If you have no control over that, then my solution might still work but would need to be optimized some more. If it is a one-time cleanup, I think you need to create a table for temporary use which contains a list of all the character entities you might conceivably have in your db. You can google HTML character entities and find lists that, with cutting and pasting, will probably enable you to generate this table in less than 30 minutes. Once you have this table, your path is fairly clear. Like others said, you probably don't want to store encoded data in the DB like this, in which case you just write a cursor and loop through your new table of character entities doing updates with replace statements. If you really want to replace only the ampersands, though, then instead loop through the cursor and replace the ampersand at the beginning of every character entity with some combination of characters that could never otherwise exist in the column, |#|#|# for instance. After that, the only ampersands left in the DB should be actual ampersands, not parts of character entities.
So then it's the same last two steps as I mentioned in my first paragraph: you run a single query to update all the remaining ampersands to `&amp;`. Then run another update statement replacing your special combination of characters with actual ampersands.
SQL Server: replace "&" with character entities (&amp;)
[ "sql", "sql-server", "replace", "character-entities" ]
I am using the OR operator in the following fashion: ``` SELECT * FROM Customers WHERE City = "Berlin" OR City = "London" ``` The above returns a results table containing some data, however what also returns data is: ``` SELECT * FROM Customers Where City = "Berlin" OR "London" ``` I am using a questionable SQL editor currently, and want to know if this would be valid in a MySQL environment.
Well, Cassini, I have checked your syntax and run this query on my table, and I did not get the same result. I got the correct output as expected. I ran this command: ``` SELECT * FROM `city` where name = 'London' or 'Berlin' ``` and I got only London in the name column. When I ran this command: ``` SELECT * FROM `city` where name = 'London' or name = 'Berlin' ``` then I got both cities in the name column. So the command is valid; it will return only the output which satisfies the condition. That is, the command is valid, but it executes only the part of the query that satisfies MySQL's SELECT syntax.
You are looking for ``` SELECT * FROM Customers Where City IN('Berlin', 'London'); ``` The query: ``` Where City='Berlin' OR 'London' ``` Applies the [logical OR operator (||)](http://dev.mysql.com/doc/refman/5.0/en/logical-operators.html#operator_or), so `OR "London"` is equivalent to `OR 0`, and `Where City = 'Berlin' OR 0;` will just return `'Berlin'` [SqlFiddle here with truth table here](http://sqlfiddle.com/#!2/4b8ae/7) Minor, but you should look at using [single quotes for string literals](https://stackoverflow.com/questions/11321491/when-to-use-single-quotes-double-quotes-and-backticks), as this is more portable between RDBMS's and use of `"` will depend on [`ANSI QUOTES`](http://dev.mysql.com/doc/refman/5.6/en/sql-mode.html#sqlmode_ansi_quotes).
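SQLite applies the same coercion (a non-numeric string evaluates to 0, i.e. false, in a boolean context), so the pitfall can be demonstrated with Python's built-in `sqlite3` (illustrative sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (city TEXT);
INSERT INTO customers VALUES ('Berlin'),('London'),('Paris');
""")

# The bare string 'London' is coerced to the number 0 (false), so the
# second branch of the OR never matches anything.
buggy = conn.execute(
    "SELECT city FROM customers WHERE city = 'Berlin' OR 'London' "
    "ORDER BY city").fetchall()
good = conn.execute(
    "SELECT city FROM customers WHERE city IN ('Berlin', 'London') "
    "ORDER BY city").fetchall()
print(buggy)  # [('Berlin',)]
print(good)   # [('Berlin',), ('London',)]
```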
SQL Using the OR & AND operators (Newbie)
[ "mysql", "sql", "operators" ]
I've got an issue I've been racking my brain on this and the code I have makes sense to me but still doesn't work. Here is the question: > Give me a list of the names of all the unused (potential) caretakers and the names and types of all unclaimed pieces of art (art that does not yet have a caretaker). Here is how the tables are set up: * `CareTakers`: `CareTakerID, CareTakerName` * `Donations`: `DonationID, DonorID, DonatedMoney, ArtName, ArtType, ArtAppraisedPrice, ArtLocationBuilding, ArtLocationRoom, CareTakerID` * `Donors`: `DonorID, DonorName, DonorAddress` Here is the code I have: ``` SELECT CareTakerName, ArtName, ArtType FROM CareTakers JOIN Donations ON CareTakers.CareTakerID = Donations.CareTakerID WHERE Donations.CareTakerID = '' ``` Any help would be very much appreciated!
I would suggest two queries for the reasons I noted in my comment on the OP above... However, since you requested one query, the following should get you what you asked for, although the result sets are not depicted side-by-side. ``` SELECT CareTakerName, ArtName, ArtType FROM CareTakers LEFT JOIN Donations ON CareTakers.CareTakerID = Donations.CareTakerID WHERE NULLIF(Donations.CareTakerID,'') IS NULL UNION -- Returns a stacked result set SELECT CareTakerName, ArtName, ArtType FROM CareTakers RIGHT JOIN Donations ON CareTakers.CareTakerID = Donations.CareTakerID WHERE NULLIF(CareTakers.CareTakerID,'') IS NULL ``` If this is not sufficient, I can supply two separate queries as I suggested above. \*EDIT: Included NULLIF with '' criteria to treat blank and NULL equally in the where clause.
`Donations.CareTakerID = ''` is not the same as testing for `NULL`. That's testing for an empty string. You want ``` Donations.CareTakerID is NULL ``` Also note that ``` Donations.CaretakerID = NULL ``` will not give you what you want either (a common mistake.)
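The difference between `= NULL` and `IS NULL` is easy to demonstrate with SQLite in Python (illustrative sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE donations (caretaker_id INTEGER);
INSERT INTO donations VALUES (1), (NULL);
""")

# '= NULL' is never true: comparing anything to NULL yields NULL,
# which does not count as a match.  'IS NULL' is the correct test.
wrong = conn.execute(
    "SELECT COUNT(*) FROM donations WHERE caretaker_id = NULL").fetchone()[0]
right = conn.execute(
    "SELECT COUNT(*) FROM donations WHERE caretaker_id IS NULL").fetchone()[0]
print(wrong, right)  # 0 1
```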
SELECT statement that only shows rows where there is a NULL in a specific column
[ "sql", "sql-server" ]
Why doesn't this work? (**NOW SOLVED**) ``` Declare @var1 Varchar(10) Set @var1 = 'Value 1' Declare @var2 Int Set @var2 = (Select [IntValue] From [TableOfValues] Where [StringValue] = @var1 ) Select @var1, [IntValue] From [TableOfValues] ``` But this does... ``` Declare @var1 Varchar(10) Set @var1 = 'Value 1' Declare @var2 Int Set @var2 = (Select [IntValue] From [TableOfValues] Where [StringValue] = 'Value 1' ) Select @var1, [IntValue] From [TableOfValues] ``` I assume it's something to do with the query in the brackets running first, but how do I get my previously defined variable in there? This is obviously a simplified version of a much bigger query. Thanks. **EDIT:** The above does work, but the variable in the real query was longer than 10 chars. Silly mistake I know, but I've been staring at code all day. Sorry to have wasted anyone's time.
By default, if you don't specify a size for `varchar` in the declaration, it will be `1`. So only `'V'` will be stored in `@var1`, and obviously you won't get the desired result. Try changing your **declaration** like this: ``` Declare @var1 Varchar(10) Set @var1 = 'Value 1' ```
Your declaration is not doing what you expect for `@var1`. Here is the problem: ``` Declare @var1 Varchar Set @var1 = 'Value 1' ``` It is setting `@var1` to `V`. The default length for `varchar` with no length is "1" in this context. In SQL Server, *always* use a length with `varchar`: ``` Declare @var1 Varchar(255); Set @var1 = 'Value 1' ```
How to use a variable within a Set @var = Select statement
[ "sql", "sql-server", "sql-server-2008" ]
I am trying to select the ID from a wp\_posts table where I need to do 3 conditions. 1. Is the *ID* matching with the *post\_id* from the *wp\_postmeta* table where the *meta\_key* = 'gtp\_analytics\_client\_id' and the *meta\_value* is not empty. 2. Is the *ID* matching with the *post\_id* from the *wp\_postmeta* table where the *meta\_key* = 'gtp\_conversion\_uploaded' and the *meta\_value* is not equal to 1. 3. Is the *ID* matching with the *post\_id* from the *wp\_postmeta* table where the *meta\_key* = 'gtp\_lead\_revenue' and the *meta\_value* is not empty. I am a beginner with SQL. This is what I have now, but I cannot use multiple IN's. So I think I need to do it another way. ``` SELECT ID FROM wp_posts WHERE ID IN (SELECT post_id FROM wp_postmeta WHERE meta_key = 'gtp_analytics_client_id' AND meta_value != '') AND IN (SELECT post_id FROM wp_postmeta WHERE meta_key = 'gtp_conversion_uploaded' AND meta_value != 1) AND IN (SELECT post_id FROM wp_postmeta WHERE meta_key = 'gtp_revenue' AND meta_value != '') ``` I get the following error: > *You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IN (SELECT post\_id FROM wp\_postmeta WHERE meta\_key = 'gtp\_conversion\_uploaded' A' at line 4*
The `and` is not part of the `in` operator, it is three separate `in` operators, so you need the first operand (`ID`) for all of them: ``` SELECT ID FROM wp_posts WHERE ID IN ( ... ) AND ID IN ( ... ) AND ID IN ( ... ) ``` You could also write that as three joins: ``` SELECT p.ID FROM wp_posts p INNER JOIN wp_postmeta m1 ON m1.post_id = p.ID AND m1.meta_key = 'gtp_analytics_client_id' AND m1.meta_value != '' INNER JOIN wp_postmeta m2 ON m2.post_id = p.ID AND m2.meta_key = 'gtp_conversion_uploaded' AND m2.meta_value != 1 INNER JOIN wp_postmeta m3 ON m3.post_id = p.ID AND m3.meta_key = 'gtp_revenue' AND m3.meta_value != '' ```
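The repeated-IN form can be sanity-checked with SQLite in Python; the meta rows below are invented for illustration (post 1 satisfies all three conditions, post 2 fails the first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wp_posts (id INTEGER);
CREATE TABLE wp_postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO wp_posts VALUES (1),(2);
INSERT INTO wp_postmeta VALUES
  (1,'gtp_analytics_client_id','abc'),(1,'gtp_conversion_uploaded','0'),
  (1,'gtp_revenue','10'),
  (2,'gtp_analytics_client_id',''),(2,'gtp_conversion_uploaded','1');
""")

# Each IN subquery filters on the same ID column.
rows = conn.execute("""
    SELECT id FROM wp_posts
    WHERE id IN (SELECT post_id FROM wp_postmeta
                 WHERE meta_key = 'gtp_analytics_client_id' AND meta_value != '')
      AND id IN (SELECT post_id FROM wp_postmeta
                 WHERE meta_key = 'gtp_conversion_uploaded' AND meta_value != 1)
      AND id IN (SELECT post_id FROM wp_postmeta
                 WHERE meta_key = 'gtp_revenue' AND meta_value != '')
""").fetchall()
print(rows)  # [(1,)]
```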
**When it can be either of the 3 cases** ``` SELECT ID FROM wp_posts WHERE ID IN (SELECT post_id FROM wp_postmeta WHERE (meta_key = 'gtp_analytics_client_id' AND meta_value != '') OR (meta_key = 'gtp_conversion_uploaded' AND meta_value != 1) OR (meta_key = 'gtp_revenue' AND meta_value != '') ) ```
SQL Multiple WHERE IN
[ "mysql", "sql", "database", "wordpress", "phpmyadmin" ]
I would like to get the first null-valued column **name** when selecting by Id: I tried: ``` SELECT Snap1,Snap2,Snap3,Snap4 where NULL FROM tbtProgressSnap where Id=3 Id | Snap1 | Snap2 | Snap3 | Snap4| ----------------------------------- 1 | | |/a.png |/b.png| ----------------------------------- 2 | d.png | m.png | | | ----------------------------------- 3 | g.png |l.png |p.png | | ``` I expect to get, with respective `Id`s ``` 1 Snap1 2 Snap3 3 Snap4 ```
Try this: ``` SELECT WorkId, (CASE WHEN Snap1 IS NULL OR snap1 = '' THEN 'Snap1' WHEN Snap2 IS NULL OR snap2 = '' THEN 'Snap2' WHEN Snap3 IS NULL OR snap3 = '' THEN 'Snap3' WHEN Snap4 IS NULL OR snap4 = '' THEN 'Snap4' WHEN Snap5 IS NULL OR snap5 = '' THEN 'Snap5' ELSE '' END) AS snapValue FROM tbtProgressSnap WHERE WorkId=4; ``` Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/2981c7/3) **OUTPUT** ``` | WORKID | SNAPVALUE | |--------|-----------| | 4 | Snap5 | ```
You need a case with a group by over min(): ``` SELECT min(id) id, CASE WHEN Snap1 IS NULL THEN 'Snap1' WHEN Snap2 IS NULL THEN 'Snap2' WHEN Snap3 IS NULL THEN 'Snap3' WHEN Snap4 IS NULL THEN 'Snap4' END snap FROM tbtProgressSnap WHERE Snap1 IS NULL OR Snap2 IS NULL OR Snap3 IS NULL OR Snap4 IS NULL GROUP BY 2 ``` See [SQL Fiddle](http://sqlfiddle.com/#!9/902bd/1) with your sample data producing your desired output.
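The "CASE returns the first matching branch" idea can be checked with SQLite in Python, using the sample data from the question (illustrative sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbtProgressSnap (id INTEGER, snap1 TEXT, snap2 TEXT,
                              snap3 TEXT, snap4 TEXT);
INSERT INTO tbtProgressSnap VALUES
  (1, NULL, NULL, '/a.png', '/b.png'),
  (2, 'd.png', 'm.png', NULL, NULL),
  (3, 'g.png', 'l.png', 'p.png', NULL);
""")

# CASE evaluates WHEN branches in order, so it names the first NULL column.
rows = conn.execute("""
    SELECT id,
           CASE WHEN snap1 IS NULL THEN 'Snap1'
                WHEN snap2 IS NULL THEN 'Snap2'
                WHEN snap3 IS NULL THEN 'Snap3'
                WHEN snap4 IS NULL THEN 'Snap4' END AS first_null
    FROM tbtProgressSnap ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Snap1'), (2, 'Snap3'), (3, 'Snap4')]
```

The result matches the expected output in the question.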
Select the first column name where a value is null by Id
[ "mysql", "sql", "sql-server-2008", "select" ]
This is my procedure that determines the classId and liveareaId for an Animal in order to insert it into the table: ``` CREATE PROCEDURE insertAnimal @name nvarchar, @birthyear int, @classname nvarchar, @livearea nvarchar AS BEGIN DECLARE @classid int DECLARE @liveareaid int SET @classid = (SELECT Id FROM dbo.Class WHERE dbo.Class.Name = @classname) SET @liveareaid = (SELECT Id FROM dbo.LiveArea WHERE Name = @livearea) INSERT INTO dbo.Animal (Name,BirthYear,ClassId,LiveAreaId) VALUES ( @name, @birthyear, @classid, @liveareaid ) END GO ``` I get an error: > Cannot insert the value NULL into column 'ClassId', table 'ZOO.dbo.Animal'; column does not allow nulls. INSERT fails. Why is `ClassId` null? Can you tell me why this doesn't work? ``` SET @classid = (SELECT Id FROM dbo.Class WHERE dbo.Class.Name=@classname) ```
This is because you have declared `@classname` as only `nvarchar` and have not specified a length. When length is not specified in a `nvarchar` variable declaration statement, the default length is 1. Declare as: ``` CREATE PROCEDURE insertAnimal @name nvarchar(10), @birthyear int, @classname nvarchar(10), @livearea nvarchar(10) ... ```
``` CREATE PROCEDURE insertAnimal @name nvarchar, @birthyear int, @classname nvarchar, @livearea nvarchar AS BEGIN DECLARE @classid int DECLARE @liveareaid int SET @classid = (SELECT Id FROM dbo.Class WHERE dbo.Class.Name=@classname AND dbo.Class.Id IS NOT NULL) SET @liveareaid = (SELECT Id FROM dbo.LiveArea WHERE Name=@livearea) INSERT INTO dbo.Animal (Name,BirthYear,ClassId,LiveAreaId) VALUES ( @name, @birthyear, @classid, @liveareaid ) END GO ```
SQL Assign value to local variable take from table
[ "sql", "select", "declare" ]
I have to remove all whitespace in a string: `'5 000 000,5' to '5000000,5'`. I tried the 3 queries below, but they did not work: ``` select replace('5 000 000,5',' ','') from dual; select regexp_replace('5 000 000,5', '[[:space:]]*','') from dual; select regexp_replace('5 000 000,5', ' ','') from dual; ``` Also, does anyone know how to convert the string '5 000 000,5' to a number? `TO_NUMBER` failed. Thanks
1. Using **`REGEXP_REPLACE`** with the **`[[:space:]]`** class: `Select regexp_replace('your_value', '[[:space:]]+', '') from dual;` 2. Using **`REPLACE`**: `Select REPLACE('your_value', chr(32), '') from dual;`
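For comparison, the same stripping in Python, where `\s` matches all Unicode whitespace including the non-breaking space that often masquerades as an ordinary space in data like this (illustrative sketch):

```python
import re

s = "5 000 000,5"
# Strip every whitespace character (space, tab, NBSP, ...) in one pass.
no_ws = re.sub(r"\s+", "", s)
print(no_ws)  # 5000000,5

# To get a number, also swap the decimal comma for a point first.
value = float(no_ws.replace(",", "."))
print(value)  # 5000000.5
```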
You can try this and remove any non-numeric chars like comma (`,`) ``` SELECT to_number(regexp_replace('5 000 000,5', '[^0-9]', '')) FROM dual; ```
Remove all whitespaces in string
[ "sql", "oracle", "removing-whitespace" ]
I have a program, say: ``` DECLARE str VARCHAR2(20); BEGIN str:='Test'; DBMS_OUTPUT.PUT_LINE(// Here I want to print the character at any index of the string //); END; / ``` How do I do it?
You'd use the [SUBSTR function](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions169.htm#SQLRF06114). For example: ``` DECLARE str VARCHAR2(20); BEGIN str:='Test'; FOR i IN 1..LENGTH(str) LOOP DBMS_OUTPUT.PUT_LINE('Character at index ' || i || ' is ' || SUBSTR(str, i, 1)); END LOOP; END; ``` Share and enjoy.
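For comparison, PL/SQL's 1-based `SUBSTR(str, i, 1)` maps directly onto Python's 0-based indexing `s[i-1]` (illustrative sketch):

```python
s = "Test"
# PL/SQL is 1-indexed: SUBSTR(str, i, 1) corresponds to s[i-1] in Python.
for i in range(1, len(s) + 1):
    print(f"Character at index {i} is {s[i-1]}")

# A single character, e.g. index 2:
second = s[2 - 1]
print(second)  # e
```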
If you are looking to find a particular character and its index within a string, also look at the Oracle function INSTR(). INSTR( string, substring [, start\_position [, nth\_appearance ] ] ) HTH
How to access an index of a string in PL/SQL?
[ "sql", "string", "oracle", "plsql" ]
I am trying to create a select count statement where the where condition is different for each count. For example, I have the following table called dbo.job: ``` User: Job: Mark Garbageman Dirk Garbageman Henk Garbageman Steven Garbageman Mark Delivery Dirk Delivery Henk Delivery Steven Delivery Stevens Delivery ``` Now I want to know how many users I have in table "job" with the job of garbageman; for this I use: ``` Select COUNT(user) as count from job WHERE job.job = 'Garbageman' ``` I get the following result: ``` Count: 4 ``` Now I also want to know how many users have the job of delivery in the same select statement; I tried something like: ``` Select COUNT(user) as GarbageCount, COUNT(user) as Deliverycount from job WHERE job.job = 'Garbageman' ``` which gives the following result: ``` GarbageCount: DeliveryCount: 4 4 ``` But I want: ``` GarbageCount: DeliveryCount: 4 5 ``` I know why my statement fails: I am only using 1 where condition, but the conditions for GarbageCount and DeliveryCount are different, and I simply have no clue how to create a query like that. Does anyone have suggestions?
You can use Case When with Count. <http://peoplesoft.wikidot.com/conditional-counting-in-sql> MSDN reference -><http://msdn.microsoft.com/en-us/library/ms175997.aspx> ``` select count(case when Job = 'Garbageman' then 1 end) as GarbageCount, count(case when Job = 'Delivery' then 1 end) as DeliveryCount from Job ```
You can achieve this using conditional `sum()` ``` select sum(Job = 'Garbageman') as GarbageCount, sum(Job = 'Delivery') as DelivryCount from job ```
Sql Select statement with different where statements
[ "", "sql", "" ]
I'm getting the following output from a query, shown in figure 01. Every employee has two records showing their work locations together with effective dates. The current flag shows the current record, denoted by 'Y'. Now I want to convert this output to the format shown in figure 02. Here each employee has a single row. It shows the employee's current record and transfer date, previous location and new location. ![Figure 01](https://i.stack.imgur.com/EfJcR.jpg) ![Figure 02](https://i.stack.imgur.com/fEyBH.jpg) Can you show me how to do this please?
Try this: ``` --PREVIOUSRESULT will be your existing result. SELECT A.EMPLOYEENO, A.NAME, A.CURRENTFLAG, (SELECT B.LOCATION FROM PREVIOUSRESULT B WHERE B.EMPLOYEENO = A.EMPLOYEENO AND B.CURRENTFLAG IS NULL) AS FROMVALUE, A.Location AS ToValue, A.TRANSFERDATE AS EFFECTIVEDATE FROM PREVIOUSRESULT A WHERE A.CURRENTFLAG = 'Y' ``` [--Result](http://www.sqlfiddle.com/#!4/ab54d/1)
Just one more answer: ``` select a.empno, a.ename, a.cflag, (select b.location from empdetails b where b.empno=a.empno and b.cflag is null) "From", a.location "To", a.transfer_date from empdetails a where a.cflag is not null; ``` Check this [sqlfiddle](http://www.sqlfiddle.com/#!4/6ffc4/4)
Combining two rows to a single row
[ "", "sql", "database", "plsql", "oracle-sqldeveloper", "" ]
I've a database of musical tracks, and I'm trying to find which genre has the most tracks. Genre is stored in a Work table, with each track in the Track table having a field for which Work it belongs to. This query correctly returns the genres and how many tracks they each have: ``` SELECT w.genre , COUNT(t.name) [numTracks] FROM Track t LEFT JOIN Work w ON t.workName = w.name LEFT JOIN AlbumContains ta ON ta.workName = t.workName GROUP BY w.genre ``` But I can't find a way to just take whichever genre (or multiple genres) has the highest numTracks from that result. Thanks in advance!
Try this: **MySQL** ``` SELECT w.genre, COUNT(t.name) AS numTracks FROM work w LEFT JOIN Track t ON t.workName = w.name GROUP BY w.genre ORDER BY COUNT(t.name) DESC LIMIT 1 ``` **SQL SERVER** ``` SELECT TOP 1 w.genre, COUNT(t.name) AS numTracks FROM work w LEFT JOIN Track t ON t.workName = w.name GROUP BY w.genre ORDER BY COUNT(t.name) DESC ```
``` select * from (SELECT Work.genre, count(Track.name) as numTracks FROM Track LEFT JOIN Work ON Track.workName=Work.name LEFT JOIN AlbumContains ON Track.workName=AlbumContains.workName GROUP BY Work.genre) alias order by alias.numTracks desc limit 1 ```
Find the Maximum from a Count in SQL
[ "", "sql", "sql-server", "select", "count", "group-by", "" ]
I have compiled the following as an example of what I have done so far and would like to know how I should continue: ``` CREATE TABLE tblMembers ( Member_ID int, Name varchar(255) ); CREATE TABLE tblHorses ( Horse_ID int, Name varchar(255), Age int(10), Member_ID int(10) ); ``` So I would like to specify both Member\_ID and Horse\_ID as PKs and create the relationship between tblMembers and tblHorses using Member\_ID. I would also like to make the ID columns auto-incrementing. Thank you in advance
Is this what you were asking? ``` CREATE TABLE tblMembers ( Member_ID int identity(1, 1) not null primary key, Name varchar(255) ); CREATE TABLE tblHorses ( Horse_ID int identity(1, 1) not null primary key, Name varchar(255), Age int, Member_ID int references tblMembers(member_id) ); ``` Storing something like "age" in a column is a really bad idea. After all, age continually changes. You should be storing something like the date of birth.
Use this. **[Fiddler Demo](http://sqlfiddle.com/#!6/c56d1)** Refer this for creating [Primary Key](http://msdn.microsoft.com/en-IN/library/ms189039.aspx),[Foreign Key](http://msdn.microsoft.com/en-IN/library/ms189049.aspx), [Identity](http://msdn.microsoft.com/en-us/library/ms186775.aspx) . ``` CREATE TABLE tblMembers ( Member_ID int IDENTITY(1,1) Primary Key, Name varchar(255) ); CREATE TABLE tblHorses ( Horse_ID int IDENTITY(1,1) Primary Key, Name varchar(255), Age int, Member_ID int Foreign key (Member_ID) REFERENCES tblMembers(Member_ID) ); ``` **Note:** `MS SQL doesn't support length in Integer Type.`
Adding Primary keys and a Relationship to tables
[ "", "sql", "sql-server", "" ]
I have a column with values like `AAA-BBB-CCC` or `AAA-BBB-CCC-DDD`. I am only interested in splitting and retrieving `AAA BBB CCC`. I've tried using `PARSENAME`, however that messes up.
[SQL Fiddle](http://sqlfiddle.com/#!3/7554ea/5) **MS SQL Server 2008 Schema Setup**: ``` CREATE TABLE Table1 ([value] varchar(50)) ; INSERT INTO Table1 ([value]) VALUES ('AAA-BBB-CCC'), ('AAA-BBB-CCC-DDD'), ('AAAAA-BBBB-CCCC'), ('AAAAA-BBBB-CCCCCC-DDD'), ('AAAAAA-BBB-CCC'), ('AAAAAAAA-BBBBBB-CCCCCC-DDD-EEEE') ; ``` **Query 1**: ``` SELECT replace( substring(value,1,( case when charindex('-',value,charindex('-',value,charindex('-',value)+1)+1) = 0 then len(value) else charindex('-',value,charindex('-',value,charindex('-',value)+1)+1)-1 end )),'-',' ') FROM Table1 ``` **[Results](http://sqlfiddle.com/#!3/7554ea/5/0)**: ``` | COLUMN_0 | |------------------------| | AAA BBB CCC | | AAA BBB CCC | | AAAAA BBBB CCCC | | AAAAA BBBB CCCCCC | | AAAAAA BBB CCC | | AAAAAAAA BBBBBB CCCCCC | ``` OR [SQL Fiddle](http://sqlfiddle.com/#!3/7554ea/5) **Query 2**: ``` SELECT value, PARSENAME(replace( substring(value,1,( case when charindex('-',value,charindex('-',value,charindex('-',value)+1)+1) = 0 then len(value) else charindex('-',value,charindex('-',value,charindex('-',value)+1)+1)-1 end )),'-','.'),3) as '1st', PARSENAME(replace( substring(value,1,( case when charindex('-',value,charindex('-',value,charindex('-',value)+1)+1) = 0 then len(value) else charindex('-',value,charindex('-',value,charindex('-',value)+1)+1)-1 end )),'-','.'),2) as '2nd', PARSENAME(replace( substring(value,1,( case when charindex('-',value,charindex('-',value,charindex('-',value)+1)+1) = 0 then len(value) else charindex('-',value,charindex('-',value,charindex('-',value)+1)+1)-1 end )),'-','.'),1) as '3rd' FROM Table1 ``` **[Results](http://sqlfiddle.com/#!3/7554ea/5/0)**: ``` | VALUE | 1ST | 2ND | 3RD | |---------------------------------|----------|--------|--------| | AAA-BBB-CCC | AAA | BBB | CCC | | AAA-BBB-CCC-DDD | AAA | BBB | CCC | | AAAAA-BBBB-CCCC | AAAAA | BBBB | CCCC | | AAAAA-BBBB-CCCCCC-DDD | AAAAA | BBBB | CCCCCC | | AAAAAA-BBB-CCC | AAAAAA | BBB | CCC | | AAAAAAAA-BBBBBB-CCCCCC-DDD-EEEE | AAAAAAAA | BBBBBB | CCCCCC | ```
One solution would be using the replace function like so: ``` select replace(columnName, '-', ' ') as SplittedVal ``` If only AAA BBB CCC is needed and the values are formatted like 3 characters, space, 3 characters, space, 3 characters (so 11 characters in total) then use: ``` select left(replace(columnName, '-', ' '), 11) ```
Split a column using delimiter
[ "", "sql", "sql-server", "" ]
I'm trying to list each BookId with all its authors on the same line using listagg. I would like it to look like this: ``` BOOKID AUNAMES 1101: Dilbert Emerson Sartre. ``` But when I execute the command the output is poorly formatted: ``` select BOOKID, listagg (AUNAME,',') WITHIN GROUP (ORDER BY auname) AUNAMES FROM BOOK_AUTHOR group by bookid; ``` And looks like this: ``` BOOKID ---------- AUNAMES ------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------- 1101 Dilbert,Emerson,Sartre ``` I'm using SQL*Plus
Nothing stops you from making your query a sub-query and selecting again: ``` SELECT BOOKID || ':' || AUNAMES FROM ( select BOOKID, listagg (AUNAME,' ') WITHIN GROUP (ORDER BY auname) AUNAMES FROM BOOK_AUTHOR group by bookid ) A ``` If you want to use a space as the delimiter, use it in the listagg function
It sounds like you just want to concatenate the two columns. One way is to use `||`: ``` select BOOKID || ': ' || listagg (AUNAME,',') WITHIN GROUP (ORDER BY auname) FROM BOOK_AUTHOR group by bookid; ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!4/23522/1)
listagg output formatting
[ "", "sql", "oracle", "sqlplus", "" ]
``` SELECT * FROM tbl_Something WHERE RoleID = 1 AND GroupID = CASE WHEN @GroupID = 1 THEN @GroupID OR GroupID IS NULL -- issue here WHEN @GroupID = 2 THEN @GroupID ``` What I want to do is: when @GroupID = 2, then GroupID = @GroupID, meaning get all the rows with GroupID = 2; when @GroupID = 1, then GroupID = @GroupID or GroupID IS NULL, meaning get all rows with GroupID = 1 or GroupID IS NULL. Where am I going wrong?
You need to split up the condition, because `OR GroupID IS NULL` cannot be part of an expression that selects a `GroupID` for comparison. Since you have only two possibilities for `@GroupID`, it would be cleaner to rewrite your query without `CASE` altogether: ``` SELECT * FROM tbl_Something WHERE RoleID = 1 AND ( (@GroupID = 1 AND GroupID = 1) OR (@GroupID = 2 AND (GroupID = 2 OR GroupID IS NULL)) ) ```
You can use `ISNULL` for this. It will test if `GroupId` is equal to `@GroupId`; if `GroupId` is null, it will check if 1 is equal to `@GroupID` ``` ISNULL(GroupID,1) = @GroupID ```
SQL where clause case
[ "", "sql", "sql-server", "" ]
I have the following db diagram: [![Database Diagram for Movies](https://i.stack.imgur.com/OoEmh.png)](https://i.stack.imgur.com/OoEmh.png) I want to find the decade (for example 1990 to 2000) that has the largest number of movies. It actually only involves the "Movies" table. Any idea on how to do that?
You can use the LEFT function in SQL Server to get the decade from the year. The decade is the first 3 digits of the year. You can group by the decade and then count the number of movies. If you sort, or order, the results by the number of movies - the decade with the largest number of movies will be at the top. For example: ``` select count(id) as number_of_movies, left(cast([year] as varchar(4)), 3) + '0s' as decade from movies group by left(cast([year] as varchar(4)), 3) order by number_of_movies desc ```
An alternative to the string approach is to use integer division to get the decade: ``` SELECT [Year]/10*10 as [Decade] , COUNT(*) as [CountMovies] FROM Movies GROUP BY [Year]/10*10 ORDER BY [CountMovies] DESC ``` This returns all, ordered by the decade(s) with the most movies. You could add a TOP (1) to only get the top, but then you'd need to consider tiebreaker scenarios to ensure you get deterministic results.
Finding the decade with largest records, SQL Server
[ "", "sql", "sql-server", "" ]
I need to create a list of GUIDs in SQL Server 2008 R2 and I am using the `NEWID()` function. This is what I am trying, but I get only one ID: ``` SELECT TOP 100 NEWID() ``` I am new to SQL Server and I don't know if there is a way to do that, or a way to create a loop to do it. I don't need to persist those GUIDs; I just want to show them on screen.
You can use an arbitrary table as "sequence-generator": ``` SELECT TOP (100) Guid = NEWID() FROM [master]..spt_values; ``` `Demo` Note that this table contains only 2346 rows. Worth reading: [Generate a set or sequence without loops](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1)
Just use a loop. Try the following: ``` create table #GUIDS (tempID uniqueidentifier) declare @i int = 0 while (@i < 100) begin insert into #GUIDS select newid() set @i = @i + 1 end select * from #GUIDS drop table #GUIDS ``` **NOTE:** This is not a good solution to use for a large number of iterations, as it loops through the result in a row-by-row fashion.
Is there a way to generate a list of GUIDs using the NEWID function?
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
Basically this is what I have : ``` insert into A values(1689, 1709); insert into A values(1709, 1689); insert into A values(1782, 1709); insert into A values(1911, 1247); insert into A values(1247, 1468); insert into A values(1641, 1468); insert into A values(1316, 1304); insert into A values(1501, 1934); insert into A values(1934, 1501); insert into A values(1025, 1101); ``` As you can see, there are 2 values to work with here. Let's call them a and b (a,b). What I need to create is a query with the condition that b must not exist in `column1`. I'm kind of new to this; among the many things I tried, this looked like the closest answer, but it doesn't do the job. ``` SELECT a.* FROM A as a LEFT JOIN B AS b ON b.column = a.column WHERE B.column IS NULL ```
Assuming I'm understanding your question, one option is to use `NOT EXISTS`: ``` select col2 from A A1 where not exists ( select 1 from A A2 where A1.col2 = A2.col1 ) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/22b11/1) This will return all `col2` records that do no exist in `col1`.
``` SELECT COL1, COL2 FROM A WHERE COL1 NOT IN (SELECT DISTINCT COL2 FROM A) ```
Checking If data from A column exists in B column
[ "", "sql", "sql-server", "join", "exists", "" ]
Is it possible to limit the number of elements in the following `string_agg` function? ``` string_agg(distinct(tag),', ') ```
I am not aware that you can limit it in the `string_agg()` function. You can limit it in other ways: ``` select postid, string_agg(distinct(tag), ', ') from table t group by postid ``` Then you can do: ``` select postid, string_agg(distinct (case when seqnum <= 10 then tag end), ', ') from (select t.*, dense_rank() over (partition by postid order by tag) as seqnum from table t ) t group by postid ```
There are two more ways. 1. make an array from rows, limit it, and then concatenate into string: ``` SELECT array_to_string((array_agg(DISTINCT tag))[1:3], ', ') FROM tbl ``` ("array[1:3]" means: take items from 1 to 3 from array) 2. concatenate rows into string without limit, then use "substring" to trim it: ``` string_agg(distinct(tag),',') ``` If you know that your "tag" field cannot contain `,` character then you can select all text before nth occurence of your `,` ``` SELECT substring( string_agg(DISTINCT tag, ',') || ',' from '(?:[^,]+,){1,3}') FROM tbl ``` This substring will select 3 or less strings divided by `,`. To exclude trailing `,` just add `rtrim`: ``` SELECT rtrim(substring( string_agg(DISTINCT tag, ',') || ',' from '(?:[^,]+,){1,3}'), ',') FROM test ```
PostgreSQL - string_agg with limited number of elements
[ "", "sql", "postgresql", "aggregate-functions", "" ]
I tried this: ``` SELECT DATENAME(DW,GETDATE()) + ' ' + CONVERT(VARCHAR(12), SYSDATETIME(), 107) ``` The result is ``` Thursday Dec 11, 2014 ``` **Required output:** a select query to display the date as shown below ``` Thu Dec 11, 2014 ```
``` SELECT LEFT(DATENAME(DW,GETDATE()),3) + ' ' + CONVERT(VARCHAR(12), SYSDATETIME(), 107) FROM yourtable ```
All you need is to take only left 3 symbols of your day of the week name: ``` SELECT LEFT(DATENAME(DW,GETDATE()),3) + ' ' + CONVERT(VARCHAR(12), SYSDATETIME(), 107) ```
select query to display Date format as DAY MMM DD YYYY in SQL server 2008
[ "", "sql", "sql-server", "database", "sql-server-2008", "sql-server-2005", "" ]
[fiddle.](http://sqlfiddle.com/#!4/ea88e/4) ``` TABLE1 PROP1 PROP2 NUM a a 1 a a 2 a a 3 a b 1 a b 2 a b 3 TABLE2 PROP1 PROP2 NUM a a 1 a a 2 ``` I want to query the missing NUM values in `TABLE2` with regard to `(PROP1, PROP2)` tuples, such as `(a,b,3)`. However, if a tuple such as `(a,b)` does not exist in `TABLE2` at all, I don't want to return it in the result. So, my expected output is ``` PROP1 PROP2 NUM a a 3 ``` The following query I wrote returns the `(a,b)` tuples as well, which I don't want: ``` SELECT * FROM TABLE1 T1 LEFT JOIN TABLE2 T2 ON T1.PROP1 = T2.PROP1 AND T1.PROP2 = T2.PROP2 AND T1.NUM = T2.NUM WHERE T2.NUM IS NULL ``` I want to exclude these 3 rows, so I join with `TABLE2` one more time and group the results, which gives me the correct result. ``` SELECT T1.PROP1, T1.PROP2, T1.NUM FROM TABLE1 T1 LEFT JOIN TABLE2 T2 ON T1.PROP1 = T2.PROP1 AND T1.PROP2 = T2.PROP2 AND T1.NUM = T2.NUM JOIN TABLE2 T22 ON T1.PROP1 = T22.PROP1 AND T1.PROP2 = T22.PROP2 WHERE T2.NUM IS NULL GROUP BY T1.PROP1, T1.PROP2, T1.NUM ``` *Question*: Is there any way I can do it without a `GROUP BY` statement, since it is a bit expensive for large tables? I'm using Oracle 11g.
This will do what you want, but I don't know if it will be more efficient: ``` SELECT * FROM TABLE1 T1 JOIN (SELECT DISTINCT PROP1, PROP2 FROM TABLE2 ) TT2 ON T1.PROP1 = TT2.PROP1 AND t1.PROP2 = TT2.PROP2 LEFT JOIN TABLE2 T2 ON T1.PROP1 = T2.PROP1 AND T1.PROP2 = T2.PROP2 AND T1.NUM = T2.NUM WHERE T2.NUM IS NULL; ``` It first filters `table1` on the matching rows, and then does the final comparison.
That's how I would have done it: ``` with table1 as (select 'a' prop1, 'a' prop2, 1 num from dual union all select 'a' , 'a' , 2 from dual union all select 'a' , 'a' , 3 from dual union all select 'a' , 'b' , 1 from dual union all select 'a' , 'b' , 2 from dual union all select 'a' , 'b' , 3 from dual), table2 as( select 'a' prop1, 'a' prop2, 1 num from dual union all select 'a' , 'a' , 2 from dual ) select prop1, prop2, num from table1 where (prop1, prop2) in (select prop1, prop2 from table2) minus select prop1, prop2, num from table2 ``` Another approach is this: ``` select prop1, prop2, num from table1 where (prop1, prop2, num) not in(select prop1, prop2, num from table2) and (prop1, prop2) in (select prop1, prop2 from table2) ``` Edit: I tried playing around with `exists` to get it to use table2 just once, but I didn't find a solution for that. I would be interested if someone else finds one.
Left join and excluding non-existent rows on certain matchups
[ "", "sql", "oracle", "join", "" ]
I have a select statement and returns one column but many rows. I want to convert that to a single string with commas like below. ``` SELECT CODE FROM LG_xxx_CLCARD WHERE CODE < 'KUR' AND CODE > 'CARI1' ``` this returns ``` ASD1 ASD2 ASD3 ``` I want to convert this to ``` 'ASD1','ASD2','ASD3' ``` How can I do this in SQL?
Use the STUFF function: ``` SELECT STUFF( (SELECT ',' + CODE FROM LG_xxx_CLCARD WHERE CODE < 'KUR' AND CODE > 'CARI1' ORDER BY CODE FOR XML PATH ('')), 1, 1, '') ``` If you need the single quotes, then the following will do the trick: ``` SELECT STUFF( (SELECT ',' + '''' + CODE + '''' FROM LG_xxx_CLCARD WHERE CODE < 'KUR' AND CODE > 'CARI1' ORDER BY CODE FOR XML PATH ('')), 1, 1, '') ```
Try this: ``` SELECT LISTAGG(CODE , ',') WITHIN GROUP (ORDER BY CODE ) FROM LG_xxx_CLCARD WHERE CODE < 'KUR' AND CODE > 'CARI1' ``` The result should be 'ASD1','ASD2','ASD3', ..
how to convert rows values to a single string
[ "", "sql", "sql-server", "select", "" ]
Basically I have a table1 table and a table2 table, with a one-to-many relation. So I'm trying to make a select like ``` $em->createQueryBuilder() ->select('t1') ->from('AcmeAppBundle:Table1', 't1') ->orderBy('t1.id', 'DESC') ->getQuery()->getResult(); ``` I need the same query, but I need the order to be by the number of records from table2. The relation is set in the entity, something like ``` * @ORM\OneToMany(targetEntity="Table2", mappedBy="something") ``` In a raw query it looks like ``` SELECT table1.* FROM table1 left join table2 on table1.id = table2.table1_id group by (table2.table1_id) ```
``` $qb = $em->createQueryBuilder(); $qb ->select('t1') ->from('AcmeAppBundle:Table1', 't1') ->leftJoin('BLABLABundle:BlaEntity', 't2', 'WITH', 't1.id = t2.table1_id') ->orderBy('t1.id', 'DESC') ->groupBy('t2.table1_id'); $result = $qb->getQuery()->getResult(); ```
This would order the result by number of records from table2: ``` $queryBuilder = $entityManager->createQueryBuilder() ->select('t1, COUNT(t2.id) AS myCount') ->from('AcmeAppBundle:Table1', 't1') ->leftJoin('t1.fields', 't2') //fields is the one to many field name for the target entity AcmeAppBundle:Table2 ->groupBy('t1.id') ->orderBy('myCount', 'DESC'); $result = $queryBuilder ->getQuery()->getResult(); ```
symfony 2 and doctrine, selecting rows ordered by number of joined records from another table
[ "", "sql", "symfony", "orm", "doctrine-orm", "" ]
I have a query: ``` select 'False' as COL_1, coalesce(sum(volume),0) as COL_2 from table1 where eligible like 'False%' and Month = :month ``` Basically I am creating the first column to read 'False' along with whatever results are found. My problem is that if col\_2 returns null, it still shows the col\_1 'False' in the result. I want it so that if col\_2 returns null, the whole query returns null. I have been playing with coalesce, notnull, ifnull, etc. to no avail. Any assistance would be appreciated, thanks!
``` SELECT CASE WHEN (coalesce(sum(volume),0)) IS NULL THEN NULL ELSE 'False' END AS COL_1, coalesce(sum(volume),0) AS COL_2 FROM table1 WHERE eligible LIKE 'False%' AND Month = :month ```
``` SELECT IF(SUM(volume) = 0, NULL, 'False') AS COL_1, SUM(volume) AS COL_2 ... ```
Sql if one column returns null, the whole resultset is null
[ "", "mysql", "sql", "" ]
I'm fairly new to MySQL and was having some difficulty understanding why the following query is working as it does: ``` select user, COUNT(*) c FROM user_rows GROUP BY user HAVING c > 1 ORDER BY c; ``` Now, the output is correct: it lists a user along with how many rows they have in the user\_rows table. My question is why doesn't the COUNT(\*) just return the number of rows in the user\_rows table? Is it an Order of Operations thing? Just trying to get added clarity for future reference.
Because if you have a `group by` clause then all aggregate functions like `count` or `sum` are applied to each group and not the complete data in the table. Since you group by the `user`, the count is calculated for each user.
The reason it doesn't count all rows in the table is that you are grouping by user.
Assistance In Query Logic and Order of Operations
[ "", "mysql", "sql", "" ]
I currently have the following SQL query to pull all duplicate values: ``` Select `Racetime`,`Trainer` from tom_cards where `Trainer` in ( select `Trainer` FROM tom_cards GROUP BY `Racetime`,`Trainer` HAVING COUNT(*) > 1 ) ``` I want to have something more like ``` Select `Racetime`,`Trainer` from tom_cards where `Trainer` and `Racetime in ( select `Trainer` and `Racetime` FROM tom_cards GROUP BY `Racetime`,`Trainer` HAVING COUNT(*) > 1 ) ``` but that's not valid. What would be a valid query to express the same thing?
You can concatenate: ``` Select Racetime,Trainer from tom_cards where concat(Racetime,Trainer) in (select concat(Racetime,Trainer) FROM tom_cards GROUP BY Racetime,Trainer HAVING COUNT(*) > 1) ```
Sometimes I handle these situations by doing a group by clause in a sub query and then selecting the rows I want from that. ``` SELECT Racetime, Trainer FROM ( SELECT Racetime, Trainer, COUNT(*) as occurrences FROM tom_cards GROUP BY 1,2 ) Racetime_Trainer_counts WHERE occurrences > 1 ```
Selecting duplicate values from mysql and displaying all of them
[ "", "mysql", "sql", "" ]
I have a SQL join where I'm trying to get 3 values from another table at the same time. Below is the data: (SiteID which is Site, HomeId which is Home, VisitorId which is Visitor). I have tried ``` Select GameDate From Games INNER JOIN Schools ON Schools.SchoolId = Games.SiteId Where Games.GameId = '1' ``` But it only gets SiteId, and not HomeId or VisitorId. ``` Games table GameId GameDate SiteId HomeId VisitorId 1 1/5/15 2 2 1 2 1/7/15 1 1 2 3 1/8/15 1 1 2 Schools table SchoolId SchoolName 1 SchoolA 2 SchoolB ``` This is the information I want to get ``` Date Site Home Visitor 1/5/15 SchoolB SchoolB SchoolA 1/7/15 SchoolA SchoolA SchoolB 1/8/15 SchoolA SchoolA SchoolB ```
You'll need to join to the same table multiple times and use table aliases: ``` SELECT Games.GameDate AS Date, SiteSchool.SchoolName AS Site, HomeSchool.SchoolName AS Home, VisitorSchool.SchoolName AS Visitor FROM Games INNER JOIN Schools SiteSchool ON SiteSchool.SchoolId = Games.SiteId INNER JOIN Schools HomeSchool ON HomeSchool.SchoolId = Games.HomeId INNER JOIN Schools VisitorSchool ON VisitorSchool.SchoolId = Games.VisitorId WHERE Games.GameId = '1' ```
You need to join the `Schools` table three times, one for each key: ``` Select g.GameDate , s.SchoolName as Site, h.SchoolName as home, v.SchoolName as visitor From Games g INNER JOIN Schools s ON s.SchoolId = g.SiteId INNER JOIN Schools h on h.SchoolId = g.HomeId INNER JOIN Schools v on v.SchoolId = g.VisitorId Where g.GameId = 1; ``` When you have the same table used multiple times in the `from` clause, you need to use table aliases to distinguish among them. In this case, the aliases are "s" for site, "h" for home, and "v" for visitor.
SQL get 3 different values from separate table
[ "", "sql", "sql-server-2014", "" ]
I have 4 tables **`tbl_user`, `tbl_usedetails`,`tbl_aboutme`,`tbl_looking`** containing details of different users. These four tables have a field named **userid** in common. I want to join these four tables on the **userid**. Consider my user id as 3. **`tbl_user`** is the root table where the `userid` is always present. **But in other tables, `userid` may or may not be present.** I tried the following query, but it fetches the user details with `userid` not equal to 3: ``` select * from `tbl_user` as u, `tbl_usedetails` as ud, `tbl_aboutme` as a, `tbl_looking` as l where (u.`userid`=ud.`userid` OR a.`userid`=l.`userid` ) AND (u.`userid`='3') ``` **`tbl_usedetails`** didn't have a row with **userid** 3, but it contains another row with **userid** 13. When I execute the query, it also joins the row with **userid** 13.
The question is not 100% clear and unambiguous, but I think you want to pick up values from other tables where present. If rows are not present for a user in three of the tables, you still want results from the other tables. That's a `LEFT JOIN`, as follows: ``` SELECT * FROM `tbl_user` AS u LEFT JOIN `tbl_usedetails` AS ud ON u.userid = ud.userid LEFT JOIN `tbl_aboutme` AS a ON u.userid = a.userid LEFT JOIN `tbl_looking` AS l ON u.userid = l.userid WHERE u.`userid` = '3' ```
Try an outer join so that you get the data even when there is no data in the secondary table. ``` select * from `tbl_user` as u left outer join tbl_usedetails ud on u.userid=ud.userid left outer join tbl_aboutme a on a.userid=u.userid left outer join tbl_looking l on l.userid=u.userid where u.userid='3' ```
Error in Join query in mysql
[ "", "mysql", "sql", "join", "" ]
I'm trying to do a conditional `AND` within a SQL `WHERE` clause. A pseudo code example is below. ``` SELECT * FROM [Table] WHERE [A] = [B] AND IF EXISTS ( SELECT TOP 1 1 FROM [Table2] WHERE 1 = 1 ) BEGIN --Do conditional filter (Table3.[C] = Table.[A]) END ``` So, if the if condition is true, the rest of the filtering should be applied. Any help please?
This should cater for both cases, with the conditional filter and without: ``` AND ( NOT EXISTS ( SELECT TOP 1 1 FROM [Table2] WHERE 1 = 1 ) OR ( EXISTS ( SELECT TOP 1 1 FROM [Table2] WHERE 1 = 1 ) AND ( --Do conditional filter (Table3.[C] = Table.[A]) ) ) ) ```
In a WHERE clause there is not only AND. You can also use OR, NOT and parentheses. Thus you can express any combination of conditions. In your example you don't want to select any data when there is a table2 entry but no matching table3 entry. ``` select * from table1 t1 where a = b and not ( exists (select * from table2) and not exists (select * from table3 t3 where t3.c = t1.a) ); ```
SQL conditional 'WHERE' clause
[ "", "sql", "sql-server-2014-express", "" ]
My first question here. This has been a really helpful platform so far. I am somewhat of a newbie in SQL, but I have a freelance project in hand which I should release this month (a reporting application with no database writes). To the point now: I have been provided with data (excel sheets with rows spanning up to 135000). The requirement is to implement a standalone application. I decided to use SQL Server Compact 3.5 SP2 and C#. Due to time pressure (I thought it made sense too), I created tables based on each xls module, with the fields of each table matching the names of the headers in the xls, so that it can be easily imported via CSV import using SDF viewer or the SQL Server Compact toolbox added in Visual Studio (so no further table normalizations were done, for this reason). I have a UI design for a typical form1 in which inputs from its controls are to be checked in an SQL query spanning 2 or 3 tables (e.g. I have groupbox1 with checkboxes (names matching field1, field2.. of table1) and groupbox2 with checkboxes matching field3, field4 of table2), and also date controls based on which a common 'DateTimeField' is checked in each of the tables. There are no foreign keys defined on the tables for linking (the need did not arise, since the data are different for each). The only common field is a 'DateTimeField' (same name) which exists in each table. (Basically readings on a datetime stamp from locations; field1, field2 etc. are locations. For a particular datetime there may or may not be readings from table1 or table2.) How will I accomplish an SQL select query (using union/joins/nested selects - if SQL Compact 3.5 supports it) to return fields from the 2 tables based on datetime (where clause)? For a given datetime there can even be empty values for fields in table2. I have done a lot of research on this and tried as well, but no good solution yet, probably also due to my lack of experience. Apologies! I would really appreciate any of your help! I can provide a sample of how the data looks if you need it. Thanks in advance. Edit: Sample Data (simple as that) ## Table 1 t1Id xDateTime loc1 loc2 loc3 (could not format the tabular schema here, sorry, but this is self explanatory) ... and so on up to 135000 records imported from xls ## Table 2 t2Id xDateTime loc4 loc5 loc6 .. and so on up to 100000 records imported from xls. Merging table 1 and table 2 will result in a huge amount of blank rows/values for a datetime, hence leaving them as they are. But a UI multiselect (loc1, loc2, loc4, loc5 from both t1 and t2) event from the winform needs to combine the result from both tables based on a datetime. ... and so on --- I managed to write a query which comes very close. I say very close because I still have to test it in detail with different combinations of inputs. Thanks to No'am for the hint. Will mark as answer if everything goes well. ``` SELECT T1.xDateTime, T1.loc2, T2.loc4 FROM Table1 T1 INNER JOIN Table2 T2 ON T1.xDateTime = T2.xDateTime WHERE (T1.xDateTime BETWEEN 'somevalue1' AND 'somevalue2') UNION SELECT T2.xDateTime, T1.loc2, T2.loc4 FROM Table1 T1 RIGHT JOIN Table2 T2 ON T1.xDateTime = T2.xDateTime WHERE (T1.xDateTime BETWEEN 'somevalue1' AND 'somevalue2') UNION SELECT T1.xDateTime, T1.loc2, T2.loc4 FROM Table1 T1 LEFT JOIN Table2 T2 ON T1.xDateTime = T2.xDateTime WHERE (T1.xDateTime BETWEEN 'somevalue1' AND 'somevalue2') ```
Thanks a lot to @kbbucks. Works with this so far. ``` SELECT T1.MonitorDateTime, T1.loc2, T.loc4 FROM Table1 T1 LEFT JOIN Table2 T2 ON T2.MonitorDateTime = T1.MonitorDateTime WHERE T1.MonitorDateTime BETWEEN '04/05/2011 15:10:00' AND '04/05/2011 16:00:00' UNION ALL SELECT T2.MonitorDateTime, '', T2.loc4 FROM Table2 T2 LEFT OUTER JOIN Table1 T1 ON T1.MonitorDateTime = T2.MonitorDateTime WHERE T1.MonitorDateTime IS NULL AND T2.MonitorDateTime BETWEEN '04/05/2011 15:10:00' AND '04/05/2011 16:00:00' ```
Based on your comment: > For a given date time there can be even empty values for fields in table 2 my understanding would be that you are not interested in orphaned records in table 2 (based on date) so in that case a LEFT JOIN would do it: ``` SELECT table1.t1DateTime, table1.tiID, table1.loc2, table2.t2id, table2.loc4 FROM table1 LEFT JOIN table2 ON table2.t2DateTime = table1.t1DateTime ``` However if there are also entries in table2 with no matching dates in table1 that you need to return you could try this: ``` SELECT table1.t1DateTime, table1.tiID, table1.loc2, ISNULL(table2.t2id, 0), ISNULL(table2.loc4, 0.0) FROM table1 LEFT JOIN table2 ON table2.t2DateTime = table1.t1DateTime WHERE (T1.t1DateTime BETWEEN 'somevalue1' AND 'somevalue2') UNION ALL SELECT table2.t2DateTime, '0', '0.0', table2.t2id, table2.loc4 FROM table2 LEFT OUTER JOIN table1 on table1.t1DateTime=table2.t2DateTime WHERE table1.t1Datetime IS NULL AND T2.t2DateTime BETWEEN 'somevalue1' AND 'somevalue2' ```
SQL query to combine different fields from 2 different tables with no relations except a common field - SQL Server Compact 3.5 SP2
[ "sql", "join", "database-design", "sql-server-ce", "union" ]
I have a DATETIME column in a table and I need to show the date in the following format "DD/MM/YYYY H:MM AM/PM" ``` CreatedBy 2013-07-30 12:44:06.000 2013-07-30 12:45:57.000 2013-08-05 16:51:26.000 2013-08-05 19:08:18.000 2013-08-05 19:11:46.000 2013-09-12 12:44:27.000 ``` I need the date like this --> `"30/07/2013 12.44 PM"`
Use this in SQL: ``` print convert(nvarchar(10), getdate(), 103) + right(convert(nvarchar(30), getdate(), 0), 8) ```
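An alternative, assuming SQL Server 2012 or later is available (the table name below is a placeholder, and the `:` separator in the time part is an assumption — the question's sample shows `12.44`, so swap it if that period was intentional):

```sql
-- FORMAT (SQL Server 2012+) takes a .NET-style format string; 'tt' yields AM/PM
SELECT FORMAT(GETDATE(), 'dd/MM/yyyy h:mm tt');

-- Against the sample column from the question:
SELECT FORMAT(CreatedBy, 'dd/MM/yyyy h:mm tt') FROM YourTable;
```

Note that `FORMAT` is convenient but noticeably slower than `CONVERT` on large result sets, so the accepted `CONVERT` approach is still preferable for bulk output.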
``` declare @Date datetime = '2013-07-30 12:44:06.000', @Time time = '12:44:06.000', @VarcharDate varchar(100), @VarcharTime varchar(100) set @VarcharDate= CONVERT(varchar,@Date,111) -- get the date in the format you want set @VarcharTime = CONVERT(varchar,@Date) -- default style, e.g. 'Jul 30 2013 12:44PM' print @VarcharDate + ' '+SUBSTRING(@VarcharTime,13,LEN(@VarcharTime)) ``` This will give you the desired output.
How to display Date in DD/MM/YYYY H:MM AM/PM format in SQL Server
[ "sql", "sql-server", "datetime" ]
Here's my schema: ``` CREATE TABLE T (A CHAR(1), B CHAR(1)); INSERT INTO T (A, B) VALUES('1', '1'); INSERT INTO T (A, B) VALUES('2', '2'); INSERT INTO T (A, B) VALUES('1', '2'); ``` I want to select rows where columns A and B contain a combination of values. For example, let's say I want to find combinations A=1,B=1, and A=2,B=2, and nothing else. If it were a single value, I could use the IN statement, but when I try this: ``` SELECT * FROM T WHERE A IN ('1', '2') AND B IN ('1', '2') ``` I get back all three rows. How can I match a combination of values?
You can check this query: ``` SELECT * FROM T WHERE (A,B) IN (('1', '1'),('2', '2')); ``` See the SQL Fiddle demo [here](http://sqlfiddle.com/#!2/fc2535/11 "sql fiddle")
Is this what you're looking for using `OR` with parentheses? ``` select * from T where ( A = '1' AND B = '1' ) OR ( A = '2' AND B = '2' ) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!2/fc2535/1)
Select all rows where two columns contain a combination of values
[ "sql" ]
I have two tables that I need to query together: the `Participante` table, which stores some data about the user, and `ParticipanteResultado`, which stores the answers of that user. Table `Participante` ``` participanteID, tipo, **other fields** ``` The field `tipo` from `Participante` is used by EF so that I can receive the data from the query. Table `ParticipanteResultado` ``` participanteResultadoID, participanteID, tipo, quantidadeValidas ``` My `ParticipanteResultado` table can have around 10 results per user, which differ in `tipo`, which can be (A, B, C, D...). ``` participanteResultadoID participanteID tipo quantidadeValidas 4 88 S 5 5 88 E 5 ``` What I need is to display the user data and his best result. By best result I mean the result that has the highest value in the `quantidadeValidas` field. I already managed to do that with the following query: ``` SELECT participanteID, nome, email, unidadeCE, telefone, (SELECT TOP(1) tipo FROM ParticipanteResultado pr WHERE p.participanteID = pr.participanteID ORDER BY quantidadeValidas DESC) AS tipo FROM Participante p ``` My problem is, in the example I gave of the `ParticipanteResultado` data, I have user `88` with a tie and I need to return both values; they can be in the same column, like (S, E).
UPDATE This is my result from the query: ``` participanteID nome email unidadeCE telefone tipo 69 teste teste teste@teste.com.br 42657 (11) 11111-1111 NULL 70 ana paula teste teste@pearson.com 42182 (19) 1111-11111 NULL 71 testes testes@testes.com 36513 (11) 11111-1111 NULL 83 teste teste teste@teste.com.br 36513 (11) 11111-1111 NULL 84 ana teste teste@hotmail.com 36921 (11) 11111-1111 NULL 85 Ana Paula teste@pearson.com 36503 (11) 11111-1111 NULL 86 Claudio teste@raioxfilmes.com.br 36830 (11) 1111-11111 NULL 87 Joana D'Arc teste@ig.com.br 42855 (11) 1111-11111 NULL 88 teste teste teste@teste.com.br 41925 (11) 11111-1111 E 89 Claudia Caroline teste@gmail.com 36355 (11) 11111-1111 NULL 90 Aline Souza teste@gmail.com 36888 (11) 11111-1111 NULL 91 Samuel Oliveira teste@ig.com.br 39401 (11) 11111-1111 NULL ``` P.S. I'm using this query in C# Entity Framework with my DbContext
Think of the data in terms of sets. You need a set of data which is all participants and all of their scores, and a set of the max quantidadeValidas per participant. So first get a list of the max quantidadeValidas for each participant: ``` SELECT MAX(PR.quantidadeValidas), PR.participanteID FROM ParticipanteResultado PR GROUP BY participanteID ``` Now get the participants and scores. ``` SELECT participanteID, nome, email, unidadeCE, telefone FROM Participante p INNER JOIN ParticipanteResultado PR on p.participanteID = pr.participanteID ``` So now that we have both sets of data, we need to limit all the scores to only the max, so join both sets together. Using common table expressions for each of the above queries, and then the join, should give you the desired results, keeping the ties. ``` with maxScores as ( SELECT MAX(PR.quantidadeValidas) maxScore, PR.participanteID FROM ParticipanteResultado PR GROUP BY participanteID ), CompleteSet as ( SELECT participanteID, nome, email, unidadeCE, telefone, PR.tipo, PR.quantidadeValidas FROM Participante p INNER JOIN ParticipanteResultado PR on p.participanteID = pr.participanteID) SELECT CS.participanteID, nome, email, unidadeCE, telefone, tipo FROM maxScores MS INNER JOIN CompleteSet CS on MS.MaxScore = CS.QuantidadeValidas and CS.ParticipanteID = MS.ParticipanteID ``` Or without a CTE (Common Table Expression) as... 
(but much harder to read, in my opinion) ``` SELECT CS.participanteID, nome, email, unidadeCE, telefone, tipo FROM ( SELECT MAX(PR.quantidadeValidas) maxScore, PR.participanteID FROM ParticipanteResultado PR GROUP BY participanteID) MS INNER JOIN ( SELECT participanteID, nome, email, unidadeCE, telefone, PR.tipo, PR.quantidadeValidas FROM Participante p INNER JOIN ParticipanteResultado PR on p.participanteID = pr.participanteID) CS on MS.MaxScore = CS.QuantidadeValidas and CS.ParticipanteID = MS.ParticipanteID ``` Now if you want only one row when there is a tie, you'd have to use STUFF or FOR XML PATH to combine multiple rows into one... but as your expected result doesn't show how you envision ties to work, I went with the multiple-row concept instead of combining values into the tipo column.
Since you only need the top result for each participant, I'd go with the rank function (ranking by `quantidadeValidas`, and joining `Participante` to get the user columns): ``` WITH cte AS ( SELECT p.participanteID , nome , email , unidadeCE , telefone , tipo , RANK() OVER ( PARTITION BY p.participanteID ORDER BY quantidadeValidas DESC ) AS rnk FROM Participante p INNER JOIN ParticipanteResultado pr ON pr.participanteID = p.participanteID ) SELECT DISTINCT participanteID , nome , email , unidadeCE , telefone , SUBSTRING(( SELECT ',' + t1.tipo AS [text()] FROM cte t1 WHERE t1.participanteID = t2.participanteID AND t1.rnk = 1 ORDER BY t1.participanteID FOR XML PATH('') ), 2, 1000) [tipos] FROM cte t2 WHERE t2.rnk = 1 ```
SQL consult joining two tables, group and order by
[ "sql", "sql-server", "entity-framework" ]
I have 4 tables: booking, address, search\_address & search\_address\_log **Tables: (relevant cols)** booking: (pickup\_address\_id, dropoff\_address\_id) address: (address\_id, postcode) search\_address: (address\_id, postcode) search\_address\_log: (id, from\_id, to\_id) --- What I need to do is have a count from both booking and search\_address\_log grouped by the pickup/dropoff & from/to postcodes. I can do this individually for each i.e.: booking: ``` SELECT a1.postcode b_From, a2.postcode b_to, COUNT(*) b_count FROM booking b INNER JOIN address a1 ON b.pickup_address_id = a1.address_id INNER JOIN address a2 ON b.destination_address_id = a2.address_id GROUP BY b_From, b_To ORDER BY COUNT(*) DESC LIMIT 10 ``` search\_address\_log: ``` SELECT sa1.postcode s_From, sa2.postcode s_To, COUNT(*) s_count FROM search_address_log sal INNER JOIN search_address sa1 ON sal.from_id=sa1.address_id INNER JOIN search_address sa2 ON sal.to_id=sa2.address_id GROUP BY s_From, s_To ORDER BY COUNT(*) DESC LIMIT 10 ``` Returning tables like: ``` | b_To b_From b_count || s_To s_From s_count | | x y 10 || x y 50 | | a b 5 || a b 60 | ``` WHAT I NEED: ``` | To From b_count s_count | | x y 10 50 | | a b 5 60 | ``` Thanks, George
Technically, what you want is a `full outer join`, but MySQL doesn't support that. However, the following should do what you want -- getting summaries for each `from` and `to` value for the two columns: ``` SELECT b_from, b_to, sum(b_count) as b_count, sum(s_count) as s_count FROM ((SELECT a1.postcode as b_From, a2.postcode as b_to, COUNT(*) as b_count, 0 as s_count FROM booking b INNER JOIN address a1 ON b.pickup_address_id = a1.address_id INNER JOIN address a2 ON b.destination_address_id = a2.address_id GROUP BY b_From, b_To ) UNION ALL (SELECT sa1.postcode as s_From, sa2.postcode as s_To, 0, COUNT(*) as s_count FROM search_address_log sal INNER JOIN search_address sa1 ON sal.from_id = sa1.address_id INNER JOIN search_address sa2 ON sal.to_id = sa2.address_id GROUP BY s_From, s_To ) ) ft GROUP BY b_from, b_to; ```
Proposal: select on the addresses -- get counts of the to/from pairs, and then add them up by postcode ``` SELECT t.postcode, f.postcode, SUM(sal.count), SUM(b.count) FROM search_address t CROSS JOIN search_address f LEFT JOIN ( SELECT from_id, to_id, COUNT(*) count FROM search_address_log GROUP BY from_id, to_id ) sal ON sal.from_id=f.address_id AND sal.to_id=t.address_id LEFT JOIN ( SELECT pickup_address_id from_id, destination_address_id to_id, COUNT(*) count FROM booking GROUP BY from_id, to_id) b ON b.from_id=f.address_id AND b.to_id=t.address_id WHERE sal.count > 0 OR b.count > 0 GROUP BY t.postcode, f.postcode; ``` This will scale based on the number of addresses squared, which may end up worse than the "generate independent summaries and then union them" scheme outlined in another answer. It's a bit more concise, however.
Complex Joins (Joining Joins)
[ "mysql", "sql", "join", "left-join", "inner-join" ]
I have the following schema: ``` |partner| --------- id |contract| ---------- id partner_id termination_date |contract_items| --------------- id contract_id product_id ``` I want to select all partners which have a valid contract (`termination_date is NULL`) and no contracts with the `product_id`s 392 and 393. Here is my query so far: ``` SELECT c.id, c.subject FROM contract_contract c WHERE c.termination_date is NULL or c.termination_date > '2014-12-11' and c.id NOT IN (SELECT contract_id FROM contract_item WHERE product_id IN (392, 393)) ``` Any ideas how to build the query?
It's now clear that `"and with no contracts with the product_id's 392 and 393"` is supposed to rule out partners that have ***any*** contract with any of those blacklisted `product_id`s: ``` SELECT * -- unclear which columns you want FROM partner p WHERE EXISTS ( SELECT 1 FROM contract_contract c WHERE (c.termination_date IS NULL OR c.termination_date > '2014-12-11') AND c.partner_id = p.id ) AND NOT EXISTS ( SELECT 1 FROM contract_contract c JOIN contract_item ci ON ci.contract_id = c.id WHERE c.partner_id = p.id AND ci.product_id IN (392, 393) ); ``` In your original query you would probably need **parentheses** to make `OR` bind before `AND`. With standard **[operator precedence](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html#SQL-PRECEDENCE)** `AND` would bind first - probably not what you want there. However, in the updated query the necessity is gone, because the additional predicate moved to a separate `EXISTS` expression.
I think you were very close, but forgot some parentheses: ``` WHERE (c.termination_date is NULL or c.termination_date > '2014-12-11') and ... ``` Here's how I would do this: ``` SELECT DISTINCT c.partner_id FROM Contract c WHERE (c.TerminationDate IS NULL OR c.TerminationDate > CURRENT_DATE) AND NOT EXISTS (SELECT 1 FROM ContractItem WHERE contract_id = c.id AND product_id IN (392, 393)) ```
Query over three tables
[ "jquery", "sql", "database", "postgresql" ]
I have a problem writing a query. Please help me; I am not a database specialist. ## My Goal is: Select the last modified date for some set of observed fields for each unique object of type ProjectA. I have available only the AuditLog table, which is an audit trail table and contains all modifications made on all objects (the old and new values are not important so I removed them from the table). Based on that table I can find all modification dates on objects of type ProjectA. ## AuditLog table: ``` +-----------+--------------+---------------------+---------------+-----+ | object_id | object_class | created_date | field | id | +-----------+--------------+---------------------+---------------+-----+ | 1000 | ProjectA | 2014-12-12 10:45:49 | text3 | 105 | | 1000 | ProjectA | 2014-12-11 12:45:19 | text3 | 104 | | 1000 | ProjectA | 2014-12-10 12:45:19 | listValue5 | 104 | | 12000 | ProjectA | 2014-12-09 20:44:27 | largeText6 | 103 | | 12000 | ProjectA | 2014-12-09 19:44:20 | largeText7 | 102 | | 100 | ProjectB | 2014-12-08 19:42:37 | otherBfield1 | 101 | | 100 | ProjectB | 2014-12-08 19:41:11 | otherBfield1 | 100 | +-----------+--------------+---------------------+---------------+-----+ ``` ## Test for one object\_id: For getting the last modified date for one object (object\_id = 1000) the query can look like this: ``` select created_date, object_class, object_id from ( select * from AuditLog where object_class = 'ProjectA' and object_id = 1000 and created_date >= sysdate-30 ---- This is just so we have more results and field in ('text3', 'listValue5', 'largeText6', 'largeText7', 'largeText8', 'listValue9') order by created_date desc ) where ROWNUM = 1; ``` Result: ``` CREATEDDATE OBJECTCLASS OBJECTJDOID ------------------- ----------- ----------- 2014-12-12 10:45:49 ProjectA 1000 ``` The result is OK. We can assume that the latest created\_date in the audit log for a particular object is our last modified date. 
So, thanks to **order by created\_date desc** and **ROWNUM = 1** I have a unique result, which is the newest created\_date for object\_id = 1000. ## Problem I need a more generic query. The problem appears when I try to remove the condition **object\_id = 1000** from the query, because then we would get only one latest result from **all** objects. What I need is to know the latest created\_date for **each of them** of type ProjectA (so in my example it should be for object\_id=1000 and for object\_id=12000). Of course I cannot list these IDs in the where condition because I do not know all of them existing in the DB, nor the new ones which will be created in the future. ## Question: Now I need to retrieve the last modified date for all objects of type 'ProjectA', not only for object\_id = 1000 but also for object\_id = 12000 in my example table. How to do it? How to select from the AuditLog table only one last modification date for each unique object of type 'ProjectA'? Do we need to do some grouping and then find the newest created\_date for those grouped objects? How to do it? Please help in creating such an SQL query. Expected result should be: ``` CREATEDDATE OBJECTCLASS OBJECTJDOID ------------------- ----------- ----------- 2014-12-12 10:45:49 ProjectA 1000 2014-12-09 20:44:27 ProjectA 12000 ``` How to build a generic SQL query which will return such a result?
``` select max(created_date) as created_date, object_class, object_id from AuditLog where object_class = 'ProjectA' and created_date >= sysdate-30 ---- This is just so we have more results and field in ('text3', 'listValue5', 'largeText6', 'largeText7', 'largeText8', 'listValue9') group by object_id, object_class order by max(created_date) desc ```
So, in order to grab the latest date for an object\_id, it's a fairly straightforward query: ``` select object_id, max(created_date) from auditlog group by object_id; ``` And you get ``` object_id max(created_date) ---------- ------------------- 1000 2014-12-12 10:45:49 12000 2014-12-09 20:44:27 100 2014-12-08 19:42:37 ``` But if you want more information, like the id, that presents a small problem. ``` select object_id, id, max(created_date) from auditlog group by object_id, id; object_id id max(created_date) --------- --- ------------------- 1000 104 2014-12-11 12:45:19 100 100 2014-12-08 19:41:11 12000 102 2014-12-09 19:44:20 1000 105 2014-12-12 10:45:49 12000 103 2014-12-09 20:44:27 100 101 2014-12-08 19:42:37 ``` You get around that by nesting another query to the table to get the correct date: ``` select object_id, id, created_date from auditlog a where created_date = (select max(created_date) from auditlog b where a.object_id = b.object_id); object_id id max(created_date) --------- --- ------------------- 1000 105 2014-12-12 10:45:49 12000 103 2014-12-09 20:44:27 100 101 2014-12-08 19:42:37 ```
How to select the last modified date for a set of observed fields for each unique object of some type?
[ "sql", "database", "oracle" ]
I have a many to many relationship between `stories` and `tags`, the table created to hold the relationship is `taxonomies`. I want to search all the stories that have all the given tags. The query I'm using so far is this one, of course it doesn't work for me, it returns all stories that contain **any** of the given tags, I want to show stories that contain **all** the defined tags. ``` SELECT stories.* FROM `stories` JOIN `taxonomies` ON `stories`.id = `taxonomies`.`story_id` JOIN `tags` ON `tags`.id = `taxonomies`.tag_id WHERE `tags`.name = 'tag 1' OR `tags`.name = 'tag 2' /* ... */ GROUP BY stories.id ```
I think you want this query ``` SELECT stories.id FROM `stories` JOIN `taxonomies` ON `stories`.id = `taxonomies`.`story_id` JOIN `tags` ON `tags`.id = `taxonomies`.tag_id GROUP BY stories.id HAVING SUM(`tags`.name = 'tag 1')>0 AND SUM(`tags`.name = 'tag 2')>0 ```
Do you know the number of tags you are checking? If so, could you use a [HAVING](http://dev.mysql.com/doc/refman/5.0/en/group-by-handling.html) to confirm that the number of matches for `stories.id` matches the total number of tags? Something like this: ``` SELECT stories.id, COUNT(stories.id) FROM `stories` JOIN `taxonomies` ON `stories`.id = `taxonomies`.`story_id` JOIN `tags` ON `tags`.id = `taxonomies`.tag_id WHERE `tags`.name = 'tag 1' OR `tags`.name = 'tag 2' /* ... */ GROUP BY stories.id HAVING COUNT(stories.id) = @numberOfTags ```
Searching by tag, inclusive
[ "mysql", "sql" ]
I am looking for a solution to (something I imagine to be) a common and trivial problem, but I couldn't find the correct words to find solutions on Google. # Starting situation I have a table `orders` that is associated to a `product` and a `customer`: `orders (id, product_id, customer_id)` # The problem Each order must have a payment associated. These payments come from different payment processors (e.g. Paypal, stripe, Google Wallet, Amazon Payments) and thus have different types of data and data fields associated to them. I'm looking to find a clean and normalized database design for this. # My own attempt/idea I could create separate tables for the different types of payments and associate the `order` from there: ``` paypal_payment (id, order_id, currency, amount, [custom paypal fields]) stripe_payment (id, order_id, currency, amount, [custom stripe info]) direct_debit_payment (id, order_id, currency, amount, [custom direct debit info]) ``` **The problem: With this approach I would need to `SELECT` from each table for every payment type to find an associated payment to an order, so that doesn't seem very efficient to me.** What is a clean solution to this problem? I'm using MySQL if that is relevant at all. Thanks for your help! :)
Your `Payment` table should have all fields that are common to all payments (amount, type of payment, etc) as well as a unique ID. Variable fields would then be stored in a second table with three columns: * Payment UID (foreign key to the Payment table) * Type (what kind of data this current row is storing, i.e. the name of a custom field for the payment type) * Value This allows you to associate any arbitrary number of custom fields with each payment record. Obviously, each payment could have any number of entries in this secondary table (or none if none are needed). This works quite well as most of the time you won't need the payment type specific info and can do queries that ignore this table. But the data will still be there when you need it.
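A minimal sketch of that layout (all table and column names here are illustrative assumptions, not a fixed schema; `orders (id)` comes from the question):

```sql
-- Common fields shared by every payment
CREATE TABLE payment (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    order_id INT UNSIGNED  NOT NULL,
    method   VARCHAR(30)   NOT NULL,   -- 'paypal', 'stripe', ...
    currency CHAR(3)       NOT NULL,
    amount   DECIMAL(12,2) NOT NULL,
    FOREIGN KEY (order_id) REFERENCES orders (id)
);

-- Variable, processor-specific fields: one row per custom field
CREATE TABLE payment_attribute (
    payment_id INT UNSIGNED NOT NULL,
    name       VARCHAR(50)  NOT NULL,  -- e.g. 'paypal_txn_id'
    value      VARCHAR(255) NOT NULL,
    PRIMARY KEY (payment_id, name),
    FOREIGN KEY (payment_id) REFERENCES payment (id)
);
```

Looking up a payment together with its custom fields is then a single `LEFT JOIN` on `payment_id`, regardless of which processor handled it.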
If you're never going to be using those fields' contents in a where or join clause (which is usually the case): 1. Add a payment method field (an enum or varchar) 2. Serialize the PayPal, Stripe, or whatever data the client used as JSON 3. Store the thing using the most appropriate database type -- text in MySQL, json in Postgres.
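Sketched out (the table and column names are assumptions, and the native `JSON` type needs MySQL 5.7+ — on older versions a plain `TEXT` column serves the same purpose):

```sql
CREATE TABLE payments (
    id             INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    order_id       INT UNSIGNED NOT NULL,
    method         VARCHAR(30)  NOT NULL,  -- 'paypal', 'stripe', ...
    processor_data JSON         NULL       -- serialized processor-specific fields
);

INSERT INTO payments (order_id, method, processor_data)
VALUES (42, 'paypal', '{"txn_id": "ABC123", "payer_email": "x@example.com"}');
```

The trade-off is exactly the one stated above: the serialized fields are opaque to `WHERE`/`JOIN` clauses (short of JSON functions), but the schema stays flat and one query fetches any payment.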
Database design: Associating tables of variable types
[ "mysql", "sql", "database", "postgresql", "rdbms" ]
I have two tables: users and photos. Users have many photos. Photos have a column called user\_id, photos have one user. Photos also have a column called reported which is 0 or 1. I need to know the number of users who have at least 1 photo with reported = 1. I also need to get the number of users who have at least 2 photos with reported = 1. How would I do this? Here's what I'd like to do, but it obviously doesn't work: ``` select count(*) from users join (select * from photos where photos.reported = 1) as p2 on users.photo_id = p2.id; ```
Just get a histogram of the counts: ``` select numreported, count(*), min(user_id), max(user_id) from (select p.user_id, sum(p.reported = 1) as numreported from photos p group by p.user_id ) p group by numreported order by numreported; ``` This gives you the number of users that have all counts of `numreported`, including 0.
This is at least 1: ``` select count(distinct userid) from photos where reported = 1 ``` This is at least 2 (count the users whose reported-photo count reaches 2): ``` select count(*) from ( select userid from photos where reported = 1 group by userid having count(userid) >= 2 ) t ```
Comparing counts in a SQL where clause
[ "mysql", "sql" ]
Is there a command in PostgreSQL to select active connections to a given database? `psql` states that I can't drop one of my databases because there are **active connections to it**, so I would like to see what the connections are (and from which machines)
Oh, I just found that command on PostgreSQL forum: ``` SELECT * FROM pg_stat_activity; ```
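Since the point here is to drop a database that still has connections, the sessions returned by that view can also be terminated first (on PostgreSQL 9.2+ the column is `pid`; on older versions it is called `procpid`):

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'mydb'           -- the database you want to drop
  AND pid <> pg_backend_pid();   -- don't kill your own session
```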
The following will give you active connections/queries in a Postgres DB: ``` SELECT pid ,datname ,usename ,application_name ,client_hostname ,client_port ,backend_start ,query_start ,query ,state FROM pg_stat_activity WHERE state = 'active'; ``` You may use 'idle' instead of 'active' to get connections/queries that have already finished executing.
How to list active connections on PostgreSQL?
[ "sql", "postgresql", "database-connection", "listings" ]
I want to update the table called "user_star_rate" if the row exists, otherwise insert data into the table. It works well, but it creates a duplicate row when updating. I want to prevent this problem. Here is my SQL code. ``` $rate = "INSERT INTO user_star_rate (email, product_id, rate_value) VALUES ('$visit_email','$product_id','$rate_value') ON DUPLICATE KEY UPDATE rate_value = '$rate_value'"; ```
An `ON DUPLICATE KEY UPDATE` query only works when the inserted data would collide on a unique key. For example, you are inserting the values `(email, product_id, rate_value)`, so the `(email, product_id)` combination should be unique; first add a unique key over both fields and then try `ON DUPLICATE KEY UPDATE`. Add the unique key, if you have not added it yet: ``` ALTER TABLE `user_star_rate` ADD UNIQUE `unique_key` ( `email`, `product_id` ) ``` The query itself seems fine.
My suggestion would be INSERT ... ON DUPLICATE KEY UPDATE. Refer to this [link](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html) for details.
How to do update if exist without creating a duplicate row in SQL?
[ "mysql", "sql" ]
I am trying to figure out the best way to join 3 tables with unique IDs where 2 of the tables need to be the 'Left' tables compared to the 3rd table. The 3rd table would provide all the nulls that I need for my analysis. For example: table 1 = table\_r, table 2 = table\_n, table 3 = table\_t ``` unique_r unique_n unique_t match abc abc yes cde null no efg efg yes jkl null no ``` This is an example result that I want to get, where table\_r compared to table\_t gives me the matches and the nulls, and table\_n compared to table\_t gives me the matches and the nulls. Then I would do a simple case statement to compare the result into one 'match' column and I would know what is missing. My SQL of sorts looks like this, which only gives me the one left side. ``` select * from table_r left join table_t on unique_r = unique_t left join table_n on unique_n = unique_t; ``` Thanks for any advice :)
Based on your example query, you want the results to contain all the columns from all three tables. Furthermore `table_r` and `table_n` seem to be unrelated, but I suppose you don't want a cross product of their rows. This is a rather strange scenario, but you should be able to achieve it like this: ``` SELECT * FROM table_r FULL OUTER JOIN table_n ON 1 = 0 LEFT JOIN table_t ON unique_r = unique_t OR unique_n = unique_t ``` Alternatively, this might perform better: ``` SELECT * FROM ( SELECT * FROM table_r LEFT JOIN ( SELECT * FROM table_n WHERE unique_n IS NULL ) ON 1 = 0 UNION ALL SELECT * FROM ( SELECT * FROM table_r WHERE unique_r IS NULL ) RIGHT JOIN table_n ON 1 = 0 ) LEFT JOIN table_t ON unique_r = unique_t OR unique_n = unique_t ``` That supposes no `unique_r` or `unique_n` value in the base tables is `NULL`. The two innermost subqueries thus select result sets that contain all the columns of their respective base tables, but *no rows*. As a result, the `LEFT` and `RIGHT` outer joins in the middle subqueries should be very fast, yet they should produce results with the correct columns, in corresponding order, just as needed for a `UNION ALL` (which will also be very fast). Obviously, this is an ugly, muddy mess. Don't even consider it if the first alternative is fast enough.
If you have the same fields in table\_r and table\_n, you can do a UNION with \*; otherwise, specify the fields you want to select, e.g. a `unique_n IS NOT NULL as match` ``` select * from table_r left join table_t on unique_r = unique_t UNION select * from table_n left join table_t on unique_n = unique_t ```
How to do an Oracle SQL Left Outer Join on 3 tables with two left tables?
[ "sql", "oracle" ]
Course (one table) and module (another table). Modules can be part of different courses; they are joined in a table called coursemodule: ``` CourseCode ModuleID BS BS2029 CN CN5485 CS CN5485 BS CS1004 CN CS1004 CS CS1004 CS CS2017 BS CS2026 CS CS2026 ``` I want to select the modules that appear in both the CS and CN courses but not ones that also appear in the BS course. If I run this: ``` SELECT m.ModuleID, m.ModuleDescription FROM Module as m INNER JOIN (SELECT coursecode, ModuleID FROM CourseModule WHERE CourseCode = 'CS') AS CodeCS ON m.ModuleID = CodeCS.ModuleID INNER JOIN (SELECT coursecode, ModuleID FROM CourseModule WHERE CourseCode = 'CN') AS CodeCN ON m.ModuleID = CodeCN.ModuleID ``` I get: ``` ModuleID ModuleDescription CN5485 Managing Networks CS1004 Introduction to Programming ``` which, based on this query, is correct, but I only want to return CN5485, as CS1004 is also in the BS course. I tried NOT IN, <>, and EXCEPT variations, all with terrible success! What do I need to add or amend in the query?
You can try this query - it's easy to read and understand: ``` SELECT m.ModuleID, m.ModuleDescription FROM Module as m WHERE m.ModuleID in ((SELECT ModuleID FROM CourseModule WHERE CourseCode = 'CN' INTERSECT SELECT ModuleID FROM CourseModule WHERE CourseCode = 'CS') EXCEPT SELECT ModuleID FROM CourseModule WHERE CourseCode = 'BS') ```
I'd probably go with something a little different - since you want two "in" and one "not in", I'd write it that way, which would make it clear what you're doing: ``` SELECT m.ModuleID, m.ModuleDescription FROM Module as m WHERE m.ModuleID in (SELECT ModuleID FROM CourseModule WHERE CourseCode = 'CS') AND m.ModuleID in (SELECT ModuleID FROM CourseModule WHERE CourseCode = 'CN') AND m.ModuleID not in (SELECT ModuleID FROM CourseModule WHERE CourseCode = 'BS') ``` The `in` syntax in my opinion is a little clearer than joining - joining to me implies that you want something from that table, when in fact you just want to make sure a match exists, not retrieve anything from it. Also joins can get you in trouble with duplicates if you're not careful - `in` won't.
Excluding results
[ "sql", "sql-server" ]
![ER Sample](https://i.stack.imgur.com/aCMyc.jpg) Considering the diagram above I am trying to select bulletins along with related info. 1. A bulletin can have only one associated user (the creator) 2. A bulletin can have only one state (the creator's home state) 3. A bulletin can have only one bulletin type (E.G. Announcement, for sale, etc) 4. A bulletin can have 0 or 1 event tied to it 5. A bulletin can have many likes 6. A bulletin can have many comments As far as the states go a region can have many states Using the query below causes it to run for 2 minutes before I hit the cancel button. I have not tried to run it for more than that. ``` SELECT TOP 10 Bulletins.Id, LEFT(Bulletins.Body, 350) AS BodySnippet, Bulletins.CreationDateTime , Bulletins.UserId AS PosterId, Bulletins.StateId, Bulletins.EventId, Bulletins.BulletinTypeId, Bulletins.[Views], Users.UserName, Users.Zipcode as ZipCode, Users.StateId as StateId, Users.City, States.Name, States.UnitedStatesRegionId, RegionsOfTheUnitedStates.Name, COUNT(BulletinLikes.Id) AS Likes, COUNT(BulletinComments.Id) AS Comments FROM Bulletins INNER JOIN Users ON Bulletins.UserId = Users.Id INNER JOIN States ON Bulletins.StateId = States.Id INNER JOIN RegionsOfTheUnitedStates ON States.UnitedStatesRegionId = RegionsOfTheUnitedStates.Id INNER JOIN BulletinTypes ON Bulletins.BulletinTypeId = BulletinTypes.Id LEFT JOIN [Events] ON Bulletins.EventId = [Events].Id LEFT JOIN BulletinLikes ON Bulletins.Id = BulletinLikes.BulletinId LEFT JOIN BulletinComments ON Bulletins.Id = BulletinComments.BulletinId GROUP BY Bulletins.Id, Bulletins.Body, Bulletins.CreationDateTime , Bulletins.UserId, Bulletins.StateId, Bulletins.EventId, Bulletins.BulletinTypeId, Bulletins.[Views], Users.UserName, Users.Zipcode, Users.StateId, Users.City, States.Name, States.UnitedStatesRegionId, RegionsOfTheUnitedStates.Name ``` Deleting the line that does the counting of Likes and Comments makes the query return back instantaneously. 
In my tables I have lots of dummy data. Some of these bulletins have hundreds or a couple thousand likes or comments. That still does not seem like enough to make the query run for 2+ minutes. I am no expert when it comes to TSQL so I know it is boiling down to how I'm counting or how I am grouping. **What would be the proper way to return the counted related records in my specific scenario?** **EDIT 1** My ER is completely off on one part. I closed out of the website I was using to create it and lost it. Here are some corrections * Bulletins is tied to BulletinTypes with a BulletinTypeFK inside of the Bulletins table (reason being we use BulletinTypes for a drop-down) **EDIT 2** I just found out you can do some profiling on SQL Azure and came up with these two screenshots of information; however, I'm not 100% sure what to gain from these. ![enter image description here](https://i.stack.imgur.com/LJ2Ym.jpg) ![enter image description here](https://i.stack.imgur.com/yaHP9.jpg) It looks as if the first sort operation is taking up 54.2% of resources. The first index seek looks pretty high too @ 32.2%
Without the counts those left joins don't even need to be performed, and the query optimizer probably figures it out. And you don't even use Events with the count - drop it. Make sure you have indexes on all those join conditions (BulletinId) and that they are not fragmented. When these two queries run fast, your query will run fast: ``` select count(distinct(BulletinId)) from BulletinLikes select count(distinct(BulletinId)) from BulletinComments ``` (and you may need an index on regionId) ``` SELECT TOP 10 Bulletins.Id, LEFT(Bulletins.Body, 350) AS BodySnippet , Bulletins.CreationDateTime , Bulletins.UserId AS PosterId, Bulletins.StateId, Bulletins.EventId , Bulletins.BulletinTypeId, Bulletins.[Views] , Users.UserName, Users.Zipcode as ZipCode, Users.StateId as StateId, Users.City , States.Name, States.UnitedStatesRegionId , RegionsOfTheUnitedStates.Name , COUNT(BulletinLikes.Id) AS Likes , COUNT(BulletinComments.Id) AS Comments FROM Bulletins INNER JOIN Users ON Bulletins.UserId = Users.Id INNER JOIN States ON Bulletins.StateId = States.Id INNER JOIN RegionsOfTheUnitedStates ON States.UnitedStatesRegionId = RegionsOfTheUnitedStates.Id INNER JOIN BulletinTypes ON Bulletins.BulletinTypeId = BulletinTypes.Id LEFT JOIN [Events] ON Bulletins.EventId = [Events].Id LEFT JOIN BulletinLikes ON Bulletins.Id = BulletinLikes.BulletinId LEFT JOIN BulletinComments ON Bulletins.Id = BulletinComments.BulletinId GROUP BY Bulletins.Id, Bulletins.Body, Bulletins.CreationDateTime , Bulletins.UserId, Bulletins.StateId, Bulletins.EventId , Bulletins.BulletinTypeId, Bulletins.[Views] , Users.UserName, Users.Zipcode, Users.StateId, Users.City , States.Name, States.UnitedStatesRegionId , RegionsOfTheUnitedStates.Name ```
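For the join conditions named above, the supporting indexes could look something like this (the index names are illustrative; table and column names come from the question's query):

```sql
CREATE INDEX IX_BulletinLikes_BulletinId    ON BulletinLikes (BulletinId);
CREATE INDEX IX_BulletinComments_BulletinId ON BulletinComments (BulletinId);
CREATE INDEX IX_States_UnitedStatesRegionId ON States (UnitedStatesRegionId);
```

With these in place, the two `count(distinct(BulletinId))` probe queries above become index scans rather than table scans.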
The first thing I'd try is to check the performance of a much simpler query that touches only the tables with the most effect (you mentioned BulletinLikes and BulletinComments are the biggest offenders for performance): ``` SELECT TOP 10 b.id, COUNT(bl.Id) AS likes, COUNT(bc.Id) AS Comments FROM Bulletins b LEFT JOIN BulletinLikes bl ON b.Id = bl.BulletinId LEFT JOIN BulletinComments bc ON b.Id = bc.BulletinId GROUP BY b.id ``` If that gives decent performance, I'd make it a subquery or CTE, whatever syntax you prefer, and join the rest to the result of the subquery. The general idea is to get rid of the huge `GROUP BY` ... Side note: `TOP` without `ORDER BY` is not guaranteed to give consistent results.
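One pitfall worth spelling out: joining *both* child tables and then counting multiplies the two row sets, so `COUNT` over a double `LEFT JOIN` inflates both numbers. One common fix is `COUNT(DISTINCT ...)`; another is pre-aggregating each child table first, which is what the simpler-query suggestion hints at. A small sketch using Python's built-in `sqlite3` as a stand-in for SQL Server; the tables are reduced to the bare columns needed and the data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Bulletins (Id INTEGER PRIMARY KEY);
CREATE TABLE BulletinLikes (Id INTEGER PRIMARY KEY, BulletinId INT);
CREATE TABLE BulletinComments (Id INTEGER PRIMARY KEY, BulletinId INT);
INSERT INTO Bulletins VALUES (1);
INSERT INTO BulletinLikes (BulletinId) VALUES (1), (1), (1);  -- 3 likes
INSERT INTO BulletinComments (BulletinId) VALUES (1), (1);    -- 2 comments
""")

# Double LEFT JOIN + COUNT: each like row pairs with each comment row (3 x 2),
# so both counts come back as 6.
inflated = con.execute("""
    SELECT COUNT(bl.Id), COUNT(bc.Id)
    FROM Bulletins b
    LEFT JOIN BulletinLikes bl ON b.Id = bl.BulletinId
    LEFT JOIN BulletinComments bc ON b.Id = bc.BulletinId
    GROUP BY b.Id
""").fetchone()
print(inflated)  # (6, 6) -- both counts are wrong

# Pre-aggregating each child table avoids the cross product entirely.
correct = con.execute("""
    SELECT IFNULL(l.n, 0), IFNULL(c.n, 0)
    FROM Bulletins b
    LEFT JOIN (SELECT BulletinId, COUNT(*) n FROM BulletinLikes GROUP BY BulletinId) l
           ON l.BulletinId = b.Id
    LEFT JOIN (SELECT BulletinId, COUNT(*) n FROM BulletinComments GROUP BY BulletinId) c
           ON c.BulletinId = b.Id
""").fetchone()
print(correct)  # (3, 2)
```

The pre-aggregated version also tends to be what the optimizer wants on large tables, since each child table is scanned and grouped once.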
Counting Related Records : Query Taking Over 2 Minutes To Run
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to get a count of the field WOID for a specific date. The trouble is that the date field [OPENDATE] is a datetime field, so I decided to use CAST to change [OPENDATE] to a date field: ``` SELECT distinct cast([OPENDATE] as date) as [Opened Date] ,count([WOID]) as ID FROM [Tasks] Group by [OPENDATE] ``` The problem is I get output like this ``` Opened date ID 2014-05-01 1 2014-05-01 1 2014-05-21 1 2014-05-20 1 ``` where I would expect to get the proper count, like this ``` Opened date ID 2014-05-01 4 2014-05-02 5 ``` Any thoughts?
You are grouping by something different than what you are selecting, change to: ``` SELECT cast([OPENDATE] as date) as [Opened Date] ,count([WOID]) as ID FROM [Tasks] Group by cast([OPENDATE] as date) ``` You may be after a count of distinct `WOID` per date, in which case you'd use: ``` SELECT cast([OPENDATE] as date) as [Opened Date] ,count(DISTINCT [WOID]) as ID FROM [Tasks] Group by cast([OPENDATE] as date) ``` As a general rule, any non-aggregate items in your `SELECT` should be included in your `GROUP BY`. `DISTINCT` becomes redundant and can be excluded.
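The accepted fix is easy to check; a sketch with Python's built-in `sqlite3` as a stand-in, using SQLite's `date()` in place of `CAST(... AS date)` (sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Tasks (WOID INT, OPENDATE TEXT)")
con.executemany("INSERT INTO Tasks VALUES (?, ?)", [
    (1, "2014-05-01 08:00:00"),
    (2, "2014-05-01 09:30:00"),
    (3, "2014-05-01 17:15:00"),
    (4, "2014-05-02 11:00:00"),
])
# Group by the truncated value, not the raw datetime column --
# the same expression appears in SELECT and GROUP BY.
rows = con.execute("""
    SELECT date(OPENDATE) AS OpenedDate, COUNT(WOID) AS ID
    FROM Tasks
    GROUP BY date(OPENDATE)
    ORDER BY OpenedDate
""").fetchall()
print(rows)  # [('2014-05-01', 3), ('2014-05-02', 1)]
```

Grouping by the raw datetime instead leaves each distinct timestamp in its own group, which reproduces the rows-of-1 output in the question.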
Give: ``` SELECT distinct cast([OPENDATE] as date) as [Opened Date] ,count([WOID]) as ID FROM [Tasks] Group by cast([OPENDATE] as date) ``` a shot - you want to group it on the `date`, not the `datetime` - mimicking your selected column
Stumped on Group By
[ "", "sql", "sql-server", "" ]
If I have a table with two columns: * date (timestamp) * milliseconds (int) How could I write a query that would return the two columns and a third column that represents the sum of the first two as a timestamp? Like this: ``` date | milliseconds | sum ----------------------------+---------------+---------------------------- 2014-12-10 17:43:47.554989 | 11882 | 2014-12-10 17:43:59.436989 ``` Thanks!
I think your answer is as follows: ``` # select date, milliseconds, date+milliseconds*interval '1 milliseconds' as sum from temp; date | milliseconds | sum ----------------------------+--------------+---------------------------- 2014-12-10 17:43:47.554989 | 11882 | 2014-12-10 17:43:59.436989 (1 row) ``` I created a temp table with your timestamp field called date, and your int field called milliseconds. Then I ran the above select off the table with your values in it. Hope this helps.
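The interval arithmetic above is PostgreSQL-specific. As an illustration of the same idea in a portable sandbox, here is a sketch with Python's built-in `sqlite3`: SQLite has no interval type, but `strftime` with a fractional-seconds modifier does the same job (column names and the sample row follow the question, truncated to millisecond precision):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT, ms INT)")
con.execute("INSERT INTO t VALUES ('2014-12-10 17:43:47.554', 11882)")
# Build a '+NN.NNN seconds' modifier from the milliseconds column --
# the SQLite analogue of  d + ms * interval '1 millisecond'.
row = con.execute("""
    SELECT d, ms,
           strftime('%Y-%m-%d %H:%M:%f', d, '+' || (ms / 1000.0) || ' seconds') AS total
    FROM t
""").fetchone()
print(row[2])  # 2014-12-10 17:43:59.436
```

In PostgreSQL itself, `date + milliseconds * interval '1 millisecond'` (singular or plural both parse) is the direct way.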
Simply add it as an interval: ``` select stamp + (ms / 1000) * '1 second'::interval ```
PostgreSQL get column with date + interval
[ "", "sql", "postgresql", "" ]
I have three tables in MS Access as follows: ``` --Students(ID, Name, Class) --Subjects (ID, Name) --Marks (ID, StudentID, Subject.ID) ``` The relations are as follows: ``` Marks.Subject = Subjects.ID Marks.StudentID = Students.ID ``` Please help me write a query that returns each student's name together with all subject names and that student's marks. Currently I have this query, but it returns marks separately: ``` select Students.Name, Marks.Obtained, Subjects.Name from Marks inner join Students on Marks.StudentName = Students.ID ```
You have joined `students` and `marks` table with that Join `Subjects` table too ``` SELECT students.NAME, marks.obtained, subjects.NAME FROM ( marks INNER JOIN students ON marks.studentname = students.id ) INNER JOIN subjects ON marks.subject = subjects.id ```
try this: ``` select Students.Name, Marks.Obtained, Subjects.Name from Marks inner join Students on Marks.Subject = Subjects.ID Marks.StudentID = Students.ID ```
SQL Join operation in multiple tables
[ "", "sql", "ms-access", "" ]
I tried to select a random row from the employees table using `rand()`, like this: ``` SELECT email FROM employees ORDER BY RAND() LIMIT 1; ``` but it fails, and the output is: > ORA-00933: SQL command not properly ended > 00933. 00000 - "SQL command not properly ended" Can somebody tell me why?
The Oracle equivalent of `rand()` is `dbms_random.value`. The Oracle equivalent of `limit` is either a subquery with `rownum` or (in Oracle 12) `fetch first xx row only`. So, one of these should work: ``` select email from employees order by dbms_random.value fetch first 1 row only; ``` or: ``` select email from (select email from employees order by dbms_random.value ) e where rownum = 1 ```
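Every engine spells this differently (MySQL `rand()`, Oracle `dbms_random.value`, PostgreSQL/SQLite `random()`), but the shape is the same: order by a random value, keep one row. A sketch with Python's built-in `sqlite3` (sample emails invented), where `LIMIT` is supported directly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (email TEXT)")
con.executemany("INSERT INTO employees VALUES (?)", [
    ("a@x.com",), ("b@x.com",), ("c@x.com",),
])
# SQLite's random() plays the role of Oracle's dbms_random.value;
# LIMIT 1 plays the role of the ROWNUM = 1 outer query.
row = con.execute(
    "SELECT email FROM employees ORDER BY random() LIMIT 1"
).fetchone()
print(row[0])  # one of the three emails, chosen at random
```

Note that sorting the whole table by a random value is O(n log n); on big tables the sampling tricks in the other answer scale better.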
To speed up operations on a large table, you might use the `SAMPLE` clause to randomly extract a sample of data from your table, and then randomly pick one row from that sample: ``` select email from ( SELECT email,ROWNUM rn FROM employees SAMPLE(5) -- ^ -- each row has a 5% chance of being picked up -- adjust that depending on your table size and/or your needs ORDER BY dbms_random.value) where rn = 1; ``` --- Another idea is that you don't need a full sort just to extract *one* random row. For example, you might want to try this alternate approach: ``` with cte as ( SELECT email, ROWNUM rn FROM employees ), rnd as ( SELECT TRUNC(DBMS_RANDOM.VALUE(1, (SELECT COUNT(*) FROM CTE))) AS value FROM DUAL ) SELECT cte.email FROM cte JOIN rnd ON cte.rn = rnd.value; ``` I don't know if Oracle is able to "properly" optimize such queries, though.
Oracle rand() function
[ "", "sql", "oracle", "random", "" ]
This runs in 2 minutes: ``` SELECT G.GKey, Amount = SUM(fct.AmountEUR) FROM WH.dbo.vw_Fact fct INNER JOIN #g G ON fct.DateKey >= G.Livedate AND fct.GKey = G.GKey GROUP BY G.GKey; ``` This runs in 8 mins: ``` SELECT G.GKey, C.Amount FROM #g G CROSS APPLY ( SELECT Amount = SUM(fct.AmountEUR) FROM WH.dbo.vw_Fact fct WHERE fct.DateKey >= G.Livedate AND fct.GKey = G.GKey ) C; ``` These are both quite simple scripts and they look logically the same to me. Table #G has 50 rows with a clustered index `ON #G(Livedate,GKey)` Table `WH.dbo.vw_Fact` has a billion rows. I actually felt initially that applying the bigger table to the small table was going to be more efficient. My experience using `CROSS APPLY` is limited - is there an obvious reason (without exploring execution plans) for the slow time? Is there a 'third way' that is likely to be quicker?
Here's the logical difference between the two joins: `CROSS APPLY`: yields the Cartesian cross product of an aggregation on a given value of `LiveDate` and `GKey`; this gets re-executed for every row. `INNER JOIN`: yields a 1-to-1 match on vw\_Fact for every value of `LiveDate` and `GKey`, then sums across common values of `GKey`; this creates the joined set first, then applies the aggregate. As some of the other answers mentioned, cross apply is convenient when you join to a table-valued function that is parameterized by some row-level data from another table. Is there a third way that is faster? I would generally suggest not using open-ended operators in joins (such as `>=`). Maybe try to pre-aggregate the large table on `GKey` and some date bucket. Also, set up a non-clustered key on `LiveDate` including `AmountEUR`.
I think you are trying to get a rolling sum. Use the [**OVER() clause**](http://msdn.microsoft.com/en-IN/library/ms189461.aspx). Try this. ``` SELECT G.GKey, Amount = Sum(fct.AmountEUR) OVER( partition BY G.GKey ORDER BY id rows UNBOUNDED PRECEDING) FROM WH.dbo.vw_Fact fct INNER JOIN #g G ON fct.GKey = G.GKey ```
Why would CROSS APPLY not be equivalent to INNER JOIN
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Is it possible to create a SQL query that returns a row which has any column matching an input? For example: ``` SELECT row WHERE ANY column LIKE input ``` Thank you
Yes it is: ``` SELECT * FROM MyTable WHERE ColumnA LIKE '%input%' OR ColumnB LIKE '%input%' ``` --- If you want literally to dynamically check *any* column then you need to dynamically build up the list of columns and use a [prepared statement](http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html). This will use the [`INFORMATION_SCHEMA.COLUMNS`](http://dev.mysql.com/doc/refman/5.0/en/columns-table.html) table to find the columns in the table, then use [`GROUP_CONCAT()`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) to put them into a string separated by the rest of the where clause. By itself this will give you: ``` ColumnA LIKE '%input%' OR ColumnB ``` Then you just need to [`CONCAT()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat) that with the start of the statement (`SELECT .. WHERE`) and the `LIKE` for the last column. All this gets stored in a variable `@sql` and used as the SQL for the prepared statement: ``` SET @sql:=(SELECT CONCAT('SELECT * FROM MyTable WHERE ', GROUP_CONCAT(COLUMN_NAME SEPARATOR ' LIKE ''%test%'' OR '), ' LIKE ''%test%''') from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'MyTable'); # for debugging only SELECT @sql; PREPARE dynamic_statement FROM @sql; EXECUTE dynamic_statement; DEALLOCATE PREPARE dynamic_statement; ``` Here is a working SQL Fiddle to demonstrate this: <http://sqlfiddle.com/#!2/f475d/22>
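The dynamic column list can also be built client-side instead of inside a prepared statement. A sketch in Python with the built-in `sqlite3` module, discovering columns via `PRAGMA table_info` (table name and data invented); the column names come from the schema so interpolating them is safe, while the search value stays parameterized:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (ColumnA TEXT, ColumnB TEXT, ColumnC TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?, ?)", [
    ("hello", "world", "x"),
    ("foo", "bar", "input here"),
    ("nothing", "matches", "this row"),
])

# Discover the columns, then OR together one LIKE per column.
cols = [r[1] for r in con.execute("PRAGMA table_info(MyTable)")]
where = " OR ".join(f"{c} LIKE ?" for c in cols)   # ColumnA LIKE ? OR ...
needle = "%input%"
rows = con.execute(
    f"SELECT * FROM MyTable WHERE {where}", [needle] * len(cols)
).fetchall()
print(rows)  # [('foo', 'bar', 'input here')]
```

In MySQL the column discovery would read from `INFORMATION_SCHEMA.COLUMNS` instead of `PRAGMA table_info`, exactly as the prepared-statement version above does.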
I think you are searching for this... ``` SELECT * from table WHERE column a LIKE '%input%' or column b LIKE '%input%' ```
Return row where any column matches input
[ "", "mysql", "sql", "database", "" ]
When running a SQL command to search a small database (which I'm using for testing as I'm learning), it gives me very strange results. Can you see what's wrong with my command? ``` SELECT * FROM users,departments WHERE name LIKE '%Alex%' OR lastname LIKE '%Alex%' OR email LIKE '%Alex%' AND departments.departmentid = users.departmentid ``` As you can see below it shows the users it searches for in all departments, when each user is only registered to one. Search Query and results ![Search Query](https://i.stack.imgur.com/aNn1e.png) Users Table ![Users Table](https://i.stack.imgur.com/8yvfp.png) Departments table ![Departments Table](https://i.stack.imgur.com/yxWp4.png)
Try this: ``` SELECT * FROM users,departments WHERE (name LIKE '%Alex%' OR lastname LIKE '%Alex%' OR email LIKE '%Alex%') AND departments.departmentid = users.departmentid ``` You have to put brackets around the `or` statements.
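The precedence problem is easy to reproduce. A sketch with Python's built-in `sqlite3` (reduced tables, invented rows, and a deliberately-false second `LIKE` to show that `AND` binds tighter than `OR`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (name TEXT, departmentid INT);
CREATE TABLE departments (departmentid INT, dept TEXT);
INSERT INTO users VALUES ('Alex', 1), ('Bob', 2);
INSERT INTO departments VALUES (1, 'IT'), (2, 'HR');
""")

# Without parentheses, AND binds tighter than OR, so the join condition
# attaches only to the last LIKE: 'Alex' pairs with EVERY department.
unbracketed = con.execute("""
    SELECT users.name, departments.dept FROM users, departments
    WHERE users.name LIKE '%Alex%'
       OR users.name LIKE 'zzz' AND departments.departmentid = users.departmentid
""").fetchall()

bracketed = con.execute("""
    SELECT users.name, departments.dept FROM users, departments
    WHERE (users.name LIKE '%Alex%' OR users.name LIKE 'zzz')
      AND departments.departmentid = users.departmentid
""").fetchall()
print(unbracketed)  # Alex appears once per department -- join ignored
print(bracketed)    # [('Alex', 'IT')] -- join enforced
```

This AND-over-OR precedence is the same in MySQL, SQL Server, and the SQL standard, which is why the bracketed version behaves.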
First, use explicit `join` conditions with an `on` clause. If you are learning SQL, you should learn it right. ``` SELECT * FROM users JOIN departments ON departments.departmentid = users.departmentid WHERE name LIKE '%Alex%' OR lastname LIKE '%Alex%' OR email LIKE '%Alex%'; ``` That will fix your problem, which also could have been fixed by using parentheses to group the comparisons. But, fixing the `join` is the right thing to do.
mySQL search function returning invalid results
[ "", "mysql", "sql", "phpmyadmin", "" ]
Hi, I am new to Android programming. I have made an HTTP POST request to get JSON data from an external SQL database and displayed the result in a ListView. I want to be able to retrieve the string value from a clicked item in the ListView. Any help with this will be much appreciated.
I would try something like this, which has worked for me in the past: ``` String itemValue = (String) listView.getItemAtPosition(position); ``` This is from within the `listView.setOnItemClickListener(new OnItemClickListener()` portion of your code. If your strings are aggregated into one string, then try this: ``` //Let itemValue = "item1 item2 item3" for example: String[] parts = itemValue.split(" "); String part1 = parts[0]; // item1 String part2 = parts[1]; // item2 ```
Set an OnItemClickListener on the ListView. See below. ``` listView.setOnItemClickListener(new AdapterView.OnItemClickListener() { @Override public void onItemClick(AdapterView<?> adapterView, View view, int position, long l) { String itemString=listView.getSelectedItem().toString(); } }); ``` Enjoy!!!...
get string value from item clicked in a listView
[ "", "android", "mysql", "sql", "listview", "onitemclick", "" ]
I want to sort a table in descending order based on the maximum value of id for each name but each set of rows by name itself should be sorted in ascending order. I have sorted my table descending on id. ``` id | name | other cloumns ----|------|--------------- 11 | yv | 10 | abc | 9 | abc | 8 | zx | 7 | tv | 6 | tv | 5 | tv | 4 | yv | 3 | yv | ``` I want it to be sorted as ``` id | name | other cloumns ----|------|--------------- 3 | yv | /*yv is on top because it had max id i.e. 11*/ 4 | yv | 11 | yv | 9 | abc | /*abc is second because it has 10*/ 10 | abc | 8 | zx | /*zx is third because it has 8*/ 5 | tv | /*tv is fourth because it has 7*/ 6 | tv | 7 | tv | ``` How can I do that? I am using MySQL
You want: ``` order by name desc, id asc ``` You can put multiple keys into the `order by`. The first key is used for the sorting. When the key values are the same, the second gets used, and so on. EDIT: I see, you want the name whose maximum id is largest to come first. For this, use a `join`: ``` select t.* from table t join (select name, max(id) as maxid from table t group by name ) n on t.name = n.name order by n.maxid desc, t.id ```
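A sketch of the join-to-max approach with Python's built-in `sqlite3`, using the question's rows (table name shortened to `t`); note the outer sort must be *descending* on the group maximum to produce the asked-for order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (11, 'yv'), (10, 'abc'), (9, 'abc'), (8, 'zx'),
    (7, 'tv'), (6, 'tv'), (5, 'tv'), (4, 'yv'), (3, 'yv'),
])
# Groups ordered by their max(id) descending; rows inside each group ascending.
rows = con.execute("""
    SELECT t.id, t.name
    FROM t
    JOIN (SELECT name, MAX(id) AS maxid FROM t GROUP BY name) n
      ON t.name = n.name
    ORDER BY n.maxid DESC, t.id ASC
""").fetchall()
print([r[0] for r in rows])  # [3, 4, 11, 9, 10, 8, 5, 6, 7]
```

The id sequence matches the expected output: the `yv` block first (its max is 11), then `abc` (10), `zx` (8), and `tv` (7), each block ascending by id.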
you can combine **Order by** with **Union** like this: ``` select * from (select * from table1 order by id where name='yv') union all select * from (select * from table1 order by id where name='abc') union all select * from (select * from table1 order by id where name='zx') union all select * from (select * from table1 order by id where name='tv') ``` **EDIT** ``` select tt.id,tt.name from (select max(t.id) tid,t.name from table1 t group by t.name order by tid desc) tb join (select * from table1 tb1 order by tb1.id) tt on tb.name=tt.name ```
Order and reverse order by group?
[ "", "mysql", "sql", "" ]
I've been able to get the query result I need using the following raw SQL: ``` select `person`.`id`, `full_name`, count(actions.user_id) as total from `persons` left join `actions` on `actions`.`person_id` = `persons`.`id` and `actions`.`user_id` = $user where `type` = 'mp' group by `persons`.`id` ``` But I haven't been able to get it working in Eloquent yet. Based on some similar answers, I've tried functions within `->where()` or `leftJoin()`, but the `count` of each person's actions isn't yet being filtered by `$user`. As it stands: ``` $query = Person::leftJoin('actions', function($q) use ($user) { $q->on('actions.person_id', 'persons.id') ->where('actions.user_id', $user); }) ->groupBy('persons.id') ->where('type', 'foo') //->where('actions.user_id', '=', $user) ->get(['persons.id', 'full_name', DB::raw('count(actions.id) as total')]); ``` I'm at least heading in *roughly* the right direction, right...? If it's relevant, the `Persons.php` model has two `actions` relationships: ``` public function actions() { return $this->hasMany('Action'); } public function actionsUser($id) { return $this->hasMany('Action')->where('user_id', $id); } ```
So, for reference, I solved it like so: ``` $query = Person::leftJoin('actions', function($q) use ($user) { $q->on('actions.person_id', '=', 'persons.id') ->where('actions.user_id', '=', "$user"); }) ->groupBy('persons.id') ->where('type', 'foo') ->get(['persons.id', 'full_name', DB::raw('count(actions.id) as total')]); ``` The `->where()` clause within `leftJoin`, oddly, needs the speech marks for the variable to be passed through the sql query correctly (likewise, '2' doesn't seem to work while "2" does).
I found that the `where` doesn't always work on the `leftJoin` clause If in the future you get any trouble with it, I'd suggest you using this: ``` $query = Person::leftJoin('actions', function($q) use ($user) { $q->on('actions.person_id', '=', 'persons.id') ->on('actions.user_id', '=', "$user"); }) ->groupBy('persons.id') ->where('type', 'foo') ->get(['persons.id', 'full_name', DB::raw('count(actions.id) as total')]); ``` Hope it helps someone.
mysql join ON and AND to laravel eloquent
[ "", "mysql", "sql", "laravel", "laravel-4", "eloquent", "" ]
I want to create a view of my database. In fact, my database contains around 20 tables and I want to join the tables based on the foreign keys, but there are some duplicate columns. Is there an easy and fast way to do it?
Here's something that might help. You'll have to list out the columns if you do not want the dupes. You can run **[SQL Fiddle Example](http://sqlfiddle.com/#!2/7913a/15)** ``` select group_concat(concat(table_name,'.', column_name) separator ', ') from information_schema.columns where table_name in ('Students', 'ClassRoster'); ``` This will list [table\_name].[column\_name] for all the columns in the tables you specify in one long string. You can then copy the result and paste it next to the `SELECT` in your query and remove all the dupe columns if you know them. This query will show columns where there is more than one and show the table as well. ``` select table_name, column_name from INFORMATION_SCHEMA.COLUMNS where column_name in ( select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS group by COLUMN_NAME having count(1) > 1) order by column_name ``` It still is manual but will save you some time. With a little more effort you could tweak the queries to where the whole process is just one query and not manually remove anything.
You should use aliases when joining, like this: ``` Select E.Name , U.Name,E.Age,U.Age From TblEmployee E inner join TblUser U on E.EmployeeId = U.EmployeeId ``` **Note:** the above query is only an example so that you can go ahead. **Edit:** you can also name your columns whatever you want, like this: ``` Select E.* , U.*, E.Age as Employee Age ,U.Age as UserAge /* you can specify some column by E.Name and if you do not want that, then simply do E.* ; by doing this every column of that table will be selected */ From TblEmployee E inner join TblUser U on E.EmployeeId = U.EmployeeId ```
Join many tables sql
[ "", "mysql", "sql", "join", "" ]
What I need is to run a SQL query something like: ``` select * from table where alpahbetcolumn="A" and numbercolumn="10" and shelfcolumn="upper"; ``` I want to know how to do this query in Hibernate using `EntityManager`. Currently this is my own attempt, but it is not working: ``` @PersistenceContext EntityManager em; @Transactional public List<Item> listItems(String alpahbet, String number, String shelf) { CriteriaQuery<Item> c = em.getCriteriaBuilder().createQuery(Item.class); c.from(Item.class); c..where( em.equal( alpahbet, "alpahbetcolumn" ) && em.equal( number, "numbercolumn" ) && em.equal( shelf, "shelfcolumn" )); return em.createQuery(c).getResultList(); } ``` I only have a very vague understanding of the Spring/Hibernate topic and am still learning. Can someone please show me how to do this SQL query properly, with a code example? Thanks.
i have worked out using criteriaquery to fullfill my needs, here is the code. it works ``` @Transactional public List<Item> listItems(String alpahbet, String number, String shelf) { CriteriaBuilder criteriaBuilder = em.getCriteriaBuilder(); CriteriaQuery<Item> criteriaQuery = criteriaBuilder.createQuery(Item.class); Root<Item> itemRoot = criteriaQuery.from(Item.class); criteriaQuery.select(itemRoot).where(criteriaBuilder.equal(itemRoot.get("alpahbetField"), alpahbet), criteriaBuilder.equal(itemRoot.get("numberField"), number), criteriaBuilder.equal(itemRoot.get("shelfField"), shelf)); return em.createQuery(criteriaQuery).getResultList(); } ```
Try this ``` Query q = em.createNativeQuery("select * from table where alpahbetcolumn='A' and numbercolumn= 10 and shelfcolumn='upper'"); q.getResultList(); ``` `createNativeQuery()` accepts plain SQL as parameter. If you expect `Item` as result, you can use this ``` em.createNativeQuery("select * from table where alpahbetcolumn='A' and numbercolumn= 10 and shelfcolumn='upper'", Item.class); ``` If you want to use JPQL (JPA Query Language), then we need your entity code to be sure, but it would be something like this ``` em.createQuery("select i from Item i where i.alphabetColumn = 'A' "); ``` For parameterized queries, use this ``` Query q = em.createNativeQuery("select * from table where alpahbetcolumn=? and numbercolumn=? and shelfcolumn=?"); q.setParameter(1, "A"); q.setParameter(2, 10); q.setParameter(3, "upper"); ```
spring mvc hiberate, use CriteriaQuery to run select * from table where
[ "", "sql", "hibernate", "spring-mvc", "criteria", "hibernate-criteria", "" ]
Our application has a table called cargo\_items. It can be seen as a kind of queue for processing these items later. Initially there was a SINGLE job which took 3000 entries and processed them one after another. Later on, somebody decided to start 3 other instances of the same job. What happened is quite obvious: many of the items were processed twice. My job is to make these processes work correctly when many instances are running at the same time. The solution I am going for now is to mark 3000 entries in the database with a job\_id, and later on fetch all of these entities and process them in isolation from the other processes. My current approach for flagging these rows is the following: ``` UPDATE cargo_item item SET job_id = 'SOME_UUID', job_ts = now() FROM ( SELECT id FROM cargo_item WHERE state='NEW' AND job_id is null LIMIT 3000 FOR UPDATE ) sub WHERE item.id = sub.id; ``` Basically this approach locks the 3000 rows for an update. I am not sure if it is a good approach, though. On another [thread](https://dba.stackexchange.com/questions/69471/postgres-update-limit-1) I read about using advisory locks for this scenario. What do you guys think about the current approach and about using advisory locks instead? ## **UPDATE** As suggested, I'd adapt the update statement like this: ``` UPDATE cargo_item item SET job_id = 'SOME_UUID', job_ts = now() FROM ( SELECT id FROM cargo_item WHERE state='NEW' AND job_id is null ORDER BY id LIMIT 3000 FOR UPDATE ) sub WHERE item.id = sub.id; ``` Thx Erwin and Tometzky for the hint. Nevertheless, I wonder whether the way I am trying to solve the problem is a good one. Are there different approaches you'd think of?
In the related answer you are referring to: * [Postgres UPDATE ... LIMIT 1](https://dba.stackexchange.com/a/69497/3684) The objective is to lock ***one*** row at a time. This works fine with or without advisory locks, because there is **no chance for a deadlock** - as long as you don't try to lock more rows in the same transaction. Your example is different in that you want to lock **3000 rows at a time**. There ***is*** potential for deadlock, except if all concurrent write operations lock rows in the same consistent order. [Per documentation:](http://www.postgresql.org/docs/current/interactive/explicit-locking.html#LOCKING-DEADLOCKS) > The best defense against deadlocks is generally to avoid them by being > certain that all applications using a database acquire locks on > multiple objects in a consistent order. Implement that with an ORDER BY in your subquery. ``` UPDATE cargo_item item SET job_id = 'SOME_UUID', job_ts = now() FROM ( SELECT id FROM cargo_item WHERE state='NEW' AND job_id is null ORDER BY id LIMIT 3000 FOR UPDATE ) sub WHERE item.id = sub.id; ``` This is safe and reliable, as long as *all* transactions acquire locks in the same order and concurrent updates of the ordering columns are not to be expected. ([Read the yellow "CAUTION" box at the end of this chapter in the manual](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-FOR-UPDATE-SHARE).) So this should be safe in your case, since you are not going to update the `id` column. Effectively only one client at a time can manipulate rows this way. Concurrent transactions would try to lock the same (locked) rows and wait for the first transaction to finish. **Advisory locks** are useful if you have many or very long running concurrent transactions (doesn't seem you do). With only a few, it will be cheaper overall to just use above query and have concurrent transactions wait for their turn. ## All in one UPDATE It seems concurrent access isn't a problem per se in your setup. 
Concurrency is an issue created by your current solution. Instead, do it **all in a single `UPDATE`**. Assign batches of `n` numbers (3000 in the example) to each UUID and update all at once. Should be fastest. ``` UPDATE cargo_item c SET job_id = u.uuid_col , job_ts = now() FROM ( SELECT row_number() OVER () AS rn, uuid_col FROM uuid_tbl WHERE <some_criteria> -- or see below ) u JOIN ( SELECT ((row_number() OVER () - 1) / 3000) + 1 AS rn, id AS item_id FROM cargo_item WHERE state = 'NEW' AND job_id IS NULL FOR UPDATE -- just to be sure ) c2 USING (rn) WHERE c2.item_id = c.id; ``` ### Major points * Integer division truncates. You get 1 for the first 3000 rows, 2 for the next 3000 rows, etc. * I pick rows arbitrarily; you could apply `ORDER BY` in the window for `row_number()` to assign certain rows. * If you don't have a table of UUIDs to dispatch (`uuid_tbl`), use a `VALUES` expression to supply them. [Example.](https://stackoverflow.com/questions/22049710/postgres-regular-expression-for-a-number-of-elements/22049856#22049856) * You get batches of 3000 rows. The last batch will be short of 3000 if you don't find a multiple of 3000 to assign.
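The batch-numbering idea (integer division over `row_number()`) can be sketched with Python's built-in `sqlite3` (needs SQLite 3.25+ for window functions). Batches of 3 stand in for batches of 3000, a plain text label stands in for the UUID, and the `uuid_tbl` side is omitted:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cargo_item (id INTEGER PRIMARY KEY, job_id TEXT)")
con.executemany("INSERT INTO cargo_item (id) VALUES (?)",
                [(i,) for i in range(1, 10)])  # 9 unclaimed rows

# (rn - 1) / 3 is integer division, so rows 1-3 get batch 1,
# rows 4-6 batch 2, rows 7-9 batch 3 -- all in one statement.
con.execute("""
    UPDATE cargo_item
    SET job_id = (
        SELECT 'batch-' || ((b.rn - 1) / 3 + 1)
        FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
              FROM cargo_item) b
        WHERE b.id = cargo_item.id
    )
    WHERE job_id IS NULL
""")
rows = con.execute("""
    SELECT job_id, COUNT(*) FROM cargo_item
    GROUP BY job_id ORDER BY job_id
""").fetchall()
print(rows)  # [('batch-1', 3), ('batch-2', 3), ('batch-3', 3)]
```

SQLite has no row locks, so the `FOR UPDATE` part of the PostgreSQL answer has no equivalent here; the sketch only illustrates the single-statement batch assignment.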
You will have deadlocks with this approach. You could avoid them by simply using `order by id` in subquery. But it will prevent any concurrent running of this queries, as concurrent queries will always try first to mark the lowest free id, and block until the first client will commit. I don't think this is a problem if you process say less than one batch per second. You don't need advisory locks. Avoid them if you can.
How to mark certain nr of rows in table on concurrent access
[ "", "sql", "postgresql", "locking", "sql-update", "" ]
I have 3 tables; this is the SQL Fiddle demo: <http://www.sqlfiddle.com/#!15/89ac5/3/0> ``` create table entities (id int, credit int, debit int, value int,etype int, date date); insert into entities values (1,101,100,5000,1,'01/01/2014'), (1,101,100,1000,2,'01/01/2014'), (1,102,100,2000,1,'01/01/2014'), (1,102,100,4000,2,'01/01/2014'); create table accounts (id int, name varchar(20)); insert into accounts values (100, 'Clinic'), (101, 'Mark'), (102, 'Jone'); create table etype (id int, name varchar(20)); insert into etype values (1, 'Medicine'), (2, 'Diagnoise'); ``` --- When I run this query: ``` select e.id, credit_account.name as CreditName, debit_account.name as DebitName, t.name, e.date from entities e join accounts as credit_account on e.credit = credit_account.id join accounts as debit_account on e.debit = debit_account.id Join etype as t on e.etype = t.id ``` --- I have this result: ``` ID CREDITNAME DEBITNAME VALUE NAME DATE 1 Mark Clinic 5000 Medicine January, 01 2014 00:00:00+0000 2 Mark Clinic 1000 Diagnoise January, 01 2014 00:00:00+0000 3 Jone Clinic 2000 Medicine January, 01 2014 00:00:00+0000 4 Jone Clinic 4000 Diagnoise January, 01 2014 00:00:00+0000 ``` --- Finally, I want a view to show this result: ``` ID CREDITNAME DEBITNAME Medicine Diagnoise DATE 1 Mark Clinic 5000 1000 January, 01 2014 00:00:00+0000 2 Jone Clinic 2000 4000 January, 01 2014 00:00:00+0000 ``` Can we make it dynamic? For example, if we add 'Lab' as a new etype.
You can use conditional aggregation: ``` select row_number() over (order by creditname, debitname) as id, creditname, debitname, sum(case when name = 'Medicine' then value end) as medicine, sum(case when name = 'Diagnoise' then value end) as diagnoise, date from table t group by creditname, debitname, date; ``` EDIT: Based on your query: ``` select row_number() over (order by creditname, debitname) as id, credit_account.name as CreditName, debit_account.name as DebitName, sum(case when t.name = 'Medicine' then e.value end) as medicine, sum(case when t.name = 'Diagnoise' then e.value end) as diagnoise, e.date from entities join accounts credit_account on e.credit = credit_account.id join accounts debit_account on e.debit = debit_account.id Join etype t on e.etype = t.id group by credit_account.name, debit_account.name, e.date ```
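A sketch of the conditional aggregation with Python's built-in `sqlite3`, including a dynamic variant that generates one `CASE` branch per `etype` row, which is what the "if we add 'Lab'" request amounts to. The interpolated names are fine for a trusted lookup table, but not for user input:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entities (credit INT, debit INT, value INT, etype INT);
INSERT INTO entities VALUES (101,100,5000,1),(101,100,1000,2),
                            (102,100,2000,1),(102,100,4000,2);
CREATE TABLE accounts (id INT, name TEXT);
INSERT INTO accounts VALUES (100,'Clinic'),(101,'Mark'),(102,'Jone');
CREATE TABLE etype (id INT, name TEXT);
INSERT INTO etype VALUES (1,'Medicine'),(2,'Diagnoise');
""")

def pivot(con):
    # One SUM(CASE ...) column per etype row, so inserting 'Lab' into
    # etype automatically adds a column on the next call.
    names = [r[0] for r in con.execute("SELECT name FROM etype ORDER BY id")]
    cases = ", ".join(
        f"SUM(CASE WHEN t.name = '{n}' THEN e.value END) AS [{n}]" for n in names
    )
    return con.execute(f"""
        SELECT ca.name, da.name, {cases}
        FROM entities e
        JOIN accounts ca ON e.credit = ca.id
        JOIN accounts da ON e.debit  = da.id
        JOIN etype t     ON e.etype  = t.id
        GROUP BY ca.name, da.name
        ORDER BY 3 DESC
    """).fetchall()

print(pivot(con))
# [('Mark', 'Clinic', 5000, 1000), ('Jone', 'Clinic', 2000, 4000)]
```

In PostgreSQL the same dynamic pivot would be built with `format()` in a PL/pgSQL function, or with the `tablefunc` extension's `crosstab`.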
You can do: ``` select distinct e.id, credit_account.name as CreditName, debit_account.name as DebitName, (select e1.value from entities e1 where e1.etype = 1 //Medicine type and e1.credit = e.credit) as Medicine, (select e1.value from entities e1 where e1.etype = 2 //Diagnoise type and e1.credit = e.credit) as Diagnoise, e.date from entities e join accounts as credit_account on e.credit = credit_account.id join accounts as debit_account on e.debit = debit_account.id ``` Of course if you don't want to hardcode the types with 1 and 2 you can modify the sub queries with: ``` select e1.value from entities e1, etypes et where e1.etype = et.id and et.name = 'Medicine' and e1.credit = e.credit ``` But I don't think this is the best way to do it.
Show same columns with different values in same row
[ "", "sql", "postgresql-9.3", "" ]
I want to compare row count of two tables and then return `0` or `1` depending on whether its same or not. I am thinking of something like this but can't move ahead and need some help. ``` SELECT CASE WHEN (select count(*) from table1)=(select count(*) from table2) THEN 1 ELSE 0 END AS RowCountResult FROM Table1,Table2 ``` I am getting multiple rows instead of a single row with `0` or `1`
you have to remove : ``` FROM Table1,Table2 ``` Otherwise it will consider the result of the Case-When for each row of this FROM clause.
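Since the question is tagged sqlite, the fixed statement is easy to verify directly from Python (table names and sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (x INT);
CREATE TABLE table2 (x INT);
INSERT INTO table1 VALUES (1), (2);
INSERT INTO table2 VALUES (1), (2);
""")
# No FROM clause: the two scalar subqueries supply the counts,
# so exactly one row comes back.
sql = """
    SELECT CASE WHEN (SELECT COUNT(*) FROM table1)
                   = (SELECT COUNT(*) FROM table2)
           THEN 1 ELSE 0 END AS RowCountResult
"""
same = con.execute(sql).fetchone()[0]
con.execute("INSERT INTO table2 VALUES (3)")   # now 2 vs 3 rows
differ = con.execute(sql).fetchone()[0]
print(same, differ)  # 1 0
```

With the original `FROM Table1,Table2` left in, the same CASE would be evaluated once per row of the cross product, which is exactly the multi-row output the question describes.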
Or Simply remove the CASE WHEN CLAUSE and write: ``` SELECT (SELECT count(*) from table1)=(SELECT count(*) from table2) AS RowCountResult; ``` as a boolean result will be returned.
Compare row count of two tables in a single query and return boolean
[ "", "sql", "database", "sqlite", "" ]
I have the following query ``` Select Date, Item_Code, SUM(In_Quantity) as In_Quantity, SUM(Issue_Quantity) as Issue_Quantity, (SUM(In_Quantity) - SUM(issue_Quantity)) as BalanceQty from (select tbl_add_product.Date as Date, tbl_add_product.Item_Code, tbl_add_product.In_Quantity, 0 as Issue_Quantity from tbl_add_product where Item_Code = 'pen' union ALL select tbl_issue_product.Date as Date, tbl_issue_product.Item_Code, 0 as In_Quantity, Issue_Quantity from tbl_issue_product where Item_Code = 'pen') X group by Item_Code, Date ``` which returns the following result: ``` Date Item_Code In_Quantity Issue_Quanitity BalanceQty -------------------------------------------------------------- 2014-12-02 pen 100 0 100 2014-12-03 pen 50 50 0 ``` I want 100 in the second row. The logic is that the balance quantity from the first row should be carried forward and added to `In_Qty`, so that when `Issue_Quantity` is subtracted, it gives `BalanceQty` as a running balance.
Wrap it in another subquery and then use: ``` SELECT *, SUM(BalanceQty) OVER (PARTITION BY Item_Code ORDER BY [Date]) FROM ( ... ) o ```
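A sketch of the windowed running sum with Python's built-in `sqlite3` (needs SQLite 3.25+ for window functions; column names shortened from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT, item TEXT, inq INT, outq INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    ('2014-12-02', 'pen', 100, 0),
    ('2014-12-03', 'pen', 50, 50),
])
# SUM(...) OVER (PARTITION BY item ORDER BY d) accumulates each day's
# net movement, carrying the balance forward row by row.
rows = con.execute("""
    SELECT d, item, inq, outq,
           SUM(inq - outq) OVER (PARTITION BY item ORDER BY d) AS BalanceQty
    FROM t
    ORDER BY d
""").fetchall()
print(rows)
# [('2014-12-02', 'pen', 100, 0, 100), ('2014-12-03', 'pen', 50, 50, 100)]
```

The second row's balance is 100, as the question asks: the first day's 100 carries forward and the second day's 50 in / 50 out nets to zero.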
``` --variant using LAG() OVER () function SELECT [Date] , Item_Code , SUM(In_Quantity) AS In_Quantity , SUM(Issue_Quantity) AS Issue_Quantity , ( SUM(In_Quantity) - SUM(issue_Quantity) ) + COALESCE(LAG(SUM(In_Quantity) - SUM(issue_Quantity)) OVER ( PARTITION BY item_code ORDER BY [date] ), 0) AS BalanceQty FROM ( SELECT tbl_add_product.Date AS Date , tbl_add_product.Item_Code , tbl_add_product.In_Quantity , 0 AS Issue_Quantity FROM tbl_add_product WHERE Item_Code = 'pen' UNION ALL SELECT tbl_issue_product.Date AS Date , tbl_issue_product.Item_Code , 0 AS In_Quantity , Issue_Quantity FROM tbl_issue_product WHERE Item_Code = 'pen' ) X GROUP BY Item_Code , Date ```
Getting grouped by record value in second row
[ "", "sql", "sql-server", "t-sql", "" ]
I'm still very new to SQL queries and can't quite figure this one out. I have two tables. On one table I'm running a general `SELECT ... WHERE`, a super easy SQL statement. Ex: ``` SELECT * from maindata where somedata4 LIKE "%data4.%" ``` This gives me back a list of all 6 entries below, however I want an additional column to show me if the current `userdata.userId` has a matching row and to include the amount column of that. If it doesn't have that row, default to a value of 0. **Table: maindata** ``` id | somedata | somedata2 | somedata3 | somedata4 _________________________________________________ 1 | data1.1 | data2.1 | data3.1 | data4.1 2 | data1.2 | data2.2 | data3.2 | data4.2 3 | data1.3 | data2.3 | data3.3 | data4.3 4 | data1.4 | data2.4 | data3.4 | data4.4 5 | data1.5 | data2.5 | data3.5 | data4.5 6 | data1.6 | data2.6 | data3.6 | data4.6 ``` **Table: userdata** ``` id | itemId | amount | userId _____________________________ 1 | 6 | 4 | 1 2 | 4 | 4 | 26 3 | 4 | 2 | 1 ``` It should search table **maindata** for `WHERE somedata4 LIKE "%data4.%"` and on each of those entries look in table **userdata** for `userdata.amount` with `maindata.id = userdata.itemId WHERE maindata.userId = 1` Here's what I currently have for SQL ``` SELECT m.*, IFNULL(u.amount,0) from maindata m LEFT OUTER JOIN userdata u ON m.id = u.itemId WHERE m.somedata4 LIKE "%data4.%" ``` What I'm missing is filtering the amounts to only `userdata.userId = 1`; I want the entire list to show as it does in that query. Expected Results: ``` id | somedata | somedata2 | somedata3 | somedata4 | amount __________________________________________________________ 1 | data1.1 | data2.1 | data3.1 | data4.1 | 0 2 | data1.2 | data2.2 | data3.2 | data4.2 | 0 3 | data1.3 | data2.3 | data3.3 | data4.3 | 0 4 | data1.4 | data2.4 | data3.4 | data4.4 | 4 5 | data1.5 | data2.5 | data3.5 | data4.5 | 0 6 | data1.6 | data2.6 | data3.6 | data4.6 | 2 ```
Here is the SQL Query I was looking for (I think): ``` SELECT m.*, IFNULL(u.amount,0) AS "Amount" FROM maindata m LEFT OUTER JOIN userdata u ON m.id = u.itemId AND userid = 1 WHERE m.somedata4 LIKE "%data4.%" ``` It's giving me the desired results listed above, I just don't know if it's the most efficient way of handling this request. [SQLFiddle Here](http://sqlfiddle.com/#!2/0dc82/1)
``` SELECT m.*, IFNULL(u.amount,0) from maindata m LEFT OUTER JOIN userdata u ON m.id = u.itemId WHERE m.userId = 1 ```
MySQL Join and Default
[ "", "mysql", "sql", "" ]
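The accepted answer above hinges on putting the `userid = 1` filter inside the join's `ON` clause rather than in `WHERE`, so unmatched `maindata` rows survive with a defaulted amount of 0. A minimal runnable sketch of that pattern using Python's built-in sqlite3 (table and column names follow the question; `IFNULL` behaves the same in SQLite as in MySQL, and only a subset of the sample rows is loaded):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE maindata (id INTEGER, somedata4 TEXT);
CREATE TABLE userdata (id INTEGER, itemId INTEGER, amount INTEGER, userId INTEGER);
INSERT INTO maindata VALUES (4, 'data4.4'), (5, 'data4.5'), (6, 'data4.6');
INSERT INTO userdata VALUES (1, 6, 4, 1), (2, 4, 4, 26), (3, 4, 2, 1);
""")

# The userId filter lives in ON, so maindata rows with no match for
# that user are kept by the LEFT JOIN and IFNULL turns them into 0.
rows = conn.execute("""
    SELECT m.id, IFNULL(u.amount, 0)
    FROM maindata m
    LEFT OUTER JOIN userdata u
      ON m.id = u.itemId AND u.userId = 1
    WHERE m.somedata4 LIKE '%data4.%'
    ORDER BY m.id
""").fetchall()
print(rows)  # [(4, 2), (5, 0), (6, 4)]
```

Moving `u.userId = 1` into `WHERE` would instead discard the unmatched rows entirely, which is exactly the difference between the two queries discussed above.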
In MySql I need to select top student in every class in a school in `termid=10` to get discount for next term enrollment . Please notice that total is not in table(I put in below for clearing problem) I have this workbook table for all students workbook: ``` id studentid classid exam1 exam2 total termid 1 2 11 20 40 60 10 2 1 22 40 20 60 10 3 4 11 40 20 60 10 4 5 33 10 60 70 10 5 7 22 10 40 50 10 6 8 11 10 30 40 10 7 9 33 20 45 65 10 8 11 11 null null null 10 9 12 54 null null null 02 10 13 58 null null null 02 ``` 1st challenge is : `exam1` and `exam2` are `VARCHAR` and total is not in table (as i explained). 2nd challenge is : as you can see in `id=8` std #11 has not numbers 3rd challenge is : may be two students have top level so they must be in result. I need result as : ``` id studentid classid exam1 exam2 total termid 1 2 11 20 40 60 10 3 4 11 40 20 60 10 4 5 33 10 60 70 10 2 1 22 40 20 60 10 ``` i have this query but not work good as i mention. ``` SELECT DISTINCT id,studentid,classid,exam1,exam2,total,termid ,(CAST(exam1 AS DECIMAL(9,2))+CAST(exam2 AS DECIMAL(9,2))) FROM workbook WHERE ClassId = '10'; ```
Another approach is to join the table with itself. You find out the max for each class and then join all students of this class which match the class max: max for each class (included in the final statement already): ``` SELECT classid, MAX(CAST(exam1 AS UNSIGNED) + CAST(exam2 AS UNSIGNED)) as 'maxtotal' FROM students WHERE NOT ISNULL(exam1) AND NOT ISNULL(exam2) GROUP BY classid ``` The complete statement: ``` SELECT s2.*, s1.maxtotal FROM (SELECT classid, MAX(CAST(exam1 AS UNSIGNED) + CAST(exam2 AS UNSIGNED)) as 'maxtotal' FROM students WHERE NOT ISNULL(exam1) AND NOT ISNULL(exam2) GROUP BY classid) s1 JOIN students s2 ON s1.classid = s2.classid WHERE s1.maxtotal = (CAST(s2.exam1 AS UNSIGNED) + CAST(s2.exam2 AS UNSIGNED)); ``` SQL Fiddle: <http://sqlfiddle.com/#!2/9f117/1>
You can get the total for the students by just adding the values (MySQL will convert the values to numbers). The following gets the max total for each class: ``` select w.classid, max(coalesce(w.exam1, 0) + coalesce(w.exam2, 0)) as maxtotal from workbook w group by w.classid; ``` You can then `join` this back to the original data to get information about the best students: ``` select w.*, coalesce(w.exam1, 0) + coalesce(w.exam2, 0) as total from workbook w join (select w.classid, max(coalesce(w.exam1, 0) + coalesce(w.exam2, 0)) as maxtotal from workbook w group by w.classid ) ww on w.classid = ww.classid and (coalesce(w.exam1, 0) + coalesce(w.exam2, 0)) = ww.maxtotal; ```
Mysql best students in every class in a school
[ "", "mysql", "sql", "" ]
I have to write the script that will pull IDs of all members that email has been hard bounced. In order to do it I wrote this ``` Select id FROM Members m Join tbl_memberlanguageid mli on m.ID = mli.MLI_MemberID Where Cast(dateCreated as date) >= '01-Dec-2014' and mli.MLI_LanguageID = 3 and EmailHardBounces = 1 ``` Then I need to use this list of IDs to remove them from the table Members. How can I write this script so that it removes everything related to ID from Members if the ID is in that list ?
Another way: ``` DELETE FROM Members WHERE ID IN(Select id FROM Members m Join tbl_memberlanguageid mli on m.ID = mli.MLI_MemberID Where Cast(dateCreated as date) >= '01-Dec-2014' and mli.MLI_LanguageID = 3 and EmailHardBounces = 1) ```
You can just change the `Select id` to `Delete m`: ``` BEGIN TRAN DELETE m FROM Members m JOIN tbl_memberlanguageid mli ON m.ID = mli.MLI_MemberID WHERE CAST(dateCreated AS DATE) >= '01-Dec-2014' AND mli.MLI_LanguageID = 3 AND EmailHardBounces = 1 ROLLBACK COMMIT ``` Just confirm the record count is correct before committing the delete.
How to remove data from table if the ID is in the column
[ "", "sql", "sql-server", "remove-if", "" ]
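Both answers above reduce to "select the doomed IDs, then delete by ID". A runnable sketch of the `IN (subquery)` form with sqlite3 (simplified schema, date filter omitted for brevity; SQLite has no `DELETE ... JOIN`, which is also why the `IN` form is the more portable of the two):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Members (ID INTEGER, EmailHardBounces INTEGER);
CREATE TABLE tbl_memberlanguageid (MLI_MemberID INTEGER, MLI_LanguageID INTEGER);
INSERT INTO Members VALUES (1, 1), (2, 0), (3, 1);
INSERT INTO tbl_memberlanguageid VALUES (1, 3), (2, 3), (3, 2);
""")

# Delete every member whose ID comes back from the hard-bounce query.
conn.execute("""
    DELETE FROM Members
    WHERE ID IN (SELECT m.ID
                 FROM Members m
                 JOIN tbl_memberlanguageid mli ON m.ID = mli.MLI_MemberID
                 WHERE mli.MLI_LanguageID = 3
                   AND m.EmailHardBounces = 1)
""")
remaining = [r[0] for r in conn.execute("SELECT ID FROM Members ORDER BY ID")]
print(remaining)  # [2, 3] -- member 1 (language 3, hard bounce) is gone
```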
I am having three tables with different data and i need to insert into one TEMP table and return that table in StoredProcedure. I tried as: ``` -- To get last 10 Days Letters count SELECT col1,col2,1 AS Type, LettersCount INTO #temp FROM tblData -- To get last 4 weeks Letters count SELECT col1,col2,2 AS Type, LettersCount INTO #temp FROM tblData -- To get month wise Letters count SELECT col1,col2,3 AS Type, LettersCount INTO #temp FROM tblData ``` Showing Error as ``` Msg 2714, Level 16, State 1, Line 16 There is already an object named '#temp ' in the database. Msg 102, Level 15, State 1, Line 24 Incorrect syntax near 'T'. Msg 2714, Level 16, State 1, Line 32 There is already an object named '#temp ' in the database. ```
You can check whether it already exists or not: ``` IF OBJECT_ID ('tempdb..#TempLetters') is not null drop table #TempLetters SELECT col1,col2,1 AS Type, LettersCount INTO #TempLetters FROM tblData -- To get last 4 weeks Letters count INSERT INTO #TempLetters SELECT col1,col2,2 AS Type, LettersCount FROM tblData -- To get month wise Letters count INSERT INTO #TempLetters SELECT col1,col2,3 AS Type, LettersCount FROM tblData ```
Create the temporary table once, then insert into it for the other two SELECT statements: ``` SELECT col1, col2, 1 AS Type, LettersCount INTO #temp FROM tblData; INSERT INTO #temp SELECT col1, col2, 2 AS Type, LettersCount FROM tblData; INSERT INTO #temp SELECT col1, col2, 3 AS Type, LettersCount FROM tblData; ```
How to insert multiple select statements into a temp table
[ "", "sql", "sql-server", "database", "sql-server-2008", "select", "" ]
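The shape of the fix above, create the table once, then `INSERT INTO ... SELECT` for the remaining result sets, looks like this in sqlite3 (SQLite spells T-SQL's `SELECT ... INTO #t` as `CREATE TEMP TABLE ... AS SELECT`, and the `OBJECT_ID('tempdb..#t')` existence check is T-SQL-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblData (col1 TEXT, col2 TEXT, LettersCount INTEGER);
INSERT INTO tblData VALUES ('a', 'b', 5);

-- The first SELECT creates the temp table...
CREATE TEMP TABLE temp1 AS
    SELECT col1, col2, 1 AS Type, LettersCount FROM tblData;

-- ...the later SELECTs only insert into it.
INSERT INTO temp1 SELECT col1, col2, 2, LettersCount FROM tblData;
INSERT INTO temp1 SELECT col1, col2, 3, LettersCount FROM tblData;
""")

types = [r[0] for r in conn.execute("SELECT Type FROM temp1 ORDER BY Type")]
print(types)  # [1, 2, 3]
```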
I have two tables. Table 1 contains orders and customer codes. Table 2 contains orders with issue codes. I need to be able to return distinct order count by customer from table 1 along with a distinct count by customer of orders with issuecode of 'F' from table 2. Then the final field will be a ratio of the two. Issue count / Order count. I'm using AS400/DB2 SQL. Any help would be greatly appreciated. `Customer ORcnt IScnt IssueRatio cust1 450 37 0.082 cust2 255 12 0.047 cust3 1024 236 0.230 cust4 450 37 0.082`
You can use an `outer join` to your issues table and `count` with `distinct`. Something like this depending on your table definitions: ``` select o.customercode, count(distinct o.orderid), count(distinct i.orderid), count(distinct i.orderid)/count(distinct o.orderid) ratio from table1 o left join table2 i on o.orderid = i.orderid and i.issuecode = 'F' group by o.customercode ``` Some databases would need to convert the ratio to a decimal -- I'm not sure about `db2`. If needed, one way is to multiply the result by 1.0: `1.0*count(distinct i.orderid)/count(distinct o.orderid)` Also, you may not need the `distinct` with the `count` -- depends on your data...
I'm doing this w/ a few subqueries to make it more readable in terms of the description of the problem you presented. sgeddes' solution probably also works (and might perform better) depending on the precise structure of your data. ``` SELECT t.customer, count(t.orderID_All), count(t.orderID_F), count(t.orderID_F)/count(t.orderID_All) FROM (SELECT orders.customer, orders.orderID AS orderID_All, issues.orderID AS orderID_F /*assuming primary/unique key is customer-orderID*/ FROM table1 orders LEFT OUTER JOIN /*you want *all* orders on the left and just orders w/ 'F' on the right*/ (SELECT DISTINCT orderID FROM table2 WHERE issuecode = 'F') issues ON orders.orderID = issues.orderID) t GROUP BY t.customer; ```
SQL two tables distinct counts with join
[ "", "sql", "db2", "ibm-midrange", "" ]
``` SELECT * FROM Fruit INNER JOIN Apple ON Fruit.Id = Apple.FruitId WHERE Apple.Type = 1 AND Apple.Type = 3 ``` I need to get unique rows of Fruit that have both Apples that are of type 1 AND 3. Apple.Type is considered unique, but I wouldn't think it matters though. With these rows, this should return two rows with both Fruit #50 and #52. The most important part is the Fruit.Id, I don't need to return the Types, but just need to make sure every single Fruit returned has at least one Apple.Type = 1 and one Apple.Type = 3. ``` Apple { Id = 1, FruitId = 50, Type = 0 } Apple { Id = 2, FruitId = 50, Type = 1 } Apple { Id = 3, FruitId = 50, Type = 3 } Apple { Id = 4, FruitId = 51, Type = 1 } Apple { Id = 5, FruitId = 51, Type = 2 } Apple { Id = 6, FruitId = 52, Type = 3 } Apple { Id = 7, FruitId = 52, Type = 1 } Apple { Id = 8, FruitId = 52, Type = 2 } Fruit { Id = 50 } Fruit { Id = 51 } Fruit { Id = 52 } ``` I'm not quite sure how to use **DISTINCT** and/or **GROUP BY** in order to form this query.
Group your apples table by fruit id and pick the results that have both desired types. Use this to get your fruits. ``` SELECT * FROM Fruit WHERE id IN ( SELECT FruitId FROM Apple WHERE Type IN (1,3) GROUP BY FruitId HAVING COUNT(DISTINCT Type) = 2 ); ```
This would return the fruits with ID 50 and 52. ``` SELECT * FROM Fruit WHERE EXISTS ( SELECT 1 FROM Apple WHERE Type = 1 AND Apple.FruitId = Fruit.Id ) AND EXISTS ( SELECT 1 FROM Apple WHERE Type = 3 AND Apple.FruitId = Fruit.Id ) ```
How to retrieve unique rows where multiple children that reference it exist for different types?
[ "", "sql", "sql-server", "group-by", "distinct", "" ]
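The `GROUP BY ... HAVING COUNT(DISTINCT Type) = 2` pattern in the accepted answer is a standard relational-division trick, and it is easy to sanity-check with sqlite3 against the exact sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Apple (Id INTEGER, FruitId INTEGER, Type INTEGER);
INSERT INTO Apple VALUES (1,50,0),(2,50,1),(3,50,3),(4,51,1),
                         (5,51,2),(6,52,3),(7,52,1),(8,52,2);
""")

# Keep only fruits that have BOTH type 1 and type 3 apples.
fruit_ids = [r[0] for r in conn.execute("""
    SELECT FruitId
    FROM Apple
    WHERE Type IN (1, 3)
    GROUP BY FruitId
    HAVING COUNT(DISTINCT Type) = 2
    ORDER BY FruitId
""")]
print(fruit_ids)  # [50, 52] -- fruit 51 has type 1 but not type 3
```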
I would like to get the ID from the query, but I am getting a NULL. Where is my mistake? ``` DECLARE @TblZimz NVARCHAR(256) DECLARE @IdModul INTEGER DECLARE @Id INTEGER SET @TblZimz = '_ZIMZ000001' SET @IdModul = 1 --SET @Id = -1 EXECUTE [InsertZimz] @TblZimz, @IdModul, @Id OUTPUT ALTER PROCEDURE [InsertZimz] @TblZimz NVARCHAR(256) , @IdModul NVARCHAR(256) , @Id INTEGER OUTPUT DECLARE @SqlQuery NVARCHAR(MAX) SET @SqlQuery = 'SELECT TOP (1) ([ID]) FROM ' + @TblZimz + ' WHERE [ModulId] = ' + @IdModul EXEC SP_EXECUTESQL @SqlQuery, N'@Id INTEGER OUTPUT', @Id OUTPUT ``` Why is the @Id parameter always null? I can't see my mistake.
First, select the desired `id` in an output variable using `@Id = ([ID])` then assign this `@Id OUTPUT` value in the `@Id` variable using `@Id = @Id OUTPUT`. Also, you should pass data in where clause using a variable to avoid sql injection problem like `[ModulId] = @IdModul` (i.e. you should not concatenate it like `[ModulId] = ' + @IdModul`). try this : ``` DECLARE @SqlQuery NVARCHAR(MAX) SET @SqlQuery = 'SELECT TOP (1) @Id = ([ID]) FROM ' + @TblZimz + ' WHERE [ModulId] = @IdModul' EXEC SP_EXECUTESQL @SqlQuery, N'@Id INT OUTPUT, @IdModul INT', @IdModul = @IdModul, @Id = @Id OUTPUT ``` Check details of `SP_EXECUTESQL` [here](http://msdn.microsoft.com/en-us/library/ms188001.aspx)
Like [Deepak's](https://stackoverflow.com/a/27486184/1501497) answer, but easier: > EXEC SP\_EXECUTESQL @SqlQuery, > N'@Id INT OUTPUT, @IdModul INT', > @IdModul OUTPUT, @Id
SP_EXECUTESQL and Output Parameter
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I have this query that I have to manually change the date daily but not the time. Below you can see that I have 2014-12-11 the next day I have to change the date ONLY to 2014-12-12. Any ideas how to make this dynamic ? ``` UPDATE tblData SET Issued = '2014-12-11 08:00:00.000' where issued < '2014-12-11 08:00:00.000' and called > '2014-12-11 08:00:01.000' and tbldate = '2014-12-11' ```
``` DECLARE @Date DATE = '2014-12-11' --<-- Date variable UPDATE tblData SET Issued = CAST(@Date AS DATETIME) + CAST('08:00:00.000' AS DATETIME) where issued < CAST(@Date AS DATETIME) + CAST('08:00:00.000' AS DATETIME) and called > CAST(@Date AS DATETIME) + CAST('08:00:01.000' AS DATETIME) and tbldate = @Date ```
Can you just use the DATEADD function here? Demo: ``` DECLARE @TestData TABLE (Issued DATETIME); INSERT INTO @TestData VALUES(DATEADD(day, -1, GETDATE())); SELECT * FROM @TestData; UPDATE @TestData SET Issued = DATEADD(day, 1, Issued); SELECT * FROM @TestData; ```
Make Date dynamic but keep Time static
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have a database that stores the log-in and log-out of the employees but we don't have work on weekends. My supervisor want the DTR report format(I'm using RDLC report) include the weekends. (*see attached image*) ![enter image description here](https://i.stack.imgur.com/IrOoa.jpg) The image above is the expected output format for DTR. I just want to know how to include Weekends though my data are on weekdays only. Is it possible to do this using SQL Query? If yes, should I use looping in sql here? SQL Code: ``` select user_id,log_date,login_time,logout_time from table_DTR where user_id = 'USER1' AND log_date BETWEEN '11/21/2014' AND '12/09/2014' ```
Use a common table expression to generate a date range between the from and to dates, then use the CTE in a LEFT JOIN to the actual table. I haven't applied the user\_id filter in the left join, so add it to your query: ``` DECLARE @TMEP TABLE ( [Date] DATE, [IN] VARCHAR(10), [OUT] VARCHAR(10) ) INSERT INTO @TMEP VALUES ('2014-11-11','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-12','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-13','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-14','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-15','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-18','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-19','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-20','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-21','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-22','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-25','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-26','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-27','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-28','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-11-29','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-12-1','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-12-2','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-12-3','7:30','5:30') INSERT INTO @TMEP VALUES ('2014-12-4','7:30','5:30') DECLARE @FromDate DATE SET @FromDate = '2014-11-11 06:00:00.000' DECLARE @ToDate DATE SET @ToDate = '2014-12-11 06:00:00.000' ;WITH CTE_TableDate ([CTEDate]) as ( SELECT @FromDate UNION ALL SELECT DATEADD(DAY,1,CTEDate) FROM CTE_TableDate WHERE [CTEDate] < @ToDate ) SELECT CTE_TableDate.CTEDate, CASE WHEN DATEPART(DW, CTE_TableDate.CTEDate) = 7 THEN 'SATURDAY' WHEN DATEPART(DW, CTE_TableDate.CTEDate) = 1 THEN 'SUNDAY' ELSE TEMP.[In] END AS [IN], CASE WHEN DATEPART(DW, CTE_TableDate.CTEDate) = 7 THEN 'SATURDAY' WHEN DATEPART(DW, CTE_TableDate.CTEDate) = 1 THEN 'SUNDAY' ELSE TEMP.[OUT] END AS [OUT] FROM CTE_TableDate LEFT JOIN ( select [Date], [IN], [OUT] from @TMEP) TEMP ON CTE_TableDate.CTEDate = TEMP.[Date] ```
try below solution : ``` DECLARE @startdate DATE = '11/21/2014' -- your start date DECLARE @enddate DATE = '12/09/2014' -- your start date -- create list of all dates between min(log_date) and MAX(log_date) ;WITH cte AS (SELECT @startdate AS log_date UNION ALL SELECT Dateadd(dd, 1, log_date) log_date FROM cte WHERE log_date < @enddate) -- select the data using left outer join so that it will return missing dates too. SELECT t1.user_id, c.log_date, t2.login_time, t2.logout_time FROM cte c CROSS JOIN (SELECT DISTINCT user_id FROM mytable) t1 LEFT OUTER JOIN mytable t2 ON t2.user_id = t1.user_id AND t2.log_date = c.log_date ORDER BY t1.user_id,c.log_date OPTION(maxrecursion 1000) ``` It will return `null` in time columns for weekends. Note : if you are getting error : `The statement terminated. The maximum recursion 100 has been exhausted before statement completion.` then try using `OPTION(maxrecursion 3000)` or greater.
include weekends on sql query
[ "", "sql", "sql-server-2008-r2", "formatting", "reporting", "rdlc", "" ]
Please, I have a database. In this database I have a table (FICHENAME) whose elements have several criteria (e.g. core competence, condition of access, ...), and each criterion is represented by its own table in the database. I need to create a combination of the table with itself to compare FICHENAME1 with (FICHENAME1 and FICHENAME2 and ... FICHENAME N); if two rows share a common criterion, I insert 1 in a bit column. In this image you have a FICHENAME table with the criteria I need to compare against the same table ![enter image description here](https://i.stack.imgur.com/hBZ1H.png) I need a solution for representing this data, or just for creating this combination (solution: SQL, SSIS, ...) ![enter image description here](https://i.stack.imgur.com/YtFJU.png)
A self join is simply when you join a table with itself. <http://www.tutorialspoint.com/sql/sql-self-joins.htm> [SQL join on multiple columns in same tables](https://stackoverflow.com/questions/16597660/sql-join-on-multiple-columns-in-same-tables) [What is SELF JOIN and when would you use it?](https://stackoverflow.com/questions/3362038/what-is-self-join-and-when-would-you-use-it)
Thank you for your answer. I can use CROSS JOIN to create the combination :D <http://www.sqlguides.com/sql_cross_join.php>
SQL Query: combining data for the comparaison
[ "", "sql", "sql-server", "database", "ssis", "sql-server-2012", "" ]
How to get the Max and Min length allowed in column of `varchar2`. I have to test for incoming data from the temp table which is coming from some remote db. And each row value is to be tested for specific columns that it has maximum or minimum value which can be set into the column. So I was to get the column specs using its schema details. I did make a `proc` for that: ``` PROCEDURE CHK_COL_LEN(VAL IN VARCHAR2, MAX_LEN IN NUMBER :=4000, MIN_LEN IN NUMBER :=0, LEN_OUT OUT VARCHAR2) IS BEGIN IF LENGTH(VAL)<MIN_LEN THEN LEN_OUT := 'ERROR'; RETURN; ELSIF LENGTH(VAL)>MAX_LEN THEN LEN_OUT := 'ERROR'; RETURN; ELSE LEN_OUT := 'SUCCESS'; RETURN; END IF; END; END CHK_COL_LEN; ``` But the problem is, it is not reusable and is a bit hardcoded. I have to explicitly send `MAX` and `MIN` value for each value along with the data to be checked. So at the `proc` call, it's something like: ``` CHK_COL_LEN(EMP_CURSOR.EMP_ID, 5, 1, LEN_ERROR_MSG); ``` I instead want something like: (If something like this exist!) ``` CHK_COL_LEN(EMP_CURSOR.EMP_ID, EMP.COLUMN_NAME%MAX_LENGTH, EMP.COLUMN_NAME%MIN_LENGTH, LEN_ERROR_MSG) ``` Thanks in advance. **EDIT** ``` select max(length(col)) from table; ``` This is a solution, but again I will have to run this query each time to set the two variables for MAX and MIN value. And running extra two queries for each value and then setting 2 variables will cost be significant lose in performance when in have about 32 tables, each with 5-8 varchar2 columns and average rows of about 40k-50k in each table
You can query the data dictionary view 'user\_tab\_columns' to retrieve metadata information about a specific table: ``` SELECT COLUMN_NAME, DATA_LENGTH, DATA_PRECISION FROM user_tab_columns WHERE table_name IN ('<YOURTABLE>'); ``` with this information you can query the metadata directly in your stored procedure: ``` ... SELECT CHAR_LENGTH INTO max_length FROM user_tab_columns WHERE table_name = '<YOURTABLE>' AND COLUMN_NAME = '<YOURCOLUMN>'; ... ``` Example procedure to get the **max length of a table/column:** ``` create or replace PROCEDURE GET_MAX_LENGTH_OF_COLUMN( tableName IN VARCHAR2, columnName IN VARCHAR2, MAX_LENGTH OUT VARCHAR2) IS BEGIN SELECT CHAR_LENGTH INTO MAX_LENGTH FROM user_tab_columns WHERE table_name = tableName AND COLUMN_NAME = columnName; END GET_MAX_LENGTH_OF_COLUMN; ```
Try creating your procedure like this: ``` create or replace procedure Checking_size(column_name varchar2,columnvalue varchar2,state out varchar2) is begin execute immediate 'declare z '||column_name||'%type; begin z:=:param2; end;' using columnvalue; state:='OK'; exception when value_error then state:='NOT OK'; end; ``` As you can see i simulate an error assignment. If `columnvalue` length is bigger than the column i pass as `column_name` it will throws value\_error exception and return `NOT OK`, else return `OK`. For example, if `your_table.your_column` refer to a column with length (3) then return `NOT OK`. ``` declare state varchar2(10); begin Checking_size('your_table.your_column','12345',state); dbms_output.put_line(state); end; ```
Get Maximum Length allowed in column | Oracle
[ "", "sql", "oracle", "maxlength", "" ]
I have a column called "WasCancelled" in a MySQL DB, it's of a Boolean type. I save the number 0,1 and 2 to it. **I need a query that will count how many one's, two's and zeros are there.** for example: my column: ``` WasCanceled ----------- 0 0 1 1 1 2 0 0 0 2 0 0 1 ``` **I need the query to output:** ``` number | times -------|------ "0" | 7 "1" | 4 "2" | 2 ``` please help. thank you. Dave.
You can do this by grouping: ``` SELECT WasCancelled, COUNT(*) FROM <Table> GROUP BY WasCancelled ``` If you need more information, look here for details on [Group By](http://www.tutorialspoint.com/sql/sql-group-by.htm) and [Count](http://www.tutorialspoint.com/sql/sql-count-function.htm). # Update To include the question in the comments: to restrict on special values, you can add a `WHERE` clause: ``` SELECT WasCancelled, COUNT(*) FROM <Table> WHERE WasCancelled = "1" GROUP BY WasCancelled ``` In further questions, please edit your overall question to include sub-questions or open new topics. Please read [How To Ask Good Questions](https://stackoverflow.com/help/how-to-ask). # Update 2 `SQL` also allows the `HAVING` clause, which is like `WHERE` but allows the comparison of aggregated values. See [here](http://www.programmerinterview.com/index.php/database-sql/having-vs-where-clause/) for details (e.g. you want to know which value appears more than 5 times, etc.).
Use **GROUP BY** with **COUNT** fuction: Try this: ``` SELECT WasCanceled, COUNT(1) as times FROM tableA GROUP BY WasCanceled; ```
count column items in mysql
[ "", "mysql", "sql", "select", "count", "group-by", "" ]
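A quick check of the accepted `GROUP BY`/`COUNT` answer with sqlite3, using the exact column values from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (WasCanceled INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(v,) for v in [0, 0, 1, 1, 1, 2, 0, 0, 0, 2, 0, 0, 1]])

# One output row per distinct value, with its occurrence count.
counts = conn.execute(
    "SELECT WasCanceled, COUNT(*) FROM t GROUP BY WasCanceled ORDER BY WasCanceled"
).fetchall()
print(counts)  # [(0, 7), (1, 4), (2, 2)]
```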
Is there a way to do a partial string match on a string array column in postgres? I'm trying the following syntax, but it is not working as I'd expect it to: ``` SELECT * FROM example_table WHERE '%partial string' ILIKE ANY(array_column) ``` Is there a correct way of doing this?
``` drop table if exists temp_a; create temp table temp_a as ( select array['alpha','beta'] as strings union all select array['gamma','theta'] ); select * from (select unnest(strings) as string from temp_a) as sq where string ilike '%eta' ```
You need to search each subscript separately -- here's an example that can be expounded on to include more columns ``` SELECT distinct array_column FROM (SELECT array_column, generate_subscripts(array_column, 1) AS s FROM example_table) AS foo WHERE array_column[s] like '%partial string'; ``` Alternative hack: ``` select * from example_table where array_column::text like '%partial%' ``` if necessary you could hack "partial" to include the opening/closing brackets and quotes to be just a bit more precise.
How to use LIKE with ANY in Postgresql?
[ "", "sql", "postgresql", "postgresql-9.2", "" ]
Hi friends, I have a Customer table from which I want to get the distinct values of three of its columns: CustomerType, CustomerReg, and CustomerID. Instead of running three separate distinct queries (like `select distinct CustomerType From Customer`, and then again for CustomerReg), can it be achieved in one query? The reason is that I would attach each column's distinct values to a specific drop-down list box.
You can do this with `grouping sets`. If you want the values in three separate columns: ``` select CustomerType, CustomerReg, CustomerId from Customer group by grouping sets ((CustomerType), (CustomerReg), (CustomerId)) ```
If I understand correctly, you want to show a distinct list of all the customer types, customerreg and customerIds in a list, in one column? Try this... ``` select distinct CustomerType + ' : ' + CustomerReg + ' (' + CustomerId + ')' as Name from Customer ``` This will return a string like 'External : 23423412 (2344)' You should probably order it by something meaningful too. Try adding ``` order by Name ``` Although, you shouldn't need the DISTINCT if a customer can only appear once in the customer table. Reading your question again, it looks like you want to return a distinct list of each column in one query, not a distinct combination? Then the group by grouping sets mentioned above will probably get you the closest, although depending on the structure of your data, performance might become an issue here if you have lots of customers. What language are you using in your UI? If it's .Net, you could open a datareader using 3 separate queries, and then use the datareader.nextresult There's an explanation here, <http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.data.framework.datareader.nextresult.aspx> and an example here, <http://msdn.microsoft.com/en-us/library/haa3afyz(v=vs.110).aspx> Hope that helps
How can I apply distinct on multiple columns
[ "", "sql", "sql-server", "sql-server-2008", "distinct", "" ]
Please have a look at the below code. ``` SELECT Transactions.`idTransactions`, Transactions.`Transaction_TimeStamp`, Transactions.`idPortfolio`, CASE when Transactions.`Transaction_Amount`< 0 THEN END AS "Encashments" ELSE END AS "Additions" , Portfolio.Activation FROM Transactions INNER JOIN Portfolio ON Transactions.idPortfolio = Portfolio.idPortfolio WHERE Transactions.idPortfolio=1 ORDER BY `Transaction_TimeStamp` DESC ``` Here, if the `Transactions.Transaction_Amount` is >0, then I need to display this value in "Additions" column, else in "Encashment" column. However my code is generating errors as below. ``` #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AS "Encashments" ELSE END AS "Additions" , Portfolio.Activation FROM Transacti' at line 4 ``` How can I fix this please?
If you want 2 different columns then use two cases ``` SELECT Transactions.`idTransactions`, Transactions.`Transaction_TimeStamp`, Transactions.`idPortfolio`, CASE when Transactions.`Transaction_Amount` <= 0 THEN (Transaction_Amount*-1) END AS Encashments, CASE when Transactions.`Transaction_Amount` > 0 THEN Transaction_Amount END AS Additions, Portfolio.Activation FROM Transactions INNER JOIN Portfolio ON Transactions.idPortfolio = Portfolio.idPortfolio WHERE Transactions.idPortfolio=1 ORDER BY `Transaction_TimeStamp` DESC ```
You have no result in your case. ``` CASE when Transactions.`Transaction_Amount`< 0 THEN something_missing_here END AS ```
MySQL: Retrieving values into different columns
[ "", "mysql", "sql", "database", "select", "case", "" ]
In my `SELECT` query, I would like to put a condition in my `CASE` for a value called *IsActive* so that if the current row's column Type is *"adhoc"* then set the value of IsActive to "n/a". ``` SELECT CASE Type WHEN 'scheduled' THEN 'Scheduled' WHEN 'adhoc' THEN 'AdHoc' ELSE 'Unknown' END AS 'Type', CASE IsActive WHEN (Type) = 'adhoc' THEN 'n/a' WHEN 0 THEN 'Stopped' WHEN 1 THEN 'Active' END AS 'Status' FROM MyTable ``` But I am getting this error on the line `WHEN (Type) = 'adhoc' THEN 'n/a'`: > Incorrect syntax near '='. How can I make a decision based on a condition on the current row?
Your second `case` is based on a single column, then you are trying to put a condition in the `when`, which is not allowed: ``` CASE IsActive WHEN (Type) = 'adhoc' THEN 'n/a' WHEN 0 THEN 'Stopped' WHEN 1 THEN 'Active' END AS 'Status' ``` I think you meant to do this (use a `case` without a column and put the conditions in the `when`): ``` CASE WHEN [Type] = 'adhoc' THEN 'n/a' WHEN IsActive = 0 THEN 'Stopped' WHEN IsActive = 1 THEN 'Active' END AS 'Status' ```
Try this: ``` SELECT (CASE Type WHEN 'scheduled' THEN 'Scheduled' WHEN 'adhoc' THEN 'AdHoc' ELSE 'Unknown' END) AS 'Type', (CASE WHEN Type = 'adhoc' THEN 'n/a' WHEN IsActive = 0 THEN 'Stopped' WHEN IsActive = 1 THEN 'Active' END) AS 'Status' FROM MyTable; ```
WHEN condition based on the current row value
[ "", "sql", "sql-server", "database", "select", "case", "" ]
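The key distinction in the accepted answer, a *simple* `CASE expr WHEN value ...` versus a *searched* `CASE WHEN condition ...`, carries over to any SQL dialect. A small sqlite3 sketch using the column names from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (Type TEXT, IsActive INTEGER);
INSERT INTO MyTable VALUES ('adhoc', 1), ('scheduled', 0), ('scheduled', 1);
""")

# Searched CASE: each WHEN holds a full boolean condition, tested in order,
# so the Type check can short-circuit the IsActive checks.
rows = conn.execute("""
    SELECT Type,
           CASE WHEN Type = 'adhoc' THEN 'n/a'
                WHEN IsActive = 0   THEN 'Stopped'
                WHEN IsActive = 1   THEN 'Active'
           END AS Status
    FROM MyTable
""").fetchall()
print(rows)  # [('adhoc', 'n/a'), ('scheduled', 'Stopped'), ('scheduled', 'Active')]
```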
I have done my research on this in Stack Overflow and am aware of offered solutions whereby the advice is that MSAccess reports override queries and the sort order is usually set in the report properties, or by VBA OnLoad, or something similar. I have noticed something weird in my MS Access reports however, and that is that on one report I have created with the following query the report displays everything in perfect order: ``` SELECT Format([DateOfEnquiry],"yyyy") AS [Year], Count(T_Enquiry.DateOfEnquiry) AS NumOfEnquiries, T_Enquiry.YearLevel FROM T_Enquiry GROUP BY Format([DateOfEnquiry],"yyyy"), T_Enquiry.YearLevel, IIf([YearLevel] Is Null,0,Val([YearLevel])) ORDER BY Format([DateOfEnquiry],"yyyy"), IIf([YearLevel] Is Null,0,Val([YearLevel])); ``` Here I'm particularly concerned with the ordering/sorting of the [YearLevel] field. [YearLevel] is a text lookup field because it not only contains the integers 1-12 but also has the letters 'K' and 'P' in the lookup field. When running the above query it returns the correct order - that is from 'K' then 'P', then from 1-12. I've used this query as the record source for my report and the report lists all items just as the datasheet does when running the query itself. Perfect! Now take a look at the following query that I use as a record source for another report: ``` SELECT Format([DateOfEnquiry],"yyyy") AS [Year], Count(T_Enquiry.Outcome) AS NumOfEnrolments, T_Enquiry.YearLevel FROM T_Enquiry WHERE (((T_Enquiry.Outcome)="Enrolled")) GROUP BY Format([DateOfEnquiry],"yyyy"), T_Enquiry.YearLevel, IIf([YearLevel] Is Null,0,Val([YearLevel])) ORDER BY Format([DateOfEnquiry],"yyyy"), IIf([YearLevel] Is Null,0,Val([YearLevel])); ``` When this query is run, the datasheet is in perfect [YearLevel] order. However the report view is not. The report view puts [YearLevel] 10 first, then 12, then 2. The only difference (apart from the respective fields) between both SQL queries is the WHERE statement in the second query above. Should this make a difference in report view? I don't see how. Can anybody please suggest a workaround? Or point out what I might be missing in the report properties, VBA code, SQL queries...or maybe there might even be a macro that can sort [YearLevel] more easily in the proper order? I look forward to any advice. Cheers. **New Information** I have done some more testing and determined that in my report design view I have a text box called Txt\_TotalEnrol in the report footer which contains the following calculation: ``` =Sum([NumOfEnrolments]) ``` This is in addition to some other totals. It seems it is this textbox that causes the order of [YearLevel] to be out. I deleted Txt\_TotalEnrol and the ordering of [YearLevel] went back to my desired order. Why does this operation affect the order of [YearLevel] on the report? Any suggestions very much appreciated.
Problem solved! I created a simple query to list all the "Enrolled" [Outcomes]: ``` SELECT Outcomes FROM T_Enquiry WHERE T_Enquiry.Outcomes="Enrolled"; ``` and then inserted the following into the `Data` Property of the *Txt\_TotalEnrol* textbox in the report: ``` =DCount("Outcome","qry_TotalOutcomeEnrolled") ``` This had the desired effect of providing a total number and not re-ordering [YearLevel].
As @parakmiakos said, check first that there is no other constraint in the `OrderBy` property of the report that may be overriding your bound query. If your report is shown in a form, also check that the form doesn't have its own `RowSource` and `OrderBy` properties set to something that would explain the behaviour. I would also not use `Year` as a field name since it's a reserved word and it may cause strange issues that can be hard to debug. You could also try to wrap your query into another query (I've made small changes): ``` SELECT P.TheYear, P.NumOfEnquiries, P.YearLevel FROM (SELECT Format([DateOfEnquiry], "yyyy") AS TheYear, COUNT(T_Enquiry.DateOfEnquiry) AS NumOfEnquiries, T_Enquiry.YearLevel, Val(Nz(YearLevel)) AS YearLevelAsNumber FROM T_Enquiry GROUP BY Format([DateOfEnquiry], "yyyy"), T_Enquiry.YearLevel, IIf([YearLevel] IS NULL, 0, Val([YearLevel]))) AS P ORDER BY P.TheYear, YearLevelAsNumber ``` **EDIT:** I had forgotten that reports have a really un-intuitive way of settings sortting and grouping rules. In Design mode, you need to right-click on an empty part of the report an select *Sorting and Grouping*: ![Report Context Menu](https://i.stack.imgur.com/90e5v.png) Then a panel will appear that should let you setup your sorting rules: ![SOrting and Grouping Panel](https://i.stack.imgur.com/RXGIA.png)
SQL ORDER BY behaving strangely in MS Access 2010 report
[ "sql", "vba", "ms-access", "report", "ms-access-2010" ]
I have a view pulling together data from a number of tables. Let's consider it to be like this: ``` TransPK Amount Currency_ID ------------------------------- 1 2000 0 1 -2000 0 2 3600 1 2 -7200 2 . . . ``` I want to calculate sum(Amount), per distinct TransPK, but only where all the Currency\_IDs for a given TransPK are the same. i.e. in the data above there would only be one result returned, since for TransPK=2 there are two values for Currency\_ID. I have no idea what to search on for this! I've tried "SQL SUM restricted", but the hits weren't useful. I'm not the first person to want to do something like this. Is anyone able to point me in the right direction?
Try this: ``` SELECT TransPK, SUM(Amount) FROM tableA GROUP BY TransPK HAVING COUNT(DISTINCT Currency_ID) = 1; ``` Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!3/f3c16/2) **OUTPUT** ``` | TRANSPK | AMOUNT | |---------|--------| | 1 | 0 | | 3 | 3000 | ```
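As a quick, self-contained sanity check of the `HAVING COUNT(DISTINCT ...) = 1` approach outside a server, here is a small Python/SQLite sketch (table name and data invented to mirror the question):

```python
import sqlite3

# In-memory SQLite mirror of the question's sample data (names invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trans (TransPK INTEGER, Amount INTEGER, Currency_ID INTEGER)")
con.executemany("INSERT INTO trans VALUES (?, ?, ?)",
                [(1, 2000, 0), (1, -2000, 0), (2, 3600, 1), (2, -7200, 2)])

# Sum per TransPK, but only where every row of that TransPK shares one Currency_ID.
rows = con.execute("""
    SELECT TransPK, SUM(Amount)
    FROM trans
    GROUP BY TransPK
    HAVING COUNT(DISTINCT Currency_ID) = 1
""").fetchall()
print(rows)  # [(1, 0)] -- TransPK 2 drops out because it mixes two currencies
```

The same `HAVING COUNT(DISTINCT ...)` clause works unchanged on SQL Server.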
I guess this is what you are after: ``` SELECT TransPK, SUM(Amount) AS Amount, Currency_ID FROM TableName GROUP BY TransPK, Currency_ID HAVING COUNT(Currency_ID) > 1 ``` Result: ``` TRANSPK AMOUNT CURRENCY_ID 1 0 0 ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/161d6/10).
SQL SUM restricted
[ "sql", "select", "group-by", "sum", "aggregate-functions" ]
I want to learn ASP.NET MVC5 so I decided to make some kind of review site. I want to add different console/PC games, movies, etc. For the Games table I have this:

```
CREATE TABLE [dbo].[Games] (
    [GameId] INT            IDENTITY (1, 1) NOT NULL,
    [Title]  NVARCHAR (MAX) NOT NULL,
    [Genre]  NVARCHAR (MAX) NOT NULL,
    [Format] NVARCHAR (MAX) NOT NULL,
    CONSTRAINT [PK_dbo.Games] PRIMARY KEY CLUSTERED ([GameId] ASC)
);
```

---

I have the same code for the movie table, and here is the user table I got from MVC:

```
CREATE TABLE [dbo].[AspNetUsers] (
    [Id]                   NVARCHAR (128) NOT NULL,
    [Email]                NVARCHAR (256) NULL,
    [EmailConfirmed]       BIT            NOT NULL,
    [PasswordHash]         NVARCHAR (MAX) NULL,
    [SecurityStamp]        NVARCHAR (MAX) NULL,
    [PhoneNumber]          NVARCHAR (MAX) NULL,
    [PhoneNumberConfirmed] BIT            NOT NULL,
    [TwoFactorEnabled]     BIT            NOT NULL,
    [LockoutEndDateUtc]    DATETIME       NULL,
    [LockoutEnabled]       BIT            NOT NULL,
    [AccessFailedCount]    INT            NOT NULL,
    [UserName]             NVARCHAR (256) NOT NULL,
    CONSTRAINT [PK_dbo.AspNetUsers] PRIMARY KEY CLUSTERED ([Id] ASC)
);

GO
CREATE UNIQUE NONCLUSTERED INDEX [UserNameIndex]
    ON [dbo].[AspNetUsers]([UserName] ASC);
```

But my question is, how do I link the movies / games to the user? Do I just have to add a FK on the game / movie table, or should the FK go on the user table? And the next thing: how would I store ratings (INT) and reviews (NVARCHAR)? I think it would be best if I made a specific table for this and then linked that table to the game / movie table? And a follow-up question to that: how do I link these two? And once they are linked... how would I go ahead and populate my database? I would assume I could make some sort of script to populate them for me?
``` AspNetUsers ------ Id (PK) Game ------ Id (PK) Review ------ Id (PK) GameId (FK) UserId (FK) ``` or make a composite key if a user can only review a game 1 time ``` Review ------ GameId (PK) UserId (PK) ``` but if you want to have Movies and Games you could do ``` Review ------ Id (PK) GameId (FK NULL) MovieId (FK NULL) UserId (FK) ``` Look into Table Relationships
Technically, you should actually have a separate join table with a foreign key to the movie and one to the user, since you're dealing with a M2M relationship here: users can review many movies and movies can be reviewed by many users. Likewise for games. However, since you're starting off with MVC you're actually putting the cart before the horse. While you can use an existing database with Entity Framework, it's much preferable to let Entity Framework manage your database schema for you based on the code you write. Then, when you make changes to your classes, you can create migrations that will update the schema accordingly. Just do a search for something like "entity framework code first". You'll find a wealth of articles and tutorials explaining how to get going.
database structure - linking data
[ "mysql", "sql", "asp.net-mvc-5" ]
I'll try to be as clear as possible. I have these 3 tables `Customer`, `Link` & `Customer_link`. I am using the following query to retrieve the `customer_no` column from `Customer` which has values that are not available in the `customer_no` of the table `Customer_link`:

```
SELECT c.customer_no
FROM CUSTOMER c
LEFT JOIN Customer_link cl ON c.customer_no = cl.customer_no
WHERE cl.customer_no IS NULL
```

The table `Customer_link` has the following columns:

```
ID (generated automatically using a sequence)
Customer_no (linked to Customer table)
Link_no (linked to the Link table)
Maker_id
```

What I'm trying to do is use the above query to get the `customer_no` values from the `Customer` table that are not added to the table `Customer_link` yet (this is a constraint since `Customer_link` cannot have the same `customer_no` twice in the table, as it is unique. Same goes for `link_no` as well) & a similar query to get the `link_no` from the `Link` table. Then use a `link_no` & a `customer_no` from the respective results & add them to the table `Customer_link` (there is already a function for this which I will need to call with the values that I get in the result). I need to use a loop here so that I can update the results after adding a value from each table to the table `Customer_link`, so that I do not get an error trying to add the same `customer_no` or `link_no` twice to the table. Using a cursor is one way I found on the internet. But it's not very clear to me. So what I'm trying to do exactly here is get the result for unused customer\_no from `Customer` & Link\_no from `Link`, insert the values from row 1 into the respective columns in the table `Customer_link`, & loop over to update the results & get the values in row 1 again to add them as parameters to the function that I call.
I finally found a way to get this working with the use of a cursor. I will put the solution up here so that it may help someone else with the same problem.

I declared a variable `cust` of the type `Customer.Customer_no%TYPE`:

```
cust Customer.Customer_no%TYPE;
```

I then created a cursor over the unused customer numbers (note that a cursor declaration cannot contain an `INTO` clause; the value is assigned to the variable inside the loop instead):

```
CURSOR cr_cust IS
  SELECT c.customer_no
  FROM customer c
  LEFT JOIN customer_link cl ON c.customer_no = cl.customer_no
  WHERE cl.customer_no IS NULL;
```

Now I used a loop to get this all working:

```
BEGIN
  FOR i IN cr_cust LOOP          -- loop through the rows
    cust := i.customer_no;       -- assign the current row's value to cust
    IF j <= 5 THEN
      -- for getting the values from the link table, assigning them to
      -- variables, and passing the variables to the required function
      {code for assigning values to variables & the function calling go here}
      j := j + 1;                -- get the next value in the link table
    END IF;
    IF j > 5 THEN
      EXIT;  -- exit once 5 customers have been added to Customer_link
             -- and no more links need to be fetched from the link table
    END IF;
  END LOOP;
END;
```
There is no need for a loop. Just use the select as a source for the insert:

```
insert into customer_link (customer_no)
select c.customer_no
from customer c left join Customer_link cl ON c.customer_no = cl.customer_no
where cl.customer_no is null;
```

Or alternatively use a NOT EXISTS query which is sometimes faster:

```
insert into customer_link (customer_no)
select c.customer_no
from customer c
where not exists (select *
                  from customer_link cl
                  where cl.customer_no = c.customer_no);
```

Both statements will only select (and insert) unique values (i.e. no duplicates) for the `customer_no` column assuming `customer_no` is the primary key in the `customer` table.
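To sanity-check the set-based "insert only missing rows" pattern without an Oracle instance, here is a hypothetical Python/SQLite sketch (mini-schema invented, reduced to the `customer_no` column):

```python
import sqlite3

# Hypothetical mini-schema to demonstrate the set-based insert-missing-rows pattern.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (customer_no INTEGER PRIMARY KEY);
    CREATE TABLE customer_link (customer_no INTEGER UNIQUE);
    INSERT INTO customer VALUES (1), (2), (3);
    INSERT INTO customer_link VALUES (1);  -- customer 1 is already linked
""")

# Only customers not yet present in customer_link get inserted,
# so the UNIQUE constraint is never violated.
con.execute("""
    INSERT INTO customer_link (customer_no)
    SELECT c.customer_no FROM customer c
    WHERE NOT EXISTS (SELECT 1 FROM customer_link cl
                      WHERE cl.customer_no = c.customer_no)
""")
linked = [r[0] for r in con.execute(
    "SELECT customer_no FROM customer_link ORDER BY customer_no")]
print(linked)  # [1, 2, 3]
```

Because the `NOT EXISTS` filter excludes already-linked customers, running the insert repeatedly is harmless.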
Retrieving the first value of the column to a variable in pl sql
[ "sql", "oracle", "plsql", "plsqldeveloper" ]
I have 2 tables: Downtime Categories: `CatID ----- CatName ----- SiteID` Downtimes: `DtID ----- DtName ----- CatID` How do I create a query so I can pull all of the records from Downtimes where the SiteID from Downtime Categories = x? I don't know how to word this search to get what I need; I have tried all the joins but none of them gave me what I'm after. Any help would be appreciated, thanks. Edit: I have tried all variants of the below answers but none of them worked... copying one into SSMS and running it now worked first time :( I must have had a typo somewhere... stupid... Thanks for the replies
You can try this

```
SELECT D.DtID,
       D.DtName,
       D.CatID
FROM Downtimes D
INNER JOIN DowntimeCategories DC ON D.CatID = DC.CatID
WHERE DC.SiteID = x -- put your SiteID value here
```
``` SELECT D.* FROM DOWNTIMES D INNER JOIN DOWNTIMECATEGORIES DC ON (D.CATID = DC.CATID) WHERE DC.SiteID = X ```
SQL join 2 tables, query 1st with condition from 2nd
[ "sql", "sql-server" ]
I've got a query that is: ``` SELECT DISTINCT a.field1, a.field2, a.field3, b.field1, b.field2, a.field4 FROM table1 a JOIN table2 b ON b.fielda = a.fieldb WHERE a.field1 = 'xxxx' ``` I run this and it returns three xxxx rows. I need all of the information listed above with the first field being distinct. Do I have the correct syntax for this?
In Postgres, you can use `distinct on`: ``` select distinct on (a.field1) a.field1, a.field2, a.field3, b.field1, b.field2, a.field4 from table1 a join table2 b on b.fielda = a.fieldb where a.field1 = 'xxxx' order by a.field1; ``` In either Postgres or SQL Server, you can use `row_number()`: ``` select ab.* from (select a.field1, a.field2, a.field3, b.field1, b.field2, a.field4, row_number() over (partition by a.field1 order by a.field1) as seqnum from table1 a join table2 b on b.fielda = a.fieldb where a.field1 = 'xxxx' ) ab where seqnum = 1; ``` Or, since you only want one row, you can use `limit`/`top`: ``` select a.field1, a.field2, a.field3, b.field1, b.field2, a.field4 from table1 a join table2 b on b.fielda = a.fieldb where a.field1 = 'xxxx' limit 1; ``` In SQL Server: ``` select top 1 a.field1, a.field2, a.field3, b.field1, b.field2, a.field4 from table1 a join table2 b on b.fielda = a.fieldb where a.field1 = 'xxxx'; ```
One option is to use `row_number()`: ``` with cte as ( select distinct a.field1, a.field2, a.field3, b.field1, b.field2, a.field4, row_number() over (partition by a.field1 order by a.field1) rn from table1 a join table2 b on b.fielda = a.fieldb where a.field1 = 'xxxx' ) select * from cte where rn = 1 ``` But you need to define which record to take. This orders by `field1` which essentially will take a random record...
Select Distinct returning multiple rows
[ "sql", "sql-server", "postgresql", "distinct" ]
I have a mysql join result which looks like this.

```
ID | Name | Score | SomeID
---------------------------
1 | abc | 100 | 2
1 | abc | 100 | 2
1 | abc | 100 | 3
1 | abc | 100 | 3
1 | abc | 100 | 4
1 | abc | 100 | 4
```

I want the duplicated rows of this join result summed, so that each distinct `SomeID` appears once, like:

```
ID | Name | SUM(Score) | SomeID
---------------------------
1 | abc | 200 | 2
1 | abc | 200 | 3
1 | abc | 200 | 4
```

Is there any possible solution to this problem? Any help?
Try this: ``` SELECT ID, Name, SUM(Score), SomeID FROM tableA GROUP BY ID, Name, SomeID; ```
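As a runnable check that the `GROUP BY` produces exactly the output the question asks for, here is a small Python/SQLite sketch with the question's sample data (table name invented):

```python
import sqlite3

# Reproduce the question's join result in a throwaway table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID INTEGER, Name TEXT, Score INTEGER, SomeID INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                [(1, 'abc', 100, 2), (1, 'abc', 100, 2),
                 (1, 'abc', 100, 3), (1, 'abc', 100, 3),
                 (1, 'abc', 100, 4), (1, 'abc', 100, 4)])

# Group on every non-aggregated column; duplicate rows collapse and sum.
rows = con.execute("""
    SELECT ID, Name, SUM(Score), SomeID
    FROM t
    GROUP BY ID, Name, SomeID
    ORDER BY SomeID
""").fetchall()
print(rows)  # [(1, 'abc', 200, 2), (1, 'abc', 200, 3), (1, 'abc', 200, 4)]
```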
Things are this simple: ``` SELECT ID, Name,SUM(Score),SomeID FROM table_name GROUP BY SomeID; ```
DISTINCT WITH SUM
[ "mysql", "sql", "select", "group-by", "sum" ]
There is a SQL table `mytable` that has a column `mycolumn`. That column has text inside each cell. Each cell may contain "this.text/31/" or "this.text/72/" **substrings** (the numbers in those substrings can be anything) as part of the string. What SQL query should be executed to display a list of unique such substrings? P.S. Of course, some cells may contain several such substrings. And here are the answers for questions from the comments: The query is supposed to work on SQL Server. The preferred output should contain the whole substring, not the numeric part only. It actually could be not just a number between the first "/" and the second "/". And it is of varchar type (probably). Example: `mycolumn` contains such values:

```
abcd/eftthis.text/31/sadflh adslkjh
abcd/eftthis.text/44/khjgb ljgnkhj this.text/447/lhkjgnkjh
ljgkhjgadsvlkgnl
uygouyg/this.text/31/luinluinlugnthis.text/31/ouygnouyg
khjgbkjyghbk
```

The query should display:

```
this.text/31/
this.text/44/
this.text/447/
```
How about using a recursive CTE: ``` CREATE TABLE #myTable ( myColumn VARCHAR(100) ) INSERT INTO #myTable VALUES ('abcd/eftthis.text/31/sadflh adslkjh'), ('abcd/eftthis.text/44/khjgb ljgnkhj this.text/447/lhkjgnkjh'), ('ljgkhjgadsvlkgnl'), ('uygouyg/this.text/31/luinluinlugnthis.text/31/ouygnouyg'), ('khjgbkjyghbk') ;WITH CTE AS ( SELECT MyColumn, CHARINDEX('this.text/', myColumn, 0) AS startPos, CHARINDEX('/', myColumn, CHARINDEX('this.text/', myColumn, 1) + 10) AS endPos FROM #myTable WHERE myColumn LIKE '%this.text/%' UNION ALL SELECT T1.MyColumn, CHARINDEX('this.text/', T1.myColumn, C.endPos) AS startPos, CHARINDEX('/', T1.myColumn, CHARINDEX('this.text/', T1.myColumn, c.endPos) + 10) AS endPos FROM #myTable T1 INNER JOIN CTE C ON C.myColumn = T1.myColumn WHERE SUBSTRING(T1.MyColumn, C.EndPos, 100) LIKE '%this.text/%' ) SELECT DISTINCT SUBSTRING(myColumn, startPos, EndPos - startPos) FROM CTE ```
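If the extraction can happen in application code rather than in T-SQL, the same unique-substring logic is a few lines with a regular expression. A sketch with the question's sample values (the pattern assumes the part between the slashes is numeric; widen it if other content can appear there):

```python
import re

rows = [
    "abcd/eftthis.text/31/sadflh adslkjh",
    "abcd/eftthis.text/44/khjgb ljgnkhj this.text/447/lhkjgnkjh",
    "ljgkhjgadsvlkgnl",
    "uygouyg/this.text/31/luinluinlugnthis.text/31/ouygnouyg",
    "khjgbkjyghbk",
]

# Collect every "this.text/<digits>/" occurrence; the set removes duplicates.
found = sorted({m for row in rows for m in re.findall(r"this\.text/\d+/", row)})
print(found)  # ['this.text/31/', 'this.text/44/', 'this.text/447/']
```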
Having a table named test with the following data:

```
COLUMN1
aathis.text/31/
this.text/1/
bbbthis.text/72/sksk
```

could this be what you are looking for?

```
select SUBSTR(COLUMN1,INSTR(COLUMN1,'this.text', 1 ),INSTR(COLUMN1,'/',INSTR(COLUMN1,'this.text', 1 )+10) - INSTR(COLUMN1,'this.text', 1 )+1)
from test;
```

result:

```
this.text/31/
this.text/1/
this.text/72/
```

---

I see your problem: Assume the same table as above but now with the following data:

```
this.text/77/
xxthis.text/33/xx
xthis.text/11/xxthis.text/22/x
xthis.text/1/x
```

The following might help you:

```
SELECT SUBSTR(COLUMN1,
              INSTR(COLUMN1,'this.text', 1 ,1),
              INSTR(COLUMN1,'/',INSTR(COLUMN1,'this.text', 1 ,1)+10) - INSTR(COLUMN1,'this.text', 1 ,1)+1)
FROM TEST
UNION
SELECT CASE
         WHEN (INSTR(COLUMN1,'this.text', 1,2 ) >0) THEN
           SUBSTR(COLUMN1,
                  INSTR(COLUMN1,'this.text', 1,2 ),
                  INSTR(COLUMN1,'/',INSTR(COLUMN1,'this.text', 1 ,2),2) - INSTR(COLUMN1,'this.text', 1,2 )+1)
       end
FROM TEST;
```

It will generate the following result:

```
this.text/1/
this.text/11/
this.text/22/
this.text/33/
this.text/77/
```

The downside is that you need to add a select statement for every occurrence of "this.text" you might have. If you might have 100 "this.text" in the same cell it might be a problem.
SQL: select unique substrings from the table by mask
[ "mysql", "sql", "sql-server", "select" ]
I am trying to sort a varchar column A which contains the data like this: ``` A.1) NULL A.1.xc) 1131820 B.1) NULL B.1.xc) 1131822 C.1) NULL C.1.xc) 131824 C.2) (CE) NULL C.2) (NRML) NULL C.2.xc) 131826 C.2.xc) 132152 C.3) NULL C.3.a) 131828 C.3.a.xc) 131830 C.3.xc) 131828 C.4) NULL C.4.a) 131838 C.4.a.xc) 131840 C.4.xc) 131838 D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1) NULL D.1.xc) 16131842 D.1.xc) 15131842 D.1.xc) 14131842 D.1.xc) 13131842 D.1.xc) 12131842 D.1.xc) 11131842 D.1.xc) 10131842 D.1.xc) 9131842 D.1.xc) 8131842 D.1.xc) 7131842 D.1.xc) 6131842 D.1.xc) 5131842 D.1.xc) 4131842 D.1.xc) 1131842 D.1.xc) 3131842 D.1.xc) 2131842 D.2) NULL D.2.xc) 132124 D.3) NULL D.3.xc) 132126 D.4) NULL D.4.xc) 1132156 D.5) (NRML) NULL D.5.xc) 132158 E.1) NULL E.1.xc) 132138 E.10) NULL E.10.xc) 131932 E.10.xf) 131932 E.10.xl) 131932 E.11) (NRML) NULL E.11.xc) 131939 E.11.xf) 131939 E.11.xl 131939 E.12.a) NULL E.12.a.xc) 131965 E.12.a.xl) 131965 E.13) NULL E.13.a) 131988 E.13.a.xc) 131990 E.13.xc) 131988 E.14) NULL E.14.xc) 131994 E.14.xl) 131994 E.15) NULL E.15.xc) 132012 E.16) NULL E.16.xc) 132014 E.17.a) (ALLFNDS) NULL E.17.a.xc) 132016 E.17.a.xf) 132016 E.18) NULL E.18.xc) 132022 E.2) NULL E.2.xc) 131844 E.3) NULL E.3.xc) 131850 E.4) NULL E.4.xc) 131856 E.5) NULL E.5.xc) 131862 E.6) NULL E.6.xc) 131868 E.7) NULL E.7.a) 131874 E.7.a.xc) 131876 E.7.b) 131874 E.7.b.i) 131878 E.7.b.i.xc) 131886 E.7.b.xc) 131878 E.7.xc) 131874 E.8) (NRML) NULL E.8.xc) 131890 E.9) (NRML) NULL E.9.a) 131908 E.9.a.xc) 131910 E.9.a.xf) 131910 E.9.a.xl) 131910 E.9.xc) 131908 ``` I am using below query to sort the column ``` Select A,Bfrom ABCD where id =18613 order by A ``` Now this query is giving me the issue that E.1.xc), I am expecting E.2) but it is returning me E.10) and so on. Column A is a varchar column. 
I also tried this query but with no luck

```
SELECT CASE
         WHEN ISNUMERIC(A)=1 THEN CAST(A as int)
         WHEN PATINDEX('%[^0-9]%',A) > 1 THEN CAST( LEFT( A, PATINDEX('%[^0-9]%',A) - 1 ) as int)
         ELSE 2147483648
       END,
       CASE
         WHEN ISNUMERIC(A)=1 THEN NULL
         WHEN PATINDEX('%[^0-9]%',A) > 1 THEN SUBSTRING( A, PATINDEX('%[^0-9]%',A) , 50 )
         ELSE A
       END
from ABCD where id=18613 order by A
```

Also note that, for example, after D.5) (which has B as NULL) I want its sub-branch D.5.xc), with its B column value of 132158, to follow directly. [Sample SQL Fiddle](http://sqlfiddle.com/#!3/fa929/1)
Try this. It uses the PARSENAME function, which is actually meant to split an object name into its various parts, but will split anything with dots...

```
SELECT A, B
FROM (
    SELECT A, B,
        CASE WHEN LEN(A) - LEN(REPLACE(A, '.', '')) = 3
                THEN A
             WHEN PATINDEX('%.', REPLACE(A, ')', '.')) > 0
                THEN REPLACE(A, ')', '.') + 'x'
             ELSE REPLACE(A, ')', '.')
        END as dummy
    FROM ABCD) data
ORDER BY LEFT(data.A, 1),
    CONVERT(INT, CASE WHEN ISNUMERIC(PARSENAME(data.dummy, 4)) = 1 THEN PARSENAME(data.dummy, 4)
                      WHEN ISNUMERIC(PARSENAME(data.dummy, 3)) = 1 THEN PARSENAME(data.dummy, 3)
                      WHEN ISNUMERIC(PARSENAME(data.dummy, 2)) = 1 THEN PARSENAME(data.dummy, 2)
                      WHEN ISNUMERIC(PARSENAME(data.dummy, 1)) = 1 THEN PARSENAME(data.dummy, 1)
                 END),
    data.A
```

This query might not cater for all the permutations in your data, but it's a start. You get the idea.
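The underlying issue is that "E.10" sorts before "E.2" as text. A quick way to validate the intended order outside SQL is a natural-sort key that splits the outline label on dots and compares numeric parts as integers. A hypothetical Python sketch (it assumes plain labels like `A.1.xc)`; suffixes such as `(CE)` would need stripping first):

```python
def outline_key(label):
    # Split "E.12.xc)" into parts; compare numbers numerically, text as text.
    # The (0, ...) / (1, ...) tags keep int and str comparisons from mixing.
    parts = label.rstrip(")").split(".")
    return [(0, int(p)) if p.isdigit() else (1, p) for p in parts]

labels = ["E.1)", "E.10)", "E.2)", "E.1.xc)", "E.10.xc)"]
ordered = sorted(labels, key=outline_key)
print(ordered)  # ['E.1)', 'E.1.xc)', 'E.2)', 'E.10)', 'E.10.xc)']
```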
**Step 1: Split your sort criterion** Use the SQL Server string manipulation and casting methods to add the following fields to your result set: ``` Major (varchar) Minor (int) Remainder yourOtherFields ------------------------------------------------------------- A 1 NULL ... A 1 xc ... ... ``` (For example, Minor can be extracted by getting the substring between the first `.` and the following `.` or `)` and casting it to integer.) **Step 2: Sort** ``` SELECT myField1, myField2, ... FROM (...SQL from Step 1...) AS mySource ORDER BY Major, Minor, Remainder ``` This ensures a string sort order on Major and Remainder, but an integer sort order on Minor.
Sorting a varchar column in customized way
[ "sql", "sql-server" ]
I have an unresolved doubt about a query I'm making in PostgreSQL. I have these 2 tables

**PLAYER**

```
playerID title
1 Rondo
2 Allen
3 Pierce
4 Garnett
5 Perkins
```

**PLAYS**

```
playerID TeamID
1 1
1 2
1 3
2 1
2 3
3 1
3 3
```

and this is my query

```
SELECT DISTINCT concat(N.playerID, ':', N.title), TID
FROM player N
INNER JOIN (
   SELECT DISTINCT P.playerID as PID, teamID as TID
   FROM plays P
   ) AS derivedTable
ON N.playerID = PID
ORDER BY concat
```

the result of the query is:

```
"1:Rondo" | 1
"1:Rondo" | 2
"1:Rondo" | 3
"2:Allen" | 1
"2:Allen" | 3
"3:Pierce" | 1
"3:Pierce" | 3
```

but I want something like this

```
"1:Rondo" | 1, 2, 3
"2:Allen" | 1, 3
"3:Pierce" | 1, 3
```

I could use an array\_agg, but I really don't know how
Use `string_agg()` ``` SELECT concat(N.playerID, ':', N.title), string_agg(p.TeamID::text, ',') as teamid_list FROM player N JOIN plays p ON n.playerID = p.playerID GROUP BY n.playerID, n.title ORDER BY 1; ``` Your derived table is not necessary (and the distinct even more so)
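SQLite's `group_concat()` plays the same role as Postgres `string_agg()`, so the aggregation shape can be checked with a quick Python sketch using the question's sample data (the order of the ids inside each list is not guaranteed by either function without an `ORDER BY` inside the aggregate):

```python
import sqlite3

# Rebuild the question's two tables in memory.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE player (playerID INTEGER, title TEXT);
    CREATE TABLE plays (playerID INTEGER, TeamID INTEGER);
    INSERT INTO player VALUES (1,'Rondo'), (2,'Allen'), (3,'Pierce');
    INSERT INTO plays VALUES (1,1),(1,2),(1,3),(2,1),(2,3),(3,1),(3,3);
""")

# One output row per player, with the team ids collapsed into one string.
rows = con.execute("""
    SELECT n.playerID || ':' || n.title,
           group_concat(p.TeamID, ', ')
    FROM player n JOIN plays p ON n.playerID = p.playerID
    GROUP BY n.playerID, n.title
    ORDER BY 1
""").fetchall()
print(rows)
```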
For **MySQL** Try this: ``` SELECT CONCAT(N.playerID, ':', N.title) playerTitle, GROUP_CONCAT(P.TID SEPARATOR ', ') TID FROM player N LEFT JOIN plays P ON N.playerID = PID GROUP BY N.playerID ORDER BY playerTitle ```
SQL aggregation of columns with common value in another column
[ "sql", "postgresql", "select", "group-by", "string-aggregation" ]
I have a table, with types `varchar`, `datetime`, `datetime`:

```
NAME | START | END
Bob | 10/30 | 11/2
```

What SQL query would turn that table into the following?

```
NAME | START | END
Bob | 10/30 | 10/30
Bob | 10/31 | 10/31
Bob | 11/01 | 11/01
Bob | 11/02 | 11/02
```

This is only run once, and on a very small dataset. Optimization isn't necessary.
Maybe you need a `Recursive CTE`.

```
CREATE TABLE #dates(NAME VARCHAR(50),START DATETIME,[END] DATETIME)

INSERT INTO #dates
VALUES      ('Bob','2014-10-30','2014-11-02')

DECLARE @maxdate DATETIME = (SELECT Max([end]) FROM #dates);

WITH cte
     AS (SELECT NAME, START, [END]
         FROM   #dates
         UNION ALL
         SELECT NAME, Dateadd(day, 1, start), Dateadd(day, 1, start)
         FROM   cte
         WHERE  start < @maxdate)
SELECT *
FROM   cte
```

**OUTPUT :**

```
name START      END
---- ---------- ----------
Bob  2014-10-30 2014-10-30
Bob  2014-10-31 2014-10-31
Bob  2014-11-01 2014-11-01
Bob  2014-11-02 2014-11-02
```
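The same recursive expansion can be tried out in SQLite from Python. This sketch carries each row's own end date through the recursion instead of a single `@maxdate`, which also keeps rows with different end dates from overrunning (column `End` is renamed `End_` here because `END` is a keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spans (Name TEXT, Start TEXT, End_ TEXT)")
con.execute("INSERT INTO spans VALUES ('Bob', '2014-10-30', '2014-11-02')")

# Recursive CTE: seed with the Start date, then add one day until End_ is reached.
rows = con.execute("""
    WITH RECURSIVE cte(Name, d, End_) AS (
        SELECT Name, Start, End_ FROM spans
        UNION ALL
        SELECT Name, date(d, '+1 day'), End_ FROM cte WHERE d < End_
    )
    SELECT Name, d AS Start, d AS End_ FROM cte ORDER BY d
""").fetchall()
print(rows)  # four rows, one per day from 2014-10-30 through 2014-11-02
```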
You can do this with a recursive cte: ``` ;with cte AS (SELECT Name,Start,[End] FROM YourTable UNION ALL SELECT Name ,DATEADD(day,1,Start) ,[End] FROM cte WHERE Start < [End]) SELECT Name, Start, Start AS [End] FROM cte ``` However, I suggest creating a calendar table and joining to it: ``` SELECT a.Name,b.CalendarDate AS Start, b.CalendarDate AS [End] FROM YourTable a JOIN tlkp_Calendar b ON b.CalendarDate BETWEEN a.[Start] AND a.[End] ``` Demo of both queries: [SQL Fiddle](http://sqlfiddle.com/#!6/e5216d/2/1)
For each day between two dates, add a row with the same info but only that day in the start/end columns
[ "sql", "sql-server", "datetime", "insert" ]
I am trying to join the following two tables:

```
Table Patient | Table incident
patient.id patient.birthdate | incident.patientid serviceid
1 1/1/2000 | 1 8
2 1/1/1990 | 1 8
3 1/1/2005 | 2 10
4 1/1/1980 | 3 11
5 1/1/2000 | 3 11
6 1/1/1990 | 3 11
7 1/1/1980 | 6 23
8 1/1/2000 | 7 8
```

in order to make an age separation of all patients grouped by their serviceid.

```
SELECT serviceid,
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 0 AND 15 THEN 1 ELSE 0 END) AS [Under 15],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 16 AND 18 THEN 1 ELSE 0 END) AS [16-18],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 19 AND 23 THEN 1 ELSE 0 END) AS [19-23],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 24 AND 30 THEN 1 ELSE 0 END) AS [24-30],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 31 AND 40 THEN 1 ELSE 0 END) AS [31-40],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 41 AND 50 THEN 1 ELSE 0 END) AS [41-50],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 51 AND 65 THEN 1 ELSE 0 END) AS [51-65],
SUM(CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) > 65 THEN 1 ELSE 0 END) AS [>65]
from patient
inner join incident on patient.id = incident.patientConcerned
group by serviceid
```

But what I am trying above counts the age of all patients for ALL their incidents, meaning that I am not counting distinct patients. (For example, I am counting patient 1 twice and patient 3 three times.) So I want to join these two tables, but only with one row per patient. How can I do that?
Instead of `sum()` use `count(distinct)`. Here is an example: ``` SELECT serviceid, COUNT(DISTINCT CASE WHEN FLOOR((CAST (GetDate() AS INTEGER) - CAST(patient.birthdate AS INTEGER)) /365.25 ) BETWEEN 0 AND 15 THEN Patient.Id END) AS [Under 15], . . . ```
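To show the distinct-counting mechanics in a runnable form, here is a simplified Python/SQLite sketch. A fixed birthdate cutoff stands in for the `FLOOR(...)` age arithmetic; the point is that `COUNT(DISTINCT CASE ...)` counts each patient once per service even when they appear in several incident rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (id INTEGER, birthdate TEXT);
    CREATE TABLE incident (patientid INTEGER, serviceid INTEGER);
    INSERT INTO patient VALUES (1, '2000-01-01'), (3, '2005-01-01');
    -- patient 1 has two incidents for service 8, patient 3 has three for service 11
    INSERT INTO incident VALUES (1, 8), (1, 8), (3, 11), (3, 11), (3, 11);
""")

# CASE without ELSE yields NULL for non-matching rows; COUNT(DISTINCT ...) ignores
# NULLs and collapses repeated patient ids to one.
rows = con.execute("""
    SELECT i.serviceid,
           COUNT(DISTINCT CASE WHEN p.birthdate >= '1999-01-01'
                               THEN p.id END) AS young
    FROM patient p JOIN incident i ON p.id = i.patientid
    GROUP BY i.serviceid ORDER BY i.serviceid
""").fetchall()
print(rows)  # [(8, 1), (11, 1)] -- each patient counted once per service
```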
Instead of `group by serviceid` use `group by patient.patient_id`
Join tables resulting only one row
[ "sql", "join" ]
I have a simple database that keeps track of contributions (donations) to a charitable organisation. For the purposes of this question, I only need to mention the two tables involved: `contributions` and `benefactors`. **Benefactors:**

```
ID | BenefactorName | AssociatedFundraiser | Other Meta Columns
-------------------------------------------------------------------
99 | ABC Accounting | 12 | ...
```

**Contributions:**

```
ID | ContributionDate | BenefactorID | Other Meta Columns
--------------------------------------------------------------
603 | 2014-09-29 | 99 | ...
```

Many of the benefactors in this database make contributions more than once; mostly on a monthly basis. However, I'd like to pinpoint the benefactors who have not made a contribution in the last year. How do I go about doing that? I have tried various `INNER JOIN` stuff, but was way off-track.
Since Access doesn't support the `MINUS` set operator, things can be a little confusing in situations like this. I think a query such as this would meet your needs: ``` SELECT v.ID, v.[Benefactor Name], v.MaxContributionDate FROM ( SELECT b.ID, b.[Benefactor Name], mx.MaxContributionDate FROM benefactors b INNER JOIN ( SELECT MAX([Contribution Date]) AS MaxContributionDate, [Benefactor ID] FROM contributions GROUP BY [Benefactor ID] ) mx ON (mx.[Benefactor ID] = b.ID) ) v LEFT JOIN ( SELECT DISTINCT [Benefactor ID] FROM contributions WHERE Year([Contribution Date]) = 2014 ) t ON (t.[Benefactor ID] = v.ID) WHERE t.[Benefactor ID] IS NULL; ``` The "in line" query finds the distinct list of benefactors who have made contributions for this year, and the `LEFT JOIN` where the key `IS NULL` ensures that we only return benefactors which are ***not*** in this set. **edit:** added an extra join to fetch the most recent contribution date for each benefactor.
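The `LEFT JOIN ... IS NULL` anti-join at the heart of this answer can be exercised in any SQL engine. A minimal Python/SQLite sketch (data invented; SQLite's `LIKE '2014%'` on an ISO date string stands in for Access's `Year([Contribution Date]) = 2014`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE benefactors (ID INTEGER, Name TEXT);
    CREATE TABLE contributions (BenefactorID INTEGER, ContributionDate TEXT);
    INSERT INTO benefactors VALUES (99, 'ABC Accounting'), (100, 'XYZ Ltd');
    -- 99 contributed this year; 100 last contributed in 2012
    INSERT INTO contributions VALUES (99, '2014-09-29'), (100, '2012-05-01');
""")

# Anti-join: keep only benefactors with no match in the "contributed in 2014" set.
rows = con.execute("""
    SELECT b.ID, b.Name
    FROM benefactors b
    LEFT JOIN (SELECT DISTINCT BenefactorID FROM contributions
               WHERE ContributionDate LIKE '2014%') t
           ON t.BenefactorID = b.ID
    WHERE t.BenefactorID IS NULL
""").fetchall()
print(rows)  # [(100, 'XYZ Ltd')]
```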
``` SELECT b.* FROM Benefactors b INNER JOIN Contributions c ON b.id = c.BenefactorID WHERE c.ContributionDate < DATEADD(year,-1,GETDATE()) ```
How to select benefactors that have not made contributions in the last year (SQL)
[ "sql", "ms-access" ]
I have a postgres table with the following format: ``` id key value ------------------------------------- a 1 p b 2 q c 3 r a 2 s ``` I want to convert it into the following format: ``` id 1 2 3 -------------------------------------------- a p s b q c r ``` I am trying to using the following crosstab() query to do this: ``` create extension tablefunc; select * from crosstab( 'select id, key, value from table order by 1,2') as ct(id text, key integer, value text); ``` However, it fails with the following exception: ``` ERROR: return and sql tuple descriptions are incompatible ********** Error ********** ERROR: return and sql tuple descriptions are incompatible SQL state: 42601 ``` What am I missing here? **Update:** There are around 25 keys in the table.
You **must** provide a second parameter with the list of possible values to allow for missing values in each resulting row: ``` SELECT * FROM crosstab( 'SELECT id, key, value FROM tbl ORDER BY 1, 2' , 'SELECT generate_series(1,25)' -- assuming your key is type integer ) AS ct(id text , "1" text, "2" text, "3" text, "4" text, "5" text , "6" text, "7" text, "8" text, "9" text, "10" text , "11" text, "12" text, "13" text, "14" text, "15" text , "16" text, "17" text, "18" text, "19" text, "20" text , "21" text, "22" text, "23" text, "24" text, "25" text); ``` Detailed explanation: * [PostgreSQL Crosstab Query](https://stackoverflow.com/questions/3002499/postgresql-crosstab-query/11751905#11751905) If you get tired of typing the lengthy column definition list, consider this related (advanced) answer: * [Dynamic alternative to pivot with CASE and GROUP BY](https://stackoverflow.com/questions/15506199/dynamic-alternative-to-pivot-with-case-and-group-by/15514334#15514334)
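`crosstab()` is Postgres-specific (the `tablefunc` extension), but the pivot itself can be sanity-checked portably with conditional aggregation — one `MAX(CASE WHEN key = k THEN value END)` column per key. A hypothetical Python/SQLite sketch with the question's data, reduced to keys 1-3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (id TEXT, key INTEGER, value TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?, ?)",
                [('a', 1, 'p'), ('b', 2, 'q'), ('c', 3, 'r'), ('a', 2, 's')])

# Manual pivot: each CASE picks out one key's value; missing keys stay NULL,
# which is exactly what the crosstab value-list parameter guarantees.
rows = con.execute("""
    SELECT id,
           MAX(CASE WHEN key = 1 THEN value END) AS k1,
           MAX(CASE WHEN key = 2 THEN value END) AS k2,
           MAX(CASE WHEN key = 3 THEN value END) AS k3
    FROM kv GROUP BY id ORDER BY id
""").fetchall()
print(rows)  # [('a', 'p', 's', None), ('b', None, 'q', None), ('c', None, None, 'r')]
```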
Try this: ``` SELECT * FROM crosstab( 'SELECT id, key, value FROM table ORDER BY 1, 2' ) AS CT(id text, one text, two text, three text); ``` You need the final four column names in `as ct()`, check [here](http://www.postgresql.org/docs/9.1/static/tablefunc.html) for more details
Using crosstab to convert rows to columns fails
[ "sql", "postgresql", "crosstab" ]