I have two tables `Products` and `Keywords`, joined by a common `ID`. ``` PRODUCTS KEYWORDS Prod_id Prod_name Prod_id Keyword -------------------- ------------------ 1 Broccoli 1 kw1 2 Cauliflower 1 kw2 3 Leek 2 kw1 4 Spinach 2 kw3 5 Zucchini 2 kw4 3 kw1 3 kw2 3 kw4 4 kw2 4 kw3 4 kw4 ``` How do I select only those products that have both 'kw1' and 'kw2' (they may have others) assigned? In the above example that would be Broccoli and Leek. The list of required keywords can be longer than two. It's probably trivial, but I can't find a way to achieve this. If I do ``` SELECT Prod_id FROM products p JOIN keywords k ON p.prod_id = k.prod_id WHERE keyword IN ('kw1', 'kw2') ``` it selects all rows with 'kw1' OR 'kw2' (as expected), but I need 'kw1' **AND** 'kw2'.
First, you don't seem to need the `products` table, if you only want the `id`. Then, you can do what you want basically by adding a `group by` and `having` clause to your query: ``` SELECT k.Prod_id FROM keywords k WHERE k.keyword IN ('kw1' ,'kw2') GROUP BY k.Prod_id HAVING COUNT(DISTINCT k.keyword) = 2; ```
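The `HAVING COUNT(DISTINCT ...)` trick generalizes to any number of required keywords — the count just has to equal the length of the list. A quick sketch of the pattern using Python's built-in `sqlite3` with a hypothetical in-memory copy of the `KEYWORDS` data (the SQL itself is portable to SQL Server):

```python
import sqlite3

# Hypothetical in-memory table mirroring the KEYWORDS data from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE keywords (prod_id INTEGER, keyword TEXT);
INSERT INTO keywords VALUES
  (1,'kw1'),(1,'kw2'),(2,'kw1'),(2,'kw3'),(2,'kw4'),
  (3,'kw1'),(3,'kw2'),(3,'kw4'),(4,'kw2'),(4,'kw3'),(4,'kw4');
""")

# The required list can be any length; HAVING just has to match its length.
required = ['kw1', 'kw2']
placeholders = ','.join('?' for _ in required)
sql = f"""
    SELECT prod_id
    FROM keywords
    WHERE keyword IN ({placeholders})
    GROUP BY prod_id
    HAVING COUNT(DISTINCT keyword) = ?
    ORDER BY prod_id
"""
rows = [r[0] for r in conn.execute(sql, [*required, len(required)])]
print(rows)  # [1, 3] -- products 1 (Broccoli) and 3 (Leek)
```

The same parameterization works in SQL Server: build the `IN` list from parameters and pass the list length as the final parameter.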
Use this: ``` select * from Keywords where Keyword = 'kw1' and Prod_id in( select Prod_id from Keywords where Keyword = 'kw2' ) ```
SQL Server : select rows with all specified values in another table
[ "sql", "sql-server" ]
I'm not even sure how to word this question but here goes. I need to be able to loop through a result set, within the same SQL script, and use the results in more SQL. For example ``` begin SELECT (SELECT ColumnA, ColumnB from SomeTable) as x loop through x( INSERT ColumnA into TableA INSERT ColumnB into TableB ) end ``` But I forget the exact way of doing this. I know I've done it before at a previous position, but I can't find the code for it in my files from that company. Obviously, this is a very crude and basic example and I plan on doing a lot more with the result set, but I just gave this as an example. --- EDIT: Here's a closer example of what I'm looking to do in case this will help. ``` begin while(select columnA, columnB, columnC, columnD from myTable) as x begin INSERT columnA, columnB into TableA (get newly created ID of TableA - but that's a separate question involving @@IDENTITY) INSERT NewID, columnC, columnD into TableB end loop end ```
The usual way to handle obtaining the identity in a set-based manner is through the [`OUTPUT`](https://msdn.microsoft.com/en-us/library/ms177564.aspx) clause: ``` INSERT INTO TableA (ColumnA, ColumnB) OUTPUT inserted.Id, inserted.ColumnA, inserted.ColumnB SELECT ColumnA, ColumnB FROM MyTable; ``` The problem here is that what you would ideally like to do is this: ``` INSERT INTO TableA (ColumnA, ColumnB) OUTPUT inserted.Id, MyTable.ColumnC, inserted.ColumnD INTO TableB (AID, ColumnC, ColumnD) SELECT ColumnA, ColumnB FROM MyTable; ``` The problem is that you can't reference the source table in the `OUTPUT` clause of an `INSERT`, only the target. Fortunately there is a workaround using `MERGE`, since it allows you to reference both the `inserted` pseudo-table and the source table in the `OUTPUT` clause. If you use `MERGE` with a join condition that will never be true, you can then output all the columns you need: ``` WITH x AS ( SELECT ColumnA, ColumnB, ColumnC, ColumnD FROM MyTable ) MERGE INTO TableA AS a USING x ON 1 = 0 -- USE A CLAUSE THAT WILL NEVER BE TRUE WHEN NOT MATCHED THEN INSERT (ColumnA, ColumnB) VALUES (x.ColumnA, x.ColumnB) OUTPUT inserted.ID, x.ColumnC, x.ColumnD INTO TableB (NewID, ColumnC, ColumnD); ``` The remaining problem is that SQL Server does not allow you to `OUTPUT ... INTO` a table that is on either side of a foreign key relationship, so if `TableB.NewID` references `TableA.ID` then the above will fail.
To work around this you will need to output into a temporary table, then insert the temp table into TableB: ``` CREATE TABLE #Temp (AID INT, ColumnC INT, ColumnD INT); WITH x AS ( SELECT ColumnA, ColumnB, ColumnC, ColumnD FROM MyTable ) MERGE INTO TableA AS a USING x ON 1 = 0 -- USE A CLAUSE THAT WILL NEVER BE TRUE WHEN NOT MATCHED THEN INSERT (ColumnA, ColumnB) VALUES (x.ColumnA, x.ColumnB) OUTPUT inserted.ID, x.ColumnC, x.ColumnD INTO #Temp (AID, ColumnC, ColumnD); INSERT TableB (AID, ColumnC, ColumnD) SELECT AID, ColumnC, ColumnD FROM #Temp; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/55c2f/3)**
In SQL Server this is done with a `CURSOR`. The basic structure of a cursor is: ``` DECLARE @ColumnA INT, @ColumnB INT DECLARE CurName CURSOR FAST_FORWARD READ_ONLY FOR SELECT ColumnA, ColumnB FROM SomeTable OPEN CurName FETCH NEXT FROM CurName INTO @ColumnA, @ColumnB WHILE @@FETCH_STATUS = 0 BEGIN INSERT INTO TableA( ColumnA ) VALUES ( @ColumnA ) INSERT INTO TableB( ColumnB ) VALUES ( @ColumnB ) FETCH NEXT FROM CurName INTO @ColumnA, @ColumnB END CLOSE CurName DEALLOCATE CurName ``` Another iterative approach is a `WHILE` loop, but for it to work you need a unique key column in the table. For example ``` DECLARE @id INT SELECT TOP 1 @id = id FROM dbo.Orders ORDER BY ID WHILE @id IS NOT NULL BEGIN PRINT @id SELECT TOP 1 @id = id FROM dbo.Orders WHERE ID > @id ORDER BY ID IF @@ROWCOUNT = 0 BREAK END ``` Note that you should avoid cursors whenever there is a set-based, non-iterative way of doing the same job; but of course there are situations where you cannot avoid them.
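The keyset `WHILE` pattern from the second example is portable to any database: fetch the lowest key, then repeatedly fetch the next key greater than the last one seen. A minimal sketch of the same loop in Python's `sqlite3`, with a hypothetical table and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders VALUES (?)", [(10,), (20,), (30,)])

# Keyset iteration: start at the lowest id, then repeatedly fetch the next
# id greater than the last one seen -- the same idea as the T-SQL WHILE loop.
seen = []
row = conn.execute("SELECT id FROM orders ORDER BY id LIMIT 1").fetchone()
while row is not None:
    seen.append(row[0])
    row = conn.execute(
        "SELECT id FROM orders WHERE id > ? ORDER BY id LIMIT 1", (row[0],)
    ).fetchone()
print(seen)  # [10, 20, 30]
```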
Looping through SELECT result set in SQL
[ "sql", "sql-server", "sql-server-2008", "t-sql" ]
I have written a plugin that fires when a new record is created (post). This plugin simply runs an SSIS package that moves the newly created record's details into a SQL table. I'm having to deploy this plugin in non-isolation mode. What I want to know is: is there a better solution to live-sync CRM records with a SQL table? I am working on CRM 2011 RU 18 On premise. ``` string connectionString = "User Id=username; Password=password; Initial Catalog=db; Data Source=myServer;"; using (SqlConnection connection = new SqlConnection(connectionString)) { SqlCommand cmd = new SqlCommand("INSERT INTO testPerson (Firstname, Lastname, Emailaddress, CrmGuid) VALUES (@FN, @LN, @Email, @CrmGuid)"); cmd.CommandType = CommandType.Text; cmd.Connection = connection; cmd.Parameters.AddWithValue("@FN", "James"); cmd.Parameters.AddWithValue("@LN", "Bolton"); cmd.Parameters.AddWithValue("@Email", "1152@now.com"); cmd.Parameters.AddWithValue("@CrmGuid", "C45843ED-45BC-E411-9450-00155D1467C5"); connection.Open(); cmd.ExecuteNonQuery(); } //////// Direct DB Connection //////////// <--- END 1 } catch (Exception ex) { tracingService.Trace("CRM to mWeb Sync Plugin: {0}", ex.ToString()); throw; } ```
If you want to run this in sandbox mode you will not be able to directly target SQL (or run an SSIS package). The correct solution would be to target a web service. Create a standard .NET web service (WebApi, ASP.NET old school, WCF, etc. - pick your poison) to which you can post the fields. That web service should take the fields and write them to SQL. This is basic .NET - you can use a `SqlCommand`, EntityFramework, etc. Deploy this web service to IIS as you would any web service. You can get a full tutorial on creating an ASP.NET WebApi Web Service here: <http://www.codeproject.com/Articles/659131/Understanding-and-Implementing-ASPNET-WebAPI> Now, your plugin should call the web service you have created using `HttpClient`. So long as you have a FQDN (i.e., service.mydomain.com and not something such as localhost or 34.22.643.23) you will be able to run your plugin in sandbox mode (i.e., in isolation). You can secure your service using any IIS security feature or implement authorization in your WebAPI code.
As a possible alternative I see that plugin can use ADO Connection/Command e.t.c. to push data directly to external DB.
How to copy newly created CRM contact record to SQL table
[ "sql", "plugins", "dynamics-crm-2011", "synchronization" ]
I am trying to delete the last three characters of the postcode, but the issue is that postcodes can be entered with varying lengths, and the field type is `varchar(max)`. Any idea how I can delete the last three characters of the text value? Currently when I try running the code I get the following error > Invalid length parameter passed to the LEFT or SUBSTRING function Code: ``` SELECT c.[postcode], left ( ltrim(rTrim(c.[postcode])) ,len(ltrim(rTrim(c.[postcode])))-4) as ode FROM [testing].[dbo].[canidateinfo] as c ```
This will exclude the last 3 characters: ``` SELECT c.[postcode], ltrim(substring( c.postcode , -2, len(c.postcode))) code FROM [testing].[dbo].[canidateinfo] as c ``` Example: ``` DECLARE @t table(postcode varchar(20)) INSERT @t values('1234567'),('32'),('abcde'),(' aa123'),(' aa ') SELECT postcode, ltrim(substring( postcode , -2, len(postcode))) as code FROM @t ``` Result: ``` postcode code 1234567 1234 32 <blank> abcde ab aa123 aa aa <blank> ```
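The original error happens because `LEN(...) - 4` goes negative for short values. An alternative defensive variant — sketched here in Python's `sqlite3` with hypothetical data, not the answer's negative-start trick — is to clamp the length argument at zero; in T-SQL the same guard can be written with a `CASE` expression around `LEN`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (postcode TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("1234567",), ("32",), ("abcde",)])

# max(length - 3, 0) keeps the length argument non-negative, which is the
# condition that triggered the original "Invalid length parameter" error.
rows = [r[0] for r in conn.execute(
    "SELECT substr(postcode, 1, max(length(postcode) - 3, 0)) "
    "FROM t ORDER BY rowid")]
print(rows)  # ['1234', '', 'ab']
```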
Would something like this do the trick: ``` SELECT LEFT(PostCode, CHARINDEX(' ', Postcode)+1) PostCodeSector FROM testing.dbo.canidateinfo ``` \*\* Update \*\* ``` CREATE TABLE #postcodeTest (postcode varchar(max)) insert into #postcodeTest Values ('AB3 4FY'), ('AB34FY'), ('M504FY'), ('M50 4FY') SELECT DISTINCT CASE (isnumeric(SUBSTRING([Postcode] , 2 , 1))) WHEN 1 THEN LEFT ([Postcode] , 2) ELSE LEFT ([Postcode] , 3) END PcodeArea FROM #postcodeTest ```
Delete last three characters of text in SQL Server 2008 R2
[ "sql", "sql-server", "sql-server-2008-r2" ]
SQL is having an issue with the WHERE clause of this script: ``` SELECT ITEM_ID, ITEM_PRICE, DISCOUNT_AMOUNT, QUANTITY, (ITEM_PRICE*QUANTITY) AS price_total, (DISCOUNT_AMOUNT*QUANTITY) AS discount_total, ((ITEM_PRICE-DISCOUNT_AMOUNT)*QUANTITY) AS item_total FROM ORDER_ITEMS WHERE item_total > 500 ORDER BY item_total; ``` I am receiving this error: ``` Error starting at line : 1 in command - SELECT ITEM_ID, ITEM_PRICE, DISCOUNT_AMOUNT, QUANTITY, (ITEM_PRICE*QUANTITY) AS price_total, (DISCOUNT_AMOUNT*QUANTITY) AS discount_total, ((ITEM_PRICE-DISCOUNT_AMOUNT)*QUANTITY) AS item_total FROM ORDER_ITEMS WHERE item_total > 500 ORDER BY item_total DESC; Error at Command Line : 7 Column : 7 Error report - SQL Error: ORA-00904: "ITEM_TOTAL": invalid identifier 00904. 00000 - "%s: invalid identifier" *Cause: *Action: ``` Why has it no issue with price\_total nor discount\_total, but is reporting item\_total as invalid? I am trying to first select only the items which have a total greater than 500 when the discount amount is subtracted and it is multiplied by the quantity. Then, I need to sort the results in descending order by item\_total.
> **An alias can be used in a query select list to give a column a different name. You can use the alias in GROUP BY, ORDER BY, or HAVING > clauses to refer to the column.** > > **Standard SQL disallows references to column aliases in a WHERE clause. This restriction is imposed because when the WHERE clause is > evaluated, the column value may not yet have been determined.** So, the following query is illegal: ``` SQL> SELECT empno AS employee, deptno AS department, sal AS salary 2 FROM emp 3 WHERE employee = 7369; WHERE employee = 7369 * ERROR at line 3: ORA-00904: "EMPLOYEE": invalid identifier SQL> ``` The column alias is allowed in: * **GROUP BY** * **ORDER BY** * **HAVING** You could refer to the column alias in WHERE clause in the following cases: 1. **Sub-query** 2. **Common Table Expression(CTE)** For example, ``` SQL> SELECT * FROM 2 ( 3 SELECT empno AS employee, deptno AS department, sal AS salary 4 FROM emp 5 ) 6 WHERE employee = 7369; EMPLOYEE DEPARTMENT SALARY ---------- ---------- ---------- 7369 20 800 SQL> WITH DATA AS( 2 SELECT empno AS employee, deptno AS department, sal AS salary 3 FROM emp 4 ) 5 SELECT * FROM DATA 6 WHERE employee = 7369; EMPLOYEE DEPARTMENT SALARY ---------- ---------- ---------- 7369 20 800 SQL> ```
You cannot use a column alias in the `WHERE` clause of the same query. Reason: the `WHERE` clause is evaluated before the select list, so at that point the name `item_total` does not exist — it is not a column of `ORDER_ITEMS`, only a label for an expression in the output. Alternative: if you want to filter on the alias, wrap the query in a subquery. The performance is not ideal, but it is one way to do it: ``` SELECT * FROM (SELECT ITEM_ID, ITEM_PRICE, DISCOUNT_AMOUNT, QUANTITY, (ITEM_PRICE*QUANTITY) AS price_total, (DISCOUNT_AMOUNT*QUANTITY) AS discount_total, ((ITEM_PRICE-DISCOUNT_AMOUNT)*QUANTITY) AS item_total FROM ORDER_ITEMS) tbl WHERE tbl.item_total > 500 ORDER BY tbl.item_total; ```
SQL not recognizing column alias in where clause
[ "sql", "oracle", "column-alias" ]
I have an existing query to select some payments. I want to filter out any payments that are for clients that have an active alert in another table called ClientAlert. So I figured I would do a left join and check if the ClientAlertId is null. ``` select * from payments p left join client c on c.clientid = p.clientid left join ClientAlert ca on ca.CRMId = c.CRMId and ca.ClientAlertSubjectId = 1 and ca.IsActive = 1 and (ca.ExpiryDate is null or ca.ExpiryDate > GetDate()) where ca.clientalertid is null and p.PaymentStatusId = 2 and p.PaymentDate <= GetDate() and p.PaymentCategoryId = 1 ``` This seems to work, I think, but I have two questions: 1. Could there ever be a scenario that would cause multiple payments to be returned instead of one by adding this join? 2. When I specified the following in the where clause instead of the join, it did not give the same results, and I don't understand why: and ca.ClientAlertSubjectId = 1 and ca.IsActive = 1 and (ca.ExpiryDate is null or ExpiryDate > GetDate()) I thought having that criteria in the where clause would be equivalent to having it in the join.
1. If they can have multiple alerts, theoretically. However since you are excluding payments with alerts, this should not be a problem. If you were including them it could be. If this was a problem, you should use a "not in" subquery instead of left outer join since that can cause duplicate records if it's not 1:1. 2. Having criteria in the where clause excludes the entire row if it doesn't match the criteria. Having it in the join clause means the joined record is not shown but the "parent" is.
1. You could get multiples per payment record if it links to more than one Client record. Based on the WHERE clause though, I don't see how multiple ClientAlert records could cause duplication. 2. `LEFT JOIN` records return NULLs across all their columns when there is no match. Adding `ca.ClientAlertSubjectId = 1 and ca.IsActive = 1` to the WHERE clause basically forces that join to behave like an INNER JOIN because it would HAVE to find a matching record, but I'm guessing it would never return data because ClientAlertId is a non-nullable column. So basically you created a query where you need a NULL row (indicating there are no alerts), but the row must contain data.
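The ON-vs-WHERE distinction the answers describe can be seen in miniature with Python's `sqlite3` and two hypothetical tables: the same predicate keeps unmatched rows when it lives in the `ON` clause, and silently turns the join into an inner join when it moves to `WHERE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payments (id INTEGER, clientid INTEGER);
CREATE TABLE alerts (clientid INTEGER, active INTEGER);
INSERT INTO payments VALUES (1, 10), (2, 20);
INSERT INTO alerts   VALUES (10, 1);
""")

# Filter in the ON clause: unmatched payments survive with NULL alert columns.
in_on = conn.execute("""
    SELECT p.id, a.clientid
    FROM payments p
    LEFT JOIN alerts a ON a.clientid = p.clientid AND a.active = 1
    ORDER BY p.id
""").fetchall()

# Same filter in the WHERE clause: the NULL row fails "a.active = 1",
# so the left join effectively becomes an INNER JOIN.
in_where = conn.execute("""
    SELECT p.id, a.clientid
    FROM payments p
    LEFT JOIN alerts a ON a.clientid = p.clientid
    WHERE a.active = 1
    ORDER BY p.id
""").fetchall()

print(in_on)     # [(1, 10), (2, None)]
print(in_where)  # [(1, 10)]
```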
sql left join criteria in join vs where clause
[ "sql", "sql-server" ]
So I have these three tables: ``` WORKERS( WNO, WNAME, ZIP, HDATE ) CUSTOMERS( CNO, CNAME, STREET, ZIP, PHONE ) ORDERS( ONO, CNO, WNO, RECEIVED, SHIPPED ) ``` I want to find the workers who have **ONLY** made sales to customers who live in the same zip code as the workers. So far I have this code: ``` SELECT e.wno FROM ORDERS o, CUSTOMERS c, WORKERS e WHERE o.cno = c.cno AND o.wno = e.wno AND c.zip = e.zip ``` But this doesn't give the required answer. Can anyone help me with this? ``` ONO CNO WNO RECEIVED SHIPPED CNO CNAME STREET ZIP PHONE WNO WNAME ZIP HDATE 1021 1111 1000 12-JAN-95 15-JAN-95 1111 Charles 123 Main St. 67226 316-636-5555 1000 Jones 67226 12-DEC-95 1020 1111 1000 10-DEC-94 12-DEC-94 1111 Charles 123 Main St. 67226 316-636-5555 1000 Jones 67226 12-DEC-95 1022 2222 1001 13-FEB-95 20-FEB-95 2222 Bertram 237 Ash Avenue 67226 316-689-5555 1001 Smith 60606 01-JAN-92 1026 2222 1001 03-JAN-96 08-JAN-96 2222 Bertram 237 Ash Avenue 67226 316-689-5555 1001 Smith 60606 01-JAN-92 1023 3333 1000 20-JUN-97 3333 Barbara 111 Inwood St. 60606 316-111-1234 1000 Jones 67226 12-DEC-95 1024 4444 1002 13-APR-95 17-APR-95 4444 Jorge 131 State St. 47907 765-111-1884 1002 Brown 50302 01-SEP-94 1025 4444 1000 25-APR-95 30-APR-95 4444 Jorge 131 State St. 47907 765-111-1884 1000 Jones 67226 12-DEC-95 ```
I think your original query will work with the addition of just a `NOT EXISTS` clause: ``` SELECT e.wno FROM workers e, orders o, customers c WHERE e.wno = o.wno AND o.cno = c.cno AND c.zip = e.zip AND NOT EXISTS ( SELECT 1 FROM orders o1, customers c1 WHERE o1.wno = e.wno AND o1.cno = c1.cno AND c1.zip != e.zip ); ``` That is, get all of the workers who have delivered to customers in their own zip code but who have not delivered outside their zip code. You may wish to use ANSI joins as well (I have to admit that I'm not as comfortable with this syntax so I'm not 100% certain that the joins are correct): ``` SELECT e.wno FROM workers e INNER JOIN orders o ON e.wno = o.wno INNER JOIN customers c ON o.cno = c.cno WHERE c.zip = e.zip AND NOT EXISTS ( SELECT 1 FROM orders o1 INNER JOIN customers c1 ON o1.cno = c1.cno WHERE o1.wno = e.wno AND c1.zip != e.zip ); ```
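To see the `NOT EXISTS` shape in isolation, here is a minimal runnable sketch using Python's `sqlite3` with a tiny hypothetical dataset (w1 sells only in its own zip, w2 also sells outside it); a `DISTINCT` is added since a qualifying worker may have several same-zip orders:

```python
import sqlite3

# Tiny hypothetical dataset: w1 only sells in its own zip, w2 does not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE workers   (wno TEXT, zip TEXT);
CREATE TABLE customers (cno TEXT, zip TEXT);
CREATE TABLE orders    (wno TEXT, cno TEXT);
INSERT INTO workers   VALUES ('w1','10'), ('w2','10');
INSERT INTO customers VALUES ('c1','10'), ('c2','20');
INSERT INTO orders    VALUES ('w1','c1'), ('w2','c1'), ('w2','c2');
""")

# Same-zip sale required, plus no sale outside the worker's own zip.
rows = [r[0] for r in conn.execute("""
    SELECT DISTINCT e.wno
    FROM workers e
    JOIN orders o    ON o.wno = e.wno
    JOIN customers c ON o.cno = c.cno
    WHERE c.zip = e.zip
      AND NOT EXISTS (
        SELECT 1
        FROM orders o1
        JOIN customers c1 ON o1.cno = c1.cno
        WHERE o1.wno = e.wno AND c1.zip <> e.zip
      )
""")]
print(rows)  # ['w1'] -- w2 is excluded by its out-of-zip sale to c2
```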
Try this: ``` SELECT e.wno FROM ORDERS o join CUSTOMERS c on o.cno = c.cno join WORKERS e on o.wno = e.wno Group by e.wno,c.zip having count(*)=1 ```
Oracle SQL query about only extracting certain items
[ "sql", "oracle", "oracle11g" ]
I want to rename a column in a table, but the problem is that I then have to manually modify the `column name` wherever it appears in `Triggers` or `SP's`. Is there a better way of doing it? To `rename` a column I am using this ``` sp_RENAME 'Tablename.old_Column', 'new_column' , 'COLUMN'; ``` Similarly, how can I do it for `triggers` or `SP's` without opening each script?
Well, there are a bunch of 3rd party tools that promise this type of "safe rename", some free and some not: * ApexSQL has a [free tool](http://solutioncenter.apexsql.com/rename-column-without-breaking-sql-database/) for that, as [MWillemse](https://stackoverflow.com/users/411860/mwillemse) wrote in his answer, * RedGate have a commercial tool called [SQLPrompt](https://documentation.red-gate.com/display/SP7/Renaming+objects) that also has a safe renaming feature, however it is far from being free. * Microsoft have a visual studio add-in called [SQL Server Data Tools](https://msdn.microsoft.com/en-us/data/tools.aspx?f=255&MSPPError=-2147217396) (or SSDT for short), as [Dan Guzman](https://stackoverflow.com/users/3711162/dan-guzman) wrote in his comment. I have to say I've never tried any of these specific tools for this specific task, but I do have some experience with SSDT and some of RedGate's products and I consider them to be very good tools. I know nothing about ApexSQL. Another option is to try and write the sql script yourself, however there are a couple of things to take into consideration before you start: * Can your table be accessed directly from outside the sql server? I mean, is it possible that some software is executing sql statements directly on that table? If so, you might break it when you rename that column, and no sql tool will help in that situation. * Are your sql scripting skills really that good? I consider myself to be fairly experienced with sql server, but I think writing a script like that is beyond my skills. Not that it's impossible for me, but it would probably take too much time and effort for something I can get for free. Should you decide to write it yourself, there are a few articles that might help you in that task: First, Microsoft's official documentation of [sys.sql\_expression\_dependencies](https://msdn.microsoft.com/en-us/library/bb677315.aspx).
Second, an article called [Different Ways to Find SQL Server Object Dependencies](https://www.mssqltips.com/sqlservertip/2999/different-ways-to-find-sql-server-object-dependencies/) written by a DBA with 13 years of experience, and last but not least, [a related question](https://dba.stackexchange.com/questions/77813/finding-dependencies-on-a-specific-column-modern-way-without-using-sysdepends) on StackExchange's Database Administrators site. You could, of course, go with the safe way Gordon Linoff suggested in his comment, or use synonyms as destination-data suggested in his answer, but then you would still have to modify all of the column dependencies manually, and from what I understand, that is what you want to avoid.
1. Renaming the Table column 2. Deleting the Table column 3. Alter Table Keys Best way use Database Projects in Visual Studio. Refer this links [link 1](https://msdn.microsoft.com/library/xee70aty%28v=vs.100%29.aspx) [link 2](https://www.mssqltips.com/sqlservertutorial/3001/creating-a-new-database-project/)
Renaming a column without breaking the scripts and stored procedures
[ "sql", "sql-server", "triggers", "rename" ]
I have a `person` table where the `name` column contains names, some in the format "first last" and some in the format "first". My query ``` SELECT name, SUBSTRING(name FROM 1 FOR POSITION(' ' IN name) ) AS first_name FROM person ``` creates a new column of first names, but it doesn't work for names which only have a first name and no blank space at all. I know I need a `CASE` statement with something like `0 = POSITION(' ' IN name)`, but I keep running into syntax errors and would appreciate some pointers.
Just use [**`split_part()`**](http://www.postgresql.org/docs/current/interactive/functions-string.html): ``` SELECT split_part(name, ' ', 1) AS first_name , split_part(name, ' ', 2) AS last_name FROM person; ``` [**SQL Fiddle.**](http://www.sqlfiddle.com/#!12/080a8/2) Related: * [Split comma separated column data into additional columns](https://stackoverflow.com/questions/8584967/split-comma-separated-column-data-into-additional-columns/8612456#8612456)
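For comparison, the equivalent client-side split in plain Python: `str.partition` mirrors `split_part`'s before/after-the-first-space behavior for names with at most one space (a multi-word surname lands whole in the second part here, whereas `split_part(name, ' ', 2)` would return only the second word):

```python
# Client-side analogue of split_part(name, ' ', 1) / split_part(name, ' ', 2)
# for names containing at most one space.
def split_name(name):
    first, _, last = name.partition(' ')
    return first, last

print(split_name('Ada Lovelace'))  # ('Ada', 'Lovelace')
print(split_name('Cher'))          # ('Cher', '') -- no space, empty last name
```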
1. ``` select substring(name from '^([\w\-]+)') first_name, substring(name from '\s(\w+)$') last_name from person ``` 2. ``` select (regexp_split_to_array(name, ' '))[1] first_name , (regexp_split_to_array(name, ' '))[2] last_name from person ```
Seperate first and last names from single column
[ "sql", "postgresql", "substring", "delimiter" ]
I am trying to find the best way to compare between rows in the same table. I wrote a self join query and was able to pull out the ones where the rates are different. Now I need to find out if the rates increased or decreased. If the rates increased, it's an issue. If it decreased, then there is no issue. My data looks like this ``` ID DATE RATE 1010 02/02/2014 7.4 1010 03/02/2014 7.4 1010 04/02/2014 4.9 2010 02/02/2014 4.9 2010 03/02/2014 7.4 2010 04/02/2014 7.4 ``` So in my table, I should be able to code ID 1010 as 0 (no issue) and 2010 as 1 (issue) because the rate went up from feb to apr.
You can achieve this with a select..case ``` select case when a.rate > b.rate then 'issue' else 'no issue' end from yourTable a join yourTable b using(id) where a.date > b.date ``` See [documentation for CASE expressions](http://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm).
Sounds like a case for `LAG()`: ``` with sample_data as (select 1010 id, to_date('02/02/2014', 'mm/dd/yyyy') dt, 7.4 rate from dual union all select 1010 id, to_date('03/02/2014', 'mm/dd/yyyy') dt, 7.4 rate from dual union all select 1010 id, to_date('04/02/2014', 'mm/dd/yyyy') dt, 4.9 rate from dual union all select 2010 id, to_date('02/02/2014', 'mm/dd/yyyy') dt, 4.9 rate from dual union all select 2010 id, to_date('03/02/2014', 'mm/dd/yyyy') dt, 7.4 rate from dual union all select 2010 id, to_date('04/02/2014', 'mm/dd/yyyy') dt, 7.4 rate from dual) select id, dt, rate, case when rate > lag(rate, 1, rate) over (partition by id order by dt) then 1 else 0 end issue from sample_data; ID DT RATE ISSUE ---------- ---------- ---------- ---------- 1010 02/02/2014 7.4 0 1010 03/02/2014 7.4 0 1010 04/02/2014 4.9 0 2010 02/02/2014 4.9 0 2010 03/02/2014 7.4 1 2010 04/02/2014 7.4 0 ``` You may want to throw an outer query around that to only display rows that have `issue = 1`, or perhaps an aggregate query to retrieve id's that have at least one row that has `issue = 1`, depending on your actual requirements. Hopefully the above is enough for you to work out how to get what you're after.
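The `LAG(rate, 1, rate)` idiom — defaulting the first row's "previous" value to its own rate so that row can never be flagged — is portable. A runnable sketch of the same query in Python's `sqlite3` (window functions need SQLite 3.25+, bundled with Python 3.8+), using the sample data:

```python
import sqlite3  # lag() needs SQLite 3.25+ under the hood

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rates (id INTEGER, dt TEXT, rate REAL)")
conn.executemany("INSERT INTO rates VALUES (?,?,?)", [
    (1010, '2014-02-02', 7.4), (1010, '2014-03-02', 7.4), (1010, '2014-04-02', 4.9),
    (2010, '2014-02-02', 4.9), (2010, '2014-03-02', 7.4), (2010, '2014-04-02', 7.4),
])

# Flag a row when its rate exceeds the previous rate for the same id.
rows = conn.execute("""
    SELECT id, dt,
           CASE WHEN rate > lag(rate, 1, rate) OVER (PARTITION BY id ORDER BY dt)
                THEN 1 ELSE 0 END AS issue
    FROM rates
    ORDER BY id, dt
""").fetchall()
issues = [(r[0], r[1]) for r in rows if r[2] == 1]
print(issues)  # [(2010, '2014-03-02')] -- only 2010's rate went up
```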
Comparing between rows in same table in Oracle
[ "sql", "oracle", "compare" ]
I have some user-created stored procedures and functions in this legacy database. How do I list all procedures and functions of one specific schema, let's say, SCHEMA1, for instance.
Schema and user are somewhat synonymous in Oracle. If you want to list all the procedures and functions in a specific schema, then query: 1. **user\_objects**: if you are logged in as the user whose objects you want to list. 2. **all\_objects**: you need to filter on **OWNER**. For example, ``` SELECT * FROM user_objects WHERE object_type IN('FUNCTION', 'PROCEDURE'); ``` Or, ``` SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE') AND OWNER = 'your_schema_name'; ``` Make sure you pass the required values in upper case. **UPDATE** From the documentation here <http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_2025.htm>, > ALL\_PROCEDURES > > ALL\_PROCEDURES lists all functions and procedures, along with > associated properties. For example, ALL\_PROCEDURES indicates whether > or not a function is pipelined, parallel enabled or an aggregate > function. If a function is pipelined or an aggregate function, the > associated implementation type (if any) is also identified. So, you could also use the **user\_procedures** view as per the documentation. **NOTE** Please note a few things regarding `*_procedures`. You need to take care whether the procedure is standalone or wrapped within a package. I have written an article based on this here: [Unable to find procedure in DBA\_PROCEDURES view](https://stackoverflow.com/questions/28343972/unable-to-find-procedure-in-dba-procedures-view)
If you want to look up the list of all procedures then - ``` SELECT * FROM ALL_PROCEDURES WHERE OWNER = 'SCHEMA1'; ``` This of course assumes that you have permissions to see the procedures/functions/packages of SCHEMA1. If however you have the DBA privilege, then you can also do - ``` SELECT * from DBA_PROCEDURES WHERE OWNER = 'SCHEMA1'; ``` If you want the code inside the procedures then look up ALL\_SOURCE or DBA\_SOURCE.
How to get a list of all user-created stored procedures and functions in a specific schema of Oracle 9i?
[ "sql", "database", "oracle" ]
I have a database that stores search criteria entered by users and want to analyse how often certain words have been used. The "problem" is that many searches have similar meaning but have one or more words that accompany them. Example (in this example "foo" is the interesting word): ``` bar foo 2015 show me foo germany foo ``` I would like to determine that `foo` was used three times. I need to do this programmatically, which means SQL commands would be the ideal solution. The words used vary based on user behaviour. Because of this I do not know in advance which words get used, so the logic needs to determine this on its own.
Expanding on this [This Answer](https://stackoverflow.com/questions/11018076/splitting-delimited-values-in-a-sql-column-into-multiple-rows) (Credit to Aaron Bertrand for Function), you can do this by creating a Split Function and using a `Cross Apply` to it with a `Group By`: ``` CREATE FUNCTION dbo.SplitStrings ( @List NVARCHAR(MAX), @Delimiter NVARCHAR(255) ) RETURNS TABLE AS RETURN (SELECT Number = ROW_NUMBER() OVER (ORDER BY Number), Item FROM (SELECT Number, Item = LTRIM(RTRIM(SUBSTRING(@List, Number, CHARINDEX(@Delimiter, @List + @Delimiter, Number) - Number))) FROM (SELECT ROW_NUMBER() OVER (ORDER BY s1.[object_id]) FROM sys.all_objects AS s1 CROSS APPLY sys.all_objects) AS n(Number) WHERE Number <= CONVERT(INT, LEN(@List)) AND SUBSTRING(@Delimiter + @List, Number, 1) = @Delimiter ) AS y); GO ``` Sample Data: ``` Create Table SplitTest ( A Varchar (100) ) Insert SplitTest Values ('bar'), ('foo 2015'), ('show me foo'), ('germany foo') ``` Query: ``` Select f.Item, Count(*) Count From SplitTest As s Cross Apply dbo.SplitStrings(s.A, ' ') As F Group By F.Item Order By Count Desc ``` Results: ``` Item Count foo 3 germany 1 me 1 show 1 2015 1 bar 1 ```
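If the analysis can happen outside the database, the same split-and-count tally is a one-liner client-side; a small sketch with Python's `collections.Counter` over the example searches:

```python
from collections import Counter

# Split each stored search string on whitespace and tally the words --
# the client-side equivalent of SplitStrings + GROUP BY.
searches = ['bar', 'foo 2015', 'show me foo', 'germany foo']
counts = Counter(word for s in searches for word in s.split())
print(counts['foo'])          # 3
print(counts.most_common(1))  # [('foo', 3)]
```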
I think this is an ideal problem for the full-text search feature of SQL Server. Solving this kind of problem is exactly what full-text search does. <https://msdn.microsoft.com/en-us/library/ms142571.aspx> To quote from that page: > Full-Text Search Queries > > After columns have been added to a full-text index, users and > applications can run full-text queries on the text in the columns. > These queries can search for any of the following: > > * One or more specific words or phrases (simple term) > * A word or a phrase where the words begin with specified text (prefix term) > * Inflectional forms of a specific word (generation term) > * A word or phrase close to another word or phrase (proximity term) > * Synonymous forms of a specific word (thesaurus) > * Words or phrases using weighted values (weighted term) Why re-invent when the feature already exists in the product?
Full text search for MS SQL
[ "sql", "sql-server", "full-text-search" ]
I need some help with my query. I need to select the data that is not selected in another query. So what I mean is: Table 1 has 50 questions, Table 2 has 32 selected, so there are 18 not used. I only need to select those 18 unused questions. Hope you can help me! Edit: Table with all questions: Id - InputType - InputName - InputLabel Table with the selected questions: Id - required - position Relations: Id with Id
You can use `LEFT JOIN`: ``` SELECT T1.* FROM Table1 T1 LEFT JOIN Table2 T2 ON T1.Id=T2.Id WHERE T2.required IS NULL ``` **Explanation:** When we join those tables with `LEFT JOIN`, it will select all records from Table1 and corresponding records from Table2 (if any). And we are excluding the questions which are already in Table2. Consider the table data: ``` Table1 Table2 -------------------------------------------------- id Question id Question 1 Question1 1 Question1 2 Question2 3 Question3 3 Question3 5 Question5 4 Question4 5 Question5 6 Question6 ``` Then this query will result: ``` id Question ----------------- 2 Question2 4 Question4 6 Question6 ```
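A runnable miniature of this anti-join, using Python's `sqlite3` with the sample ids. One detail worth noting: the `WHERE` test here uses the join key `t2.id` rather than another column, which avoids surprises if that other column happens to be nullable in matching rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, question TEXT);
CREATE TABLE table2 (id INTEGER);
INSERT INTO table1 VALUES (1,'Question1'),(2,'Question2'),(3,'Question3'),
                          (4,'Question4'),(5,'Question5'),(6,'Question6');
INSERT INTO table2 VALUES (1),(3),(5);
""")

# Anti-join: keep Table1 rows that have no partner row in Table2.
rows = conn.execute("""
    SELECT t1.id
    FROM table1 t1
    LEFT JOIN table2 t2 ON t1.id = t2.id
    WHERE t2.id IS NULL
    ORDER BY t1.id
""").fetchall()
print([r[0] for r in rows])  # [2, 4, 6]
```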
``` SELECT aq.* FROM all_questions aq LEFT JOIN selected_questions sq ON sq.Id = aq.Id WHERE sq.Id IS NULL ```
Mysql Get rows that are not used yet
[ "mysql", "sql" ]
I have a long table like the following. The table gets two similar rows added when the id changes. E.g. in the following table, when ID changes from 1 to 2, a duplicate record is added. All I need is a SELECT query that skips this and all other duplicate records whenever the ID changes. ``` # | name| id --+-----+--- 1 | abc | 1 2 | abc | 1 3 | abc | 1 4 | abc | 1 5 | abc | 1 5 | abc | 2 6 | abc | 2 7 | abc | 2 8 | abc | 2 9 | abc | 2 ``` and so on
So I achieved it by using the following query in SQL server. ``` select #, name, id from table group by #, name, id having count(*) > 0 ```
You could use `NOT EXISTS` to eliminate the duplicates: ``` SELECT * FROM yourtable AS T WHERE NOT EXISTS ( SELECT 1 FROM yourtable AS T2 WHERE T.[#] = T2.[#] AND T2.ID > T.ID ); ``` This will return: ``` # name ID ------------------ . ... . 4 abc 1 5 abc 2 6 abc 2 . ... . ``` *... (Some irrelevant rows have been removed from the start and the end)* If you wanted the first record to be retained, rather than the last, then just change the condition `T2.ID > T.ID` to `T2.ID < T.ID`.
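The same `NOT EXISTS` query can be tried directly with Python's `sqlite3` and the sample rows (the `#` column is renamed `num` here, since `#` is awkward as an identifier):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (num INTEGER, name TEXT, id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?,?)",
    [(1,'abc',1),(2,'abc',1),(3,'abc',1),(4,'abc',1),
     (5,'abc',1),(5,'abc',2),(6,'abc',2),(7,'abc',2),(8,'abc',2),(9,'abc',2)])

# For each num, keep only the row with the highest id, dropping the
# duplicate row carried over from the previous id block.
rows = conn.execute("""
    SELECT num, id
    FROM t
    WHERE NOT EXISTS (SELECT 1 FROM t t2 WHERE t2.num = t.num AND t2.id > t.id)
    ORDER BY num
""").fetchall()
print(rows[3:6])  # [(4, 1), (5, 2), (6, 2)] -- the duplicate (5, 1) is gone
print(len(rows))  # 9
```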
Query to skip first row after id changes in SQL Server
[ "sql", "sql-server", "sql-server-2008" ]
According to the instructions [here](https://stackoverflow.com/a/7945958/1500111) I have created two functions that use `EXECUTE FORMAT` and return the same table of `(int,smallint)`. Sample definitions: ``` CREATE OR REPLACE FUNCTION function1(IN _tbl regclass, IN _tbl2 regclass, IN field1 integer) RETURNS TABLE(id integer, dist smallint) CREATE OR REPLACE FUNCTION function2(IN _tbl regclass, IN _tbl2 regclass, IN field1 integer) RETURNS TABLE(id integer, dist smallint) ``` Both functions return the exact same number of rows. Sample result (**will always be ordered by dist**): ``` (49,0) (206022,3) (206041,3) (92233,4) ``` Is there a way to compare values of the second field between the two functions for the same rows, to ensure that both results are the same: For example: ``` SELECT function1('tblp1','tblp2',49),function2('tblp1_v2','tblp2_v2',49) ``` Returns something like: ``` (49,0) (49,0) (206022,3) (206022,3) (206041,3) (206041,3) (92233,4) (133,4) ``` Although I am not expecting identical results (each function is a **topK** query and I have ties which are broken arbitrarily / with some optimizations in the second function for faster performance) I can ensure that both functions return correct results, if for each row the second numbers in the results are the same. In the example above, I can ensure I get correct results, because: ``` 1st row 0 = 0, 2nd row 3 = 3, 3rd row 3 = 3, 4th row 4 = 4 ``` despite the fact that for the 4th row, `92233!=133` Is there a way to get only the 2nd field of each function result, to batch compare them e.g. with something like: ``` SELECT COUNT(*) FROM (SELECT function1('tblp1','tblp2',49).field2, function2('tblp1_v2','tblp2_v2',49).field2 ) n2 WHERE function1('tblp1','tblp2',49).field2 != function2('tblp1_v2','tblp2_v2',49).field2; ``` I am using PostgreSQL 9.3.
> Is there a way to get only the 2nd field of each function result, to batch compare them? All of the following answers assume that rows are returned in ***matching*** order. ## Postgres 9.3 With the quirky feature of exploding rows from SRF functions returning the *same* number of rows in parallel: ``` SELECT count(*) AS mismatches FROM ( SELECT function1('tblp1','tblp2',49) AS f1 , function2('tblp1_v2','tblp2_v2',49) AS f2 ) sub WHERE (f1).dist <> (f2).dist; -- note the parentheses! ``` The parentheses around the row type are necessary to disambiguate from a possible table reference. [Details in the manual here.](http://www.postgresql.org/docs/current/interactive/sql-expressions.html#FIELD-SELECTION) This defaults to a Cartesian product of rows if the number of returned rows is not the same (which would break it completely for you). ## Postgres 9.4 ### `WITH ORDINALITY` to generate row numbers on the fly You can use `WITH ORDINALITY` to generate a row number on the fly and don't need to depend on pairing the result of SRF functions in the `SELECT` list: ``` SELECT count(*) AS mismatches FROM function1('tblp1','tblp2',49) WITH ORDINALITY AS f1(id,dist,rn) FULL JOIN function2('tblp1_v2','tblp2_v2',49) WITH ORDINALITY AS f2(id,dist,rn) USING (rn) WHERE f1.dist IS DISTINCT FROM f2.dist; ``` This works for the same number of rows from each function as well as differing numbers (which would be counted as mismatches).
Related: * [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450) ### [`ROWS FROM`](http://www.postgresql.org/docs/current/interactive/queries-table-expressions.html#QUERIES-TABLEFUNCTIONS) to join sets row-by-row ``` SELECT count(*) AS mismatches FROM ROWS FROM (function1('tblp1','tblp2',49) , function2('tblp1_v2','tblp2_v2',49)) t(id1, dist1, id2, dist2) WHERE t.dist1 IS DISTINCT FROM t.dist2; ``` Related answer: * [Is it possible to answer queries on a view before fully materializing the view?](https://stackoverflow.com/questions/28730338/is-it-possible-to-answer-queries-on-a-view-before-fully-materializing-the-view/28731911#28731911) Aside: `EXECUTE FORMAT` is not a set plpgsql functionality. `RETURN QUERY` is. [`format()`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-FORMAT) is just a convenient function for building a query string, can be used anywhere in SQL or plpgsql.
The order in which the rows are returned from the functions is not guaranteed. If you can return the [`row_number()`](http://www.postgresql.org/docs/current/static/functions-window.html) (`rn` in the below example) from the functions then: ``` select count(f1.dist is null or f2.dist is null or null) as diff_count from function1('tblp1','tblp2',49) f1 inner join function2('tblp1_v2','tblp2_v2',49) f2 using(rn) ```
Compare result of two table functions using one column from each
[ "", "sql", "postgresql", "postgresql-9.3", "set-returning-functions", "" ]
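`WITH ORDINALITY` is PostgreSQL-specific, but the row-pairing idea (number each result set and compare by position, counting a missing row on either side as a mismatch) can be sketched in plain Python; the sample rows below are made up:

```python
from itertools import zip_longest

# Two function results, each a list of (id, dist) ordered by dist.
f1 = [(49, 0), (206022, 3), (206041, 3), (92233, 4)]
f2 = [(49, 0), (206022, 3), (206041, 3), (133, 4)]

# Pair rows by position (the "ordinality") and count dist mismatches;
# a row present on only one side also counts, like the FULL JOIN above.
mismatches = sum(
    1
    for a, b in zip_longest(f1, f2)
    if (a is None) != (b is None) or (a is not None and a[1] != b[1])
)
print(mismatches)  # 0: the dist values agree row-by-row even though one id differs
```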
Given a table with two columns col1 and col2, how can I use the Oracle CHECK constraint to ensure that what is allowed in col2 depends on the corresponding col1 value. Specifically, * if col1 has A, then corresponding col2 value must be less than 50; * if col1 has B, then corresponding col2 value must be less than 100; * and if col1 has C, then corresponding col2 value must be less than 150. Thanks for helping!
You need to use a case statement, eg. something like: ``` create table test1 (col1 varchar2(2), col2 number); alter table test1 add constraint test1_chk check (col2 < case when col1 = 'A' then 50 when col1 = 'B' then 100 when col1 = 'C' then 150 else col2 + 1 end); insert into test1 values ('A', 49); insert into test1 values ('A', 50); insert into test1 values ('B', 99); insert into test1 values ('B', 100); insert into test1 values ('C', 149); insert into test1 values ('C', 150); insert into test1 values ('D', 5000); commit; ``` Output: ``` 1 row created. insert into test1 values ('A', 50) Error at line 2 ORA-02290: check constraint (MY_USER.TEST1_CHK) violated 1 row created. insert into test1 values ('B', 100) Error at line 4 ORA-02290: check constraint (MY_USER.TEST1_CHK) violated 1 row created. insert into test1 values ('C', 150) Error at line 6 ORA-02290: check constraint (MY_USER.TEST1_CHK) violated 1 row created. Commit complete. ```
add `check constraint` using `case` statement ``` CREATE TABLE tbl ( col1 varchar(10), col2 numeric(4), CONSTRAINT check_cols_ctsr CHECK (CASE WHEN col1='A' THEN col2 ELSE 1 END <50 AND CASE WHEN col1='B' THEN col2 ELSE 1 END <100 AND CASE WHEN col1='C' THEN col2 ELSE 1 END <150) ); ```
Add multiple CHECK constraints on one column depending on the values of another column
[ "", "sql", "oracle", "constraints", "" ]
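The same CASE-based check also works in SQLite, which makes for a quick way to experiment with the constraint logic before running it against Oracle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    """
    CREATE TABLE test1 (
        col1 TEXT,
        col2 INTEGER,
        CONSTRAINT test1_chk CHECK (
            col2 < CASE col1
                       WHEN 'A' THEN 50
                       WHEN 'B' THEN 100
                       WHEN 'C' THEN 150
                       ELSE col2 + 1  -- no limit for other codes
                   END
        )
    )
    """
)

con.execute("INSERT INTO test1 VALUES ('A', 49)")    # ok: 49 < 50
con.execute("INSERT INTO test1 VALUES ('D', 5000)")  # ok: unrestricted code
try:
    con.execute("INSERT INTO test1 VALUES ('B', 100)")  # violates 100 < 100
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```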
``` String sql="select * from offerpoolride WHERE date >= '1997-05-05'"; ``` I am a beginner; can anyone please help me retrieve those rows?
Use backticks for `date`: ``` SELECT * FROM `offerpoolride` WHERE `date` >= '1997-05-05'; ```
I'm not 100% sure about ANSI-SQL, but in T-SQL you would use the following: ``` SELECT * FROM TABLE WHERE [date] >= '19970505' ``` So the date is input in format 'yyyyMMdd'. I am assuming your field is called date...
I can't retrieve the Date greater than or equal to a specific date in Sql.
[ "", "mysql", "sql", "" ]
I am working with two tables (`student_class` and `class`) in my database. I have a query below that shows classes that have students, but it is not quite what I am looking for. How do I display classes that have students, but show the results with maximum seats descending? Would a count be needed? ``` SELECT class.class_name FROM class INNER JOIN student_class ON class.class_id = student_class.class_id; ``` Tables: `Student_class`: ``` CLASS_ID STUDENT_ID ---------- ---------- 2 12 2 11 2 2 7 5 7 6 7 7 7 8 7 9 9 2 9 11 9 12 10 20 10 2 10 4 ``` `Class`: ``` CLASS_ID CLASS_NAME TEACHER_ID MAX_SEATS_AVAILABLE ---------- ------------------- ---------- ------------------- 1 Intro to ALGEBRA 11 12 2 Basic CALCULUS 2 10 3 ABC and 123 1 15 4 Sharing 101 8 10 5 Good Talk, Bad Talk 9 20 6 Nap Time 1 21 7 WRITing 101 5 10 8 Finger Painting 9 14 9 Physics 230 2 20 10 Gym 5 25 ```
Just use an order by statement: ``` SELECT class.class_name FROM class INNER JOIN student_class ON class.class_id = student_class.class_id ORDER BY class.max_seats_available DESC ```
You would not need a count. Just do an `ORDER BY MAX_SEATS_AVAILABLE DESC`. ``` SELECT class.class_name, class.max_seats_available FROM class INNER JOIN student_class ON class.class_id = student_class.class_id ORDER BY class.MAX_SEATS_AVAILABLE DESC; ``` This should help.
Using Inner join and sorting records in descending order
[ "", "sql", "" ]
So the below query on an Oracle server takes around an hour to execute. Is there a way to make it faster? ``` SELECT * FROM ACCOUNT_CYCLE_ACTIVITY aca1 WHERE aca1.ACTIVITY_TYPE_CODE='021' AND aca1.ACTIVITY_GROUP_CODE='R12' AND aca1.CYCLE_ACTIVITY_COUNT='999' AND EXISTS ( SELECT 'a' FROM ACCOUNT_CYCLE_ACTIVITY aca2 WHERE aca1.account_id = aca2.account_id AND aca2.ACTIVITY_TYPE_CODE='021' AND aca2.ACTIVITY_GROUP_CODE='R12' AND aca2.CYCLE_ACTIVITY_COUNT ='1' AND aca2.cycle_activity_amount > 25 AND (aca2.cycle_ctr > aca1.cycle_ctr) AND aca2.cycle_ctr = ( SELECT MIN(cycle_ctr) FROM ACCOUNT_CYCLE_ACTIVITY aca3 WHERE aca3.account_id = aca1.account_id AND aca3.ACTIVITY_TYPE_CODE='021' AND aca3.ACTIVITY_GROUP_CODE='R12' AND aca3.CYCLE_ACTIVITY_COUNT ='1' ) ); ``` So basically this is what it is trying to do. Find a row with a R12, 021 and 999 value; for all those rows we have to make sure another row exists with the same account id, but with R12, 021 and count = 1. If it does, we have to make sure that the amount of that row is > 25 and the cycle\_ctr counter of that row is the smallest. As you can see we are doing repetition while doing a select on MIN(CYCLE\_CTR). EDIT: There is one index defined on the ACCOUNT\_CYCLE\_ACTIVITY table's column ACCOUNT\_ID. Our table is ACCOUNT\_CYCLE\_ACTIVITY. If there is a row with ACTIVITY\_TYPE\_CODE = '021' and ACTIVITY\_GROUP\_CODE = 'R12' and CYCLE\_ACTIVITY\_COUNT = '999', that represents the identity row. If an account with an identity row like that has other 021 R12 rows, query for the row with the lowest CYCLE\_CTR value that is greater than the CYCLE\_CTR from the identity row. If a row is found, and the CYCLE\_ACTIVITY\_AMOUNT of the row found is > 25 and CYCLE\_ACTIVITY\_COUNT = 1, report the account. Note that the identity row is just for identification and will not be reported. For example, this is a SELECT on an account\_id which should be reported.
``` Account_ID Group_Code Type_code Cycle_ctr Activity_Amount Activity_count 53116267 R12 021 14 0 999 53116267 R12 021 25 35 1 53116267 R12 021 22 35 1 53116267 R12 021 20 35 1 ``` There are several other Activity\_count values apart from 999 and 1, so a WHERE clause for that is necessary. Similarly, if the above example was like the following: ``` Account_ID Group_Code Type_code Cycle_ctr Activity_Amount Activity_count 53116267 R12 021 14 0 999 53116267 R12 021 25 35 1 53116267 R12 021 22 35 1 53116267 R12 021 20 20 1 ``` It wouldn't be reported, because the activity\_amount of the row with the lowest cycle\_ctr greater than the cycle\_ctr of the identity row is 20, which is less than 25. Explain plan: ``` explain plan for select * from account_activity; select * from table(dbms_xplan.display); Plan hash value: 1692077632 --------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | --------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 470M| 12G| 798K (1)| 02:39:38 | | | | 1 | PARTITION HASH ALL | | 470M| 12G| 798K (1)| 02:39:38 | 1 | 64 | | 2 | TABLE ACCESS STORAGE FULL| ACCOUNT_ACTIVITY | 470M| 12G| 798K (1)| 02:39:38 | 1 | 64 | --------------------------------------------------------------------------------------------------------------- ```
Rewrite the query using explicit joins, and not with EXISTS. Basically these two lines ``` WHERE aca1.account_id = aca2.account_id AND (aca2.cycle_ctr > aca1.cycle_ctr) ``` are the join condition for joining the first and second select, and this one joins the first and the third. ``` WHERE aca3.account_id = aca1.account_id ``` The query should look like this ``` select distinct aca1.* FROM ACCOUNT_CYCLE_ACTIVITY aca1, ACCOUNT_CYCLE_ACTIVITY aca2, ACCOUNT_CYCLE_ACTIVITY aca3 WHERE join conditions and other selection conditions ```
I would probably start with use of the WITH statement to hopefully reduce the number of times that the data is selected, and make it more readable. The other thing i would recommend is replacing the exists by some sort of join. ``` with base as ( select * from account_cycle_activity where activity_type_code = '021' and activity_group_code = 'R12' ) SELECT * FROM base aca1 WHERE aca1.CYCLE_ACTIVITY_COUNT='999' AND EXISTS ( SELECT 'a' FROM base aca2 WHERE aca1.account_id = aca2.account_id AND aca2.CYCLE_ACTIVITY_COUNT ='1' AND aca2.cycle_activity_amount > 25 AND (aca2.cycle_ctr > aca1.cycle_ctr) AND aca2.cycle_ctr = ( SELECT MIN(cycle_ctr) FROM base aca3 WHERE aca3.account_id = aca1.account_id AND aca3.CYCLE_ACTIVITY_COUNT ='1' ) ); ```
make this sql query faster or use pl sql?
[ "", "sql", "oracle", "plsql", "" ]
I have the following sql that will display test score values: ``` SELECT s.dcid, s.lastfirst, s.student_number, s.grade_level, s.schoolid, (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t on ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 451 AND ts.id = 857 AND st.termid LIKE '24%' AND ROWNUM = 1) as FALL, (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t on ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 501 AND ts.id = 1001 AND st.termid LIKE '24%' AND ROWNUM = 1) as WINTER, (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t on ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 551 AND ts.id = 1051 AND st.termid LIKE '24%' AND ROWNUM = 1) as SPRING FROM students s WHERE s.grade_level = 1 ORDER BY s.lastfirst ``` As written, this returns all students and what their scores were during the Fall, Winter, and Spring testing sessions. What I need to do now is limit the list of students to only those where their scores are below a specific benchmark during the Fall and Winter. 
I know I can accomplish this by adding to the WHERE clause with something like: ``` WHERE s.grade_level = 1 AND (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t on ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 451 AND ts.id = 857 AND st.termid LIKE '24%' AND ROWNUM = 1) < 28 AND (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t on ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 501 AND ts.id = 1001 AND st.termid LIKE '24%' AND ROWNUM = 1) < 37 ``` My question, though, is: is this the most efficient way of creating the selection criteria? Is there a way I can refer back to the selected scores' alias names, FALL and WINTER? It does not work when I test it with ``` WHERE s.grade_level = 1 AND FALL < 28 AND WINTER < 37 ```
You simply nest your Select in a Derived Table (aka Inline View) and then you can use the aliased columns in WHERE: ``` SELECT * FROM ( SELECT s.dcid, s.lastfirst, s.student_number, s.grade_level, s.schoolid, (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t ON ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 451 AND ts.id = 857 AND st.termid LIKE '24%' AND ROWNUM = 1) AS FALL, (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t ON ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 501 AND ts.id = 1001 AND st.termid LIKE '24%' AND ROWNUM = 1) AS WINTER, (SELECT stc.numscore FROM studenttestscore stc JOIN testscore ts ON stc.testscoreid = ts.id JOIN test t ON ts.testid = t.id JOIN studenttest st ON stc.studenttestid = st.id WHERE stc.studentid = s.id AND t.id = 551 AND ts.id = 1051 AND st.termid LIKE '24%' AND ROWNUM = 1) AS SPRING FROM students s WHERE s.grade_level = 1 ) dt WHERE FALL < 28 AND WINTER < 37 ```
Using Common Table Expressions, you can reference fields from the CTE select statements in the where clause of the main query. They also clean up the structure a little and the reuse limits the number of times you need to copy+paste common predicates (e.g. - AND st.termid LIKE '24%') ``` WITH TermTestData AS ( SELECT ts.testid , ts.id , stc.numscore , stc.studentid FROM studenttestscore AS stc JOIN testscore AS ts ON ts.id = stc.testscoreid JOIN studenttest AS st ON st.id = stc.testscoreid WHERE st.termid LIKE '24%' ), SemesterScores AS ( SELECT s.dcid, s.lastfirst, s.student_number, s.grade_level, s.schoolid , (SELECT td.numscore FROM TermTestData AS td WHERE td.studentid = s.id AND td.id = 451 AND td.id = 857 AND ROWNUM = 1) as FALL , (SELECT td.numscore FROM TermTestData AS td WHERE td.studentid = s.id AND td.id = 501 AND td.id = 1001 AND ROWNUM = 1) as WINTER , (SELECT td.numscore FROM TermTestData AS td WHERE td.studentid = s.id AND td.id = 551 AND td.id = 1051 AND ROWNUM = 1) as SPRING FROM students AS s ) SELECT * FROM SemesterScores WHERE FALL < 28 AND WINTER < 37 ``` Side Note: If you are using Oracle 11g, you can pivot the data to avoid the having select statements for single-value fields
Referring back to selected value in WHERE clause
[ "", "sql", "oracle", "" ]
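To see why wrapping the query in a derived table makes the aliases usable in `WHERE`, here is a minimal SQLite sketch, with toy scores standing in for the real correlated subqueries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (student TEXT, fall INTEGER, winter INTEGER)")
con.executemany(
    "INSERT INTO scores VALUES (?, ?, ?)",
    [("Ann", 25, 30), ("Bob", 40, 20), ("Cal", 10, 50)],
)

# Aliases defined in the inner SELECT become real column names for the
# outer query, so the outer WHERE can reference them directly.
rows = con.execute(
    """
    SELECT * FROM (
        SELECT student, fall AS FALL_SCORE, winter AS WINTER_SCORE
        FROM scores
    ) dt
    WHERE FALL_SCORE < 28 AND WINTER_SCORE < 37
    """
).fetchall()
print(rows)  # [('Ann', 25, 30)]
```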
This query fails with "single-row subquery returns more than one row": ``` select E.NO_ENCAN, E.NOM_ENC, TE.DESC_TYPE_ENC as TYPE_ENC, (select sum(ITEM.MNT_VALEUR_ITE) from ENCAN left join ITEM on ITEM.NO_ENCAN = ENCAN.NO_ENCAN group by ENCAN.NO_ENCAN) as SOMME_ITEMS, count(distinct INV.NOM_UTILISATEUR_INVITE) as NOMBRE_INVITES from ENCAN E left join TYPE_ENCAN TE on TE.CODE_TYPE_ENC = E.CODE_TYPE_ENC left join INVITE INV on INV.NO_ENCAN = E.NO_ENCAN group by E.NO_ENCAN, E.NOM_ENC, TE.DESC_TYPE_ENC order by E.NO_ENCAN; ``` And if I add an order by in the subquery, it returns "missing right parenthesis". Can anyone give me any clues on what's going on? By the way, I know that the keyword casing is inverted (uppercase/lowercase).
You want a correlated subquery rather than a `group by` in the subselect. This also means that the subquery is not needed. So, this is probably what you are trying to write: ``` select E.NO_ENCAN, E.NOM_ENC, TE.DESC_TYPE_ENC as TYPE_ENC, (select sum(ITEM.MNT_VALEUR_ITE) from ITEM where ITEM.NO_ENCAN = ENCAN.NO_ENCAN ) as SOMME_ITEMS, count(distinct INV.NOM_UTILISATEUR_INVITE) as NOMBRE_INVITES from ENCAN E left join TYPE_ENCAN TE on TE.CODE_TYPE_ENC = E.CODE_TYPE_ENC left join INVITE INV on INV.NO_ENCAN = E.NO_ENCAN group by E.NO_ENCAN, E.NOM_ENC, TE.DESC_TYPE_ENC order by E.NO_ENCAN; ```
If I am correctly understanding what you are trying to accomplish, I believe the subquery is unnecessary. You should just put an analytic on the SUM() call. ``` SELECT e.no_encan ,e.nom_enc ,te.desc_type_enc AS type_enc ,SUM(item.mnt_valeur_ite) OVER (PARTITION BY e.no_encan) somme_items ,COUNT(DISTINCT inv.nom_utilisateur_invite) AS nombre_invites FROM encan e LEFT JOIN type_encan te ON te.code_type_enc = e.code_type_enc LEFT JOIN invite INV ON inv.no_encan = e.no_encan GROUP BY e.no_encan, e.nom_enc, te.desc_type_enc ORDER BY e.no_encan; ``` Details can be found [here](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions163.htm), although I would really suggest reading more about [Analytic Functions in Oracle](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions001.htm#i81407).
SQL oracle error on select inside select
[ "", "sql", "oracle", "subquery", "" ]
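A toy version of the accepted fix, with shortened table names, can be run in SQLite to see the correlated subquery producing exactly one value per outer row instead of one row per group:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE encan (no INTEGER, nom TEXT)")
con.execute("CREATE TABLE item (no_encan INTEGER, montant INTEGER)")
con.executemany("INSERT INTO encan VALUES (?, ?)", [(1, "A"), (2, "B")])
con.executemany("INSERT INTO item VALUES (?, ?)", [(1, 10), (1, 15), (2, 7)])

# The subquery is correlated on e.no, so it yields a single scalar
# for each outer row rather than a multi-row grouped result.
rows = con.execute(
    """
    SELECT e.no, e.nom,
           (SELECT SUM(i.montant) FROM item i WHERE i.no_encan = e.no) AS total
    FROM encan e
    ORDER BY e.no
    """
).fetchall()
print(rows)  # [(1, 'A', 25), (2, 'B', 7)]
```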
Input: ``` ID CREATED_TIME CANCELED_TIME 1 4 10 1 8 2 1 6 -1 1 3 7 2 5 null 2 4 8 ``` Desired output: ``` ID CREATED_TIME CANCELED_TIME 1 3 2 2 4 null ``` So I basically want to display, for each id, the min(created\_time) and the canceled\_time of the row where created\_time is maximum. Please provide the answer in PostgreSQL and MySQL.
If you want `canceled_time` where `created_time` has the maximum value, I would suggest the `substring_index()`/`group_concat()` trick: ``` select id, min(created_time), substring_index(group_concat(canceled_time order by created_time desc), ',', 1) as canceled_time from table group by id; ``` This will not work in all cases using default settings, because there is a maximum length for the `group_concat()` intermediate result. However, it does work on most reasonable data sets and definitely for the data in your question.
Use a sub-query to find each id's max created\_time: ``` SELECT ID, created_time, MIN(CANCELED_TIME) AS MinTimeCanceled FROM tableName t1 WHERE created_time = (select max(created_time) from tableName t2 where t1.ID = t2.ID) GROUP BY ID, created_time ```
how to display min of a column and use max of the same column to filter other attributes in mysql?
[ "", "mysql", "sql", "postgresql", "group-by", "" ]
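The `group_concat()`/`substring_index()` trick is MySQL-specific; a portable way to verify the expected output is a correlated subquery, sketched here against SQLite with the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, created INTEGER, canceled INTEGER)")
con.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, 4, 10), (1, 8, 2), (1, 6, -1), (1, 3, 7),
     (2, 5, None), (2, 4, 8)],
)

# Per id: the smallest created value, plus the canceled value of the
# row holding the largest created value.
rows = con.execute(
    """
    SELECT id,
           MIN(created) AS min_created,
           (SELECT canceled FROM t AS t2
            WHERE t2.id = t.id
            ORDER BY t2.created DESC LIMIT 1) AS canceled_at_max
    FROM t
    GROUP BY id
    ORDER BY id
    """
).fetchall()
print(rows)  # [(1, 3, 2), (2, 4, None)]
```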
At work we have three SQL Server 2008 R2 database environments: production, test and development. In the production environment, different parts of our system run on 3 different database servers (let's call them `PROD1`, `PROD2` and `PROD3`), while in the development environment we have all the databases running on the same server (let's call it `DEV1`). This is normally not a problem, since most of our SQL code doesn't need to refer to objects outside its own database. But in a handful of cases a view or a stored procedures need to refer to a table, view or function that not only reside in another database, but in a database on a different production server. I.e. we have a view on `PROD3` that need to look up a user in the user list on `PROD1` like this: ``` PROD1.UserDB.UserSchema.UserList ``` Of course this gives us problems in the test environment, because we need these views/stored procedures to refer to different servers, depending on where they run. My question: is it possible to have conditional values in a fully qualified table name (`ServerName.DatabaseName.SchemaName.TableName`) in T-SQL? I.e. could I have somethjing like this `(PROD1|DEV1).UserDB.UserSchema.UserList` in a view or stored procedure? Or maybe in some way get the server name from a variable, which we could then set depending on the current server? Bonus question: in one case we even have to also change the database name. Can this be done in a similar manner? I realize that we could get around this by creating a dynamic SQL statement in a string and then executing it. But for various reasons we would very much like to avoid this approach.
As already mentioned, you do need to use the dynamic SQL. But you can use it for the object creation: simply to create the synonym in the dynamic SQL for all objects which need to be accessed from the another server and / or another database. In this case all your code will be static and only during the installation / deployment process using the dynamic SQL the needed object will be referenced. The code can be like the following: ``` declare @sql nvarchar(max) if @@servername = 'DEV1' set @sql = 'create synonym vUserList for [Dev1].UserDB.UserSchema.UserList' else set @sql = 'create synonym vUserList for [PROD1].UserDB.UserSchema.UserList' exec sp_executesql @sql ``` And all your code (SPs, functions, etc.) can use this synonym - vUserList In the same manner you can change the targeted database name depends on the environment - test, dev or prod
For cross database references your best bet might be using [synonyms](https://msdn.microsoft.com/en-us/library/ms187552.aspx). The definition of the synonyms will be different between DEV and PROD, but the definition of the complex objects (views, SPs etc) that use the synonyms can remain unchanged.
Is it possible to conditionally specify the database in a fully qualified table name in T-SQL?
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have 3 tables and my query is: ``` SELECT BRAND, AMOUNT FROM ( SELECT BRAND, AMOUNT FROM SALES1 UNION SELECT BRAND, AMOUNT FROM SALES2 UNION SELECT BRAND, AMOUNT FROM SALES3 ) ``` `SALES 1` TABLE HAS BRAND: *A* AND AMOUNT: *50* `SALES 3` TABLE HAS BRAND: *A* AND AMOUNT: *100* I want to get the amount 50 and disregard 100. I want to ask: is there any priority when using UNION?
If you want the first occurrence, you should use `union all` rather than `union`. This is important for performance reasons, because `union` does unnecessary duplicate elimination. Then use the `not exists` clauses for each subquery: ``` (SELECT BRAND, AMOUNT FROM SALES1 ) UNION ALL (SELECT BRAND, AMOUNT FROM SALES2 s2 WHERE NOT EXISTS (SELECT 1 FROM SALES1 s1 WHERE s1.BRAND = s2.BRAND) ) UNION ALL (SELECT BRAND, AMOUNT FROM SALES3 s3 WHERE NOT EXISTS (SELECT 1 FROM SALES1 s1 WHERE s1.BRAND = s3.BRAND) AND NOT EXISTS (SELECT 1 FROM SALES2 s2 WHERE s2.BRAND = s3.BRAND) ) ```
``` SELECT BRAND, AMOUNT FROM ( SELECT BRAND, AMOUNT, 1 AS priority FROM SALES1 UNION ALL SELECT BRAND, AMOUNT, 2 AS priority FROM SALES2 UNION ALL SELECT BRAND, AMOUNT, 3 AS priority FROM SALES3 ) ORDER BY priority LIMIT 1; ```
Mysql union Priority
[ "", "mysql", "sql", "" ]
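The accepted `UNION ALL` + `NOT EXISTS` approach can be exercised end-to-end in SQLite (an extra brand 'B' is added here to show that non-duplicates pass through):

```python
import sqlite3

con = sqlite3.connect(":memory:")
for name in ("sales1", "sales2", "sales3"):
    con.execute(f"CREATE TABLE {name} (brand TEXT, amount INTEGER)")
con.execute("INSERT INTO sales1 VALUES ('A', 50)")
con.execute("INSERT INTO sales3 VALUES ('A', 100)")
con.execute("INSERT INTO sales3 VALUES ('B', 70)")

# Take every brand from sales1, then only brands from sales2/sales3
# that were not already seen in an earlier table.
rows = con.execute(
    """
    SELECT brand, amount FROM sales1
    UNION ALL
    SELECT brand, amount FROM sales2 s2
    WHERE NOT EXISTS (SELECT 1 FROM sales1 s1 WHERE s1.brand = s2.brand)
    UNION ALL
    SELECT brand, amount FROM sales3 s3
    WHERE NOT EXISTS (SELECT 1 FROM sales1 s1 WHERE s1.brand = s3.brand)
      AND NOT EXISTS (SELECT 1 FROM sales2 s2 WHERE s2.brand = s3.brand)
    ORDER BY brand
    """
).fetchall()
print(rows)  # [('A', 50), ('B', 70)] -- sales1's 50 wins, sales3's 100 is skipped
```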
I have a `varchar` column which has a variation of the following string: `Stations, FlightName, FlightWeeks, BindTo, SpotLength, foo, bar, etc...` I need to change `Stations` to `Station`, `FlightName` to `Flight Name`, `BindTo` to `Bind To`, etc., to eventually come out like this: `Station; Flight Name; Weeks; Bind To; Spot Length; foo; bar` etc. I have some really ugly code to do this: ``` select replace(replace(replace(replace(replace(replace(GroupFlightsBy, ', ', ';'), 'Stations', 'Station'), 'FlightName', 'Flight Name'), 'FlightWeeks', 'Weeks'), 'BindTo', 'Bind To'), 'SpotLength', 'Spot Length') ``` Is there a better way of doing this that is just as performant?
Creating CLR function is a solution for calling multiple REPLACE functions by replacing a single function. Yes there are some performance considerations using CLR functions but if you are dealing with low number of data, then it would be fine. For creating CLR functions, * [Enabling CLR](https://msdn.microsoft.com/en-us/library/ms131048.aspx) * [Creating CLR project](https://msdn.microsoft.com/en-us/library/84b1se47(v=vs.100).aspx) * [Creating a sample CLR function](https://msdn.microsoft.com/en-us/library/91e6taax(v=VS.90).aspx) For your requirement I have created a sample function like this, ``` using System.Data.SqlTypes; public partial class UserDefinedFunctions { [Microsoft.SqlServer.Server.SqlFunction] public static SqlString ReplaceFunc(string inputString) { return new SqlString (inputString .Replace("Stations", "Station") .Replace("FlightName", "Flight Name") .Replace("FlightWeeks", "Weeks") .Replace("BindTo", "Bind To") .Replace("SpotLength", "Spot Length") .Replace(", ", ";")); } }; ``` and called from SQL like this, ``` DECLARE @GroupFlightsBy VARCHAR(MAX) = 'Stations, FlightName, FlightWeeks, BindTo, SpotLength, foo, bar' SELECT dbo.ReplaceFunc(@GroupFlightsBy) ``` For more about CLR functions, <http://sqlmag.com/database-development/common-language-runtime-dba-point-view> <https://stackoverflow.com/a/25876797/311255>
You could write your own transformation function in T-SQL, but I think it will not be as performant as a multi-replace. You should avoid using your multi-replace in WHERE/ORDER BY clauses.
Is there a better way to do multiple replace on a string?
[ "", "sql", "sql-server", "" ]
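The CLR body is ordinary chained string replacement, so the mapping can be prototyped in Python before committing to a CLR deployment; the replacement table below is taken from the question:

```python
REPLACEMENTS = [
    ("Stations", "Station"),
    ("FlightName", "Flight Name"),
    ("FlightWeeks", "Weeks"),
    ("BindTo", "Bind To"),
    ("SpotLength", "Spot Length"),
    (", ", "; "),
]

def clean(value: str) -> str:
    # Apply the replacements in order, mirroring the chained REPLACE calls.
    for old, new in REPLACEMENTS:
        value = value.replace(old, new)
    return value

s = "Stations, FlightName, FlightWeeks, BindTo, SpotLength, foo, bar"
print(clean(s))
# Station; Flight Name; Weeks; Bind To; Spot Length; foo; bar
```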
I've tried to use this, but it throws a syntax error. ``` SET @deletedRepliesCount = (DELETE FROM reply WHERE type = 3 AND id IN (SELECT id FROM `like` WHERE commentId = :commentId)) + @deletedRepliesCount; ``` > "com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELETE FROM reply WHERE type = 3 AND id IN (SELECT id FROM `like` > WHERE comment' at line 1"
Try this: ``` DELETE FROM reply WHERE type = 3 AND id IN (SELECT id FROM `like` WHERE commentId = :commentId); SET @deletedRepliesCount = ROW_COUNT() + @deletedRepliesCount; ```
Try below as per [link](https://stackoverflow.com/questions/10070406/how-to-store-query-result-in-variable-using-mysql) ``` SET @deletedRepliesCount := (DELETE FROM reply WHERE type = 3 AND id IN (SELECT id FROM like WHERE commentId = :commentId)) + @deletedRepliesCount; ```
How to save delete result to variable in mysql?
[ "", "mysql", "sql", "" ]
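Outside of MySQL, the same bookkeeping (count the rows a DELETE touched, then accumulate) looks like this with Python's `sqlite3`, where `cursor.rowcount` plays the role of `ROW_COUNT()`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reply (id INTEGER, type INTEGER)")
con.executemany("INSERT INTO reply VALUES (?, ?)",
                [(1, 3), (2, 3), (3, 1)])

deleted_total = 0
cur = con.execute("DELETE FROM reply WHERE type = 3 AND id IN (1, 2)")
deleted_total += cur.rowcount  # rows removed by the last statement
print(deleted_total)  # 2
```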
I have an annoying problem which is stopping me from generating some data; The SQL job has 23 steps in total and fails on the 21st. ``` -- Step 21 Create the table z1QReportOverview -- Create z1QReportProjectOverview.sql -- Project Overview - By Category (Part 4).sql USE database SELECT z1QReportProjectOverview1.[ERA Category] AS Category, z1QReportProjectOverview1.[Total Projects Signed], z1QReportProjectOverview1.[Total Spend Under Review], z1QReportProjectOverview1.[Avg. Project Size], z1QReportProjectOverview2.[Work in Progress], z1QReportProjectOverview2.[Implemented], z1QReportProjectOverview2.[No Savings], z1QReportProjectOverview2.[Lost], CONVERT(decimal(18,0),[Lost])/CONVERT(decimal(18,0),[Total Projects Signed]) AS [Loss Ratio], z1QReportProjectOverview2.[Completed], ( CONVERT(decimal(18,0),([Completed]+[Implemented]))/ CONVERT(decimal(18,0),([Completed]+[Implemented]+[Lost])) ) AS [Success Ratio], z1QReportProjectOverview3.[Avg. Spend] AS [Average Spend], z1QReportProjectOverview3.[Avg. Savings] AS [Average Savings], z1QReportProjectOverview3.[Avg. Savings %] AS [Average Savings %] INTO dbo.z1QReportProjectOverview FROM dbo.z1QReportProjectOverview1 JOIN dbo.z1QReportProjectOverview2 ON (z1QReportProjectOverview1.[ERA Category] = z1QReportProjectOverview2.[ERA Category]) JOIN dbo.z1QReportProjectOverview3 ON (z1QReportProjectOverview2.[ERA Category] = z1QReportProjectOverview3.[ERA Category]) ORDER BY Category ``` I believe I know what is causing the divide by zero error. The 'Lost' field is made up of three fields and in some cases (very rare) all 3 fields are 0 resulting in a 0 in the 'Lost' field. I believe this is the main cause of the error but there is a second division there as well, I am pretty rubbish at SQL hence my question: Where should I put the CASE WHEN clause? -Most likely written this bit wrong as well :( "CASE When [Lost] = 0 SET [Total Projects Signed] = 0" Any advice is much appreciated!
You can use `CASE` as to check if the dividing value is `0` or not. ``` CASE WHEN CONVERT(decimal(18,0),[Lost]) <> 0 THEN CONVERT(decimal(18,0),[Lost])/CONVERT(decimal(18,0),[Total Projects Signed]) ELSE 0 END AS [Loss Ratio], z1QReportProjectOverview2.[Completed], CASE WHEN CONVERT(decimal(18,0),([Completed]+[Implemented]+[Lost])) <> 0 THEN (CONVERT(decimal(18,0),([Completed]+[Implemented]))/CONVERT(decimal(18,0),([Completed]+[Implemented]+[Lost]))) ELSE 0 END AS [Success Ratio], ```
Use `NULLIF` to handle divide by zero error ``` ........ Isnull(CONVERT(DECIMAL(18, 0), [Lost]) / NULLIF(CONVERT(DECIMAL(18, 0), [Total Projects Signed]), 0), 0) AS [Loss Ratio], Isnull(CONVERT(DECIMAL(18, 0), ( [Completed] + [Implemented] )) / NULLIF(CONVERT(DECIMAL(18, 0), ( [Completed] + [Implemented] + [Lost] )), 0), 0) AS [Success Ratio], ........ ```
SQL Divide by Zero Error
[ "", "sql", "sql-server", "divide", "" ]
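The `NULLIF` guard from the second answer is easy to confirm in SQLite as well; the values below are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE proj (lost REAL, signed REAL)")
con.executemany("INSERT INTO proj VALUES (?, ?)",
                [(3.0, 12.0), (0.0, 0.0)])

# NULLIF(signed, 0) turns a zero divisor into NULL, the division then
# yields NULL instead of an error, and IFNULL maps that back to 0.
rows = con.execute(
    "SELECT IFNULL(lost / NULLIF(signed, 0), 0) AS loss_ratio FROM proj"
).fetchall()
print(rows)  # [(0.25,), (0,)]
```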
So I have these three tables ``` PRODUCTS Persons {id, name} {1, "Jim"} {2, "Kim"} {3, "Tim"} {4, "Brim"} Knows {id_A, id_B} {1,2} {1,3} {1,4} {2,3} {4,2} Hates {id_A, id_B} {1,4} {2,1} {3,1} {3,2} {4,2} ``` And I want to get data using NOT EXISTS to get the names of all Persons who hate everyone they know. I tried this query: ``` SELECT DISTINCT P.name FROM Persons P, Likes L, Knows K WHERE K.personA_id = P.id AND L.personA_id = P.id AND NOT EXISTS (SELECT * FROM Persons P WHERE L.personA_id = P.id AND L.personB_id <> K.personB_id) ``` but it also returns a name if a person knows several people but hates at least one of them (For example, this query returns {1, "Jim"} even though he knows 3 people but only hates 1 of them). And I need to get persons who hate EVERYONE they know. Help!
Written another way: keep each person for whom there is no one they know that they do not hate: ``` SELECT * FROM Persons p WHERE NOT EXISTS ( SELECT 0 FROM Knows k WHERE k.personA_id = p.id AND NOT EXISTS ( SELECT 0 FROM Hates h WHERE h.personA_id = k.personA_id AND h.personB_id = k.personB_id ) ) ```
You are not using the other tables in the sub-query. You should do something like this: ``` SELECT DISTINCT P.name FROM Persons P, Hates H, Knows K WHERE K.A_id = P.id AND H.A_id = P.id AND NOT EXISTS (SELECT * FROM Persons P, Hates H, Knows K WHERE H.A_id = P.id and H.B_id <> K.B_id) ```
SQL NOT EXISTS (Person hates everybody they know) EDITED
[ "", "sql", "" ]
So I have these three tables: ``` Persons {id, name} Knows {A_id, B_id} - (Person A knows Person B) Smoking {id} - (id -> Persons{id}) Persons: {1, "Tim"} {2, "Kim"} {3, "Jim"} {4, "Rim"} Knows: {1, 2} {1, 3} {3, 2} {3, 4} Smoking: {3} ``` And I need {3, "Jim"} to be returned, since he doesn't know anyone who smokes ({1, "Tim"} knows Jim who smokes, so he's out). I tried this query: ``` SELECT P.name FROM Persons P, Knows K WHERE K.A_id = P.id AND K.B_id NOT IN (SELECT id FROM Smokes) ``` but it still returns "Tim" even though he knows 2 people and only 1 of them is smoking. And I need only the persons whose EVERY 'friend' doesn't smoke. Help!
``` SELECT p.name FROM Persons p WHERE NOT EXISTS ( -- there does not exist SELECT * FROM Knows k -- a person I know JOIN Smoking s ON s.id = k.B_id -- who smokes WHERE k.A_id = p.id ); ```
Allow me to explain how to think about your problem in set theory / SQL terms: 1. You need an aggregated sum of the total number of people that any given person knows who smokes. 2. Then filter for people where that aggregated value is zero. That leads to: ``` select P.Name from Persons P inner join Knows K on K.A_Id = P.ID left join Smoking on Smoking.ID = P.B_Id group by person having sum(smoking.ID) = 0 ```
SQL NOT IN (Do not know anyone who smokes)
[ "", "sql", "" ]
I'm wondering whats wrong with that Statement. ``` INSERT INTO Table1(Myname,category ) SELECT TOP 1 thenames FROM tNames WHERE DateAdded > DATEADD(Day, -10, GETDATE() ORDER BY NEWID(),@ccategory) ``` I want to pick one random value from table tnames and put it in table 1 with category values that i got from SP. How should I do that? **EDITS:** I'm working in MS SQL Server. Complete code: ``` Create PROCEDURE [dbo].[Names_SP] @CCategory nvarchar(50) AS BEGIN INSERT INTO Table1(Myname,category ) SELECT TOP 1 thenames FROM tNames WHERE DateAdded > DATEADD(Day, -10, GETDATE() ORDER BY NEWID(),@ccategory) END ``` When I call SP I send @ccategory. Since table 1 has 2 columns (the first is category) I want to get the second (myname) column value from tname table as random (names that have been added in 10 days).
You have 2 Columns you want to insert in (Myname, category) but your select list only contains 1 column! (thenames) What about "@category"? At least in your posted code this variable is never declared or assigned... Maybe you should do something like this: ``` DECLARE @category varchar(20) SET @category = 'some cat.' INSERT INTO Table1(Myname,category) SELECT TOP 1 thenames,@category FROM tNames WHERE DateAdded > DATEADD(Day, -10, GETDATE()) ORDER BY NEWID() ```
You are selecting One Column from Select statement and There are Two Columns specified in insert statement ``` INSERT INTO Table1(Myname) SELECT TOP 1 thenames FROM tNames WHERE DateAdded > DATEADD(Day, -10, GETDATE() ORDER BY NEWID(),@ccategory) ```
SQL Select In Select for insert Statement
[ "", "sql", "sql-server", "sql-server-2008", "" ]
How can I find duplicate data because of bad data entry? I want to find Customers that have the same PhoneNumber. I need to list both customers FirstName and LastName Any thoughts? Thanks **Edit** When I run this query: ``` Select FirstName, LastName, PhoneNumber From Customer ``` I get this result: ``` Person One 12345 Person Two 23456 Third Person 34567 First Person 12345 Person Three 34567 ``` **Solution** This is a version of Bulat's answer, which is best so far. Other answers have not given the result I needed. ``` SELECT c1.FirstName as FirstName1, c1.LastName as LastName1, c2.FirstName as FirstName2, c2.LastName as LastName2, c1.PhoneNumber FROM Customer c1 INNER JOIN Customer c2 ON c1.Phone = c2.Phone AND c1.CustomerID < c2.CustomerID ```
You need to join the Customers table to itself via the phone number field, like this: ``` SELECT c1.Name as Name1, c1.Surname as Surname1, c2.Name as Name2, c2.Surname as Surname2, c1.Phone FROM Customers c1 INNER JOIN Customers c2 ON c1.Phone = c2.Phone AND c1.id < c2.id ``` Note: Unlike other answers this will return customers with the same phone number even if their names and surnames do not match.
This query would give you the duplicata ``` select firstname, lastname from customers group by phonenumber, firstname, lastname having count(phonenumber) > 1 ``` * Group all results by phonenumber. * Display only those who appear more then one time (duplicate).
Find bad/duplicate data in same table
[ "", "sql", "" ]
I have the following code that returns an error message if my value is invalid. I would like to give the same error message if the value given is not numeric. ``` IF(option_id = 0021) THEN IF((value<10000) or (value>7200000) or /* Numeric Check */)THEN ip_msg(6214,option_name); -- Error Message return; END IF; END IF; ``` In SQL Server, I simply used `ISNUMERIC()`. I would like to do something similar in Oracle. Such as, ``` IF((!ISNUMERIC(value)) or (value<10000) or (value>7200000)) THEN ... ```
``` REGEXP_LIKE(column, '^[[:digit:]]+$') ``` returns TRUE if column holds only numeric characters
From `Oracle DB 12c Release 2` you could use **[VALIDATE\_CONVERSION](http://docs.oracle.com/database/122/SQLRF/VALIDATE_CONVERSION.htm#SQLRF-GUID-DC485EEB-CB6D-42EF-97AA-4487884CB2CD)** function: > VALIDATE\_CONVERSION determines whether expr can be converted to the specified data type. If expr can be successfully converted, then this function returns 1; otherwise, this function returns 0. If expr evaluates to null, then this function returns 1. If an error occurs while evaluating expr, then this function returns the error. ``` IF (VALIDATE_CONVERSION(value AS NUMBER) = 1) THEN ... END IF; ``` **[db<>fiddle demo](https://dbfiddle.uk/?rdbms=oracle_18&fiddle=265cb8e8224f641643c6fa9b3f60de21)**
How can you tell if a value is not numeric in Oracle?
[ "", "sql", "oracle", "plsql", "isnumeric", "" ]
Assume I have a table called `Person` with columns `name, surname and age.` I want to add two columns when performing a SELECT statment, probably something like this: ``` SELECT a FROM Person a WHERE a.name + a.surname LIKE ... ``` How can I do this correctly?
You do not add string values, like names. Presumably, you want to concatenate them, so you use the `concat()` function: ``` where concat(a.name, a.surname) like . . . ```
``` SELECT a FROM Person a WHERE CONCAT(a.name, a.surname) LIKE ... ```
How to perform a SELECT by adding two columns?
[ "", "mysql", "sql", "" ]
I have the following plsql block ``` for holder in ( select pm.product_holder , cast( collect( product_table(pm.product_no,pm.product_catalogue) order by pm.product_catalogue , pm.product_no ) as t_prod_cat_no_table ) product_cats_nos from product_master pm group by pm.product_holder order by pm.product_holder ) loop test_proc( holder.product_holder, holder.product_cats_nos ); end loop; ``` In the above sql, if any one of the column is null, I wouldn't like to execute loop, it should terminate execution. Columns are `product_holder,product_cats_nos` How could I achieve this?
You should be able to do this like that: ``` loop if holder.product_holder is null or holder.product_cats_nos is null then exit; end if; test_proc( holder.product_holder, holder.product_cats_nos ); end loop; ``` Exit will break the loop. For more information look up the docs: <http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/exit_statement.htm> Another approach would be to make your test\_proc procedure null proof, but then again I don't really know what you want to do ;)
product\_cats\_nos can't be null (it can contain nulls or be empty, but that's a different concept: the collection is not null by the time it is initialized as the first step of collect aggregation function) and you can only have the last **one**(you're grouping by that field) occurrence of pm.product\_holder (because nulls are last as default when you're using order by) to be null. Given this you can simply modify the query adding a where clause on product\_holder is not null.
Conditional execution of loop
[ "", "sql", "oracle", "plsql", "" ]
I have a table in which I inserted a large batch of records. All these records have a certain column in which one character is missing: they're missing a slash `/` at the beginning. Is there a way to add this character to the beginning of this column in every row, using purely SQL? It wouldn't be a problem in PHP, but I'd like to know a solution in SQL. I know of the SQL UPDATE statement, of course, but I'm not sure how to let it rely on the current value. The best I can come up with is using a subquery: ``` UPDATE the_table t SET the_column = CONCAT('/', SELECT the_column FROM the_table WHERE id=t.id) ``` Would this work, and is this the easiest solution? I'm on MySQL 5.5.31.
Simply use this: ``` UPDATE the_table t SET the_column = CONCAT('/', the_column ) ```
Your version might work. But you really just want to do: ``` UPDATE the_table t SET the_column = CONCAT('/', the_column) ```
Adding a character to every row in a MySQL database
[ "", "mysql", "sql", "database", "" ]
how can I simplify this query? unfortunately this is not working ``` $Query = mysqli_query($conn, "SELECT * FROM order_number where trans_id NOT IN (SELECT order_no from billing) AND NOT IN(SELECT order_no from pending) AND NOT IN(SELECT order_no from on_process) AND NOT IN(SELECT order_no from finished)"); ``` is there any other way around this? im trying to print trans\_id row if its not in billing, pending, on\_process and finished.
``` SELECT * FROM order_number where trans_id NOT IN ( SELECT order_no from billing union SELECT order_no from pending union SELECT order_no from on_process union SELECT order_no from finished ) ``` or as you tried it ``` SELECT * FROM order_number where trans_id NOT IN (SELECT order_no from billing) AND trans_id NOT IN (SELECT order_no from pending) AND trans_id NOT IN (SELECT order_no from on_process) AND trans_id NOT IN (SELECT order_no from finished) ```
There is no reason to "simplify" the query. With proper indexes on the tables in the four subqueries, it should be quite efficient. If `orderno` could be `NULL` in any of the tables, then I would recommend using `not exists` instead: ``` SELECT * FROM order_number o WHERE NOT EXISTS (select order_no from billing where b.order_no = o.trans_id) AND NOT EXISTS (select order_no from pending p where p.order_no = o.trans_id) AND NOT EXISTS (select order_no from on_process op where op.order_no = o.trans_id) AND NOT EXISTS (select order_no from finished f where f.order_no = o.trans_id); ``` The proper indexes are: `billing(order_no)`, `pending(order_no)`, `finished(order_no)`, and `on_process(order_no)`.
Simplified way to use NOT IN statements for multiple tables
[ "", "mysql", "sql", "database", "" ]
Just opened the Oracle SQL Developer and i'm getting this error: > Failed to create naming Context for db connections at url: file:/C:/Users/.../AppData/Roaming/SQL Developer/system3.2.20.09.87/o.jdeveloper.db.connection.11.1.1.4.37.59.48/connections.xml > > SEVERE 95 69513 oracle.jdevimpl.db.adapter.DefaultContextWrapper Failed to create naming Context for db connections at url: file:/C:/Users/.../AppData/Roaming/SQL Developer/system3.2.20.09.87/o.jdeveloper.db.connection.11.1.1.4.37.59.48/connections.xml > > SEVERE 96 0 oracle.jdeveloper.db.DatabaseConnections DatabaseConnections has no JNDI context so cannot list connections. and I've lost all my connections... the connections.xml seems empty any idea on how to fix this? thanks!
I ran into the same problem. Here is what worked for me: the connections.xml file just contained the repeated string NULL. I simply deleted this file and created a new connection, which went through fine.
* STEP 1: Delete the file `connections.xml` under the path: `C:\Users\tejgm\AppData\Roaming\SQL Developer\system2.1.1.64.45\o.jdeveloper.db.connection.11.1.1.2.36.55.30`. * STEP 2: Restart SQL Developer.
DatabaseConnections has no JNDI context so cannot list connections
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I need to update records that will match a specific query, so I'm currently trying to figure out how to find a list of duplicate values where one column differs in value. I have the following table definition ``` DocumentId (BIGINT) NotePopupId (INT) IsPopup (BIT) Note (NVARCHAR(100)) ``` My table might have data as follows: ``` 1|1|False|Note1 1|2|False|Note2 2|1|False|Note1 2|2|True|Popup1 3|1|False|Note1 3|2|True|Popup1 4|1|False|Note1 4|2|False|Note2 ``` I need to return a list of DocumentId that have more than one DocumentId defined but where The IsPopup field is True and False and ignore the ones where they are all false or all true. I understand how to write a basic query that will return the total number of duplicates but I don't get how would I ensure that it will only returns the duplicates that have their IsPopup field set to true and false for 2 or more records with the same DocumentId. So in this instance, based on the above, it would return DocumentId 2 and 3. Thanks.
I am inclined to handle a question like this using `group by` and aggregation: ``` select documentId from table group by documentId having min(cast(isPopup as int)) = 0 and max(cast(isPopup as int)) = 1; ```
Find `Distinct Count` and filter the group's whose count is more than 1. Try this. ``` select DocumentId from yourtable group by DocumentId having count(Distinct IsPopup)>1 ``` If you want to return documentId when there is only one IsPopup then use this ``` select DocumentId from yourtable group by DocumentId having count(Distinct IsPopup)>1 or count(IsPopup)=1 ```
Find duplicates where another column have different columns
[ "", "sql", "sql-server", "" ]
I have the following slash-delimited example strings and need to split them: ``` Record---String 1--------ABC 2--------DEF/123 3--------GHI/456/XYZ ``` The strings will always have 1 - 3 parts; no more, no less. To split them I have been using this function: ``` CREATE FUNCTION [dbo].[Split] ( @chunk VARCHAR(4000) ,@delimiter CHAR(1) ,@index INT ) RETURNS VARCHAR(1000) AS BEGIN DECLARE @curIndex INT = 0 ,@pos INT = 1 ,@prevPos INT = 0 ,@result VARCHAR(1000) WHILE @pos > 0 BEGIN SET @pos = CHARINDEX(@delimiter, @chunk, @prevPos); IF (@pos > 0) BEGIN -- Characters between position and previous position SET @result = SUBSTRING(@chunk, @prevPos, @pos - @prevPos) END ELSE BEGIN -- Last Delim SET @result = SUBSTRING(@chunk, @prevPos, LEN(@chunk)) END IF (@index = @curIndex) BEGIN RETURN @result END SET @prevPos = @pos + 1 SET @curIndex = @curIndex + 1; END RETURN '' -- Else Empty END ``` To split the strings, I call this function like so: ``` MyField1 = dbo.Split(MyInputString, '/', 0), MyField2 = dbo.Split(MyInputString, '/', 1), MyField3 = dbo.Split(MyInputString, '/', 2) ``` The expected results would be ``` Record 1: MyField1 = ABC MyField2 = NULL MyField3 = NULL Record 2: MyField1 = DEF MyField2 = 123 MyField3 = NULL Record 3: MyField1 = GHI MyField2 = 456 MyField3 = XYZ ``` It is almost doing what I had hoped, except the last character of MyField1 for Record 1 is being truncated resulting in "AB" instead of "ABC". I believe this is because there is no slash delimiter for this one-part string. Unfortunately, I did not write this function and my SQL skills are a bit weak. What should I change to make this function return proper results when there is no delimiter in the string?
The following fixes your "SPLIT" function. Add the following line just before the WHILE. ``` SET @chunk = @chunk + '/' ```
I would move the charindex to before the while: ``` alter FUNCTION [dbo].[Split] ( @chunk VARCHAR(4000) ,@delimiter CHAR(1) ,@index INT ) RETURNS VARCHAR(1000) AS BEGIN DECLARE @curIndex INT = 0 ,@pos INT = 1 ,@prevPos INT = 0 ,@result VARCHAR(1000) SET @pos = CHARINDEX(@delimiter, @chunk, @prevPos); if @pos= 0 return @chunk WHILE @pos > 0 BEGIN SET @pos = CHARINDEX(@delimiter, @chunk, @prevPos); IF (@pos > 0) BEGIN -- Characters between position and previous position SET @result = SUBSTRING(@chunk, @prevPos, @pos - @prevPos) END ELSE BEGIN -- Last Delim SET @result = SUBSTRING(@chunk, @prevPos, LEN(@chunk)) END IF (@index = @curIndex) BEGIN RETURN @result END SET @prevPos = @pos + 1 SET @curIndex = @curIndex + 1; SET @pos = CHARINDEX(@delimiter, @chunk, @prevPos); END RETURN @chunk -- Else Empty END ```
Split delimited string in SQL Server
[ "", "sql", "sql-server", "string", "t-sql", "sql-server-2012", "" ]
Below is an example table. ``` ID FROM TO DATE 1 Number1 Number2 somedate 2 Number2 Number1 somedate 3 Number2 Number1 somedate 4 Number3 Number1 somedate 5 Number3 Number2 somedate ``` The expected result is to get 1 row for each unique pair of the TO and FROM columns. Example result if ordered by ID ASC: ``` (1,Number1,Number2) (4,Number3,Number1) (5,Number3,Number2) ``` OK, I have found how to do this with the following query: ``` SELECT * FROM table GROUP BY LEAST(to,from), GREATEST(to,from) ``` However I am not able to get the most recent record for every unique pair. I have tried with `order by ID desc` but it returns the first found row for each unique pair.
SQL fiddle isn't working for some reason so in the mean time you will need to help me to help you. Assuming that the following statement works ``` SELECT LEAST(to,from) as LowVal, GREATEST(to,from) as HighVal, MAX(date) as MaxDate FROM table GROUP BY LEAST(to,from), GREATEST(to,from) ``` then you could join to that as ``` select t.* from table t inner join (SELECT LEAST(to,from) as LowVal, GREATEST(to,from) as HighVal, MAX(date) as MaxDate FROM table GROUP BY LEAST(to,from), GREATEST(to,from) ) v on t.date = v.MaxDate and (t.From = v.LowVal or t.From = v.HighVal) and (t.To = v.LowVal or t.To= v.HighVal) ```
This answer was originally inspired by [Get records with max value for each group of grouped SQL results](https://stackoverflow.com/questions/12102200/get-records-with-max-value-for-each-group-of-grouped-sql-results) but then I looked further and came up with the correct solution. ``` CREATE TABLE T (`id` int, `from` varchar(7), `to` varchar(7), `somedate` datetime) ; INSERT INTO T (`id`, `from`, `to`, `somedate`) VALUES (1, 'Number1', 'Number2', '2015-01-01 00:00:00'), (2, 'Number2', 'Number1', '2015-01-02 00:00:00'), (3, 'Number2', 'Number1', '2015-01-03 00:00:00'), (4, 'Number3', 'Number1', '2015-01-04 00:00:00'), (5, 'Number3', 'Number2', '2015-01-05 00:00:00'); ``` Tested on MySQL 5.6.19 ``` SELECT * FROM ( SELECT * FROM T ORDER BY LEAST(`to`,`from`), GREATEST(`to`,`from`), somedate DESC ) X GROUP BY LEAST(`to`,`from`), GREATEST(`to`,`from`) ``` **Result set** ``` id from to somedate 3 Number2 Number1 2015-01-03 4 Number3 Number1 2015-01-04 5 Number3 Number2 2015-01-05 ``` But, this relies on some shady behavior of MySQL, which will be changed in future versions. MySQL 5.7 [rejects](http://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html) this query because the columns in the SELECT clause are not functionally dependent on the GROUP BY columns. If it is configured to accept it (`ONLY_FULL_GROUP_BY` is disabled), it works like the previous versions, but still it is not [guaranteed](http://dev.mysql.com/doc/refman/5.6/en/group-by-handling.html): "The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate."
So, the correct answer seems to be this: ``` SELECT T.* FROM T INNER JOIN ( SELECT LEAST(`to`,`from`) AS LowVal, GREATEST(`to`,`from`) AS HighVal, MAX(somedate) AS MaxDate FROM T GROUP BY LEAST(`to`,`from`), GREATEST(`to`,`from`) ) v ON T.somedate = v.MaxDate AND (T.From = v.LowVal OR T.From = v.HighVal) AND (T.To = v.LowVal OR T.To = v.HighVal) ``` Result set is the same as above, but in this case it is guaranteed to stay like this, while before you could easily get a different date and id for row `Number2, Number1`, depending on what indexes you have on the table. It will work as expected until you have two rows in the original data that have exactly the same `somedate` and `to` and `from`. Let's add another row: ``` INSERT INTO T (`id`, `from`, `to`, `somedate`) VALUES (6, 'Number1', 'Number2', '2015-01-03 00:00:00'); ``` The query above would return two rows for `2015-01-03`: ``` id from to somedate 3 Number2 Number1 2015-01-03 6 Number1 Number2 2015-01-03 4 Number3 Number1 2015-01-04 5 Number3 Number2 2015-01-05 ``` To fix this we need a method to choose only one row in the group. In this example we can use the unique `ID` to break the tie. If there is more than one row in the group with the same maximum date, we will choose the row with the largest ID. The inner-most sub-query called `Groups` simply returns all groups, like the original query in the question. Then we add one column `id` to this result set, and we choose the `id` that belongs to the same group and has the highest `somedate` and then the highest `id`, which is done by `ORDER BY` and `LIMIT`. This sub-query is called `GroupsWithIDs`. Once we have all groups and an `id` of the correct row for each group, we `join` this to the original table to fetch the rest of the columns for the found `id`s.
**final query** ``` SELECT T.* FROM ( SELECT Groups.N1 ,Groups.N2 , ( SELECT T.id FROM T WHERE LEAST(`to`,`from`) = Groups.N1 AND GREATEST(`to`,`from`) = Groups.N2 ORDER BY T.somedate DESC, T.id DESC LIMIT 1 ) AS id FROM ( SELECT LEAST(`to`,`from`) AS N1, GREATEST(`to`,`from`) AS N2 FROM T GROUP BY LEAST(`to`,`from`), GREATEST(`to`,`from`) ) AS Groups ) AS GroupsWithIDs INNER JOIN T ON T.id = GroupsWithIDs.id ``` **final result set** ``` id from to somedate 4 Number3 Number1 2015-01-04 5 Number3 Number2 2015-01-05 6 Number1 Number2 2015-01-03 ```
SQL How to group by two columns
[ "", "mysql", "sql", "" ]
I'm trying to add a new column to an existing table which will be populated by a unique Guid value. I'm trying the following code: ``` IF NOT EXISTS(select * from sys.columns where Name = N'Product_GUID' and Object_ID = Object_ID(N'dbo.Product')) BEGIN PRINT 'Creating new GUID column in dbo.Product table' ALTER TABLE dbo.Product ADD Product_GUID uniqueidentifier NULL UPDATE dbo.Product SET Product_Guid=NEWID() ALTER TABLE dbo.Product ALTER COLUMN Product_Guid uniqueidentifier NOT NULL END ``` This won't work because the second statement doesn't recognise the new column name. I can't put GO or ; at the end of each statement though, presumably because I'm in the middle of the BEGIN/END block. What's the best way of solving this dilemma?
The statement doing the update must be compiled after the column is added. The way this is usually done is wrapping the statements in an `EXEC`: ``` EXEC(' UPDATE dbo.Product SET Product_Guid = NEWID() ALTER TABLE dbo.Product ALTER COLUMN Product_Guid uniqueidentifier NOT NULL ') ```
It seems like you want to set the default value and have the column be not null. You'll get the same effect if you just set the default value to NEWID() ``` IF NOT EXISTS(select * from sys.columns where Name = N'Product_GUID' and Object_ID = Object_ID(N'dbo.Product')) BEGIN PRINT 'Creating new GUID column in dbo.Product table' ALTER TABLE dbo.Product ADD Product_GUID uniqueidentifier NOT NULL DEFAULT NEWID() END ``` If you need to remove the constraint after, you can create the DEFAULT constraint after you define the column in the alter statement and then drop the named constraint right after. If you don't name the constraint you'll have to get the name from sys.objects and then do dynamic sql to remove it. ``` IF NOT EXISTS(select * from sys.columns where Name = N'Product_GUID' and Object_ID = Object_ID(N'dbo.Product')) BEGIN PRINT 'Creating new GUID column in dbo.Product table' ALTER TABLE dbo.Product ADD Product_GUID uniqueidentifier NOT NULL, CONSTRAINT Default_Product_GUID DEFAULT NEWID() FOR Product_GUID; ALTER TABLE dbo.Product DROP CONSTRAINT Default_Product_GUID END ```
SQL: multiple statements within a BEGIN and END
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to create a table using a SQL view. It is supposed to add a column to each row in the questions table that will hold an integer count of the answers given to that question. This is what I have so far: ``` CREATE VIEW [dbo].[Question] AS SELECT COUNT(answer.Id) as 'Answers', question.Id, question.CreatorId, question.Title, question.Content, question.CreationDate FROM Questions AS question JOIN Answers AS answer ON answer.QuestionId = question.Id; ``` I understand that this is not right but I can't think of anything else. Please help!
My favorite, correlated sub-query to get count: ``` CREATE VIEW [dbo].[Question] AS SELECT (select COUNT(*) from Answers where QuestionId = question.Id) as 'Answers', question.Id, question.CreatorId, question.Title, question.Content, question.CreationDate FROM Questions AS question; ``` Or, a join with a group by; ``` CREATE VIEW [dbo].[Question] AS SELECT COUNT(answer.Id) as 'Answers', question.Id, question.CreatorId, question.Title, question.Content, question.CreationDate FROM Questions AS question JOIN Answers AS answer ON answer.QuestionId = question.Id GROUP BY question.Id, question.CreatorId, question.Title, question.Content, question.CreationDate; ``` Note that columns in select list are either argument to aggregate functions, or also listed in GROUP BY clause.
This not creating a table,you are joining ``` var commandStr= "If not exists (select name from sysobjects where name = 'Customer') CREATE TABLE Customer(First_Name char(50),Last_Name char(50),Address char(50),City char(50),Country char(25),Birth_Date datetime)"; ``` using (SqlCommand command = new SqlCommand(commandStr, con)) command.ExecuteNonQuery(); you need like this
Add rows count in SQL view
[ "", "sql", "database", "view", "" ]
Given these simplified multiple choice tables where sometimes more than one answer is correct: ``` STUDENT_ANSWERS AnswerID | StudentID | QuestionID | Answers ------------------------------------------- 1 | 1 | 1 | C,D QUESTION_ANSWERS QuestionID | Answer | Text ------------------------------------------------- 1 | A | This is answer A 1 | B | B could also be correct 1 | C | Maybe it's C? 1 | D | Definitely D! ``` How do I do a select which translates the answers to their descriptions? My start: ``` SELECT * FROM STUDENT_ANSWERS sa LEFT OUTER JOIN QUESTION_ANSWERS qa ON qa.Answer IN sa.Answers??? -- Doesn't seem to work as IN requires a format of ('C','D') while I have 'C,D' ``` Desired output: ``` AnswerID | StudentID | QuestionID | AnswerDescriptions ------------------------------------------- 1 | 1 | 1 | Maybe it's C?,Definitely D! ``` So the descriptions simply have to replace the codes instead of getting a single line for each answer.
Your problem is the structure of table `STUDENT_ANSWERS`. It should have one row per answer: ``` AnswerID | StudentID | QuestionID | Answer ------------------------------------------- 1 | 1 | 1 | C 2 | 1 | 1 | D ``` --- Now, assuming you can't do anything to change (read: fix) this, you can fudge it with appending a comma and using LIKE: ``` select * from STUDENT_ANSWERS a join QUESTION_ANSWERS q on ',' + a.Answers + ',' like '%,' + q.Answer + ',%' and a.QuestionID = q.QuestionID ``` [SQL Fiddle demo](http://sqlfiddle.com/#!6/25ba6/2) Note this *assumes you will never ever have the text `,` in `QUESTION_ANSWERS.Answer`*. It will also never be able to use an index, so it's going to be slower than slow. And if you *absolutely must* format this in the database to be one line, you can use `STUFF` and the `FOR XML PATH('')` trick to concatenate the resulting rows.
This is full working example using only `T-SQL` statements. I will recommend to you to create separate function for splitting `CSV` that returns row set. Also, if you are working with huge amount of data, you may want to create a `CLR` function for splitting the values. Take a look to [this article](https://msdn.microsoft.com/en-us/library/ff878119.aspx) (there is everything you need). ``` DECLARE @StudentAnswers TABLE ( [AnswerID] INT ,[StudentID] INT ,[QuestionID] INT ,[Answers] VARCHAR(256) ); DECLARE @QuestionAnswers TABLE ( [QuestionID] INT ,[Answer] CHAR ,[Text] VARCHAR(256) ); INSERT INTO @StudentAnswers ([AnswerID], [StudentID], [QuestionID], [Answers]) VALUES (1, 1, 1, 'C,D') ,(2, 2, 1, 'A'); INSERT INTO @QuestionAnswers ([QuestionID], [Answer], [Text]) VALUES (1, 'A', 'This is answer A') ,(1, 'B', 'B could also be correct') ,(1, 'C', 'Maybe it''s C?') ,(1, 'D', 'Definitely D!'); SELECT SA.[AnswerID] ,SA.[StudentID] ,SA.[QuestionID] ,T.c.value('.', 'CHAR') ,QA.[Text] FROM @StudentAnswers SA CROSS APPLY ( SELECT CAST('<i>' + REPLACE([Answers], ',', '</i><i>') + '</i>' AS XML) Answers ) DS CROSS APPLY DS.Answers.nodes('i') T(c) INNER JOIN @QuestionAnswers QA ON SA.[QuestionID] = QA.[QuestionID] AND T.c.value('.', 'CHAR') = QA.[Answer]; ```
Join multiple values from one column, selected from another table
[ "", "sql", "sql-server", "t-sql", "csv", "join", "" ]
I have a simple employee table ![employee table](https://i.stack.imgur.com/DLPk4.png) and here is the data ![employee table data](https://i.stack.imgur.com/G7euL.png) When I try to run a query with string parameter it doesn't return anything although it has correct parameter ![query with correct string parameter](https://i.stack.imgur.com/Q8kpW.png) and when I try to change it to double quote ", it returns an error ![query enclosed parameter with "" for string parameters](https://i.stack.imgur.com/xevzj.png) Have I missed something? Thanks in advance
Your first query is correct. The problem is the data. The `empid` field has something other than `'100002'` in the field. You can investigate this in various ways: ``` select length(empid), '|' + empid + '|' from employee ``` This will provide information about unexpected characters that you might not see. The length for the field should be 6. The output for the second column should be "|100002|". If an extra character at the beginning or end of the field is the issue, then: ``` where empid like '%100002%' ``` solves the problem. However, there is a big potential performance difference, because this query cannot take advantage of the primary key index. So, you want to fix the data. You might have confused the letter "O" with the digit zero, or capital "I" or lower case "l" with the digit 1. These can be harder to spot, but you can use `lower()` and `upper()` to help find such issues.
You could have a special char, like a carriage return, at the end of the value. Use the query below to check if there are some of those cases in your table. ``` SELECT EmpId, -- convert(int, EmpId) AS EmpId_INT, LEN(EmpId) AS Len_EmpId FROM Employee -- WHERE -- LEN(EmpId) <> LEN(convert(int, EmpId)) ORDER BY EmpId ``` If it's not mandatory, I suggest to change the "EmpId" data type to int.
Unable to return correct query when using string parameter in SQL Server
[ "", "sql", "sql-server-2012", "" ]
I am a newbie to databases. I want to delete the top 10 records from a table without any condition on a column.
Just execute this one-line command : ``` DELETE FROM table LIMIT 10 ```
Use the query below to delete the top ten records in MySQL: ``` DELETE FROM table_name LIMIT 10; ``` It deletes the top 10 records from the table. You can also change the number to delete a different number of records.
Delete Top 10 record from table
[ "", "mysql", "sql", "database", "" ]
I have the following stored procedure that selects comments from a SQL Server database for a selected photo. I'm using `OFFSET` and `FETCH` in order to select a number of posts (10) on load and then to have a button that will prepend more posts to a DIV using AJAX. ``` CREATE PROCEDURE [dbo].[SD_PhotoCmt] @IDphoto int, @IDuserInput nvarchar(150), @offset int, @fetch int = 10 AS SET NOCOUNT ON; SELECT photoCmt.IDphotoCmt, photoCmt.photoCmt, photoCmt.IDuser, photoCmt.photoCmtDate, //the date and time when the comment was added photoCmt.IDphoto, FROM photoCmt WHERE photoCmt.IDphoto = @IDphoto ORDER BY photoCmt.photoCmtDate DESC OFFSET @offset ROWS FETCH NEXT @fetch ROWS ONLY ``` Let's say there are 12 comments for a selected photo, I want the comments to be displayed on the page as follows (on first load): Desired order ``` [Button PREPEND More] Comment3 Comment4 Comment5 Comment6 Comment7 Comment8 Comment9 Comment10 Comment11 Comment12 ``` Unfortunately the stored procedure is displaying the comments as follows: Undesired order ``` [Button PREPEND More] Comment12 Comment11 Comment10 Comment9 Comment8 Comment7 Comment6 Comment5 Comment4 Comment3 ``` How can I get my desired order? I tried many combinations but I can't get it right. I can't use `TOP` with `OFFSET` and `FETCH`.
Do OFFSET/FETCH first, then order the RESULT, something like: ``` select * from (SELECT photoCmt.IDphotoCmt, photoCmt.photoCmt, photoCmt.IDuser, photoCmt.photoCmtDate, -- the date and time when the comment was added photoCmt.IDphoto FROM photoCmt WHERE photoCmt.IDphoto = @IDphoto ORDER BY photoCmt.photoCmtDate DESC OFFSET @offset ROWS FETCH NEXT @fetch ROWS ONLY) AS page order by photoCmtDate ASC ```
Try this. Essentially, use the descending order to get the page into a sub-query, then reorder the page in the outer query. I would caveat my advice: using a display order different from the page order could be very confusing to the user. [**Fiddle Here**](http://sqlfiddle.com/#!6/b1ed3/2) ``` CREATE PROCEDURE [dbo].[SD_PhotoCmt] @IDphoto int, @IDuserInput nvarchar(150), @offset int, @fetch int = 10 AS SET NOCOUNT ON; SELECT [O].[IDphotoCmt], [O].[photoCmt], [O].[IDuser], [O].[photoCmtDate], [O].[IDphoto] FROM (SELECT [C].[IDphotoCmt], [C].[photoCmt], [C].[IDuser], [C].[photoCmtDate], [C].[IDphoto] FROM [photoCmt] [C] WHERE [C].[IDphoto] = @IDphoto ORDER BY [C].[photoCmtDate] DESC OFFSET @offset ROWS FETCH NEXT @fetch ROWS ONLY) [O] ORDER BY [O].[photoCmtDate] ASC; ```
Getting the right order using ORDER with OFFSET and FETCH in a stored procedure
[ "", "sql", "sql-server", "stored-procedures", "" ]
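The chosen pattern — page newest-first, then flip the fetched page back to ascending in an outer query — can be reproduced with any engine that supports subquery paging. A small sketch in Python with sqlite3, using `LIMIT`/`OFFSET` in place of T-SQL's `OFFSET ... FETCH`; the table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photo_cmt (id INTEGER PRIMARY KEY, cmt_date TEXT)")
conn.executemany(
    "INSERT INTO photo_cmt (id, cmt_date) VALUES (?, ?)",
    [(i, "2024-01-%02d" % i) for i in range(1, 13)],
)

offset, fetch = 0, 10
# Inner query: newest-first order drives the paging window.
# Outer query: re-sorts the fetched page oldest-first for display.
rows = conn.execute(
    """
    SELECT id FROM (
        SELECT id, cmt_date FROM photo_cmt
        ORDER BY cmt_date DESC
        LIMIT ? OFFSET ?
    )
    ORDER BY cmt_date ASC
    """,
    (fetch, offset),
).fetchall()
page = [r[0] for r in rows]
print(page)
```

The first page holds comments 3 through 12 in ascending date order, matching the desired layout in the question.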
**Scenario:** I need to convert an existing query using (+) outer join syntax to ANSI syntax. **Reason**: One of the JOINS requires an OR operand, which is not allowed with the (+) operator, but is allowed with LEFT OUTER JOIN. (At least I'm assuming this is correct.) **Goal of the Query:** Table D contains the name of two hierarchies, BB and Commercial. Simple joins through these tables will return 19 Positions and their respective hierarchy. I need to see ALL positions with the appropriate hierarchy if valid, else a NULL value. **(+) Syntax Query - working correctly:** ``` select a.userid, a.firstname, a.lastname, b.name PositionName, d.name Hierarchy from cs_participant a, cs_position b, cs_positionrelation c, cs_positionrelationtype d where a.payeeseq = b.payeeseq and b.ruleelementownerseq = c.childpositionseq(+) and c.positionrelationtypeseq = d.datatypeseq(+) and b.removedate = to_date('01/01/2200','dd/mm/yyyy') and b.effectiveenddate = to_date('01/01/2200','dd/mm/yyyy') and c.removedate(+) = to_date('01/01/2200','dd/mm/yyyy') and d.removedate(+) = to_date('01/01/2200','dd/mm/yyyy') and a.removedate = to_date('01/01/2200','dd/mm/yyyy') and a.effectiveenddate = to_date('01/01/2200','dd/mm/yyyy') ``` **Result Example:** ![enter image description here](https://i.stack.imgur.com/YbiGH.png) **My Attempt with ANSI Syntax:** ``` select a.firstname, a.lastname, b.name, d.name as "Hierarchy" from cs_participant a, cs_position b left outer join cs_positionrelation c on c.parentpositionseq = b.ruleelementownerseq or c.childpositionseq = b.ruleelementownerseq (--This is the OR clause that I cannot execute in the (+) syntax query) left outer join cs_positionrelationtype d on d.datatypeseq = c.positionrelationtypeseq where a.payeeseq = b.payeeseq and b.removedate = to_date('01/01/2200','dd/mm/yyyy') and b.effectiveenddate = to_date('01/01/2200','dd/mm/yyyy') and a.removedate = to_date('01/01/2200','dd/mm/yyyy') and c.removedate = to_date('01/01/2200','dd/mm/yyyy') and
c.effectiveenddate = to_date('01/01/2200','dd/mm/yyyy') and d.removedate = to_date('01/01/2200','dd/mm/yyyy') ``` **ANSI Query Results:** This query returns ONLY the Positions that are assigned to a hierarchy. I need to see ALL positions, with or without a hierarchy assignment, which are currently being excluded from the results.
You have two date literals as outer join conditions in your first query, but you leave them in the `where` clause in the second query. To change the syntax properly, those criteria need to be left as part of the join criteria. It's also bad form to combine the two join syntaxes (i.e. having comma-separated tables and the `join` keyword in the same query). Below is the first query properly adapted to SQL-99 syntax: ``` SELECT a.userid, a.firstname, a.lastname, b.name AS positionname, d.name AS hierarchy FROM cs_participant a JOIN cs_position b ON a.payeeseq = b.payeeseq LEFT JOIN cs_positionrelation c ON b.ruleelementownerseq = c.childpositionseq AND c.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') LEFT JOIN cs_positionrelationtype d ON c.positionrelationtypeseq = d.datatypeseq AND d.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') WHERE b.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') AND b.effectiveenddate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') AND a.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') AND a.effectiveenddate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') ``` Once that's done, adapting it to join on either column is trivial: ``` SELECT a.userid, a.firstname, a.lastname, b.name AS positionname, d.name AS hierarchy FROM cs_participant a JOIN cs_position b ON a.payeeseq = b.payeeseq LEFT JOIN cs_positionrelation c ON ( c.parentpositionseq = b.ruleelementownerseq OR c.childpositionseq = b.ruleelementownerseq) AND c.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') LEFT JOIN cs_positionrelationtype d ON c.positionrelationtypeseq = d.datatypeseq AND d.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') WHERE b.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') AND b.effectiveenddate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') AND a.removedate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') AND a.effectiveenddate = TO_DATE ('01/01/2200', 'dd/mm/yyyy') ```
(Posting my comment as an answer in case this was what you wanted.) OR is the same as a UNION. In Oracle syntax, you can do ``` SELECT * FROM TABLE1, TABLE2 WHERE B1=C1(+) union SELECT * FROM TABLE1, TABLE2 WHERE B2=C2(+) ``` This is the same as - ``` SELECT * FROM TABLE1 LEFT JOIN TABLE2 ON (B1=C1 OR B2=C2) ``` (Maybe use UNION ALL if at all possible.) Union is how a FULL OUTER JOIN was possible in Oracle syntax.
Oracle LEFT OUTER JOIN on 3+ tables - (+) Syntax versus ANSI Syntax
[ "", "sql", "oracle", "" ]
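The key move in the accepted answer — keeping the outer-joined table's filters (and the `OR`) in the `ON` clause rather than the `WHERE` clause — is easy to verify on a toy schema. A sketch in Python with sqlite3; the tables and values are simplified stand-ins for the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE position (pos_id INTEGER, name TEXT);
    CREATE TABLE relation (parent_id INTEGER, child_id INTEGER,
                           removedate TEXT);
    INSERT INTO position VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO relation VALUES (1, 2, '2200-01-01'),
                                (9, 9, '2200-01-01');
""")

# Both the OR and the right-table date filter live in ON, not WHERE,
# so position C (which has no relation at all) survives with NULLs.
rows = conn.execute("""
    SELECT p.name, r.parent_id
    FROM position p
    LEFT JOIN relation r
      ON (r.parent_id = p.pos_id OR r.child_id = p.pos_id)
     AND r.removedate = '2200-01-01'
    ORDER BY p.pos_id
""").fetchall()
print(rows)
```

Moving the `removedate` test into `WHERE` instead would silently turn the outer join into an inner one and drop row C — exactly the symptom described in the question.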
The queries each work perfectly separately: ``` SELECT asf.surface_name, am.* FROM atp_matchs_to_surfaces m2s LEFT JOIN atp_surfaces asf ON m2s.surfaces_id = asf.surfaces_id LEFT JOIN atp_matchs am ON am.matchs_id = m2s.matchs_id; SELECT att.tournament_type_name, am.* FROM atp_matchs_to_tournament_type m2s LEFT JOIN atp_tournament_type att ON m2s.tournament_type_id = att.tournament_type_id LEFT JOIN atp_matchs am ON am.matchs_id = m2s.matchs_id; ``` The tables 'atp\_matchs\_to\_surfaces' and 'atp\_matchs\_to\_tournament\_type' are defined this way: ``` CREATE TABLE IF NOT EXISTS `atp_matchs_to_tournament_type` ( `tournament_type_id` int(4) NOT NULL, `matchs_id` int(6) NOT NULL, PRIMARY KEY (`tournament_type_id`,`matchs_id`) CREATE TABLE IF NOT EXISTS `atp_matchs_to_surfaces` ( `surfaces_id` int(4) NOT NULL, `matchs_id` int(6) NOT NULL, PRIMARY KEY (`surfaces_id`,`matchs_id`) ``` And the other tables with all the data: ``` CREATE TABLE IF NOT EXISTS `atp_matchs` ( `matchs_id` int(7) NOT NULL AUTO_INCREMENT, `tournament_name` varchar(36) NOT NULL, `tournament_year` year NOT NULL,-- DEFAULT '0000', `tournament_country` varchar(26) NOT NULL, `match_datetime` datetime NOT NULL,-- DEFAULT '0000-00-00 00:00:00', `match_link` varchar(85) NOT NULL, `prize_money` int(12) NOT NULL, `round` varchar(8) NOT NULL,-- DEFAULT '1R', `sets` varchar(34) NOT NULL,-- DEFAULT '0-0', `result` varchar(4) NOT NULL,-- DEFAULT '0-0', `p1_odd` decimal(4,2) NOT NULL,-- DEFAULT '0.00', `p2_odd` decimal(4,2) NOT NULL,-- DEFAULT '0.00', PRIMARY KEY (`matchs_id`) CREATE TABLE IF NOT EXISTS `atp_surfaces` ( `surfaces_id` int(4) NOT NULL AUTO_INCREMENT, `surface_name` varchar(24) NOT NULL, PRIMARY KEY (`surfaces_id`) CREATE TABLE IF NOT EXISTS `atp_tournament_type` ( `tournament_type_id` int(4) NOT NULL AUTO_INCREMENT, `tournament_type_name` varchar(22) NOT NULL, PRIMARY KEY (`tournament_type_id`) ``` I want a single query that returns all the match records together with surface name and tournament type.
Is that clear? I hope so... I tried to implement this with subqueries: <http://www.w3resource.com/mysql/subqueries/> and [How can an SQL query return data from multiple tables](https://stackoverflow.com/questions/12475850/how-can-an-sql-query-return-data-from-multiple-tables) but I can't get it to work.
OK, this is your current schema. As you can see, one match can be played on multiple surfaces and one match can be played within multiple tournament types. ![Your current schema](https://i.stack.imgur.com/Nqds6.png) If this schema is OK, you can get your result with this query: ``` SELECT am.*, asu.surface_name, att.tournament_type_name FROM atp_matchs AS am LEFT JOIN atp_matchs_to_surfaces AS m2s ON m2s.matchs_id = am.matchs_id LEFT JOIN atp_surfaces AS asu ON asu.surfaces_id = m2s.surfaces_id LEFT JOIN atp_matchs_to_tournament_type AS m2t ON m2t.matchs_id = am.matchs_id LEFT JOIN atp_tournament_type AS att ON att.tournament_type_id = m2t.tournament_type_id ``` **However**, if one match can be played on one surface only and within one tournament type only, I would change your schema to: ![Suggested schema](https://i.stack.imgur.com/IjJ1Q.png) Tables atp\_matchs\_to\_surfaces and atp\_matchs\_to\_tournament\_type are removed and fields surfaces\_id and tournament\_type\_id moved to atp\_matchs table. Your query is now: ``` SELECT am.*, asu.surface_name, att.tournament_type_name FROM atp_matchs AS am LEFT JOIN atp_surfaces AS asu ON asu.surfaces_id = am.surfaces_id LEFT JOIN atp_tournament_type AS att ON att.tournament_type_id = am.tournament_type_id ```
``` SELECT asf.surface_name, am.* FROM atp_matchs_to_surfaces m2s LEFT JOIN atp_surfaces asf ON m2s.surfaces_id = asf.surfaces_id LEFT JOIN atp_matchs am ON am.matchs_id = m2s.matchs_id LEFT JOIN( SELECT att.tournament_type, am.* FROM atp_matchs_to_tournament_type m2s LEFT JOIN atp_tournament_type att AS Q1 ON m2s.surfaces_id = att.surfaces_id LEFT JOIN atp_matchs am AS Q2 ON am.matchs_id = m2s.matchs_id); ``` I added some "AS" because I had the error: Every derived table must have its own alias. I'm a little lost here!
How to join Two custom Queries with two Joins in only One Query in MySQL
[ "", "mysql", "sql", "select", "subquery", "left-join", "" ]
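The accepted query chains every link table off the driving `atp_matchs` table. The same shape can be sketched in Python with sqlite3 (shortened, made-up table and column names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE matchs   (match_id INTEGER, tournament TEXT);
    CREATE TABLE surfaces (surface_id INTEGER, surface_name TEXT);
    CREATE TABLE m2s      (surface_id INTEGER, match_id INTEGER);
    CREATE TABLE ttypes   (type_id INTEGER, type_name TEXT);
    CREATE TABLE m2t      (type_id INTEGER, match_id INTEGER);
    INSERT INTO matchs VALUES (1, 'Wimbledon'), (2, 'Roland Garros');
    INSERT INTO surfaces VALUES (10, 'Grass'), (20, 'Clay');
    INSERT INTO m2s VALUES (10, 1), (20, 2);
    INSERT INTO ttypes VALUES (100, 'Grand Slam');
    INSERT INTO m2t VALUES (100, 1), (100, 2);
""")

# One driving table, two link-table hops; each pair of LEFT JOINs
# contributes one lookup column without dropping unmatched matches.
rows = conn.execute("""
    SELECT am.tournament, asu.surface_name, att.type_name
    FROM matchs am
    LEFT JOIN m2s ON m2s.match_id = am.match_id
    LEFT JOIN surfaces asu ON asu.surface_id = m2s.surface_id
    LEFT JOIN m2t ON m2t.match_id = am.match_id
    LEFT JOIN ttypes att ON att.type_id = m2t.type_id
    ORDER BY am.match_id
""").fetchall()
print(rows)
```

Each match picks up its surface and tournament type in one pass; a match missing either link would still appear, with NULL in that column.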
Good morning, I am on a SQL learning tour and trying to create a small database with a few queries to gain experience. Two tables were used: Person {id, name, age} and Knows {id, guest1\_id → Persons, guest2\_id → Persons} The query should result in a list of names of people that do not know anyone from the database, but can be known by others. Below is the code that I have got so far, but it does not seem to return anything. What is the problem here? ``` SELECT distinct K.id FROM Persons P LEFT JOIN Knows K ON K.guest1_id = P.id AND K.guest2_id = P.id WHERE K.id NOT IN ( SELECT id FROM Knows ) ``` Thank you!
Try this: ``` SELECT P.* FROM Persons P LEFT JOIN Knows K ON K.guest1_id = P.id WHERE K.id IS NULL ``` This will give you `Persons` that know nobody. You can also try this: ``` SELECT * FROM Persons P WHERE NOT EXISTS(SELECT 1 FROM Knows WHERE guest1_id = P.id) ```
Your question doesn't really make sense, nor does the look of the query. But, If you are looking for all people who don't know anyone, then that in summary means the person is in neither the guest1 or guest2 ID column within the Knows table. If that is the case, you can do a double-left-join to the knows table and just get those that don't fit in either side ``` SELECT P.* from Persons P LEFT JOIN Knows K1 on P.id = K1.guest1 LEFT JOIN Knows K2 on P.id = K2.guest2 where K1.guest1 IS NULL AND K2.guest2 IS NULL ``` So if your table of ``` Persons ID Name 1 A 2 B 3 C 4 D and Knows table ID Guest1 Guest2 1 1 3 2 1 4 3 3 4 ``` Then person 2 is the only person that does not know any other person, thus their ID is not in either Guest1 OR Guest2 columns of the Knows table.
LEFT OUTER JOIN does not work
[ "", "sql", "left-join", "notin", "" ]
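The anti-join in the accepted answer (`LEFT JOIN ... WHERE right-side IS NULL`) can be demonstrated in a few lines; a sketch in Python with sqlite3 using the question's toy schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE persons (id INTEGER, name TEXT);
    CREATE TABLE knows (guest1_id INTEGER, guest2_id INTEGER);
    INSERT INTO persons VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO knows VALUES (1, 2), (3, 2);  -- A knows B, C knows B
""")

# Anti-join: a person who knows somebody appears as guest1 at least
# once; keeping only NULL matches leaves the people who know nobody.
rows = conn.execute("""
    SELECT p.name
    FROM persons p
    LEFT JOIN knows k ON k.guest1_id = p.id
    WHERE k.guest1_id IS NULL
""").fetchall()
print(rows)
```

Only B comes back: B is known by A and C but never appears as `guest1_id`.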
Problem statement: > given a range `x -> y` of unsigned integers > where `x` and `y` are both in the range `0 -> 2^n` > and `n` is `0 -> 32` (or 64 in alternate cases) > find the minimum available value > not equal to `x` or `y` > that is not in an existing set > where existing sets are arbitrary subsets of `x -> y` I am working with modeling IPv4 and IPv6 subnets in a database. Each subnet is defined by its starting address and ending address (I ensure the integrity of the ranges via business rules). Because IPv6 is too large to store in the `bigint` datatype we store IP addresses as either `binary(4)` or `binary(16)`. The associated data is stored in `subnet`, `dhcp_range` and `ip_address` tables: * **Subnet**: A subnet range is defined by a beginning and ending IP address and stored in the `subnet` table. A subnet range is always of size 2^n (as per definition of [CIDR](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) / netmask). * **IP**: A subnet has `0..*` IP addresses stored in the `ip_address` table. An IP address must be between the beginning and ending addresses but not equal to the range as defined by its associated subnet. * **DHCP Range**: A subnet has `0..*` DHCP ranges stored in the `dhcp_range` table. Similar to a subnet, each DHCP range defines a beginning and ending address. A DHCP range is bounded by the associated subnet range. DHCP ranges do not overlap each other. What I want to determine is the next available IP for a subnet: * that is *not* already assigned (not in the IP address table) * *not* within a DHCP range * and *not* equal to the begin or end address of the subnet range. I am looking for a solution which finds either the minimum available address or all of the available addresses.
My initial thought was to generate the range of possible addresses (numbers) bound by the subnet's range and then remove addresses based on the used sets: ``` declare @subnet_sk int = 42 ;with address_range as ( select cast(ipv4_begin as bigint) as available_address ,cast(ipv4_end as bigint) as end_address, subnet_sk from subnet s where subnet_sk = @subnet_sk union all select available_address + 1, end_address, subnet_sk from address_range where available_address + 1 <= end_address ), assigned_addresses as ( select ip.[address] ,subnet_sk from ip_address ip where ip.subnet_sk = @subnet_sk and ip.address_family = 'InterNetwork'), dhcp_ranges as ( select dhcp.begin_address ,dhcp.end_address ,subnet_sk from dhcp_range dhcp where dhcp.subnet_sk = @subnet_sk and dhcp.address_family = 'InterNetwork') select distinct ar.available_address from address_range ar join dhcp_ranges dhcp on ar.available_address not between dhcp.begin_address and dhcp.end_address left join assigned_addresses aa on ar.available_address = aa.[address] join subnet s on ar.available_address != s.ipv4_begin and ar.available_address != s.ipv4_end where aa.[address] is null and s.subnet_sk = @subnet_sk order by available_address option (MAXRECURSION 32767) ``` The above query makes use of a recursive CTE and does not work for all data permutations. The recursive CTE is troublesome because it is limited to a max size of 32,767 (much smaller than potential range sizes) and has the very real possibility of being very slow. 
I could probably get over my issues with the recursive CTE, but the query fails under the following conditions: * when no IP addresses or DHCP ranges are assigned: *it returns nothing* when it should return all IP addresses as defined by the subnet range * when multiple DHCP ranges are assigned: *returns IPs inside DHCP ranges* To aid in troubleshooting the issue I've created a [SQL Fiddle](http://sqlfiddle.com/#!6/1ac02/7) with three subnets, each with a different characteristic: chopped up, empty, or mostly contiguous. The above query and the setup in the fiddle both work for the mostly contiguous subnet, but fail for the others. There is also a [GitHub Gist of the schema and example data](https://gist.github.com/ahsteele/910cba4d63994ea16ace/99de3a8ad639a342b288a0435c978d491edb1329). I have endeavored to generate the number sequence with recursive and stacked CTEs, but as indicated above I am afraid they will perform poorly and, in the case of recursive CTEs, be artificially limiting. [Aaron Bertrand](https://stackoverflow.com/users/61305/aaron-bertrand) details some alternatives to CTEs in his series *[Generate a set or sequence without loops](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1)*. Sadly the dataset is too large for a numbers table as creating one just for the IPv4 address space would require 32 gigabytes of disk space (SQL Server stores [`bigint` values in 8 bytes](https://msdn.microsoft.com/en-us/library/ms187745.aspx)). I would prefer to generate the sequence on the fly, but haven't come up with a good way to do so.
Alternatively, I have attempted to seed my query by looking at what I know to be used addresses: ``` declare @subnet_sk int = 1 select unassigned_range.* from (select cast(l.address as bigint) + 1 as start ,min(cast(fr.address as bigint)) - 1 as stop from ip_address as l left join ip_address as r on l.address = r.address - 1 left join ip_address as fr on l.address < fr.address where r.address is null and fr.address is not null and l.subnet_sk = @subnet_sk group by l.address, r.address) as unassigned_range join dhcp_range dhcp on unassigned_range.start not between cast(dhcp.begin_address as bigint) and cast(dhcp.end_address as bigint) and unassigned_range.stop not between cast(dhcp.begin_address as bigint) and cast(dhcp.end_address as bigint) where dhcp.subnet_sk = @subnet_sk ``` Sadly the above query does not work when there is nothing in the `ip_address` or `dhcp_range` tables. Worse since it is unaware of the bounds of the subnet range a `dhcp_range` towards the upper bound of the subnet range will artificially limit what is returned as the query cannot return rows from empty space at the edges. The performance is also not outstanding. Using SQL or TSQL how do I determine the next minimum available integer value within an arbitrary integer range limited by other ranges?
In this case no recursion is needed, because we have the `LEAD` function. I will think about the problem in terms of "gaps" and "islands". I will focus at first on IPv4, because it is easier to do arithmetic with, but the idea for IPv6 is the same and in the end I'll show a generic solution. To start with, we have a full range of possible IPs: from `0x00000000` to `0xFFFFFFFF`. Inside this range there are "islands" defined by the ranges (inclusive) in `dhcp_range`: `dhcp_range.begin_address, dhcp_range.end_address`. You can think about the list of assigned IP addresses as another set of islands, which have one element each: `ip_address.address, ip_address.address`. Finally, the subnet itself is two islands: `0x00000000, subnet.ipv4_begin` and `subnet.ipv4_end, 0xFFFFFFFF`. We know that these islands do **not** overlap, which makes our life easier. Islands can be perfectly adjacent to each other. For example, when you have a few consecutively allocated IP addresses, the gap between them is zero. Among all these islands we need to find the first gap which has at least one element, i.e. a non-zero gap, i.e. the next island starts at some distance after the previous island ends. So, we'll put all islands together using `UNION` (`CTE_Islands`) and then go through all of them in the order of `end_address` (or `begin_address`, use the field that has an index on it) and use `LEAD` to peek ahead and get the starting address of the next island. In the end we'll have a table where each row has the `end_address` of the current island and the `begin_address` of the next island (`CTE_Diff`). If the difference between them is more than one, it means that the "gap" is wide enough and we'll return the `end_address` of the current island plus 1.
**The first available IP address for the given subnet** ``` DECLARE @ParamSubnet_sk int = 1; WITH CTE_Islands AS ( SELECT CAST(begin_address AS bigint) AS begin_address, CAST(end_address AS bigint) AS end_address FROM dhcp_range WHERE subnet_sk = @ParamSubnet_sk UNION ALL SELECT CAST(address AS bigint) AS begin_address, CAST(address AS bigint) AS end_address FROM ip_address WHERE subnet_sk = @ParamSubnet_sk UNION ALL SELECT CAST(0x00000000 AS bigint) AS begin_address, CAST(ipv4_begin AS bigint) AS end_address FROM subnet WHERE subnet_sk = @ParamSubnet_sk UNION ALL SELECT CAST(ipv4_end AS bigint) AS begin_address, CAST(0xFFFFFFFF AS bigint) AS end_address FROM subnet WHERE subnet_sk = @ParamSubnet_sk ) ,CTE_Diff AS ( SELECT begin_address , end_address --, LEAD(begin_address) OVER(ORDER BY end_address) AS BeginNextIsland , LEAD(begin_address) OVER(ORDER BY end_address) - end_address AS Diff FROM CTE_Islands ) SELECT TOP(1) CAST(end_address + 1 AS varbinary(4)) AS NextAvailableIPAddress FROM CTE_Diff WHERE Diff > 1 ORDER BY end_address; ``` Result set would contain one row if there is at least one IP address available and would not contain rows at all if there are no IP addresses available. ``` For parameter 1 result is `0xAC101129`. For parameter 2 result is `0xC0A81B1F`. For parameter 3 result is `0xC0A8160C`. ``` Here is a link to [SQLFiddle](http://sqlfiddle.com/#!6/b4915/11/0). It didn't work with parameter, so I hard coded `1` there. Change it in UNION to other subnet ID (2 or 3) to try other subnets. Also, it didn't display result in `varbinary` correctly, so I left it as bigint. Use, say, windows calculator to convert it to hex to verify result. If you don't limit results to the first gap by `TOP(1)`, you'll get a list of all available IP ranges (gaps). 
**List of all ranges of available IP addresses for a given subnet** ``` DECLARE @ParamSubnet_sk int = 1; WITH CTE_Islands AS ( SELECT CAST(begin_address AS bigint) AS begin_address, CAST(end_address AS bigint) AS end_address FROM dhcp_range WHERE subnet_sk = @ParamSubnet_sk UNION ALL SELECT CAST(address AS bigint) AS begin_address, CAST(address AS bigint) AS end_address FROM ip_address WHERE subnet_sk = @ParamSubnet_sk UNION ALL SELECT CAST(0x00000000 AS bigint) AS begin_address, CAST(ipv4_begin AS bigint) AS end_address FROM subnet WHERE subnet_sk = @ParamSubnet_sk UNION ALL SELECT CAST(ipv4_end AS bigint) AS begin_address, CAST(0xFFFFFFFF AS bigint) AS end_address FROM subnet WHERE subnet_sk = @ParamSubnet_sk ) ,CTE_Diff AS ( SELECT begin_address , end_address , LEAD(begin_address) OVER(ORDER BY end_address) AS BeginNextIsland , LEAD(begin_address) OVER(ORDER BY end_address) - end_address AS Diff FROM CTE_Islands ) SELECT CAST(end_address + 1 AS varbinary(4)) AS begin_range_AvailableIPAddress ,CAST(BeginNextIsland - 1 AS varbinary(4)) AS end_range_AvailableIPAddress FROM CTE_Diff WHERE Diff > 1 ORDER BY end_address; ``` Result. [SQL Fiddle](http://sqlfiddle.com/#!6/b4915/18/0) with result as simple bigint, not in hex, and with hardcoded parameter ID. ``` Result set for ID = 1 begin_range_AvailableIPAddress end_range_AvailableIPAddress 0xAC101129 0xAC10112E Result set for ID = 2 begin_range_AvailableIPAddress end_range_AvailableIPAddress 0xC0A81B1F 0xC0A81B1F 0xC0A81B22 0xC0A81B28 0xC0A81BFA 0xC0A81BFE Result set for ID = 3 begin_range_AvailableIPAddress end_range_AvailableIPAddress 0xC0A8160C 0xC0A8160C 0xC0A816FE 0xC0A816FE ``` **The first available IP address for each subnet** It is easy to extend the query and return first available IP address for all subnets, rather than specifying one particular subnet. Use `CROSS APPLY` to get list of islands for each subnet and then add `PARTITION BY subnet_sk` into the `LEAD` function. 
``` WITH CTE_Islands AS ( SELECT subnet_sk , begin_address , end_address FROM subnet AS Main CROSS APPLY ( SELECT CAST(begin_address AS bigint) AS begin_address, CAST(end_address AS bigint) AS end_address FROM dhcp_range WHERE dhcp_range.subnet_sk = Main.subnet_sk UNION ALL SELECT CAST(address AS bigint) AS begin_address, CAST(address AS bigint) AS end_address FROM ip_address WHERE ip_address.subnet_sk = Main.subnet_sk UNION ALL SELECT CAST(0x00000000 AS bigint) AS begin_address, CAST(ipv4_begin AS bigint) AS end_address FROM subnet WHERE subnet.subnet_sk = Main.subnet_sk UNION ALL SELECT CAST(ipv4_end AS bigint) AS begin_address, CAST(0xFFFFFFFF AS bigint) AS end_address FROM subnet WHERE subnet.subnet_sk = Main.subnet_sk ) AS CA ) ,CTE_Diff AS ( SELECT subnet_sk , begin_address , end_address , LEAD(begin_address) OVER(PARTITION BY subnet_sk ORDER BY end_address) - end_address AS Diff FROM CTE_Islands ) SELECT subnet_sk , CAST(MIN(end_address) + 1 as varbinary(4)) AS NextAvailableIPAddress FROM CTE_Diff WHERE Diff > 1 GROUP BY subnet_sk ``` **Result set** ``` subnet_sk NextAvailableIPAddress 1 0xAC101129 2 0xC0A81B1F 3 0xC0A8160C ``` Here is [SQLFiddle](http://sqlfiddle.com/#!6/b4915/13/0). I had to remove conversion to `varbinary` in SQL Fiddle, because it was showing results incorrectly. ## **Generic solution for both IPv4 and IPv6** **All ranges of available IP addresses for all subnets** [SQL Fiddle with sample IPv4 and IPv6 data, functions and final query](http://sqlfiddle.com/#!6/1243c/3/0) Your sample data for IPv6 wasn't quite correct - the end of the subnet `0xFC00000000000000FFFFFFFFFFFFFFFF` was less than your dhcp ranges, so I changed that to `0xFC0001066800000000000000FFFFFFFF`. Also, you had both IPv4 and IPv6 in the same subnet, which is cumbersome to handle. 
For the sake of this example I've changed your schema a little - instead of having explicit `ipv4_begin / end` and `ipv6_begin / end` in `subnet` I made it just `ip_begin / end` as `varbinary(16)` (same as for your other tables). I also removed `address_family`, otherwise it was too big for SQL Fiddle. *Arithmetic functions* To make it work for IPv6 we need to figure out how to add/subtract `1` to/from `binary(16)`. I would make CLR function for it. If you are not allowed to enable CLR, it is possible via standard T-SQL. I made two functions that return a table, rather than scalar, because in such way they can be inlined by the optimizer. I wanted to make a generic solution, so the function would accept `varbinary(16)` and work for both IPv4 and IPv6. Here is T-SQL function to increment `varbinary(16)` by one. If parameter is not 16 bytes long I assume that it is IPv4 and simply convert it to `bigint` to add `1` and then back to `binary`. Otherwise, I split `binary(16)` into two parts 8 bytes long each and cast them into `bigint`. `bigint` is signed, but we need unsigned increment, so we need to check few cases. The `else` part is most common - we simply increment low part by one and append result to original high part. If low part is `0xFFFFFFFFFFFFFFFF`, then we set low part to `0x0000000000000000` and carry over the flag, i.e. increment the high part by one. If low part is `0x7FFFFFFFFFFFFFFF`, then we set low part to `0x8000000000000000` explicitly, because an attempt to increment this `bigint` value would cause overflow. If the whole number is `0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF` we set result to `0x00000000000000000000000000000000`. The function to decrement by one is similar. 
``` CREATE FUNCTION [dbo].[BinaryInc](@src varbinary(16)) RETURNS TABLE AS RETURN SELECT CASE WHEN DATALENGTH(@src) = 16 THEN -- Increment IPv6 by splitting it into two bigints 8 bytes each and then concatenating them CASE WHEN @src = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF THEN 0x00000000000000000000000000000000 WHEN SUBSTRING(@src, 9, 8) = 0x7FFFFFFFFFFFFFFF THEN SUBSTRING(@src, 1, 8) + 0x8000000000000000 WHEN SUBSTRING(@src, 9, 8) = 0xFFFFFFFFFFFFFFFF THEN CAST(CAST(SUBSTRING(@src, 1, 8) AS bigint) + 1 AS binary(8)) + 0x0000000000000000 ELSE SUBSTRING(@src, 1, 8) + CAST(CAST(SUBSTRING(@src, 9, 8) AS bigint) + 1 AS binary(8)) END ELSE -- Increment IPv4 by converting it into 8 byte bigint and then back into 4 bytes binary CAST(CAST(CAST(@src AS bigint) + 1 AS binary(4)) AS varbinary(16)) END AS Result ; GO CREATE FUNCTION [dbo].[BinaryDec](@src varbinary(16)) RETURNS TABLE AS RETURN SELECT CASE WHEN DATALENGTH(@src) = 16 THEN -- Decrement IPv6 by splitting it into two bigints 8 bytes each and then concatenating them CASE WHEN @src = 0x00000000000000000000000000000000 THEN 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF WHEN SUBSTRING(@src, 9, 8) = 0x8000000000000000 THEN SUBSTRING(@src, 1, 8) + 0x7FFFFFFFFFFFFFFF WHEN SUBSTRING(@src, 9, 8) = 0x0000000000000000 THEN CAST(CAST(SUBSTRING(@src, 1, 8) AS bigint) - 1 AS binary(8)) + 0xFFFFFFFFFFFFFFFF ELSE SUBSTRING(@src, 1, 8) + CAST(CAST(SUBSTRING(@src, 9, 8) AS bigint) - 1 AS binary(8)) END ELSE -- Decrement IPv4 by converting it into 8 byte bigint and then back into 4 bytes binary CAST(CAST(CAST(@src AS bigint) - 1 AS binary(4)) AS varbinary(16)) END AS Result ; GO ``` *All ranges of available IP addresses for all subnets* ``` WITH CTE_Islands AS ( SELECT subnet_sk, begin_address, end_address FROM dhcp_range UNION ALL SELECT subnet_sk, address AS begin_address, address AS end_address FROM ip_address UNION ALL SELECT subnet_sk, SUBSTRING(0x00000000000000000000000000000000, 1, DATALENGTH(ip_begin)) AS begin_address, ip_begin AS 
end_address FROM subnet UNION ALL SELECT subnet_sk, ip_end AS begin_address, SUBSTRING(0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF, 1, DATALENGTH(ip_end)) AS end_address FROM subnet ) ,CTE_Gaps AS ( SELECT subnet_sk ,end_address AS EndThisIsland ,LEAD(begin_address) OVER(PARTITION BY subnet_sk ORDER BY end_address) AS BeginNextIsland FROM CTE_Islands ) ,CTE_GapsIncDec AS ( SELECT subnet_sk ,EndThisIsland ,EndThisIslandInc ,BeginNextIslandDec ,BeginNextIsland FROM CTE_Gaps CROSS APPLY ( SELECT bi.Result AS EndThisIslandInc FROM dbo.BinaryInc(EndThisIsland) AS bi ) AS CA_Inc CROSS APPLY ( SELECT bd.Result AS BeginNextIslandDec FROM dbo.BinaryDec(BeginNextIsland) AS bd ) AS CA_Dec ) SELECT subnet_sk ,EndThisIslandInc AS begin_range_AvailableIPAddress ,BeginNextIslandDec AS end_range_AvailableIPAddress FROM CTE_GapsIncDec WHERE CTE_GapsIncDec.EndThisIslandInc <> BeginNextIsland ORDER BY subnet_sk, EndThisIsland; ``` **Result set** ``` subnet_sk begin_range_AvailableIPAddress end_range_AvailableIPAddress 1 0xAC101129 0xAC10112E 2 0xC0A81B1F 0xC0A81B1F 2 0xC0A81B22 0xC0A81B28 2 0xC0A81BFA 0xC0A81BFE 3 0xC0A8160C 0xC0A8160C 3 0xC0A816FE 0xC0A816FE 4 0xFC000000000000000000000000000001 0xFC0000000000000000000000000000FF 4 0xFC000000000000000000000000000101 0xFC0000000000000000000000000001FF 4 0xFC000000000000000000000000000201 0xFC0000000000000000000000000002FF 4 0xFC000000000000000000000000000301 0xFC0000000000000000000000000003FF 4 0xFC000000000000000000000000000401 0xFC0000000000000000000000000004FF 4 0xFC000000000000000000000000000501 0xFC0000000000000000000000000005FF 4 0xFC000000000000000000000000000601 0xFC0000000000000000000000000006FF 4 0xFC000000000000000000000000000701 0xFC0000000000000000000000000007FF 4 0xFC000000000000000000000000000801 0xFC0000000000000000000000000008FF 4 0xFC000000000000000000000000000901 0xFC00000000000000BFFFFFFFFFFFFFFD 4 0xFC00000000000000BFFFFFFFFFFFFFFF 0xFC00000000000000CFFFFFFFFFFFFFFD 4 0xFC00000000000000CFFFFFFFFFFFFFFF 
0xFC00000000000000FBFFFFFFFFFFFFFD 4 0xFC00000000000000FBFFFFFFFFFFFFFF 0xFC00000000000000FCFFFFFFFFFFFFFD 4 0xFC00000000000000FCFFFFFFFFFFFFFF 0xFC00000000000000FFBFFFFFFFFFFFFD 4 0xFC00000000000000FFBFFFFFFFFFFFFF 0xFC00000000000000FFCFFFFFFFFFFFFD 4 0xFC00000000000000FFCFFFFFFFFFFFFF 0xFC00000000000000FFFBFFFFFFFFFFFD 4 0xFC00000000000000FFFBFFFFFFFFFFFF 0xFC00000000000000FFFCFFFFFFFFFFFD 4 0xFC00000000000000FFFCFFFFFFFFFFFF 0xFC00000000000000FFFFBFFFFFFFFFFD 4 0xFC00000000000000FFFFBFFFFFFFFFFF 0xFC00000000000000FFFFCFFFFFFFFFFD 4 0xFC00000000000000FFFFCFFFFFFFFFFF 0xFC00000000000000FFFFFBFFFFFFFFFD 4 0xFC00000000000000FFFFFBFFFFFFFFFF 0xFC00000000000000FFFFFCFFFFFFFFFD 4 0xFC00000000000000FFFFFCFFFFFFFFFF 0xFC00000000000000FFFFFFBFFFFFFFFD 4 0xFC00000000000000FFFFFFBFFFFFFFFF 0xFC00000000000000FFFFFFCFFFFFFFFD 4 0xFC00000000000000FFFFFFCFFFFFFFFF 0xFC00000000000000FFFFFFFBFFFFFFFD 4 0xFC00000000000000FFFFFFFBFFFFFFFF 0xFC00000000000000FFFFFFFCFFFFFFFD 4 0xFC00000000000000FFFFFFFCFFFFFFFF 0xFC00000000000000FFFFFFFFBFFFFFFD 4 0xFC00000000000000FFFFFFFFBFFFFFFF 0xFC00000000000000FFFFFFFFCFFFFFFD 4 0xFC00000000000000FFFFFFFFCFFFFFFF 0xFC00000000000000FFFFFFFFFBFFFFFD 4 0xFC00000000000000FFFFFFFFFBFFFFFF 0xFC00000000000000FFFFFFFFFCFFFFFD 4 0xFC00000000000000FFFFFFFFFCFFFFFF 0xFC00000000000000FFFFFFFFFFBFFFFD 4 0xFC00000000000000FFFFFFFFFFBFFFFF 0xFC00000000000000FFFFFFFFFFCFFFFD 4 0xFC00000000000000FFFFFFFFFFCFFFFF 0xFC00000000000000FFFFFFFFFFFBFFFD 4 0xFC00000000000000FFFFFFFFFFFBFFFF 0xFC00000000000000FFFFFFFFFFFCFFFD 4 0xFC00000000000000FFFFFFFFFFFCFFFF 0xFC00000000000000FFFFFFFFFFFFBFFD 4 0xFC00000000000000FFFFFFFFFFFFBFFF 0xFC00000000000000FFFFFFFFFFFFCFFD 4 0xFC00000000000000FFFFFFFFFFFFCFFF 0xFC00000000000000FFFFFFFFFFFFFBFD 4 0xFC00000000000000FFFFFFFFFFFFFBFF 0xFC00000000000000FFFFFFFFFFFFFCFD 4 0xFC00000000000000FFFFFFFFFFFFFCFF 0xFC00000000000000FFFFFFFFFFFFFFBD 4 0xFC00000000000000FFFFFFFFFFFFFFBF 0xFC00000000000000FFFFFFFFFFFFFFCD 4 
0xFC00000000000000FFFFFFFFFFFFFFCF 0xFC0001065FFFFFFFFFFFFFFFFFFFFFFF 4 0xFC000106600000000000000100000000 0xFC00010666FFFFFFFFFFFFFFFFFFFFFF 4 0xFC000106670000000000000100000000 0xFC000106677FFFFFFFFFFFFFFFFFFFFF 4 0xFC000106678000000000000100000000 0xFC000106678FFFFFFFFFFFFFFFFFFFFF 4 0xFC000106679000000000000100000000 0xFC0001066800000000000000FFFFFFFE ``` ## **Execution plans** I was curious to see how different solutions suggested here work, so I looked at their execution plans. Keep in mind that these plans are for the small sample set of data without any indexes. My generic solution for both IPv4 and IPv6: ![](https://i.stack.imgur.com/yplow.png) Similar solution by **dnoeth**: ![](https://i.stack.imgur.com/PCny6.png) Solution by **cha** that doesn't use `LEAD` function: ![](https://i.stack.imgur.com/fJb2X.png)
After a lot of thinking, I believe a query as simple as this will do: ``` with a as( -- next ip address select n.next_address, i.subnet_sk from ip_address i CROSS APPLY (SELECT convert(binary(4), convert(bigint, i.address) + 1) AS next_address) as n where n.next_address NOT IN (SELECT address FROM ip_address) AND EXISTS (SELECT 1 FROM subnet s WHERE s.subnet_sk = i.subnet_sk and n.next_address > s.ipv4_begin and n.next_address < s.ipv4_end) UNION -- use UNION here, not UNION ALL to remove duplicates -- first ip address for completely unassigned subnets SELECT next_address, subnet_sk FROM subnet CROSS APPLY (SELECT convert(binary(4), convert(bigint, ipv4_begin) + 1) AS next_address) n where n.next_address NOT IN (SELECT address FROM ip_address) UNION -- use UNION here, not UNION ALL to remove duplicates -- next ip address from dhcp ranges SELECT next_address, subnet_sk FROM dhcp_range CROSS APPLY (SELECT convert(binary(4), convert(bigint, end_address) + 1) AS next_address) n where n.next_address NOT IN (SELECT address FROM ip_address) ) SELECT min(next_address), subnet_sk FROM a WHERE NOT exists(SELECT 1 FROM dhcp_range dhcp WHERE a.subnet_sk = dhcp.subnet_sk and a.next_address between dhcp.begin_address and dhcp.end_address) GROUP BY subnet_sk ``` It is for IPV4, but can be easily extended for IPV6 [SQLFiddle](http://sqlfiddle.com/#!6/1ac02/18) Results for each subnet: ``` subnet_sk ---------- ----------- 0xAC101129 1 0xC0A81B1F 2 0xC0A8160C 3 (3 row(s) affected) ``` In my opinion it should be very fast. Please check it
With sql find next available integer within range that is not present in existing integer subset(s)
[ "", "sql", "math", "sql-server-2012", "network-programming", "range", "" ]
I can give a result set consisting of a single value, say `1`, as follows: ``` SELECT 1 as column; ``` and it gives me the result set: ``` column ------ 1 ``` But I have a list of such values represented as a string `(1, 4, 7, ...)` and I need to produce the following result set: ``` column ------ 1 4 7 . . . ``` I tried `SELECT * FROM (1, 4, 7)` but it didn't work. I also tried to `SELECT 1, 4, 7` but it produces the following result set: ``` col1 col2 col3 1 4 7 ``` Which was not what I was looking for.
If those are constant values, you can use the `values` clause: ``` select * from ( values (1), (4), (7) ) as t(id); ``` If your values are inside a string literal, you can use this: ``` select * from unnest(string_to_array('1,2,3,4', ',')) as id; ```
You could `unnest` it as an [array](http://www.postgresql.org/docs/9.4/static/functions-array.html): ``` SELECT UNNEST(ARRAY[1, 4, 7]) ```
How to select multiple row in postgresql?
[ "", "sql", "postgresql", "select", "resultset", "" ]
I have an address in column `iad_Line1 = 415 W 10th St S` and want to split the `415` to column `ad_housenumber` and `'W 10th St S'` to column `iad_street`. Is this possible in SQL as a mass update for multiple rows?
Check this (using `LEN(@ADDRESS)` as the substring length, which safely returns everything after the first space): ``` DECLARE @ADDRESS AS NVARCHAR(50) SET @ADDRESS = '415 W 10th St S' SELECT SUBSTRING(@ADDRESS, 0, charindex(' ',@address) ) as ad_housenumber , RTRIM(SUBSTRING(@ADDRESS, charindex(' ',@address)+1,LEN(@ADDRESS))) AS iad_street ``` **Updated answer with update statement:** ``` declare @source table(iad_Line1 varchar(50), ad_housenumber varchar(50), iad_street varchar(50)) insert into @source values('415 W 10th St S',null,null) update @source set ad_housenumber = SUBSTRING(iad_Line1, 0, charindex(' ',iad_Line1) ) ,iad_street = RTRIM(SUBSTRING(iad_Line1, charindex(' ',iad_Line1)+1,LEN(iad_Line1))) from @source select * from @source ```
Try like this, ``` DECLARE @t TABLE ( AlphaColumn VARCHAR(30), DOORNO INT, ADDRESS1 VARCHAR(30) ) INSERT INTO @t (AlphaColumn) VALUES ('415 W 10th St S'), ('34 St S'), ('415 h St SAVC'), ('123 d'), ('ww 1') SELECT * FROM @t UPDATE Y SET DOORNO = ( CASE WHEN AlphaStart > 0 THEN LEFT(AlphaColumn, AlphaStart - 1) ELSE AlphaColumn END ), ADDRESS1 = ( CASE WHEN noStart > 0 THEN RIGHT(AlphaColumn, LEN(AlphaColumn) + 1 - AlphaStart) ELSE AlphaColumn END ) FROM (SELECT patindex('%[a-z]%', AlphaColumn) AS [AlphaStart], patindex('%[0-9]%', AlphaColumn) AS [noStart], AlphaColumn, DOORNO, ADDRESS1 FROM @t)y WHERE [AlphaStart] <> 1 SELECT * FROM @t ```
Convert address in two columns
[ "", "sql", "sql-server-2012", "" ]
I'm trying to update medical data from one table to another after switching from one system to another. We have two tables, for simplicity I'll make this a simple example. There are many columns in these tables in reality (not just 5). **Table1**: ``` name, date, var1, var2, var3 ``` **Table2**: ``` name, date, var1a, var2a, var3a ``` I want to transfer data from Table 1 to Table 2 for any rows where there isn't previous data for that date, where var1 = var1a, etc (same columns with different names). I was trying to do something with a loop, but realized that may not be necessary. I had gotten this far but wasn't sure if this was ok: ``` UPDATE Table2 VALUES (date, var1a, var2a, var3a) SELECT date, var1, var2, var3 FROM Table1 ``` Is that correct syntax so far? Or do I need to map the variables to translate var1 into var1a, etc? How do I add a check to make sure I don't overwrite any data already in Table1? I don't want to add data if there is already data for that date/name combination. Thanks!
You can `INSERT` into `TABLE2` all values from `TABLE1` that do not already exist in `Table2`: ``` INSERT INTO Table2 (date, var1a, var2a, var3a) SELECT date, var1, var2, var3 FROM Table1 t1 WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.date = t1.date) ``` Already existing values are specified by comparing the `date` column. You can add any other predicates in the `SELECT` subquery of the `NOT EXISTS` expression to suit your needs.
You could use an `update with a join`. And you don't need to update the `date column` since that's what you are using to find the matches in the 2 tables. Either you generate a dynamic query based on the empty/null valued columns, or you could do something like the below, which puts the same value in the column if it exists in `table2` or else puts the corresponding value from `table1`. The below approach requires less logic and is easier to implement but will produce IO equivalent to updating the entire table. ``` update tbl2 set val1a=isnull(val1a,val1) , val2a=isnull(val2a,val2) , val3a=isnull(val3a,val3) from table1 tbl1 inner join table2 tbl2 on tbl1.name=tbl2.name and tbl1.date=tbl2.date ``` Considerations: The approach requires less logic and is easier to implement but will produce IOs equivalent to updating the entire table2. If you have a smallish table I would go with this approach. If it's a big table then you should look into building specific query sets to reduce IO.
SQL: Add multiple rows from one table to another when no data for that date in new table
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Can I make an update using a common table expression in an Oracle database? I am getting the error `ORA-00928: missing SELECT keyword` when I try this: ``` with average as (SELECT avg(salary) FROM instructor) update instructor set salary = case when salary <= average then salary * 1.05 else salary * 1.03 end ```
Since the `average salary` is just a *scalar value*, you can do ``` update instructor set salary = case when salary <= (select avg(t.salary) from instructor t) then salary * 1.05 else salary * 1.03 end ``` In that case Oracle first *computes the average* (say `1234.4567`) and then performs the *update*.
> Can I do something like this in oracle database? Well, it is not about whether you could do it or not. It is about whether you need to do it or not. In your query I don't see any filter criteria. You want to update all the rows? I don't see a need of **CTE** in your case. When do you need a **CTE**, i.e. a with clause as a sub-query factoring method whenever you have a scenario where the sub-query is executed multiple times. You use a WITH clause to make sure the subquery is executed once, and the resultset is stored as a temp table. Yes, you could use **WITH** clause for an **UPDATE** statement. For example, ``` UPDATE TABLE t SET t.column1, t.column2 = (SELECT column1, column2 FROM ( WITH cte AS( SELECT ... FROM another_table ) SELECT * FROM cte ) ``` You could use a **MERGE** statement **USING** the **WITH** clause. For example, ``` SQL> MERGE INTO emp e USING 2 (WITH average AS 3 (SELECT deptno, AVG(sal) avg_sal FROM emp group by deptno) 4 SELECT * FROM average 5 ) u 6 ON (e.deptno = u.deptno) 7 WHEN MATCHED THEN 8 UPDATE SET e.sal = 9 CASE 10 WHEN e.sal <= u.avg_sal 11 THEN e.sal * 1.05 12 ELSE e.sal * 1.03 13 END 14 / 14 rows merged. SQL> ```
Oracle: Using CTE with update clause
[ "", "sql", "oracle", "common-table-expression", "" ]
I am getting a syntax error in my code on an update statement; after reading multiple posts about it I can't figure out what is causing the error. The code is as follows: ``` if request.querystring("do")="customer" then New_customer = request.Form("customer") openconn con sSQL="SELECT RelationNumber FROM Relations WHERE RelationName='" & New_customer & "'" set rst = con.execute(sSQL) if rst.EOF then response.write "No relation found" else Relationnumber_update = rst("RelationNumber") sSQL2="SELECT Number FROM Orders WHERE Relation=" & Relationnumber_update & "" set rst2 = con.execute(sSQL2) if rst2.EOF then response.write("No order number found!") else if Relationnumber_update <> 1000 then Ordernumber_update = rst2("Nummer") sSQL3="UPDATE Bookings SET Order=" & Ordernumber_update & " WHERE ID=" & request("ID") con.execute(sSQL3) else response.write("Order number 1000 is not allowed!") end if end if end if closeconn con response.redirect("myPage.asp?action=page") response.end end if ``` The error happens on the line: `sSQL3="UPDATE Bookings SET Order=" & Ordernumber_update & " WHERE ID=" & request("ID")` **Things to know:** * The request querystring is from the form where the user can choose a customer in a dropdown list. See code: `<form name="ChangeCustomer" method="post" action="myPage.asp?action=page&do=customer&ID=<%=rst("ID")%>" style="display:inline">` * The rst from ID is from a select statement before, which works since I used it also in other code in the same way (that does work). * The openconn con is a function for accessing my database (it works, same reason as above) * Each select statement in this code has been tested for its output in a response.write. All the results that came out were the expected ones. **Short description of what happens in the code above** 1. A user changes the customer in the form (with a dropdown menu) and presses save (submit). 2.
On submit the querystring is launched; I tested that the code actually reaches this point, and it does. 3. The chosen customer is saved in the var `New_customer`. 4. The relation number that matches the customer name is retrieved with the first select statement. 5. The right relation number is put into the var `Relationnumber_update` 6. In the second query the right number is searched for that equals the relation number. 7. An if/then follows which makes sure the Relationnumber_update is not equal to 1000 (no updates should be done on this number) 8. If it's not equal to 1000 then the found order number (second select statement) is stored in the var `ordernumber_update` 9. The update statement follows where the Order field in the bookings table is updated with the found ordernumber. 10. In the end (after the if) the connection is closed and the page is 'refreshed'. For some reason I am getting the syntax error on my update statement but I have no clue why. I checked the data types of the Number field in the orders table, the Relationnumber from the Relations table and the Order field in the bookings table; they are all of type number/int. I also tried to update with a set number directly in the update statement instead of the ordernumber_update var (like so: `sSQL3="UPDATE Bookings SET Order=6477 WHERE ID=" & request("ID")`) but this gives the same error.
Three things: 1. Read about Sql Injections 2. To debug the sql statement "live", use ``` sSQL3="UPDATE Bookings ... Response.Write(sSQL3) 'con.execute(sSQL3) -- comment it out ``` then run the page and test the resulting sql query against the database 3. If ID has a string datatype (char, varchar, etc) then you should quote its value using '...' P.S. Use [Order] to deal with reserved words.
Be aware of the reserved words of the database management system. Here you are using `Order`, which is a reserved one. So use this if you are using **MySQL**: ``` sSQL3="UPDATE Bookings SET `Order`=" & Ordernumber_update & " WHERE ID=" & request("ID") &";" ``` Or in the case of **SQL Server**: ``` sSQL3="UPDATE Bookings SET [Order]=" & Ordernumber_update & " WHERE ID=" & request("ID") &";" ``` Hope this will help others also. Thank you.
Syntax error in Update Statement ASP Classic
[ "", "sql", "ms-access", "asp-classic", "sql-update", "" ]
I'm trying to convert my date, which is in a format like 2012-04-20 05:54:59, into mm-yyyy. I came across some solutions that say you would need to convert it into varchar. Is there any way using the Convert function? Thanks :)
You can use `FORMAT` function, available from SQL Server 2012 onwards: ``` DECLARE @myDate DATETIME = '2012-04-20 05:54:59' SELECT FORMAT(@myDate, 'MM-yyyy') ``` **Output:** ``` 04-2012 ```
There might be a more graceful way to pull this off, but the below works. ``` Declare @dt datetime = GETDATE() SELECT LEFT('0' + CAST(MONTH(@dt) as varchar(2)),2) + '-' + CAST(YEAR(@dt) as char(4)) ``` btw my normal Date Conversion cheat sheet is [here](http://rdsrc.us/0WQlRn), but I'm not seeing MM-YYYY as one of the formats.
Convert SQL Server Date to mm-yyyy
[ "", "sql", "sql-server", "t-sql", "sql-server-2014", "" ]
I've this data: ``` Id Date Value 'a' 2000 55 'a' 2001 3 'a' 2012 2 'a' 2014 5 'b' 1999 10 'b' 2014 110 'b' 2015 8 'c' 2011 4 'c' 2012 33 ``` I want to filter out the **first** and the **last** value (when the table is sorted on the Date column), and only keep the other values. In case there are only two entries, nothing is returned. (Example for Id = 'c') ``` ID Date Value 'a' 2001 3 'a' 2012 2 'b' 2014 110 ``` I tried to use order by (RANK() OVER (PARTITION BY [Id] ORDER BY Date ...)) in combination with this article (<http://blog.sqlauthority.com/2008/03/02/sql-server-how-to-retrieve-top-and-bottom-rows-together-using-t-sql/>) but I can't get it to work. [**UPDATE**] All the 3 answers seem fine. But I'm not a SQL expert, so my question is which one has the fastest performance if the table has around 800000 rows and there a no indexes on any column.
You can do this with `EXISTS`: ``` SELECT * FROM Table1 a WHERE EXISTS (SELECT 1 FROM Table1 b WHERE a.ID = b.ID AND b.Date < a.Date ) AND EXISTS (SELECT 1 FROM Table1 b WHERE a.ID = b.ID AND b.Date > a.Date ) ``` Demo: [SQL Fiddle](http://www.sqlfiddle.com/#!3/2d8a69/6/0)
You can use `row_number` twice to determine the `min` and `max` dates and then filter accordingly: ``` with cte as ( select id, [date], value, row_number() over (partition by id order by [date]) minrn, row_number() over (partition by id order by [date] desc) maxrn from data ) select id, [date], value from cte where minrn != 1 and maxrn != 1 ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/6ffd9/3) --- Here's another approach using `min` and `max` for this without needing to use a ranking function: ``` with cte as ( select id, min([date]) mindate, max([date]) maxdate from data group by id ) select * from data d where not exists ( select 1 from cte c where d.id = c.id and d.[date] in (c.mindate, c.maxdate)) ``` * [More Fiddle](http://sqlfiddle.com/#!3/6ffd9/4)
How to filter out the first and last entry from a table using RANK?
[ "", "sql", "group-by", "rank", "" ]
I am using an a join query between three tables. My query doesn't display entirely my desired result. There are three tables `team`, `school` and `game`. For example, how can I display the total wins Lawrence North High School(`school_id = 11111`) has? Current query: ``` SELECT school.school_name FROM school INNER JOIN team ON school.school_id = team.school_id INNER JOIN game ON team.team_id = game.game_id WHERE school.school_id = 11111 ``` tables used: ``` SQL> Select * From Team; TEAM_ID SPORT_ID SCHOOL_ID TEAM_NAME ---------- ---------- ---------- ---------- 1 1 11111 WildCats 2 2 11111 WILDCATS 3 3 11111 WildCats 4 4 11111 WILDCATS 5 5 11111 WildCats 6 6 11111 WildCats 7 7 11111 WildCats 8 1 123134 Bears 9 2 123134 Bears 10 3 123134 BEARS 11 4 123134 Bears 12 5 123134 Bears 13 6 123134 BEars SQL> Select * From School; SCHOOL_ID SCHOOL_NAME SCHOOL_TYPE ---------- ----------------------------- ------------ 11546 Ivy Tech College COLLegE 11090 LAWRENCE Central Grade School GRADE SCHOOL 11111 Lawrence NORTH High School HIGH SCHooL 19283 Howe High SCHOOL High SchooL 123134 Lawrence Central High School HIGH SCHOOL 192 Little Big Horn Grade School GRADE SCHOOL SQL> Select * From Game; TEAM_ID GAME_DATE R ---------- --------- - 1 10-SEP-98 w 1 12-SEP-98 W 2 17-SEP-98 w 2 18-SEP-98 W 3 18-SEP-98 w 3 11-SEP-98 W 4 20-SEP-98 L 4 21-SEP-98 l 5 21-SEP-98 L 5 19-SEP-98 l 6 26-SEP-98 L 6 27-SEP-98 W 7 28-SEP-98 w 7 29-SEP-98 W 8 11-OCT-98 W 8 12-OCT-98 w 9 13-OCT-98 L 9 14-OCT-98 L 10 14-OCT-98 L 11 15-OCT-98 L 12 16-OCT-98 W 13 17-OCT-98 W 11 18-OCT-98 W 12 19-OCT-98 W 13 11-NOV-98 11 12-NOV-98 ```
Try this ``` SELECT count(*) as Total_wins FROM school INNER JOIN team ON school.school_id = team.school_id INNER JOIN game ON team.team_id = game.game_id WHERE school.school_id = 11111 and game.r = 'W' ```
You can use `COUNT` with the `Where` clause and `Group By` on the `school_name`. ``` SELECT school.school_name, COUNT(game.R) as TotalWins FROM school INNER JOIN team ON school.school_id = team.school_id INNER JOIN game ON team.team_id = game.game_id WHERE school.school_id = 11111 AND game.R = 'W' ``` If you are doing it for the one school you can just remove the `Group By`, but if you want to get the count for all schools you need to apply the `Group By` and remove the `School_id` filter from the `Where` clause.
JOIN 3 tables and COUNT for result
[ "", "sql", "" ]
What is the necessary condition to have the same result from a `JOIN` and a `Cartesian Product`? I don't think it is possible; if anyone can clear this up for me, that would be great, thanks. I've searched but I couldn't find an answer to my question.
If you are using SQL Server, the `CROSS JOIN` join type can achieve the same thing as a cartesian product. It matches every row from the left table to every row in the right. This is illustrated in the following SO question: [What is the difference between Cartesian product and cross join?](https://stackoverflow.com/questions/11861417/what-is-the-difference-between-cartesian-product-and-cross-join) And a little MS documentation also: <https://technet.microsoft.com/en-us/library/ms190690%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396>
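As a quick sanity check of that equivalence (sketched here with Python's `sqlite3` for convenience; the table names and values are made up, but the same three spellings exist in SQL Server and most other engines):

```python
import sqlite3

# Two small hypothetical tables just for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (a INTEGER);
CREATE TABLE t2 (b INTEGER);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (10), (20), (30);
""")

cross = conn.execute("SELECT * FROM t1 CROSS JOIN t2").fetchall()
comma = conn.execute("SELECT * FROM t1, t2").fetchall()
on_true = conn.execute("SELECT * FROM t1 JOIN t2 ON 1=1").fetchall()

# All three produce the full Cartesian product: 2 * 3 = 6 rows.
print(len(cross), sorted(cross) == sorted(comma) == sorted(on_true))
```

All three queries match every row of the left table with every row of the right, which is exactly the Cartesian product.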
All three queries return the same result: ``` select * from t1 cross join t2 select * from t1,t2 select * from t1 join t2 on 1=1 ```
Same results with cartesian product and JOIN
[ "", "sql", "" ]
I have a dataset with a column of phone numbers. I want to filter this dataset using PROC SQL `WHERE` the length of the numbers is **at least** 7 digits. In normal SQL I can just apply a length function around a number and it works; however, in SAS it won't let me apply it to a numerical column. My first instinct is to convert the column to a character and then find the length of that, but I can only seem to state a size when I use the `put` function. However I don't even know the biggest size of my numbers as I can't calculate length! How do I find the length of a numerical value in SAS using PROC SQL?
Since you have not posted the sample dataset , so I have created one for myself Creating the sample dataset. Taking `phonenum` as `numeric` same as in your case. ``` data test; infile datalines; input phonenum : 8.; datalines; 123 1234 12345 123456 1234567 12345678 123456789 12345678910 ; run; ``` You are right in the approach, if you want to count the number of digits, it has to be converted to `char`, doing the following steps below: 1. Converting the `numeric` phonenum to `char` . Although it is obvious that the number of digits would not be greater than 32, still if you would like you can increase the count. 2. Using the `compress` function to `strip` off the blank characters 3. Using the `length` function to count the number of digits 4. In `proc sql\SAS` you can not use the newly created variable in the `where` statement just like that, but `proc sql` allows you to do so using the `calculated` keyword before such type of variables. --- ``` proc sql; select length(compress(put(phonenum,32.))) as phonelen from test where calculated phonelen > 6; quit; ``` Additionally, you could achieve the same using datasteps(SAS) as well like below: ``` data _null_; set test; phonelen=length(compress(input(phonenum,$32.))); if phonelen > 6; put phonelen=; run; ```
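For what it's worth, the underlying convert-then-count idea is language-agnostic; here is the same logic sketched in Python (the sample numbers are made up, mirroring the datalines above):

```python
# Same convert-then-count idea as the SAS steps above, sketched in Python.
phone_numbers = [123, 1234567, 12345678910]

# Convert each number to a string and count its characters (digits).
lengths = [len(str(n)) for n in phone_numbers]
print(lengths)

# Keep only the numbers with more than 6 digits.
long_enough = [n for n in phone_numbers if len(str(n)) > 6]
print(long_enough)
```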
In SAS, `length()` takes a character string as argument(only). You would have to convert the numeric variable to character: ``` proc sql; select length(put(x,32. -l)) from test; quit; ``` to use that function. The `-l` left aligns the result (so the extra spaces are ignored). You can arbitrarily choose 32 (as that's much longer than it should be) or any other value at least 10 or so (determine this from your likely numeric values- can this have a country code, etc.). Of course, you could always just say ``` numvar ge 1000000 ``` which would do the same, no? And of course, a phone number should never be stored in a numeric field. 7 digit number takes 7 bytes as character, 8 as number, and while it contains 7 digits it's really not a numeric concept.
How to find the length of a numerical variable using PROC SQL
[ "", "sql", "sas", "numerical", "string-length", "proc-sql", "" ]
I have a database that contains data for multiple divisions within a company. Each division is given a code that allows certain data elements to be exclusive to them. This code is housed in the 'agency' column. Not all tables have an agency column, but I do not know which ones do or do not have this column. One of the divisions has had their agency code changed and this now needs to be altered within the database across all tables. How do I script a change that will look across all tables, update 'value1' to 'value1a' in the agency column where agency = 'value1', but only run the update portion when/if the agency column is found to be present? I've tried several types of scripts, the two most prevalent being either: * trying to adapt sp_msforeachtable to work, which doesn't seem to accommodate a where clause * trying to generate a list of update statements by polling information_schema using a script like this: ``` select 'update dbo.' + t.TABLE_NAME + ' set agency = value1 where agency = value1a' from INFORMATION_SCHEMA.TABLES t join INFORMATION_SCHEMA.COLUMNS c1 on T.TABLE_NAME = C1.TABLE_NAME join INFORMATION_SCHEMA.COLUMNS c2 on t.TABLE_NAME = c2.TABLE_NAME where t.TABLE_TYPE = 'BASE TABLE' and c1.COLUMN_NAME = 'agency' and c2.COLUMN_NAME = 'agency' ``` it gives me back a long list of scripts to run individually against each table, but then I get an error trying to run the result set: > Msg 207, Level 16, State 1, Line 1 Invalid column name 'Value1'. Let me know if you need me to clarify.
Your second option (generating the scripts) seems like the best one. Now, as for your error. Is your agency code a string? Then that is probably the problem: you need to enclose it in quotation marks. Since you are building a string, they will need to be doubled: ``` select 'update dbo.' + t.TABLE_NAME + ' set agency = ''value1a'' where agency = ''value1''' from INFORMATION_SCHEMA.TABLES t join INFORMATION_SCHEMA.COLUMNS c1 on T.TABLE_NAME = C1.TABLE_NAME join INFORMATION_SCHEMA.COLUMNS c2 on t.TABLE_NAME = c2.TABLE_NAME where t.TABLE_TYPE = 'BASE TABLE' and c1.COLUMN_NAME = 'agency' and c2.COLUMN_NAME = 'agency' ``` Your example had you setting agency to `'value1'` when it was previously `'value1a'`, but from your description of the requirements it sounds like it should be the other way around.
`value1` and `value1a` are a string values so they need to be enclosed in quotes, otherwise your script is considering them to be column names, as specified in the error. So try: ``` select 'update dbo.' + t.TABLE_NAME + ' set agency = ''value1'' where agency = ''value1a''' ```
Update values in column across multiple tables using where
[ "", "sql", "sql-server", "where-clause", "" ]
I have used an always-true statement, e.g. `1 = 1`, in a case statement in the where clause in MySQL with the following syntax: ``` select * from tablename where (case when tablefield is not null then tablefield = 'value' else 1 = 1 end) ``` I want to know how I can use else `1 = 1` (an always-true statement) in a SQL Server/T-SQL case statement in a where clause.
Sorry, I just mixed up the query; I intended to ask for a condition for when the parameter variable is null, instead of the table field. In that case the query may look like this: ``` select * from tablename where (case when parameterfield_or_variable is not null then tablefield = 'value' else 1 = 1 end) ``` or, when using the parameter field for the value: ``` (case when parameterfield_or_variable is not null then tablefield = parameter_field_or_variable else 1 = 1 end) ``` The answer from @KHeaney to the question as asked is right. However, the T-SQL/SQL Server equivalent of the query described for MySQL will be like this: ``` select * from tablename where tablefield = @parameterfield or @parameterfield is null -- compares the input parameter with the table field ``` so when @parameterfield is null it will show all results; otherwise it will restrict to only the input value. Thanks
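A quick way to sanity-check the optional-parameter pattern discussed in this thread (sketched here with Python's `sqlite3` for convenience; the table and column names are the hypothetical ones from the question, and the data is made up):

```python
import sqlite3

# Hypothetical table/column names from this thread; the rows are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablename (tablefield TEXT);
INSERT INTO tablename VALUES ('value'), ('other'), (NULL);
""")

def fetch(param):
    # Equivalent of: WHERE tablefield = @parameterfield OR @parameterfield IS NULL
    return conn.execute(
        "SELECT tablefield FROM tablename WHERE tablefield = ? OR ? IS NULL",
        (param, param),
    ).fetchall()

print(len(fetch("value")))  # only the matching row
print(len(fetch(None)))     # null parameter disables the filter: all rows
```

With a concrete parameter the filter restricts the rows; with a null parameter the `OR ... IS NULL` branch is true for every row, which is exactly the "else 1 = 1" behavior being asked about.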
You would not use case; you would just write a multi-conditional statement. In your case it would look like ``` Where (tablefield = 'value' OR tablefield is null) ```
Case statement in sqlserver in where clause with else as always true
[ "", "mysql", "sql", "sql-server", "t-sql", "case", "" ]
I have a varchar2 column named NAME_USER. For example, the data is JUAN ROMÄN, but I want to show JUAN ROMAN, replacing Ä with A in my statement results. How can I do that? Thanks in advance.
Use the **convert** function with the appropriate charset: ``` select CONVERT('JUAN ROMÄN', 'US7ASCII') from dual; ``` Below are the charsets which can be used in Oracle: ``` US7ASCII: US 7-bit ASCII character set WE8DEC: West European 8-bit character set WE8HP: HP West European Laserjet 8-bit character set F7DEC: DEC French 7-bit character set WE8EBCDIC500: IBM West European EBCDIC Code Page 500 WE8PC850: IBM PC Code Page 850 WE8ISO8859P1: ISO 8859-1 West European 8-bit character set ```
You could use `replace`, `regexp_replace` or `translate`, but they would each require you to map all possible accented characters to their unaccented versions. Alternatively, there's a function called `nlssort()` which is typically used to override the default language settings used for the `order by` clause. It has an option for accent-insensitive sorting, which can be creatively misused to solve your problem. `nlssort()` returns a binary, so you have to convert back to varchar2 using `utl_raw.cast_to_varchar2()`: ``` select utl_raw.cast_to_varchar2(nlssort(NAME_USER, 'nls_sort=binary_ai')) from YOUR_TABLE; ``` Try this, for a list of accented characters from the extended ASCII set, together with their derived, unaccented values: ``` select level+192 ascii_code, chr(level+192) accented, utl_raw.cast_to_varchar2(nlssort(chr(level+192),'nls_sort=binary_ai')) unaccented from dual connect by level <= 63 order by 1; ``` Not really my answer - I've used this before and it seemed to work ok, but have to credit this post: <https://community.oracle.com/thread/1117030> ETA: `nlssort()` can't do accent-insensitive without also doing case-insensitive, so this solution will always convert to lower case. Enclosing the expression above in `upper()` will of course get your example value back to "JUAN ROMAN". If your values can be mixed case, and you need to preserve the case of each character, and `initcap()` isn't flexible enough, then you'll need to write a bit of PL/SQL.
how replace accented letter in a varchar2 column in oracle
[ "", "sql", "oracle", "" ]
my question is if there is any way to do this: ``` INSERT INTO t1 (c1, c2, c3, c4, c5) VALUES (SELECT c1 FROM t2, 15, 2, 'Name', SELECT c5 FROM t4); ``` I know I have the `INSERT INTO t1 FROM SELECT`, but I can't use it for this special case, and I wanted to know if it would be possible to add selects in the insert values.
Use a subquery wrapped in **parentheses** right there in the values list, like: ``` INSERT INTO t1 (c1, c2, c3, c4, c5) VALUES ((SELECT c1 FROM t2), 15, 2, 'Name', (SELECT c5 FROM t4)); ```
Yes it is possible, but not in the syntax you mention: ``` INSERT INTO t1 (c1, c2, c3, c4, c5) SELECT t2.c1 , 15, 2, 'Name', t4.c5 FROM t2, t4 WHERE t2.some_field = t4.some_other_field ``` The `t2` and `t4` tables should have some logical join between them or a `WHERE` condition of their own.
SELECT on INSERT's VALUES
[ "", "mysql", "sql", "insert", "sql-update", "" ]
Using mysql, is it possible to iterate through results to further results. I think what I'm trying to do is group by other id grouped by transid, but apparently that isn't what this type of sorting is called. **Sample Data** ``` id transid other id ------------------------------ |1 |1 |1 | |2 |1 |1 | |3 |1 |1 | |4 |1 |2 | |5 |1 |2 | |6 |1 |2 | |7 |2 |1 | |8 |2 |1 | |9 |2 |1 | |10 |2 |2 | |11 |3 |1 | |12 |3 |1 | ``` **Expected Output** ``` RESULT: id transid other id ------------------------------ |1 |1 |1 | |4 |1 |2 | |7 |2 |1 | |10 |2 |2 | |11 |3 |1 | ```
``` select id,transid,other_id from table group by transid,other_id; ``` If your first sample column was meant to be the transid column, this groups all rows with the same values of transid and other_id.
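Since MySQL's loose `GROUP BY` makes the reported `id` nondeterministic, a safer variant aggregates the id explicitly with `MIN(id)`. Here is a sketch using Python's `sqlite3` with the sample data from the question:

```python
import sqlite3

# Sample data from the question, loaded into an in-memory table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, transid INTEGER, other_id INTEGER);
INSERT INTO t VALUES (1,1,1),(2,1,1),(3,1,1),(4,1,2),(5,1,2),(6,1,2),
                     (7,2,1),(8,2,1),(9,2,1),(10,2,2),(11,3,1),(12,3,1);
""")

# MIN(id) picks a deterministic representative row per (transid, other_id) group.
rows = conn.execute(
    "SELECT MIN(id), transid, other_id FROM t "
    "GROUP BY transid, other_id ORDER BY MIN(id)"
).fetchall()
print(rows)
```

This reproduces the expected output in the question: rows (1,1,1), (4,1,2), (7,2,1), (10,2,2), (11,3,1).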
Of course, you can use `GROUP BY` on multiple columns. ``` SELECT DISTINCT ON (id), transid, other_id GROUP BY transid, other_id; ```
How to group mysql first column and again by second id not losing unique combinations
[ "", "mysql", "sql", "" ]
``` SQL> desc invoices Name Null? Type ----------------------------------------- -------- ---------------------------- INVOICE_ID NOT NULL NUMBER(6) COMPANY_ID NUMBER(6) STUDENT_ID NUMBER(6) BILLING_DATE DATE SQL> ``` I would like to insert some sample data into this table. The `company_id` and `student_id` are foreign keys. This is what I'am entering: `INSERT INTO invoices VALUES (SEQ_INVOICE.NEXTVAL,[what1],[what2],SYSDATE);` I don't know what I am supposed to put in the what1 and what2 ``` SQL> desc companies Name Null? Type ----------------------------------------- -------- ---------------------------- COMPANY_ID NOT NULL NUMBER(6) COMPANY_NAME VARCHAR2(30) ADDRESS VARCHAR2(128) CONTACT_NO VARCHAR2(11) NO_OF_EMP NUMBER(10) SQL> SQL> desc students Name Null? Type ----------------------------------------- -------- ---------------------------- STUDENT_ID NOT NULL NUMBER(6) ST_FNAME VARCHAR2(16) ST_SNAME VARCHAR2(16) ADDRESS VARCHAR2(128) DOB DATE SQL> ```
You have to use an existing `STUDENT_ID` from the `students` table and an existing `COMPANY_ID` from the `companies` table in `invoices`. Suppose you have data like the following ``` COMPANY_ID COMPANY_NAME ADDRESS CONTACT_NO NO_OF_EMP ----------- ------------ -------- ---------- --------- 1 Blah LLC blah st. 123456 100 2 My Company My Street 987654321 50 ``` and ``` STUDENT_ID ST_FNAME ST_SNAME ADDRESS DOB ----------- --------- --------- -------- ------------ 11 Jim Carrey .... 1900.25.04 22 Jack Sparrow Carrib st. 1700.30.08 ``` then you can use `1` or `2` as `COMPANY_ID` (in your query [what1]) and `11` or `22` as `STUDENT_ID` (in your query [what2])
You need to first enter companies in "companies", and students in "students". Then use those ID's for [what1] and [what2]
How to insert data into tables that have foreign keys
[ "", "sql", "database", "oracle", "database-administration", "" ]
Consider a table named "Books" where a new entry is created every hour, with timestamps stored in UTC. I want to fetch the last 24 hours of data (24 entries) in my local timezone. I tried this ``` select * from books where created_at >= DATE_SUB(NOW(),INTERVAL 24 HOUR); ``` How do I fetch records at runtime for the past 24 hours without setting the timezone in MySQL? Also, the created data comes under three different IDs; I need the data belonging to id=1.
(**EDITED**) Try this: ``` select * from books where DAYOFMONTH(DATE(created_at))<>DAYOFMONTH(NOW()) order by DATE(created_at) desc limit 24; ``` Now if you want to change the time zone server-wide until the next restart, use this query: ``` SET GLOBAL time_zone = 'Asia/Calcutta'; ``` The disadvantage is that you have to run it again every time you restart MySQL. To change it permanently, put the following in your MySQL server configuration (e.g. my.cnf): ``` default-time-zone='Asia/Calcutta' ``` And the last option left is conversion in the query itself: ``` select convert_tz(created_at,'UTC','Asia/Calcutta') MyTimeZone from books where DAYOFMONTH(DATE(created_at))<>DAYOFMONTH(NOW()) order by MyTimeZone desc limit 24; ``` (Note that named time zones require the MySQL time zone tables to be loaded.)
Try this: ``` SELECT * FROM books WHERE created_at > DATE_SUB(NOW(),INTERVAL 24 HOUR); ```
Dynamic sql query for broken hours data
[ "", "mysql", "sql", "mysql-workbench", "" ]
I have a table called `MyTextstable (myTextsTable_id INT, myTextsTable_text VARCHAR(MAX))`. This table has around 4 million records and I am trying to remove any instance of the `ASCII` characters in the following range(s) the `VARCHAR(MAX)` column `myTextsTable_text`. * 00 - 08 * 11 - 12 * 14 - 31 * 127 I have written the following SQL query, which is taking under 10 minutes on SQL Server 2012, but failed to execute on SQL Server 2008 R2 even after two hours (so I stopped the execution). Please note I have restored the backup of a SQL Server 2008 R2 database on SQL Server 2012 (i.e. the data is exactly same). ``` BEGIN TRANSACTION [Tran1] BEGIN TRY UPDATE myTextsTable SET myTextsTable_text = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(myTextsTable_text, CHAR(0), ''), CHAR(1), ''), CHAR(2), ''), CHAR(3), ''), CHAR(4), ''), CHAR(5), ''), CHAR(6), ''), CHAR(7), ''), CHAR(8), ''), CHAR(11), ''), CHAR(12), ''), CHAR(14), ''), CHAR(15), ''), CHAR(16), ''), CHAR(17), ''), CHAR(18), ''), CHAR(19), ''), CHAR(20), ''), CHAR(21), ''), CHAR(22), ''), CHAR(23), ''), CHAR(24), ''), CHAR(25), ''), CHAR(26), ''), CHAR(27), ''), CHAR(28), ''), CHAR(29), ''), CHAR(30), ''), CHAR(31), ''), CHAR(127), '') WHERE myTextsTable_text LIKE '%[' + CHAR(0) + CHAR(1) + CHAR(2) + CHAR(3) + CHAR(4) + CHAR(5) + CHAR(6) + CHAR(7) + CHAR(8) + CHAR(11) + CHAR(12) + CHAR(14) + CHAR(15) + CHAR(16) + CHAR(17) + CHAR(18) + CHAR(19) + CHAR(20) + CHAR(21) + CHAR(22) + CHAR(23) + CHAR(24) + CHAR(25) + CHAR(26) + CHAR(27) + CHAR(28) + CHAR(29) + CHAR(30) + CHAR(31) + CHAR(127) + ']%'; COMMIT TRANSACTION [Tran1]; END TRY BEGIN CATCH ROLLBACK TRANSACTION [Tran1]; --PRINT ERROR_MESSAGE(); END CATCH; ``` There are only 135 records affected. 
As the single `UPDATE` query wasn't working in SQL Server 2008, I have tried the following approach with a temp table. ``` BEGIN TRANSACTION [Tran1] BEGIN TRY IF OBJECT_ID('tempdb..#myTextsTable') IS NOT NULL DROP TABLE #myTextsTable; SELECT myTextsTable_id, myTextsTable_text INTO #myTextsTable FROM myTextsTable WHERE myTextsTable_text LIKE '%[' + CHAR(0) + CHAR(1) + CHAR(2) + CHAR(3) + CHAR(4) + CHAR(5) + CHAR(6) + CHAR(7) + CHAR(8) + CHAR(11) + CHAR(12) + CHAR(14) + CHAR(15) + CHAR(16) + CHAR(17) + CHAR(18) + CHAR(19) + CHAR(20) + CHAR(21) + CHAR(22) + CHAR(23) + CHAR(24) + CHAR(25) + CHAR(26) + CHAR(27) + CHAR(28) + CHAR(29) + CHAR(30) + CHAR(31) + CHAR(127) + ']%'; UPDATE #myTextsTable SET myTextsTable_text = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(myTextsTable_text, CHAR(0), ''), CHAR(1), ''), CHAR(2), ''), CHAR(3), ''), CHAR(4), ''), CHAR(5), ''), CHAR(6), ''), CHAR(7), ''), CHAR(8), ''), CHAR(11), ''), CHAR(12), ''), CHAR(14), ''), CHAR(15), ''), CHAR(16), ''), CHAR(17), ''), CHAR(18), ''), CHAR(19), ''), CHAR(20), ''), CHAR(21), ''), CHAR(22), ''), CHAR(23), ''), CHAR(24), ''), CHAR(25), ''), CHAR(26), ''), CHAR(27), ''), CHAR(28), ''), CHAR(29), ''), CHAR(30), ''), CHAR(31), ''), CHAR(127), '') UPDATE myTextsTable SET myTextsTable_text = new.myTextsTable_text FROM myTextsTable INNER JOIN #myTextsTable new ON new.myTextsTable_id=myTextsTable.myTextsTable_id DROP TABLE #myTextsTable; COMMIT TRANSACTION [Tran1]; END TRY BEGIN CATCH ROLLBACK TRANSACTION [Tran1]; --PRINT ERROR_MESSAGE(); END CATCH; ``` However, the result is same. Works perfectly fine in SQL Server 2012, but not in SQL Server 2008 R2. 
I found that the `UPDATE` query was still executing even after two hours (the records were saved into the temp table (`#myTextsTable`) in a few minutes, I checked this later to make sure which part is taking longer). As the aforementioned two ways weren't working, I have tried using this using `TABLE` variables just to check if it makes any difference, but the result was same (i.e. works fine in SQL Server 2012 but not in SQL Server 2008 R2) ``` BEGIN TRANSACTION [Tran1] BEGIN TRY DECLARE @myTextsTable TABLE (myTextsTable_id INT, myTextsTable_text VARCHAR(MAX)) INSERT INTO @myTextsTable(myTextsTable_id, myTextsTable_text) SELECT myTextsTable_id, myTextsTable_text FROM myTextsTable WHERE myTextsTable_text LIKE '%[' + CHAR(0) + CHAR(1) + CHAR(2) + CHAR(3) + CHAR(4) + CHAR(5) + CHAR(6) + CHAR(7) + CHAR(8) + CHAR(11) + CHAR(12) + CHAR(14) + CHAR(15) + CHAR(16) + CHAR(17) + CHAR(18) + CHAR(19) + CHAR(20) + CHAR(21) + CHAR(22) + CHAR(23) + CHAR(24) + CHAR(25) + CHAR(26) + CHAR(27) + CHAR(28) + CHAR(29) + CHAR(30) + CHAR(31) + CHAR(127) + ']%'; UPDATE @myTextsTable SET myTextsTable_text = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(myTextsTable_text, CHAR(0), ''), CHAR(1), ''), CHAR(2), ''), CHAR(3), ''), CHAR(4), ''), CHAR(5), ''), CHAR(6), ''), CHAR(7), ''), CHAR(8), ''), CHAR(11), ''), CHAR(12), ''), CHAR(14), ''), CHAR(15), ''), CHAR(16), ''), CHAR(17), ''), CHAR(18), ''), CHAR(19), ''), CHAR(20), ''), CHAR(21), ''), CHAR(22), ''), CHAR(23), ''), CHAR(24), ''), CHAR(25), ''), CHAR(26), ''), CHAR(27), ''), CHAR(28), ''), CHAR(29), ''), CHAR(30), ''), CHAR(31), ''), CHAR(127), '') UPDATE myTextsTable SET myTextsTable_updated = GETDATE() ,myTextsTable_updatedby = 'As per V87058' ,myTextsTable_text = new.myTextsTable_text FROM myTextsTable INNER JOIN @myTextsTable new ON 
new.myTextsTable_id=myTextsTable.myTextsTable_id COMMIT TRANSACTION [Tran1]; END TRY BEGIN CATCH ROLLBACK TRANSACTION [Tran1]; --PRINT ERROR_MESSAGE(); END CATCH; ``` Could anyone explain why this would happen? How to make this SQL query work in SQL Server 2008 R2? **Note:** I know that the string manipulations in database server/layer are not ideal and it would be recommended to do string manipulations in application layer and then save it in DB. But, I am trying to understand why this would be a problem in one version and why not in another version. > SQL Server 2012 > Microsoft SQL Server 2012 - 11.0.5058.0 (X64) > Standard Edition (64-bit) on Windows NT 6.3 (Build 9600: ) > (Hypervisor) > > SQL Server 2008 R2 > Microsoft SQL Server 2012 - 11.0.5058.0 (X64) > Standard Edition (64-bit) on Windows NT 6.3 (Build 9600: ) > (Hypervisor)
This is a [known issue](https://connect.microsoft.com/SQLServer/feedback/details/708179/indefinite-hang-with-replace-statement-on-varbinary-max) on SQL Server 2008 with LOB datatypes and certain collations. It is easy to reproduce ``` /*Hangs on 2008*/ DECLARE @VcMax varchar(max)= char(0) + 'a' SELECT REPLACE(@VcMax COLLATE Latin1_General_CS_AS, char(0), '') ``` Whilst hung it is CPU bound and seems to be in an infinite loop through these functions. [![enter image description here](https://i.stack.imgur.com/S2Gwi.png)](https://i.stack.imgur.com/S2Gwi.png) And the fix is easy too. Either use a non `MAX` datatype... ... or a binary collation ``` /*Doesn't Hang*/ DECLARE @VcMax varchar(max)= char(0) + 'a' SELECT REPLACE(@VcMax COLLATE Latin1_General_100_BIN2, char(0), '') ```
For anyone reading this in future, the following ways worked fine. Way 1. Changing the `COLLATION` on the `VARCHAR(MAX)` column in the `UPDATE SQL` query to `BINARY COLLATION` as Martin Smith suggested (please see the accepted answer). > REPLACE(myTextsTable\_text COLLATE Latin1\_General\_100\_BIN2, CHAR(0),... The solution will be as below: ``` GO BEGIN TRANSACTION [Tran1] BEGIN TRY IF OBJECT_ID('tempdb..#myTextsTable') IS NOT NULL DROP TABLE #myTextsTable; SELECT myTextsTable_id, myTextsTable_text INTO #myTextsTable FROM myTextsTable WHERE myTextsTable_text LIKE '%[' + CHAR(0) + CHAR(1) + CHAR(2) + CHAR(3) + CHAR(4) + CHAR(5) + CHAR(6) + CHAR(7) + CHAR(8) + CHAR(11) + CHAR(12) + CHAR(14) + CHAR(15) + CHAR(16) + CHAR(17) + CHAR(18) + CHAR(19) + CHAR(20) + CHAR(21) + CHAR(22) + CHAR(23) + CHAR(24) + CHAR(25) + CHAR(26) + CHAR(27) + CHAR(28) + CHAR(29) + CHAR(30) + CHAR(31) + CHAR(127) + ']%'; UPDATE #myTextsTable SET myTextsTable_text = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(myTextsTable_text COLLATE Latin1_General_100_BIN2, CHAR(0), ''), CHAR(1), ''), CHAR(2), ''), CHAR(3), ''), CHAR(4), ''), CHAR(5), ''), CHAR(6), ''), CHAR(7), ''), CHAR(8), ''), CHAR(11), ''), CHAR(12), ''), CHAR(14), ''), CHAR(15), ''), CHAR(16), ''), CHAR(17), ''), CHAR(18), ''), CHAR(19), ''), CHAR(20), ''), CHAR(21), ''), CHAR(22), ''), CHAR(23), ''), CHAR(24), ''), CHAR(25), ''), CHAR(26), ''), CHAR(27), ''), CHAR(28), ''), CHAR(29), ''), CHAR(30), ''), CHAR(31), ''), CHAR(127), '') UPDATE myTextsTable SET myTextsTable_updated = GETDATE() ,myTextsTable_updatedby = 'As per V87058' ,myTextsTable_text = new.myTextsTable_text FROM myTextsTable INNER JOIN #myTextsTable new ON new.myTextsTable_id=myTextsTable.myTextsTable_id DROP TABLE #myTextsTable; COMMIT TRANSACTION [Tran1]; END TRY ``` Way 2: I 
have created a `SQL function` to replace these characters with `STUFF` instead of using `REPLACE` function. > Note: Please note the SQL function is written to my specific > requirement. As such, it only replaces characters in the following > range. * 00 - 08 * 11 - 12 * 14 - 31 * 127 -- ``` Go CREATE FUNCTION [dbo].RemoveASCIICharactersInRange(@InputString VARCHAR(MAX)) RETURNS VARCHAR(MAX) AS BEGIN IF @InputString IS NOT NULL BEGIN DECLARE @Counter INT, @TestString NVARCHAR(40) SET @TestString = '%[' + NCHAR(0) + NCHAR(1) + NCHAR(2) + NCHAR(3) + NCHAR(4) + NCHAR(5) + NCHAR(6) + NCHAR(7) + NCHAR(8) + NCHAR(11) + NCHAR(12) + NCHAR(14) + NCHAR(15) + NCHAR(16) + NCHAR(17) + NCHAR(18) + NCHAR(19) + NCHAR(20) + NCHAR(21) + NCHAR(22) + NCHAR(23) + NCHAR(24) + NCHAR(25) + NCHAR(26) + NCHAR(27) + NCHAR(28) + NCHAR(29) + NCHAR(30) + NCHAR(31) + NCHAR(127)+ ']%' SELECT @Counter = PATINDEX (@TestString, @InputString COLLATE Latin1_General_BIN) WHILE @Counter <> 0 BEGIN SELECT @InputString = STUFF(@InputString, @Counter, 1, '') SELECT @Counter = PATINDEX (@TestString, @InputString COLLATE Latin1_General_BIN) END END RETURN(@InputString) END GO ``` Then, the `UPDATE SQL` query (in my temp table approach) will be something like below: ``` UPDATE #myTextsTable SET myTextsTable_text = [dbo].RemoveASCIICharactersInRange(#myTextsTable_text) Go ``` My personal preferred way would be the first one.
SQL query working fine in SQL Server 2012, but failed to execute in SQL Server 2008 R2
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "sql-server-2012", "" ]
I am trying to execute a batch insert as below: ``` SQL> INSERT INTO Z (q, w, e, r) VALUES ('7', 'A', '3', 'A'), ('8', 'B', '4', 'An'), ('9', 'C', '5', 'And'); * ERROR at line 1: ORA-00933: SQL command not properly ended ``` However, it keeps failing. Could someone help me figure out why? Thanks!
``` INSERT INTO cellphone (phone_id, model, data, os) select '99997', 'Galaxy S IV', '4g', 'Android' from dual union all select '99998', 'Galaxy S V', '4g', 'Android' from dual ```
You can do this with one statement if you use `insert . . . select`: ``` INSERT INTO cellphone (phone_id, model, data, os) SELECT '99997', 'Galaxy S IV', '4g', 'Android' FROM DUAL UNION ALL SELECT '99998', 'Galaxy S V', '4g', 'Android' FROM DUAL UNION ALL SELECT '99999', 'Galaxy S VI', '4g', 'Android' FROM DUAL; ```
Batch SQL Insert is Failing
[ "", "sql", "oracle", "insert", "" ]
How do I write a query to get today's data in `SQL Server`? ``` select * from tbl_name where date = <Todays_date> ```
The correct answer will depend on the type of your `datecolumn`. Assuming it is of type `Date`: ``` select * from tbl_name where datecolumn = cast(getdate() as Date) ``` If it is `DateTime`: ``` select * from tbl_name where cast(datecolumn as Date) = cast(getdate() as Date) ``` **Please note**: In SQL Server, a `datediff()` (or other calculation) on a column is NOT Sargable, whereas as `CAST(column AS Date)` is.
Seems Mitch Wheat's answer isn't [sargable](http://blog.sqlthoughts.com/2014/03/13/what-makes-a-query-sargable/), although I am not sure this is true. Please note: a `DATEDIFF()` (or other calculation) on LHS is NOT Sargable, whereas as `Cast(x to Date)` is. ``` SELECT * FROM tbl_name WHERE date >= cast(getdate() as date) and date < cast(getdate()+1 as date) ```
how to get current/Todays date data in sql server
[ "", "sql", "sql-server", "current-time", "" ]
How do I write a SQL script that returns all the site IDs that are in all my tables: Tables in this image: ![enter image description here](https://i.stack.imgur.com/3xIeE.png) And this is what I want returned: ![enter image description here](https://i.stack.imgur.com/goGsd.png) ...because only sites 1 & 2 are in all four tables. Database: SQL Azure (but I don't think that matters)
``` select t1.SiteID from table1 t1 join table2 t2 on t1.SiteID = t2.SiteID join table3 t3 on t2.SiteID = t3.SiteID join table4 t4 on t3.SiteID = t4.SiteID; ``` Qualify the column as `t1.SiteID` to avoid an ambiguous-column error; joining all four tables on SiteID keeps only the IDs present in every table.
You can do this with a `join`: ``` select t1.id from table1 t1 join table2 t2 on t1.id = t2.id join table3 t3 on t1.id = t3.id join table4 t4 on t1.id = t4.id; ```
How to return IDs that are in each table
[ "", "sql", "azure-sql-database", "" ]
I have one table called 'ratings' ``` ID RATING 1 5 1 2 2 5 3 1 3 4 3 4 ``` And i want to find the average rating of each restaurant (same id) that is greater than or equal to than the overall average rating (avg rating of all the restaurants combined) So for example, restaurant ID 1 avg would be 3.5 and restaurant ID 3 would be 3. The overall avg in this case is a 3.5 So the table should return ``` ID RATING 1 3.5 2 5 ``` This is how I did it so far but I'm not sure how to compare it to the overall average. ``` SELECT x.id, AVG(x.rating) AS average FROM ratings GROUP BY x.rid; ``` So this returns a table with the average ratings for each restaurant ID, but How do i compare this to the total average of ratings without hardcoding it?
Use the `having` clause: ``` SELECT x.id, AVG(x.rating) AS average FROM ratings x GROUP BY x.id HAVING AVG(x.rating) >= (SELECT AVG(rating) FROM ratings); ``` The subquery computes the overall average once, and `>=` keeps the restaurants whose average is greater than or equal to it, as required.
This is a good use-case for windowed function! `avg(rating) over () as global_avg` will return the overall average as a new column. Here the solution, using sub queries as well to "split steps": ``` select * from ( select id, avg(rating) as user_avg, global_avg from ( select id, rating, avg(rating) over () as global_avg from notes ) group by id, global_avg ) where user_avg >= global_avg order by id ``` This is powerful, you could add a new column "country" and calculate the avg by country: ``` +---+------+-------+ | id|rating|country| +---+------+-------+ | 1| 5| fr| | 1| 2| fr| | 2| 5| it| | 3| 1| it| | 3| 4| it| | 3| 4| it| | 4| 2| fr| +---+------+-------+ ``` ``` select * from ( select id, country, avg(rating) as user_avg, country_avg from ( select id, country, rating, avg(rating) over (partition by country) as country_avg from notes ) group by id, country, country_avg ) where user_avg >= country_avg order by id ```
SQL: Compare avg to overall avg
[ "", "sql", "" ]
I have inherited two tables, where the data for one is in hours, and the data for the other is in days. One table has planned resource use, the other holds actual hours spent ``` Internal_Resources | PeopleName | NoOfDays | TaskNo | |------------|----------|--------| | Fred | 1 | 100 | | Bob | 3 | 100 | | Mary | 2 | 201 | | Albert | 10 | 100 | TimeSheetEntries | UserName | PaidHours | TaskNumber | |----------|-----------|------------| | Fred | 7 | 100 | | Fred | 14 | 100 | | Fred | 7 | 100 | | Bob | 7 | 100 | | Bob | 21 | 100 | | Mary | 7 | 201 | | Mary | 14 | 100 | ``` What I need is a comparison of time planned vs time spent. ``` | name | PlannedDays | ActualDays | |--------|-------------|------------| | Albert | 10 | NULL | | Bob | 3 | 4.00 | | Fred | 1 | 4.00 | | Mary | NULL | 2.00 | ``` I've cobbled together something that almost does the trick: ``` SELECT UserName, ( SELECT NoOfDays FROM Internal_Resources as r WHERE r.PeopleName = e.UserName AND r.TaskNumber = ? ) AS PlannedDays, SUM ( Round( PaidHours / 7 , 2 ) ) as ActualDays FROM TimeSheetEntries e WHERE TaskNo = ? GROUP BY UserName ``` Which for task 100 gives me back something like: ``` | UserName | PlannedDays | ActualDays | |----------|-------------|------------| | Bob | 3 | 4 | | Fred | 1 | 4 | | Mary | 0 | 2 | ``` but lazy Albert doesn't feature! I'd like: ``` | UserName | PlannedDays | ActualDays | |----------|-------------|------------| | Albert | 10 | 0 | | Bob | 3 | 4 | | Fred | 1 | 4 | | Mary | 0 | 2 | ``` I've tried using variations on ``` SELECT * FROM ( SELECT ... ) AS plan INNER JOIN ( [second-query] ) AS actual ON plan.PeopleName = actual.UserName ``` What *should* I be doing? I suspect I need to squeeze a cross-join in there somewhere, but I'm getting nowhere... ( This going to be run inside a FileMaker ExecuteSQL() call, so I need pretty vanilla SQL... 
And no, I don't have control over the column or table names :-( ) EDIT: To be clear, I need the result set to include both users who had planned days and haven't worked on a task, as well as those who have worked on a task without having planned days... EDIT 2: I can kind of get what I want manually, but can't see how to combine the statements below: ``` SELECT people.name, PlannedDays, ActualDays FROM ( SELECT PeopleName as name FROM Internal_Resources WHERE TaskNo = 100 UNION SELECT DISTINCT UserName as name FROM TimeSheetEntries WHERE TaskNumber = 100 ORDER BY Name) AS people ``` gets me: ``` +--------+ | name | +--------+ | Albert | | Bob | | Fred | | Mary | +--------+ ``` and: ``` ( SELECT PeopleName AS name, NoOfDays AS PlannedDays FROM Internal_Resources WHERE TaskNo = 100 ) AS actual ``` gets me: ``` +--------+-------------+ | name | PlannedDays | +--------+-------------+ | Fred | 1 | | Bob | 3 | | Albert | 10 | +--------+-------------+ ``` and finally, ``` ( SELECT UserName AS name, SUM( Round( PaidHours / 7, 2 ) ) AS ActualDays FROM TimeSheetEntries WHERE TaskNumber = 100 GROUP BY UserName ) AS planned ``` gets me: ``` +------+------------+ | name | ActualDays | +------+------------+ | Bob | 4.00 | | Fred | 4.00 | | Mary | 2.00 | +------+------------+ ``` Now all (All! ha!) 
I want is to combine these into this: ``` +--------+-------------+------------+ | name | PlannedDays | ActualDays | +--------+-------------+------------+ | Albert | 10 | NULL | | Bob | 3 | 4.00 | | Fred | 1 | 4.00 | | Mary | NULL | 2.00 | +--------+-------------+------------+ ``` EDIT 3: I've tried combining it with something along the lines of: ``` SELECT people.name, PlannedDays, ActualDays FROM ( SELECT PeopleName as name FROM Internal_Resources WHERE TaskNo = 100 UNION SELECT DISTINCT UserName as name FROM TimeSheetEntries WHERE TaskNumber = 100 ORDER BY Name) AS people LEFT JOIN ( SELECT PeopleName AS name, NoOfDays AS PlannedDays FROM Internal_Resources WHERE TaskNo = 100 ) AS actual, ON people.name = actual.name LEFT JOIN ( SELECT UserName AS name, SUM( Round( PaidHours / 7, 2 ) ) AS ActualDays FROM TimeSheetEntries WHERE TaskNumber = 100 GROUP BY UserName ) AS planned ON people.name = planned.name; ``` but the syntax is clearly wonky.
Okay - this works: ``` SELECT people.name, COALESCE(PlannedDays, 0) as planned, COALESCE(ActualDays, 0) as actual FROM ( SELECT PeopleName as name FROM Internal_Resources WHERE TaskNo = 100 UNION SELECT DISTINCT UserName as name FROM TimeSheetEntries WHERE TaskNumber = 100 ORDER BY Name) AS people LEFT JOIN ( SELECT PeopleName AS name, NoOfDays AS PlannedDays FROM Internal_Resources WHERE TaskNo = 100 ) AS ir ON people.name = ir.name LEFT JOIN ( SELECT UserName AS name, SUM( Round( PaidHours / 7, 2 ) ) AS ActualDays FROM TimeSheetEntries WHERE TaskNumber = 100 GROUP BY UserName ) AS ts ON people.name = ts.name; ``` Giving: ``` +--------+---------+--------+ | name | planned | actual | +--------+---------+--------+ | Albert | 10 | 0.00 | | Bob | 3 | 4.00 | | Fred | 1 | 4.00 | | Mary | 0 | 2.00 | +--------+---------+--------+ ``` I thought there must be an easier way, and this looks simpler: ``` SELECT name, SUM(x) AS planned, SUM(y) AS actual FROM ( SELECT PeopleName AS name, NoOfDays AS x, 0 AS y FROM Internal_Resources WHERE TaskNo = 100 UNION SELECT UserName AS name, 0 AS x, SUM( PaidHours / 7 ) AS y FROM TimeSheetEntries WHERE TaskNumber = 100 GROUP BY UserName) AS source GROUP BY name; ``` But frustratingly - both work in MySQL and both FAIL in FileMaker's cut-down SQL version - SELECTing from a derived table doesn't appear to be supported. Finally - the trick to getting it to work in FileMaker SQL - subqueries are supported for IN and NOT IN... 
so a union of three queries - people who have planned days and have done some work, people who have done unplanned work, and people who haven't done planned work: ``` SELECT PeopleName as name, NoOfDays as planned, Sum( PaidHours / 7 ) as actual FROM Internal_Resources JOIN TimeSheetEntries ON PeopleName = UserName WHERE TaskNumber = 100 AND TaskNo = 100 GROUP BY PeopleName UNION SELECT UserName as name, 0 as planned, Sum( PaidHours / 7 ) as actual FROM TimeSheetEntries WHERE TaskNumber = 100 AND UserName NOT IN ( SELECT PeopleName FROM Internal_Resources WHERE TaskNo = 100 ) UNION SELECT PeopleName as name, NoOfDays as planned, 0 as actual FROM Internal_Resources WHERE TaskNo = 100 AND PeopleName NOT IN ( SELECT PeopleName as name FROM Internal_Resources JOIN TimeSheetEntries ON PeopleName = UserName WHERE TaskNumber = 100 AND TaskNo = 100 GROUP BY PeopleName ) ORDER BY name; ``` Hope this helps someone.
Invert the logic to read from `Internal_resources` in the outer query: ``` SELECT ir.UserName, NoOfDays as PlannedDays, (SELECT SUM ( Round( PaidHours / 7 , 2 )) FROM TimeSheetEntries e WHERE e.TaskNo = ? AND ir.PeopleName = e.UserName ) as ActualDays FROM Internal_Resources ir WHERE ir.TaskNumber = ? GROUP BY ir.UserName, NoOfDays; ```
Combining SQL grouped and ungrouped results with a cross join?
[ "", "sql", "subquery", "filemaker", "" ]
How do I return rows where a column has 2 words (that is, two strings separated by a space) in it? It must be done purely in SQL. ``` SELECT * FROM table WHERE name (has 2 strings in it); ```
> I dont know the names when querying. Its a big dataset. I only have to check if the name contains a spacebar basically *(from a comment)*. If you want to distinguish names that have two parts from one-part and three-plus-part names, you can use [regular expression](http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_regexp): ``` SELECT * FROM my_table WHERE name REGEXP '^[^ ]+[ ]+[^ ]+$' ``` This regular expression matches when the entire string consists of two non-empty parts containing no spaces, with one or more space separating them.
**This perfectly works for me** You can use 'AND' condition and Like Operator with wildcards (%). ``` SELECT * FROM table_name WHERE name LIKE '%Word1%' AND name LIKE '%Word2%' ```
Query to check if a certain row has 2 words
[ "", "mysql", "sql", "" ]
I'm trying to clean up some incorrect data: ``` id | name ------------- 1 | C 2 | A 3 | A 4 | B 5 | B 6 | B 7 | B 8 | X 9 | X 10 | A 11 | A 12 | A 13 | X 14 | X 15 | B 16 | C 17 | C 18 | X 19 | A 20 | A ``` What has happened is when the data has been entered, if the `name` field was NULL, the value from the previous iteration of a loop has not been cleared so it has been entered into the next row. The data should look like this: ``` id | name ------------- 1 | C 2 | A 3 | NULL 4 | B 5 | NULL 6 | NULL 7 | NULL 8 | X 9 | NULL 10 | A 11 | NULL 12 | NULL 13 | X 14 | NULL 15 | B 16 | C 17 | NULL 18 | X 19 | A 20 | NULL ``` Is there a way I can update the entire table in one swoop by setting all duplicates like this to NULL, while preserving the values where the column had an intended value?
We need to join our duplicates into chain, it easy to reach by: ``` select * from updateTable t1 left join updateTable t2 on t1.name = t2.name and t1.id+1 = t2.id ; +----+------+------+------+ | id | name | id | name | +----+------+------+------+ | 1 | C | NULL | NULL | | 2 | A | 3 | A | | 3 | A | NULL | NULL | | 4 | B | 5 | B | | 5 | B | 6 | B | | 6 | B | 7 | B | | 7 | B | NULL | NULL | | 8 | X | 9 | X | | 9 | X | NULL | NULL | | 10 | A | 11 | A | | 11 | A | 12 | A | | 12 | A | NULL | NULL | | 13 | X | 14 | X | | 14 | X | NULL | NULL | | 15 | B | NULL | NULL | | 16 | C | 17 | C | | 17 | C | NULL | NULL | | 18 | X | NULL | NULL | | 19 | A | 20 | A | | 20 | A | NULL | NULL | +----+------+------+------+ ``` Now we know our ids that should be updated. But we cannot run: ``` update updateTable set name = null where id in ( select t2.id from updateTable t1 left join updateTable t2 on t1.name = t2.name and t1.id+1 = t2.id where t2.id is not null ); ``` because we will receive error: ``` ERROR 1093 (HY000): You can't specify target table 'updateTable' for update in FROM clause ``` but we can avoid this bug by using temporary table for ids: ``` create temporary table updateTableTmp ( id int, primary key (id) ) engine=innodb; insert into updateTableTmp select t2.id from updateTable t1 left join updateTable t2 on t1.name = t2.name and t1.id+1 = t2.id where t2.id is not null ; update updateTable set name = null where id in ( select id from updateTableTmp ); select * from updateTable; +----+------+ | id | name | +----+------+ | 1 | C | | 2 | A | | 3 | NULL | | 4 | B | | 5 | NULL | | 6 | NULL | | 7 | NULL | | 8 | X | | 9 | NULL | | 10 | A | | 11 | NULL | | 12 | NULL | | 13 | X | | 14 | NULL | | 15 | B | | 16 | C | | 17 | NULL | | 18 | X | | 19 | A | | 20 | NULL | +----+------+ ```
User variables to mimic a row number per each name.Tested on my machine ``` SELECT @var:=name,@no:=0 FROM t ORDER BY id; UPDATE t join (select ID,NAME,(CASE WHEN NAME=@var THEN @no:=@no+1 ELSE @no:=1 AND @var:=NAME END) BLAH from T order by ID)X on T.ID=X.ID SET T.NAME= NULL where X.BLAH<>0 ```
MySQL - Update rows where value in column is the same as previous row
[ "", "mysql", "sql", "" ]
I have a table "logintracking" and fields are "attemptresult" and "attemptdate". ``` attemptdate attemptresult 2007-12-18 14:33:24.000 LOGOUT 2007-12-18 14:33:38.000 SUCCESS 2007-12-18 14:35:36.000 LOGOUT 2007-12-18 14:46:50.000 SUCCESS 2007-12-18 16:52:48.000 TIMEOUT 2007-12-18 16:57:33.000 SUCCESS 2007-12-18 18:49:49.000 TIMEOUT 2008-01-10 13:02:32.000 SUCCESS ``` and so on i want the result as: ``` DATE COUNT(login) 2007-12-18 14:00:00.000 1 2007-12-18 15:00:00.000 0 2007-12-18 16:00:00.000 0 2007-12-18 17:00:00.000 0 2007-12-18 18:00:00.000 0 2007-12-18 19:00:00.000 0 2007-12-18 20:00:00.000 0 2008-01-10 01:00:00.000 0 ``` i.e. each hour starting with the minimum attemptdate till the maximum attemptdate and correspondingly count of login and logout at particular time. please help
Without knowing how to compute for the Login count, here is what I came up: The idea is to generate hourly intervals of all dates in `LoginTracking`. Then `LEFT JOIN` that into the `LoginTracking` to achieve the result: ``` CREATE TABLE LoginTracking( AttemptDate DATETIME, AttemptResult VARCHAR(10) ) INSERT INTO LoginTracking VALUES ('2007-12-18 14:33:24.000', 'LOGOUT'), ('2007-12-18 14:33:38.000', 'SUCCESS'), ('2007-12-18 14:35:36.000', 'LOGOUT'), ('2007-12-18 14:46:50.000', 'SUCCESS'), ('2007-12-18 16:52:48.000', 'TIMEOUT'), ('2007-12-18 16:57:33.000', 'SUCCESS'), ('2007-12-18 18:49:49.000', 'TIMEOUT'), ('2008-01-10 13:02:32.000', 'SUCCESS'); ;WITH CteCross AS( SELECT lt.AttemptDate, N = x.N - 1 FROM( SELECT DISTINCT CAST(AttemptDate AS DATE) AS AttemptDate FROM LoginTracking )lt CROSS JOIN( SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL SELECT 17 UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 UNION ALL SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23 UNION ALL SELECT 24 )x(N) ) SELECT AttemptDate = DATEADD(HOUR, cc.N, CAST(cc.AttemptDate AS DATETIME)), LoginCount = SUM(CASE WHEN lt.AttemptResult = 'SUCCESS' THEN 1 ELSE 0 END) FROM CteCross cc LEFT JOIN LoginTracking lt ON CAST(lt.AttemptDate AS DATE) = cc.AttemptDate AND DATEPART(HOUR, lt.AttemptDate) = cc.N GROUP BY cc.AttemptDate, cc.N ORDER BY AttemptDate DROP TABLE LoginTracking ```
Attempt Result is not so clear.you should explain why on 14th logincount=1 and why on 16th it is 0. Try this, ``` DECLARE @LoginTracking TABLE( AttemptDate DATETIME, AttemptResult VARCHAR(10) ) INSERT INTO @LoginTracking VALUES ('2007-12-18 14:33:24.000', 'LOGOUT'), ('2007-12-18 14:33:38.000', 'SUCCESS'), ('2007-12-18 14:35:36.000', 'LOGOUT'), ('2007-12-18 14:46:50.000', 'SUCCESS'), ('2007-12-18 16:52:48.000', 'TIMEOUT'), ('2007-12-18 16:57:33.000', 'SUCCESS'), ('2007-12-18 18:49:49.000', 'TIMEOUT'), ('2008-01-10 13:02:32.000', 'SUCCESS'); DECLARE @MinDate DateTime= (SELECT DATEADD(hour,DATEDIFF(hour, 0,min(AttemptDate)), 0) FROM @LoginTracking) DECLARE @MaxDate DateTime= (SELECT DATEADD(hour,DATEDIFF(hour, 0,max(AttemptDate)), 0) FROM @LoginTracking) ;WITH CTE AS ( SELECT @MinDate [DATE] UNION ALL SELECT DATEADD(hour, 1, [date]) FROM cte WHERE [date] < @MaxDate ) SELECT [DATE] ,isnull(( SELECT TOP 1 1 FROM @LoginTracking lt WHERE DATEADD(hour, DATEDIFF(hour, 0, AttemptDate), 0) = [Date] AND ( EXISTS ( SELECT AttemptResult FROM @LoginTracking lt WHERE DATEADD(hour, DATEDIFF(hour, 0, AttemptDate), 0) = [Date] AND lt.AttemptResult = 'LOGOUT' ) AND EXISTS ( SELECT AttemptResult FROM @LoginTracking lt WHERE DATEADD(hour, DATEDIFF(hour, 0, AttemptDate), 0) = [Date] AND lt.AttemptResult = 'SUCCESS' ) ) ), 0) FROM cte OPTION (MAXRECURSION 0) ```
sql server datetime hourly interval
[ "", "sql", "sql-server", "" ]
I have a very long `CASE` statement and it makes the query very long. Is there an alternative way? Does the long `CASE` statement impact performance? I see people recommending `COALESCE`, but I'm not sure if this is a good alternative in my query. In the example below, a, b, c, d, e, and f are columns from a table; I'm just using the letters here to make an example. ``` UPDATE table SET totalAmt = CASE WHEN product_id = 1 THEN a*b*c*d WHEN product_id = 2 THEN c*d*e + 1 WHEN product_id = 3 THEN CASE WHEN term = 12 THEN b*c*d*e ELSE a*b*e*f END WHEN product_id = 4 THEN a+b+c+d etc..... END FROM table ```
I don't expect any performance issues from the `CASE` statement itself. Since your one query is making one pass at the data, it might perform much better than multiple queries for each product ID. The query may perform better with a `WHERE` clause -- if that is even possible with your long query. If the table is large, and it is indexed by product id, and the query is updating a small subset of products, you might get better performance breaking the query apart into separate `UPDATE` queries per product\_id. Otherwise you may end up with a table scan on a huge table. For example: ``` UPDATE table SET totalAmt = a*b*c*d WHERE product_id = 1 UPDATE table SET totalAmt = c*d*e + 1 WHERE product_id = 2 ``` If all the cases depend on `product_id`, then you could abbreviate the syntax like this: ``` CASE product_id WHEN 1 THEN a*b*c*d WHEN 2 THEN ... END ``` I would recommend using comments to make the code more readable. For example, if the products are a hard-coded set of known IDs, you might specify what they are. Similarly, it may help future code maintenance to explain the calculation: ``` UPDATE table SET totalAmt = CASE WHEN product_id = 1 -- table THEN a*b*c*d -- some explanation of calculation WHEN product_id = 2 -- chair THEN ... END ```
If the `UPDATE` doesn't need to run against every record, you can use `WHERE` to filter. As for your `CASE` expression, nothing springs to mind, I took out the nested `CASE` just out of my own preference, but don't think it has any performance impact: ``` UPDATE table SET totalAmt = CASE WHEN product_id = 1 THEN a*b*c*d WHEN product_id = 2 THEN c*d*e + 1 WHEN product_id = 3 AND term = 12 THEN b*c*d*e WHEN product_id = 3 THEN a*b*e*f WHEN product_id = 4 THEN a+b+c+d etc..... END FROM table WHERE product_id IN (1,2,3,4) ```
SQL Server: Alternative of long CASE statement
[ "", "sql", "sql-server-2008", "" ]
I am following an online course on databases. However, I really don't know what the next step is and how I can answer this question. Can anyone please help? This is what I have so far. ``` SELECT P.name FROM Persons P LEFT JOIN Knows K ON K.personA_id = P.id WHERE K.age >= P.age ``` ![enter image description here](https://i.stack.imgur.com/vebKU.jpg)
Try this : [SQL Fiddle](http://sqlfiddle.com/#!2/a08fcf/6) ``` SELECT distinct P1.name FROM Persons P1 INNER JOIN Knows K ON K.personA_id = P1.id INNER JOIN Persons P2 ON K.personB_id = P2.id AND P1.age - P2.age > 5 ``` It will output everybody that knows 1 person or more (personA) and that every person known is more than 5 years younger (personB)
The title of the question referred to using `ALL/EXISTS`. Here's are approaches that actually do that: ``` /* all */ select p.name from Persons as p where p.age - 5 > all ( select p2.age from Knows as k inner join Persons as p2 on p2.person_id = k.personB_id where p2.id = p.id ) /* not exists */ select p.name from Persons as p where not exists ( select p2.age from Knows as k inner join Persons as p2 on p2.person_id = k.personB_id where p2.id = p.id and p2.age > p.age - 5 ) ```
FOR ALL/EXIST Queries
[ "", "mysql", "sql", "exists", "forall", "" ]
I'm trying to create a new column in a table. I want to make it not nullable. And I don't want to use a default constraint. I tried the following query. But it fails. Please correct me here or suggest if a better option exists. ``` ALTER TABLE [dbo].[UCBCluster] ADD PBXClusterId INT NULL; UPDATE [dbo].[UCBCluster] SET PBXClusterId = 0 WHERE PBXClusterId IS NULL; ALTER TABLE [dbo].[UCBCluster] ALTER COLUMN PBXClusterId INT NOT NULL; ``` Error: > Msg 207, Level 16, State 1, Line 5 > Invalid column name 'PBXClusterId'. Thanks
You have to keep the default constraint; only then can you change the column to NOT NULL: ``` ALTER TABLE [dbo].[UCBCluster] add constraint cnt_column Default '' for PBXClusterId ALTER TABLE tb_TableName ALTER COLUMN PBXClusterId int NOT NULL ```
You can't run all of this in a single batch, because SQL Server will parse it at the start of execution, and at that time, no `PBXClusterId` column exists yet. You need to run this in **three separate batches** - either by just highlighting it in Management Studio, or if you want to run it as one, you need to put `GO` delimiters between your steps: ``` ALTER TABLE [dbo].[UCBCluster] ADD PBXClusterId INT NULL; GO UPDATE [dbo].[UCBCluster] SET PBXClusterId = 0 WHERE PBXClusterId IS NULL; GO ALTER TABLE [dbo].[UCBCluster] ALTER COLUMN PBXClusterId INT NOT NULL; GO ``` It is generally not a good idea to run DDL statements (data definition language - statements to *modify* your database *structure*) and DML statements (data manipulation language - adding or updating data) in the same batch of SQL statements in SQL Server.
creating a sql column updating it
[ "", "sql", "sql-server-2008", "" ]
The table I am working with uses two rows to record each 'transaction': one row to identify the party acting, the second row to identify the party acted upon. The data contents differ only in the value of one field, a logical flag of 'Y' or 'N'. For example, I might have this: ``` E-num E-date Client Actor 1234 2013-05-02 ACME Y 1234 2013-05-02 ALLIED N ``` What I would like to report is this: ``` E-num E-date For Against 1234 2013-05-02 ACME ALLIED ``` Thanks, folks. Terry
Just join the table to itself on the `E-num` and pick the rows as required. If you are using MySQL, the query would be ``` SELECT t1.E-num, t1.E-date, t1.Client AS 'For', t2.Client AS Against FROM `transaction` t1 INNER JOIN `transaction` t2 ON t1.E-num = t2.E-num WHERE t1.Actor = 'Y' AND t2.Actor = 'N' ``` Untested, but it should give you an idea.
An ORACLE version: use a self-join of your table: ``` SELECT t1."E-num", t1."E-date", t1."Client" as "For", t2."Client" as "Against" FROM transaction t1, transaction t2 WHERE t1."E-num" = t2."E-num" AND t1."E-date" = t2."E-date" AND t1."Actor" = 'Y' AND t2."Actor" = 'N'; ```
SQL to combine two records into one
[ "", "sql", "oracle11g", "" ]
I have a complex join statement and want to make it simpler (or keep it as it is, but make it work right :)) ``` CREATE TABLE #Temp1 ( ID INT IDENTITY, Name1 VARCHAR(100), Value1 INT ) CREATE TABLE #Temp2 ( ID INT IDENTITY, Name2 VARCHAR(100), Value2 INT, Value1 INT ) INSERT INTO #Temp1 SELECT 'Nm_1', 111 UNION ALL SELECT 'Nm_2', 222 INSERT INTO #Temp2(Name2, Value2) SELECT 'Nm_3', 333 UNION ALL SELECT 'Nm_4', 444 UNION ALL SELECT 'Nm_5', 555 UNION ALL SELECT 'Nm_6', 666 UNION ALL SELECT 'Nm_7', 777 UNION ALL SELECT 'Nm_8', 888 UNION ALL SELECT 'Nm_9', 999 UNION ALL SELECT 'Nm_4', 444 UNION ALL SELECT 'Nm_5', 555 UNION ALL SELECT 'Nm_6', 666 UNION ALL SELECT 'Nm_7', 777 UNION ALL SELECT 'Nm_8', 888 UNION ALL SELECT 'Nm_9', 999 UNION ALL SELECT 'Nm_10',100 UNION ALL SELECT 'Nm_11',110 ``` Here are two tables. The first table is ordinary and can have any number of rows. The second depends on the first one; I'll explain how. The first row in the #Temp2 table is static, rows 2 to 7 are repeated as many times as the row count of #Temp1, and the last 3 rows are also static. In my example I have two rows in #Temp1, so ``` SELECT 'Nm_4', 444 UNION ALL SELECT 'Nm_5', 555 UNION ALL SELECT 'Nm_6', 666 UNION ALL SELECT 'Nm_7', 777 UNION ALL SELECT 'Nm_8', 888 UNION ALL SELECT 'Nm_9', 999 UNION ALL ``` appears twice; if I had 3 rows there would be 6 * 3 rows. Now I want to update the Value1 column in the #Temp2 table in such a way: > select Value1 from #Temp1 where ID = 1 -- this value would be written into the #Temp2 table where ID>=2 and ID<=7 > select Value1 from #Temp1 where ID = 2 -- should go into the #Temp2 table where ID>=8 and ID<=13. I tried to write the join like this ``` UPDATE #Temp2 SET Value1 = a.Value1 FROM #Temp1 AS a INNER JOIN #Temp2 AS b ON 2*a.ID - b.ID IN (-(2*a.ID-1)-(a.ID-2), -(2*a.ID-1)-(a.ID-1), -(2*a.ID-1)-(a.ID-0), -(2*a.ID-1)-(a.ID+1), -(2*a.ID-1)-(a.ID+2), -(2*a.ID-1)-(a.ID+3)) ``` But it's wrong, as you can see if you run this script. Can anybody help? **The join should obviously be made on the ID column**
This simple join works: ``` UPDATE #Temp2 SET Value1 = a.Value1 FROM #Temp1 AS a INNER JOIN #Temp2 AS b ON b.ID BETWEEN (a.ID*6-4) AND (a.ID*6+1) ```
You could try this which should solve your problem: ``` DECLARE @Counter int = 0, @Rows int Set @Rows = (Select count(*) from #Temp1) While (@Counter < @Rows) Begin Update #Temp2 Set Value1 = (Select Value1 from #Temp1 where ID = @Counter + 1) Where ID between 2 + (@Counter * 6) and 7 + (@Counter * 6) Set @Counter = @Counter + 1 End ```
Complex update/join in sql
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to pull all data for the last 24 hours, starting from the current time. If the current date is 5/3 and the time is 11:30, then I want to pull the last 24 hours counted back from 11:30. The data type of the date field is `datetime` and it has only the date and time values, without the seconds. Here is my current query ``` select Name, Location, myDate from myTable where myDate>= getdate()-24 ``` The query above is giving me everything, but I only want rows from the current time backwards. This is what myDate looks like in the table ``` 2015-03-05 10:30:00.000 2015-03-05 11:00:00.000 2015-03-05 11:30:00.000 2015-03-05 12:00:00.000 2015-03-05 12:30:00.000 2015-03-05 13:00:00.000 2015-03-05 13:30:00.000 2015-03-05 14:00:00.000 2015-03-05 14:30:00.000 ```
To be more explicit with your intentions, you may want to write your query like so: ``` select Name, Location, myDate from myTable where myDate>= DATEADD(hh, -24, GETDATE()) ``` [SQL Server DATEADD](https://msdn.microsoft.com/en-us/library/ms186819.aspx)
I believe the issue is with: ``` select Name, Location, myDate from myTable where myDate>= getdate()-24 ``` The -24 as this would be -24 days try: ``` select Name, Location, myDate from myTable where myDate>= getdate()-1 ``` An alternative would be to use the date add function: <http://www.w3schools.com/sql/func_dateadd.asp> ``` DATEADD(datepart,number,date) ``` In your situation you could: ``` select Name, Location, myDate from myTable where myDate>= DATEPART (dd, -1, GETDATE()) ``` Where we are adding negative one dd (days)
How to get last 24 hours from current time-stamp?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a question about writing a sub-query in Microsoft T-SQL. From the original table I need to return the name of the person with the second most pets. I am able to write a query that returns the number of pets per person, but I'm not sure how to write a subquery to return rank #2. Original table: ``` +-------+--------+ | Name | Pet | +-------+--------+ | Kathy | dog | | Kathy | cat | | Nick | gerbil | | Bob | turtle | | Bob | cat | | Bob | snake | +-------+--------+ ``` I have the following query: ``` SELECT Name, COUNT(Pet) AS NumPets FROM PetTable GROUP BY Name ORDER BY NumPets DESC ``` Which returns: ``` +-------+---------+ | Name | NumPets | +-------+---------+ | Bob | 3 | | Kathy | 2 | | Nick | 1 | +-------+---------+ ```
You are using T-SQL, so: ``` WITH C AS ( SELECT COUNT(Pet) OVER (PARTITION BY Name) cnt ,Name FROM PetTable ) SELECT TOP 1 Name, cnt AS NumPets FROM C WHERE cnt = 2 ```
This is how you could use ROW\_NUMBER to get the result. ``` SELECT * FROM( SELECT ROW_NUMBER() OVER (ORDER BY COUNT(name) DESC) as RN, Name, COUNT(NAME) AS COUNT FROM PetTable GROUP BY Name ) T WHERE T.RN = 2 ```
SQL subquery to return rank 2
[ "", "sql", "" ]
In my db2 database I have three tables. 1. Computer 2. Computer\_Software 3. Software This stores the relationship that each computer has multiple software items installed. So Computer\_Software just has a foreign key to Computer and Software. I need a query that returns for a given set of computers, the list of software which is installed on ALL of them. So it would be something like this: ``` select id from software where computer_id = 1 intersect select id from software where computer_id = 2 intersect select id from software where computer_id = 5 ``` But I need to supply the computer\_ids in a list. Is there some kind of nested select type way I can do this?
If your `computer_software` table has a column `computer_id` and `software_id` and there is one row per computer and software on that computer -- as I imagine -- then you can count the rows where the computer is either 1/2/5, group by computer\_id, having a count equal to 3 (the software appears on all the 3 computers): ``` select software_id from computer_software where computer_id in (1, 2, 5) group by software_id having count(*) = 3 ```
John, I'm a bit rusty but using a subclause with the IN statement should do the trick e.g. ``` select software.id from software where software.id in ( select computer_software.softwareid from computer_software where computer_software.computerid in (1,2,5) ) ``` This won't be the most efficient way to get a result. I guess you would need to use an outer join to increase the efficiency of the query. But if you don't have 100,000 of rows in the computer and software tables it should do the trick as a starting point.
SQL to find set of children with common parents
[ "", "sql", "db2", "" ]
I have to find the highest price from three different tables, but I do not know how to get the max of P now. PC(model,speed,ram,hd,price) Laptop(model,speed,ram,hd,screen,price) Printer(model,color,type,price) ``` SELECT model FROM ((SELECT MAX(price) FROM PC) UNION (SELECT MAX(price) FROM Laptop) UNION (SELECT MAX(price) FROM Printer)) AS P WHERE MAX(P); ``` Could I just add MAX in front? Like ``` SELECT model FROM MAX((SELECT MAX(price) FROM PC) UNION (SELECT MAX(price) FROM Laptop) UNION (SELECT MAX(price) FROM Printer)); ```
If you want the model, use `union all` and `order by`. You don't specify the database, so here is the ANSI standard solution: ``` SELECT model, price FROM ((SELECT model, price FROM PC) UNION ALL (SELECT model, price FROM Laptop) UNION ALL (SELECT model, price FROM printer) ) p ORDER BY price desc FETCH FIRST 1 ROW ONLY; ``` The only part of this that would really differ among databases is the `FETCH FIRST 1 ROW ONLY`. This might be `limit`, `top`, or some other construct.
If you want to get only 1 record with highest price you should use `TOP 1`. Something like that: ``` SELECT TOP 1 model FROM ((SELECT MAX(price) FROM PC) UNION (SELECT MAX(price) FROM Laptop) UNION (SELECT MAX(price) FROM Printer)) AS P WHERE MAX(P) ORDER BY P DESC; ```
Finding the highest price from three different tables
[ "", "sql", "max", "" ]
I'm trying to develop a query to validate user data entry and I'm stuck. Basically, we're working with information about a daily 24 hour composite water sample. The user will enter the "COLDATE" which is the end of the composite and is stored in DATETIME format like "2015-03-02 04:00:00.000". Then they will enter the "Compstartdate" which is varchar(8) and looks like "03/02/15". Finally, they will enter the "Compstarttime" which is varchar(5) and looks like "04:01". Don't blame me, I didn't set it up this way and let's assume that fixing the data types is not an option. The rule that I am dealing with is that the "Compstart(date/time)" for one day needs to match the "COLDATE" for the previous day. So far, I can only figure out how to see if the "COLDATE(day) - 1 day" is equal to the "Compstartdate(day)". In other words, I can easily do logical comparisons within one record but I have no idea how to compare two records. Also, we're only talking about 2000 records so performance considerations are not important, as evidenced by my use of a case statement. By this I mean that a solution which involves a cursor or while-loop would be perfectly acceptable to me. Here's what I have so far: ``` SELECT S.[SAMPNO] ,S.[LOCCODE] ,S.[COLDATE] ,U.[Compstartdate] ,U.[Compstarttime] FROM [dbo].[SAMPLE] as S JOIN [dbo].[SUSERFLDS] as U on S.SAMPNO = U.SAMPNO Where Case When DATEPART(DAY, Convert(VARCHAR(10),U.Compstartdate,101)) != DATEPART(DAY, DATEADD(DAY, -1, S.COLDATE)) Then 'Yes' ELSE 'NO' END ='YES' ``` Edit: Let me try to explain the problem better. If I collect a 24-hour composite sample today and enter the information about the sample into the database, I'm going to enter the date/time that I collected the sample(end of composite) and the date/time that the composite started. Because it is a 24 hour composite, the start date/time of today's sample should equal the end time(COLDATE) of yesterday's sample. 
So I need to take two samples with the same LOCCODE but with COLDATE one day apart. Then see if the COLDATE for the earlier sample is equal to the Compstartdate/time of the later sample. Edit #2: Here is some sample data. ``` create table [SAMPLE] ( SAMPNO int, LOCCODE char(7), COLDATE datetime ); create table SUSERFLDS ( SAMPNO int, Compstartdate char(8), Compstarttime char(5) ); SET DATEFORMAT mdy; insert into [SAMPLE] values (11,'Sample1','2015-03-02 04:00:00.000'); insert into [SAMPLE] values (12,'Sample1','2015-03-03 04:00:00.000'); insert into [SAMPLE] values (13,'Sample1','2015-03-04 04:00:00.000'); insert into [SAMPLE] values (14,'Sample1','2015-03-05 04:00:00.000'); insert into SUSERFLDS values (11, '03/01/15', '04:00'); insert into SUSERFLDS values (12, '03/02/15', '04:00'); insert into SUSERFLDS values (13, '03/03/15', '05:00'); insert into SUSERFLDS values (14, '03/04/15', '04:00'); --Compstartdate/time for SAMPNO 12 --does match COLDATE for SAMPNO 11 --Compstartdate/time for SAMPNO 13 --should match COLDATE for SAMPNO 12 ```
Finally figured it out. Here is the query that gives me what I was looking for: ``` set dateformat mdy; With CTE1 as ( Select S1.SAMPNO as SAMPNO1 ,S1.COLDATE as COLDATE1 ,S1.LOCCODE as LOCCODE1 ,CAST( U1.Compstartdate +' '+ U1.Compstarttime as datetime) as Compstart1 From [SAMPLE] as S1 join [SUSERFLDS] as U1 on S1.SAMPNO = U1.SAMPNO ), CTE2 as ( Select S2.SAMPNO as SAMPNO2 ,S2.COLDATE as COLDATE2 ,S2.LOCCODE as LOCCODE2 ,CAST( U2.Compstartdate +' '+ U2.Compstarttime as datetime) as Compstart2 From [SAMPLE] as S2 join [SUSERFLDS] as U2 on S2.SAMPNO = U2.SAMPNO ) SELECT LOCCODE1 ,SAMPNO1 ,SAMPNO2 ,COLDATE1 ,COLDATE2 ,Compstart1 From CTE1 join CTE2 on LOCCODE1 = LOCCODE2 and COLDATE2 = DATEADD(Day, -1, COLDATE1) Where Compstart1 != COLDATE2 ``` Let me know if you see any fatal flaws.
I think you are getting confused - there is no need to loop through the table - that's effectively what the join does. Sadly SQLFiddle seems to be having difficulties at the moment. This is what I was going to set up as an example: ``` create table SAMPLE ( SAMPNO int, LOCCODE char(1), LOCDESCR char(1), LOGBATCH char(1), LOGUSER char(1), COLDATE datetime ); create table SUSERFLDS ( SAMPNO int, Compstartdate char(8), Compstarttime char(5) ); SET DATEFORMAT mdy; insert into SAMPLE values (1, 'x','x','x','x','2015-03-01 04:00:00.000'); insert into SAMPLE values (2, 'x','x','x','x','2015-03-02 04:00:00.000'); insert into SAMPLE values (3, 'x','x','x','x','2015-03-03 04:00:00.000'); insert into SAMPLE values (4, 'x','x','x','x','2015-03-04 04:00:00.000'); insert into SAMPLE values (5, 'x','x','x','x','2015-03-05 04:00:00.000'); insert into SUSERFLDS values (2, '03/02/15', '04:00'); insert into SUSERFLDS values (3, '03/04/15', '04:00'); insert into SUSERFLDS values (4, '03/05/15', '04:00'); insert into SUSERFLDS values (5, '03/06/15', '05:00'); ``` --- ``` set dateformat mdy; with example as ( select CAST( compstartdate +' '+ compstarttime as datetime) as compdatetime from superflds) select * from sample where 1 = ( select count(*) from example where DATEPART(dy, compdatetime) = DATEPART(dy, coldate) + 1 and DATEPART(hh, compdatetime) = DATEPART(hh, coldate) and DATEPART(mi, compdatetime) = DATEPART(mi, coldate) ) ``` Please ask any questions or add clarifications in the comments.
Tricky data entry validation SQL query
[ "", "sql", "sql-server-2008", "while-loop", "cursor", "logic", "" ]
For pagination purposes, I need a run a query with the `LIMIT` and `OFFSET` clauses. But I also need a count of the number of rows that would be returned by that query without the `LIMIT` and `OFFSET` clauses. I want to run: ``` SELECT * FROM table WHERE /* whatever */ ORDER BY col1 LIMIT ? OFFSET ? ``` And: ``` SELECT COUNT(*) FROM table WHERE /* whatever */ ``` At the same time. Is there a way to do that, particularly a way that lets Postgres optimize it, so that it's faster than running both individually?
**Yes.** With a simple window function. ### Add a column with the total count ``` SELECT *, count(*) OVER() AS full_count FROM tbl WHERE /* whatever */ ORDER BY col1 OFFSET ? LIMIT ? ``` Be aware that the cost will be substantially higher than without the total number. Postgres has to actually *count all qualifying rows* either way, which imposes a cost depending on the total number. See: * [Best way to get result count before LIMIT was applied](https://stackoverflow.com/questions/156114/best-way-to-get-result-count-before-limit-was-applied-in-php-postgresql/8242764#8242764) Two separate queries (one for the result set, one for the total count) may or may not be faster. But the overhead of executing two separate queries and processing results often tips the scales. Depends on the nature of the query, indexes, resources, cardinalities ... **However**, [as Dani pointed out](https://stackoverflow.com/questions/28888375/run-a-query-with-a-limit-offset-and-also-get-the-total-number-of-rows/28888696?noredirect=1#comment88418369_28888696), when `OFFSET` is at least as great as the number of rows returned from the base query, no rows are returned. So we get no `full_count`, either. If that's a rare case, just run a second query for the count in this case. If that's not acceptable, here is a **single query always returning the full count**, with a CTE and an `OUTER JOIN`. This adds more overhead and only makes sense for certain cases (expensive filters, few qualifying rows). ``` WITH cte AS ( SELECT * FROM tbl WHERE /* whatever */ -- ORDER BY col1 -- ① ) SELECT * FROM ( TABLE cte ORDER BY col1 LIMIT ? OFFSET ? ) sub RIGHT JOIN (SELECT count(*) FROM cte) c(full_count) ON true; ``` ① Typically it does not pay to add (the same) `ORDER BY` in the CTE. That forces all rows to be sorted. With `LIMIT`, typically only a small fraction has to be sorted (with "top-N heapsort"). You get one row of null values, with the `full_count` appended if `OFFSET` is too big. 
Else, it's appended to every row like in the first query. If a row with all null values is a possible valid result you have to check `offset >= full_count` to disambiguate the origin of the empty row. This still executes the base query only once. But it adds more overhead to the query and only pays if that's less than repeating the base query for the count. Either way, the total count is returned with every row (redundantly). Doesn't add much cost. But if that's an issue, you could instead ... ### Add a row with the total count The added row must match the row type of the query result, and the count must fit into the data type of one of the columns. A bit of a hack. Like: ``` WITH cte AS ( SELECT col1, col2, int_col3 FROM tbl WHERE /* whatever */ ) SELECT null AS col1, null AS col2, count(*)::int AS int_col3 -- maybe cast the count FROM cte UNION ALL ( -- parentheses required TABLE cte ORDER BY col1 LIMIT ? OFFSET ? ); ``` Again, sometimes it may be cheaper to just run a separate count (still in a single query!): ``` SELECT null AS col1, null AS col2, count(*)::int AS int_col3 FROM tbl WHERE /* whatever */ UNION ALL ( -- parentheses required SELECT col1, col2, int_col3 FROM tbl WHERE /* whatever */ ORDER BY col1 LIMIT ? OFFSET ? ); ``` About the syntax shortcut `TABLE tbl`: * [Is there a shortcut for SELECT \* FROM?](https://stackoverflow.com/questions/30275979/is-there-a-shortcut-for-select-from/30276023#30276023)
While [Erwin Brandstetter](https://stackoverflow.com/users/939860/erwin-brandstetter)'s answer works like a charm, it returns the total count of rows **in every row** like following: ``` col1 - col2 - col3 - total -------------------------- aaaa - aaaa - aaaa - count bbbb - bbbb - bbbb - count cccc - cccc - cccc - count ``` You may want to consider using an approach that returns total count **only once**, like the following: ``` total - rows ------------ count - [{col1: 'aaaa'},{col2: 'aaaa'},{col3: 'aaaa'} {col1: 'bbbb'},{col2: 'bbbb'},{col3: 'bbbb'} {col1: 'cccc'},{col2: 'cccc'},{col3: 'cccc'}] ``` SQL query: ``` SELECT (SELECT COUNT(*) FROM table WHERE /* sth */ ) as count, (SELECT json_agg(t.*) FROM ( SELECT * FROM table WHERE /* sth */ ORDER BY col1 OFFSET ? LIMIT ? ) AS t) AS rows ```
Run a query with a LIMIT/OFFSET and also get the total number of rows
[ "", "sql", "postgresql", "count", "pagination", "limit", "" ]
I'm getting this error when I put the `PRINT` outside the loop; inside the loop the casting works. > Msg 137, Level 15, State 2, Line 2 > Must declare the scalar variable "@numCount" Can you please tell me what I'm doing wrong? Thanks ``` create table #myTemp ( rowid int identity (1,1), Name varchar(200), email varchar(200), flag bit ) select * from #myTemp declare @name varchar(200), @email varchar(200) declare @numCount int = 20 WHILE (1 <= @numCount) BEGIN SET @name = 'My Name '+ CAST(@numCount AS VARCHAR) SET @email = 'Email'+ CAST(@numCount AS VARCHAR) INSERT INTO #myTemp(Name, Email, flag) VALUES (@name, @email, 1) SET @numCount = @numCount - 1 END GO PRINT 'My String' + CAST(@numCount AS VARCHAR) + '***' ```
Declaring and setting in the same statement is perfectly valid in SQL Server 2008 R2. The issue is your GO ``` GO -- <-- starts a new batch, and thus the declaration of @numCount is totally unrelated to this Print. PRINT 'My String' + CAST(@numCount AS VARCHAR) + '***' ```
It has to do with your variable being accessible only within its batch. Get rid of **"GO"** and it should work. ``` create table #myTemp (rowid int identity (1,1), Name varchar(200),email varchar(200),flag bit) select * from #myTemp declare @name varchar(200),@email varchar(200) declare @numCount int = 20 while (1 <= @numCount) begin SET @name = 'My Name '+ CAST(@numCount AS VARCHAR) SET @email = 'Email'+ CAST(@numCount AS VARCHAR) INSERT INTO #myTemp (Name,Email,flag) values (@name,@email,1) SET @numCount = @numCount - 1 END PRINT 'My String' + CAST(@numCount AS VARCHAR) + '***' ```
Must declare the scalar variable "@numCount"
[ "", "sql", "database", "sql-server-2008", "stored-procedures", "" ]
I'm using Oracle. I have a table like this: ``` Year Type Value 2011 1 500 2011 2 550 2011 3 600 ... ... 2012 1 600 2012 2 750 2012 3 930 ``` I needed to subtract all the values of the different types, grouped by year. The operation would be: For 2011 -> 1-2-3 (500-550-600) For 2012 -> 1-2-3 (600-750-930) For ... The result should be: ``` Year Value 2011 -650 2012 -1080 ... ... ``` I couldn't do it myself, but here on Stack Overflow this query was suggested, and it worked: ``` select sum(case when type = 1 then value else - value end) as value from table t group by year; ``` But now I have another situation. I need to do the same thing, but not 1-2-3-4-5-... but 1+2-3-4-5-6.... To do this I tried both of these queries, with no success: ``` select sum(case when type = 1 then value when type = 2 then value else - value end) as value from table t group by year; ``` This resulted in the type 2 value being doubled. ``` select sum(case when type = 1 then value else - value end)+case when type = 2 then value as value from table t group by year; ``` This resulted in a correct type 2 value, but as type 2 only occurs in some years, the other years show up as null. So the final calculation is correct for the years that contain type 2, but for every year where type 2 does not exist, it returns null. I'm just not managing to get this query working. Any ideas would be greatly appreciated! Thanks
The determination of whether the number should be treated as positive or negative needs to happen inside your `SUM()` aggregation. So: ``` select sum(case when type = 1 OR type = 2 then value else - value end) from table t group by year; ``` Both 1 and 2 are positive in your 1+2-3-4-5-6... formula (the 1 being positive is implied), so you need the OR to make sure both are positive and everything else is negative. Then it will be summed and you're golden.
For the sake of completeness, two alternate solutions: ``` select sum(case when type in (1, 2) then value else -value end) from t group by year; ``` Or, using a simple case expression: ``` select sum(case type when 1 then value when 2 then value else -value end) from t group by year; ```
SQL Conditional aggregation
[ "", "sql", "oracle", "conditional-aggregation", "" ]
I am maintaining a web project using Java & MySQL. One MySQL table has over 10 million records, so I partitioned the table by date to reduce the number of rows in each partition. Indexes are also added properly according to the queries. In most queries, only the first 1 or 2 partitions are used, and the total number of records in those partitions is less than 200m, so it's still pretty quick. But a few of the queries need to load over 10 partitions to do some statistics, so over 10m records are involved in a single query; this is quite slow, and it becomes worse as the data grows. Part of the table: ``` id(int), amount(double), type(varchar), user_id(int), event_date(timestamp) -- `id` is primary key, `type` has index, ``` One of the big queries is similar to: ``` select count(id), sum(amount) group by (type) where event_date between '2014-01-01' and '2014-12-31 23:59:59' and amount >= 10 -- The start & end datetime and the amount range might change. ``` **My question is:** How can I make the query that involves over 10m records quicker? Here are my guesses, but I'm not sure: * Use MySQL Cluster? (I have never used that before.) * Use a big cache (Memcached; but the big query is not run frequently).
There are several other things you can do for performance improvement. * Analyze the query and introduce indexes as required. * Identify data access pattern of the application and you can cache only the frequently accessed data to reduce disk I/O.
I would shoot for doing some pre-aggregations and storing them as tables if the old data is static. Then base your initial queries on the pre-aggregate tables, and once someone wants more detail, go to the granular level of the data. You could create many different summary/aggregate tables, even if you group by, say, 2-3 fields (not knowing your data context/structures). But consider this... If you have 10m records, and you aggregate all static/old data grouped by, say, 3 fields, and this reduces the set down to even 1 million records, that helps. Then, if you were interested in totals on just one of the criteria, you could query the aggregate of 3 fields but group by 1, so your 1m records are the basis of the query and not the full 10m. Once a user finds a particular need for an intersection of fieldX=??? and fieldY=???, then go to your 10m record set for the full raw data that may be desired.
Mysql - query table with over 10m data
[ "", "mysql", "sql", "bigdata", "" ]
I have a table that looks something like this: ``` date | product | price -------------------------------- 17/01/2015 | milk | 2.54 18/01/2015 | milk | 2.47 23/01/2015 | milk | 2.61 21/01/2015 | eggs | 1.35 04/02/2015 | eggs | 1.36 27/01/2015 | eggs | 1.31 ``` What I need is a select that returns me the latest price of each product, that is the one with the maximum date. Desired result here would be: ``` 23/01/2015 | milk | 2.61 04/02/2015 | eggs | 1.36 ``` I tried this: ``` select max(date), product, price FROM table GROUP BY product, price ``` but it didn't work as expected.
Use a correlated subquery to find each product's last date: ``` select date, product, price from table t1 where date = (select max(date) from table t2 where t1.product = t2.product) ``` (After reading Damien\_The\_Unbeliever's comment I want to add that if several entries exist with the same max date for a product, they will all be returned.)
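An equivalent formulation joins against a derived table of per-product maxima; some people find it easier to read, and it behaves the same way for ties (column and table names taken from the question):

```sql
SELECT t.date, t.product, t.price
FROM table t
JOIN (SELECT product, MAX(date) AS max_date
      FROM table
      GROUP BY product) m
  ON m.product = t.product
 AND m.max_date = t.date;
```

Note that `table` is a poor identifier in practice (it is a reserved word in many dialects) — quote it or rename it in real code.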
Try like this: ``` select max(date), product,price from table group by product, price ```
Return rows with maximum value of a column
[ "", "sql", "t-sql", "" ]
How does the MIN function work on dates? If 2 records have the same date and timestamp, the MIN function returns 1. Does it pull records based on when they were put into the table?
[`MIN`](https://msdn.microsoft.com/en-us/library/ms179916.aspx) is an aggregate function so it will return 1 record in your question's case. Since the two records have the same date and timestamp it doesn't matter which date and timestamp are returned (they're the same). Finally, the time the records were inserted is not considered.
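A small illustration of that point (hypothetical table; works in most SQL dialects):

```sql
CREATE TABLE t (id INT, created DATETIME);
INSERT INTO t VALUES (1, '2015-03-01 10:00:00');
INSERT INTO t VALUES (2, '2015-03-01 10:00:00');

-- One value comes back; since both timestamps are equal, it is
-- meaningless to ask which of the two rows "supplied" it.
SELECT MIN(created) FROM t;   -- 2015-03-01 10:00:00
```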
MIN() returns the smallest of all selected values of a column. It seems to me that your statement may simply be asking if a minimum exists. Please post your sql statement. possibly this is what you need: ``` SELECT MIN (date) AS "Min Date" FROM tablename; ```
SQL Min function on Date
[ "", "sql", "" ]
I have a `mail` table that looks like this: ``` CREATE TABLE `mail` ( `id` int(20) NOT NULL auto_increment, `userto` varchar(30) collate utf8_unicode_ci NOT NULL, `userfrom` varchar(30) collate utf8_unicode_ci NOT NULL, `message` longtext collate utf8_unicode_ci NOT NULL, `seen` int(1) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; ``` I want to do something like this: ``` SELECT userfrom FROM mail WHERE userto = "admin" ``` however, I don't want any repeats in the output. How would I do this?
You have to use `DISTINCT` for non repeative record like ``` SELECT DISTINCT userfrom FROM mail WHERE userto = "admin" ``` Check [DISTINCT](http://dev.mysql.com/doc/refman/5.0/en/distinct-optimization.html)
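For example, with the `mail` table from the question (sender names are made up):

```sql
INSERT INTO mail (userto, userfrom, message, seen)
VALUES ('admin', 'alice', 'hi', 0),
       ('admin', 'alice', 'hi again', 0),
       ('admin', 'bob', 'hello', 0);

-- Without DISTINCT this returns 'alice' twice;
-- with DISTINCT it returns two rows: 'alice' and 'bob'.
SELECT DISTINCT userfrom FROM mail WHERE userto = 'admin';
```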
I believe you need to use the DISTINCT keyword. Useful link => <http://www.w3schools.com/sql/sql_distinct.asp>
How do I get a list of unique row values?
[ "", "sql", "" ]
I have some problems with joining two tables when the foreign key is not set directly. I have two tables: TABLE A ``` {A.ID} {A.NAME} {Parentid} {A_FK} (foreign key) A.ID1 A.NAME1 NULL A_FK1 A.ID2 A.NAME2 NULL A_FK2 A.ID3 A.NAME3 A.ID2 NULL A.ID4 A.NAME4 NULL A.FK4 OtherA OtherId Other Other ``` Table B ``` {B.ID} {B.Code} A.FK1 some_text1 A.FK2 some_text2 A.FK4 some_text3 B.ID1 some_text4 ``` In table A, A.ID3 does not have an FK, but it has a ParentID that points to A.ID2, which has a foreign key. The expected result is: ``` {A.ID} {A.NAME} {B.Code} A.ID1 A.NAME1 some_text1 A.ID2 A.NAME2 some_text2 A.ID3 A.NAME3 some_text2 A.ID4 A.NAME4 some_text3 ``` Can anyone help me with this join?
If your parent-child relationship can be of multiple levels, then you need to write a recursive query something like this: ``` CREATE TABLE TableA(ID INT, Name VARCHAR(10), ParentID INT NULL, FK INT NULL ); CREATE TABLE TableB( ID INT, Code VARCHAR(50)); INSERT INTO TableA VALUES (1, 'Name1', NULL, 1); INSERT INTO TableA VALUES (2, 'Name2', NULL, 2); INSERT INTO TableA VALUES (3, 'Name3', 2, NULL); INSERT INTO TableA VALUES (4, 'Name4', NULL, 4); INSERT INTO TableA VALUES (5, 'Name4', 3, NULL); INSERT INTO TableB VALUES (1, 'Some Text 1'); INSERT INTO TableB VALUES (2, 'Some Text 2'); INSERT INTO TableB VALUES (4, 'Some Text 3'); WITH X (ID, NAME, FK) AS ( SELECT ID, NAME, FK FROM TABLEA WHERE PARENTID IS NULL UNION ALL SELECT T.ID, T.NAME, X.FK FROM TABLEA T INNER JOIN X ON (T.PARENTID = X.ID) ) SELECT X.ID, X.NAME , TABLEB.CODE FROM X INNER JOIN TABLEB ON (X.FK = TABLEB.ID); ```
You could first get all rows to be joined on `TableB` using `FK` and `UNION ALL` them with rows to be joined on `TableA` using `ParentID`: **SAMPLE DATA** ``` CREATE TABLE TableA( ID INT, Name VARCHAR(10), ParentID INT NULL, FK INT NULL ) CREATE TABLE TableB( ID INT, Code VARCHAR(50) ) INSERT INTO TableA VALUES (1, 'Name1', NULL, 1), (2, 'Name2', NULL, 2), (3, 'Name3', 2, NULL), (4, 'Name4', NULL, 4); INSERT INTO TableB VALUES (1, 'Some Text 1'), (2, 'Some Text 2'), (4, 'Some Text 3'); ``` **QUERY** ``` SELECT a.ID, a.Name, b.Code FROM TableA a INNER JOIN TableB b ON a.FK = b.ID WHERE a.ParentID IS NULL UNION ALL SELECT a1.ID, a1.Name, b.Code FROM TableA a1 INNER JOIN TableA a2 ON a2.ID = a1.ParentID INNER JOIN TableB b ON a2.FK = b.ID WHERE a1.FK IS NULL ORDER BY ID ``` **RESULT** ``` ID Name Code ----------- ---------- ------------------- 1 Name1 Some Text 1 2 Name2 Some Text 2 3 Name3 Some Text 2 4 Name4 Some Text 3 ```
Joining two tables with reference to foreign key
[ "", "mysql", "sql", "sql-server", "" ]
I am following an online course on databases. However, I don't know how to proceed with this question. Can anyone help? This is my code: ``` SELECT distinct name FROM Persons P, Knows K WHERE K.personA_id = P.id AND K.personB_id = P.id GROUP BY name HAVING SUM(K.id) = 2 ``` ![enter image description here](https://i.stack.imgur.com/cNK7M.jpg)
``` SELECT distinct p.name FROM Knows K LEFT JOIN Persons P ON K.personA_id = P.id WHERE K.personB_id IN ( SELECT id FROM Persons WHERE age>60 ) GROUP BY name HAVING COUNT(K.personB_id) = 2 ```
``` SELECT P.name FROM Persons P WHERE ( SELECT COUNT(*) FROM Knows K JOIN Persons P2 ON K.personB_id = P2.id WHERE K.personA_id = P.id AND P2.age >= 60 ) = 2 ``` but if you want to know how to use HAVING: ``` SELECT P.name FROM Persons P JOIN Knows K ON K.personA_id = P.id JOIN Persons P2 ON P2.id = K.personB_id WHERE P2.age >= 60 GROUP BY P.id HAVING COUNT(*)=2 ``` Pay attention: this query will work ONLY with MySql
Problems with HAVING query
[ "", "mysql", "sql", "exists", "forall", "" ]
I am querying a table that contains a comments section. The comments section can contain part numbers of variable lengths. If, within the comments section, I ensure that the part numbers are wrapped in quotes ("partnumberA"), is it possible to query that field to pull everything between the quotes (even if the part numbers vary in length)? Production notes are stored in an NVARCHAR field. Here is some sample data: 3/6/2015 (blujo) - "3490-0001023-02" PO46709 Due 3/10 (RW24718)
You can use a combination of the `substring` and `charindex` functions. ``` declare @Comments varchar(100) set @Comments = '3/6/2015 (blujo) - "3490-0001023-02" PO46709 Due 3/10 (RW24718)' select substring(@Comments, charindex('"',@Comments)+1, charindex('"',@Comments, charindex('"',@Comments)+1)-charindex('"',@Comments)-1) ``` The first `charindex` finds the first quote, and the extraction starts on the following character. The second and third `charindex` calls find the next quote after the first. (Note the variable is declared `varchar(100)` — the sample string is 63 characters, so `varchar(50)` would silently truncate it.)
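To make the nesting easier to follow, here is each piece evaluated separately for the sample comment (the numbers are the 1-based character positions `CHARINDEX` returns for this particular string):

```sql
DECLARE @c varchar(100) = '3/6/2015 (blujo) - "3490-0001023-02" PO46709 Due 3/10 (RW24718)';

SELECT CHARINDEX('"', @c)                         AS first_quote,   -- 20
       CHARINDEX('"', @c, CHARINDEX('"', @c) + 1) AS second_quote,  -- 36
       SUBSTRING(@c, CHARINDEX('"', @c) + 1,      -- start at 21, length 36-20-1 = 15
                 CHARINDEX('"', @c, CHARINDEX('"', @c) + 1)
                   - CHARINDEX('"', @c) - 1)      AS part_number;   -- 3490-0001023-02
```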
Here's a sample using a variable. Of course your query would reference a table column instead: ``` declare @x varchar(256) = '3/6/2015 (blujo) - "3490-0001023-02" PO46709 Due 3/10 (RW24718)'; select substring( @x, charindex('"', @x) + 1, charindex('"', @x, charindex('"', @x) + 1) - charindex('"', @x) - 1 ) ```
Want to pull variable info from a Comments section
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I apologize if this is fairly basic MySQL, but I'm at a loss. I have two tables: `users` and `us_geolocations` and I would like to create a View of the geolocations of the users. My first issue is that I have multiple `lat` and `long` entries for some cities (and even some zip codes) in my `us_geolocations`. And my second issue is that I have `zip` as an optional parameter in my `users` table. What I would like is to have a View with all distinct `city`, `state`, `zip`, `lat` and `long` values, where the `city` and `state` are in the `users` table. What I have now is: ``` CREATE VIEW `active_us_geolocations` AS SELECT DISTINCT `users`.`city` AS `city`, `users`.`state` AS `state`, `users`.`zip` AS `zip`, `us_geolocations`.`lat` AS `lat`, `us_geolocations`.`long` AS `long` FROM (`users` LEFT JOIN `us_geolocations` ON (((`users`.`city` = `us_geolocations`.`city`) AND (`users`.`state` = `us_geolocations`.`state`) #Only if there is a `users`.`zip` AND (`users`.`zip` = `us_geolocations`.`zip`)))) ``` But when the `zip` is empty in `users`, I also get empty `lat` and `long`. Do I need to separate the required parameters and optional parameters into their own queries, or is it possible to do them in a specific type of join? And if it does need to be in separate sections, how would I go about designating that?
Is this what you're looking for? ``` CREATE VIEW `active_us_geolocations` AS SELECT DISTINCT `users`.`city` AS `city`, `users`.`state` AS `state`, `users`.`zip` AS `zip`, `us_geolocations`.`lat` AS `lat`, `us_geolocations`.`long` AS `long` FROM `users` LEFT JOIN `us_geolocations` ON `users`.`city` = `us_geolocations`.`city` AND `users`.`state` = `us_geolocations`.`state` AND (`users`.`zip` = `us_geolocations`.`zip` OR `users`.`zip` IS NULL OR `users`.`zip` = '') ``` **Edit:** The solution above selects all active city/state/zip/lat/long combinations by matching user addresses with lat/long values by making sure that the city and state match, and then also checking the zip if it is present. Perhaps instead you wish to match user addresses with lat/long values if either the city and state match *or* the zip matches. This is a slightly different query, shown here: ``` CREATE VIEW `active_us_geolocations` AS SELECT DISTINCT `users`.`city` AS `city`, `users`.`state` AS `state`, `users`.`zip` AS `zip`, `us_geolocations`.`lat` AS `lat`, `us_geolocations`.`long` AS `long` FROM `users` LEFT JOIN `us_geolocations` ON `users`.`city` = `us_geolocations`.`city` AND `users`.`state` = `us_geolocations`.`state` OR `users`.`zip` = `us_geolocations`.`zip` ```
``` CREATE VIEW `active_us_geolocations` AS SELECT DISTINCT `users`.`city` AS `city`, `users`.`state` AS `state`, `users`.`zip` AS `zip`, `us_geolocations`.`lat` AS `lat`, `us_geolocations`.`long` AS `long` FROM (`users` LEFT JOIN `us_geolocations` ON (((`users`.`city` = `us_geolocations`.`city`) AND (`users`.`state` = `us_geolocations`.`state`) WHERE `users`.`zip` IS NOT NULL ```
How to I join two tables with two required parameters and one optional parameter?
[ "", "mysql", "sql", "" ]
I have a table that looks like this ``` [UniqueID]-[1]-[2]-[3]-[etc... to 250 ------------------------------ 00000A | 0 | 0 | 1 | 00000B | 0 | 1 | 0 | 00000C | 0 | 1 | 1 | ``` I pivoted that table from another table with two columns, the unique ID and the number (1, 2, 3, etc.) I have another table which has two columns, a number and an elimination. For example, if the number is 2 and the elimination is 3, I would take the unique ID 00000C, go to column 3, and change the 1 to a 0. In the past, I have manually written: ``` UPDATE [TABLE] SET [3] = 0 WHERE [2] = 1 ``` I need to do this about 150 times, so it would be much more concise to write a query to read from the second table to modify the first. Additionally, when I have to make changes, I will only need to modify the table instead of making changes to the query itself. I know I can probably do this by pivoting the second table and using Dynamic SQL, and I will do that if I have to, but I'm wondering if any of you have some other ideas to solve this problem. Basically, what I am looking to do is this: ``` UPDATE [TABLE] SET [(SELECT elim FROM ElimTbl)] = 0 WHERE [(SELECT num FROM ElimTbl)] = 1 ``` I know that's invalid, but I'm hoping someone has a better idea. Thank you for your time!
Target table is not normalized because `[1], [2], ..., [150]` are nothing more than a repeating group of columns ([1](https://stackoverflow.com/questions/23194292/normalization-what-does-repeating-groups-mean), [2](https://www.simple-talk.com/sql/learn-sql-server/facts-and-fallacies-about-first-normal-form/)). This means that target table breaks first normal form. This problem generates another problem: `UPDATE` statement should include the same expression with small modifications (a [sub]query that finds *elimination*) 150th times. Instead, I would use a normalized target table and when is needed, data from target table can be easily pivoted using PIVOT operator: ``` /* [UniqueID]-[1]-[2]-[3]-etc... 150 ------------------------------ 00000A | 0 | 0 | 1 | 00000B | 0 | 1 | 0 | 00000C | 0 | 1 | 1 | */ DECLARE @Target TABLE ( UniqueID VARCHAR(6) NOT NULL, Num INT NOT NULL, PRIMARY KEY (UniqueID, Num), Value BIT NOT NULL ); INSERT @Target VALUES ('00000A', 3, 1), ('00000B', 2, 1), ('00000C', 2, 1), ('00000C', 3, 1); DECLARE @Source TABLE ( UniqueID VARCHAR(6) NOT NULL, PRIMARY KEY (UniqueID), Num INT NOT NULL ); INSERT @Source VALUES ('00000B', 3), ('00000C', 2); SELECT * FROM @Target SELECT * FROM @Source ``` *-- Intermediate query* ``` SELECT s.*, x.* FROM @Source s OUTER APPLY ( SELECT TOP(1) * FROM @Target t WHERE t.Num = s.Num AND t.Value = 1 AND t.UniqueID >= s.UniqueID ORDER BY t.UniqueID ) x /* Results UniqueID Num UniqueID Num Value -------- --- -------- --- ----- 00000B 3 00000C 3 1 00000C 2 00000C 2 1 */ ``` *-- Final query* ``` UPDATE t --| or DELETE t SET Value = 0 --| FROM @Target AS t WHERE EXISTS ( SELECT * FROM @Source s CROSS APPLY ( SELECT TOP(1) * FROM @Target t WHERE t.Num = s.Num AND t.Value = 1 AND t.UniqueID >= s.UniqueID ORDER BY t.UniqueID ) x WHERE x.UniqueID = t.UniqueID ) SELECT * FROM @Target /* Results: UniqueID Num Value -------- ----------- ----- 00000A 3 1 00000B 2 1 00000C 2 0 00000C 3 0 */ ``` *-- Pivot* ``` ;WITH CteSource AS 
(SELECT UniqueID, Num, CONVERT(TINYINT, Value) AS ValueAsInt FROM @Target) SELECT pvt.* FROM CteSource s PIVOT( MAX(s.ValueAsInt) FOR s.Num IN ([1], [2], [3], /*...*/ [150]) ) pvt /* UniqueID 1 2 3 150 -------- ---- ---- ---- ---- 00000A NULL NULL 1 NULL --> NULLs can be replaced with 0 with ISNULL / COALESCE 00000B NULL 1 NULL NULL 00000C NULL 0 0 NULL */ ```
``` Update t1 Set t1.value = t2.value FROM t1 INNER JOIN t2 ON t1.KEY = t2.KEY ```
T-SQL: Can I update a table using rows from another table?
[ "", "sql", "sql-server", "t-sql", "" ]
I have tables as follows: ``` inverter [ id, address, ... ] string [ id, inverter_id (foreign key), ... ] ``` I want to select all "inverters", together with the number of "strings" attached to them. I tried this query here, but it gives me empty result, so how can I do this? ``` SELECT inverter.*, COUNT(string.*) as string_count FROM inverter LEFT JOIN string ON string.inverter_id = inverter.id ORDER BY address ``` I am using SQLite3. --- Here's a dump of the test tables I have now: ``` CREATE TABLE `inverter` (`id` INTEGER NULL PRIMARY KEY AUTOINCREMENT, `address` VARCHAR(3) NULL, `comment` VARCHAR(250) NULL); INSERT INTO "inverter" ("id","address","comment") VALUES ('2','A1','North side'); INSERT INTO "inverter" ("id","address","comment") VALUES ('3','A2','Broken inverter'); INSERT INTO "inverter" ("id","address","comment") VALUES ('4','A3',''); INSERT INTO "inverter" ("id","address","comment") VALUES ('5','A4','South-west corner'); CREATE TABLE `string` (`id` INTEGER NULL PRIMARY KEY AUTOINCREMENT, `address` VARCHAR(3) NULL, `inverter_id` INTEGER NULL, `comment` VARCHAR(250) NULL, FOREIGN KEY (`inverter_id`) REFERENCES `inverters` (`id`) ON DELETE SET NULL); INSERT INTO "string" ("id","address","inverter_id","comment") VALUES ('1','XX','3',''); INSERT INTO "string" ("id","address","inverter_id","comment") VALUES ('2','XY','3','Ahoj jak se máš'); INSERT INTO "string" ("id","address","inverter_id","comment") VALUES ('3','XZ','4','Moo'); ```
It seems SQLite3 chokes on `count(s.*)` so try this instead: ``` select i.*, count(s.id) from inverter i left join string s on i.id = s.inverter_id group by i.address; ``` This gives: ``` 2|A1|North side|0 3|A2|Broken inverter|2 4|A3||1 5|A4|South-west corner|0 ```
Using an aggregate always means that you are grouping, and if you don't specify the grouping it will be a single group containing all records. Group on the fields that you get from the `inverter` table. Also use a single field in the count instead of `string.*`: ``` select inverter.id, inverter.address, count(string.id) as string_count from inverter left join string on string.inverter_id = inverter.id group by inverter.id, inverter.address order by inverter.address ```
Count rows from another table referencing current row by foreign key
[ "", "sql", "sqlite", "" ]
I have a SQL query with some conditions, for example: ``` SELECT CASE WHEN RIGHT(CAST(COLUMN_A as nvarchar(max)),1) = '.' THEN SUBSTRING (COLUMN_A, DATALENGTH(COLUMN_A) - 4, 4) ELSE SUBSTRING (COLUMN_A, DATALENGTH(COLUMN_A) - 3, 4) END AS COL_1 ``` In the same select I have other checks using the rule above, and I have to repeat it all. I am looking for a way to avoid repeating the rest of the code, something like: ``` CASE WHEN COL_1 LIKE 'AAAA' OR COL_1 LIKE 'BBBB' OR COL_1 LIKE 'CCCC' OR COL_1 LIKE 'DDDD' THEN 1 ELSE 0 END AS Code ``` using the result of COL_1 above to do the other checks. How can I do that? Thanks.
You can use a nested SELECT like this (`YourTable` below stands for whichever table holds `COLUMN_A` — the question does not name it): ``` SELECT CASE WHEN x.COL_1 LIKE 'AAAA' OR x.COL_1 LIKE 'BBBB' OR x.COL_1 LIKE 'CCCC' OR x.COL_1 LIKE 'DDDD' THEN 1 ELSE 0 END AS Code FROM ( SELECT CASE WHEN RIGHT(CAST(COLUMN_A as nvarchar(max)),1) = '.' THEN SUBSTRING (COLUMN_A, DATALENGTH(COLUMN_A) - 4, 4) ELSE SUBSTRING (COLUMN_A, DATALENGTH(COLUMN_A) - 3, 4) END AS COL_1 FROM YourTable ) x ```
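On SQL Server 2005 and later you can get the same effect with CROSS APPLY, which computes `COL_1` once per row and lets the rest of the select reference it by name (the table name `YourTable` is assumed, since the question does not give one; the `LIKE` patterns contain no wildcards, so `IN` is equivalent here):

```sql
SELECT x.COL_1,
       CASE WHEN x.COL_1 IN ('AAAA', 'BBBB', 'CCCC', 'DDDD')
            THEN 1 ELSE 0
       END AS Code
FROM YourTable t
CROSS APPLY (
    SELECT CASE
             WHEN RIGHT(CAST(t.COLUMN_A AS nvarchar(max)), 1) = '.'
               THEN SUBSTRING(t.COLUMN_A, DATALENGTH(t.COLUMN_A) - 4, 4)
             ELSE SUBSTRING(t.COLUMN_A, DATALENGTH(t.COLUMN_A) - 3, 4)
           END AS COL_1
) x;
```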
``` SELECT * FROM ( SELECT list_of_columns , CASE WHEN this=cool THEN 1 ELSE 0 END As computed_column FROM ... ) As a_subquery WHERE computed_column = 1 ```
Shorten SQL conditions
[ "", "sql", "sql-server-2008-r2", "" ]