Dataset columns: Prompt (string, 10–31k chars); Chosen (string, 3–29.4k chars); Rejected (string, 3–51.1k chars); Title (string, 9–150 chars); Tags (list, 3–7 items)
I am trying to count first, then divide by a variable, and then multiply by 100. For instance, if my variable is 16 and the total_count is 15, I expect to see 93.75, that is 15/16*100. Something is not right with my calculation. ``` declare @myVar int set @myVar = 16 select S.FAC_ID, NAME, COUNT(TTL_COUNT)/@myVar*100 AS FINAL_RESULT from MYTABLE GROUP BY S.FAC_ID, NAME ```
``` COUNT(TTL_COUNT)*100/@myVar ``` Multiplying by 100 before dividing keeps the integer division from truncating the result to zero. And, as pointed out, to get the fractional part the variable should be a floating-point type. Note that T-SQL has no `double`, so declare it as `float`: ``` declare @myVar float ```
Declare your variable to be a `float` or another type with a decimal point. SQL Server does integer division when both operands are integer types: ``` declare @myVar float; set @myVar = 16; select S.FAC_ID, NAME, COUNT(TTL_COUNT)/@myVar AS FINAL_RESULT from MYTABLE group by S.FAC_ID, NAME; ``` If you want the multiplication by 100: ``` select S.FAC_ID, NAME, 100*COUNT(TTL_COUNT)/@myVar AS FINAL_RESULT ```
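For anyone who wants to watch the truncation happen, SQLite shares SQL Server's rule that `/` on two integer operands truncates, so the arithmetic can be reproduced with Python's built-in sqlite3 module (a stand-in here; the question itself is about SQL Server):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# Integer / integer truncates BEFORE the multiplication: 15/16 -> 0.
print(cur.execute("SELECT 15/16*100").fetchone()[0])                 # 0

# Multiplying first keeps the value non-zero, but it is still truncated.
print(cur.execute("SELECT 15*100/16").fetchone()[0])                 # 93

# Making one operand floating-point gives the expected result.
print(cur.execute("SELECT 15/CAST(16 AS REAL)*100").fetchone()[0])   # 93.75
```

The three-way contrast shows why the fix needs both the reordering and the `float` declaration to produce 93.75 rather than 0 or 93.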
count then divide by a variable in sql
[ "", "sql", "sql-server", "t-sql", "" ]
I want to set the table name in a dynamic SQL query. I successfully used a parameter, as follows: ``` /* Using sp_executesql */ /* Build and Execute a Transact-SQL String with a single parameter value Using sp_executesql Command */ /* Variable Declaration */ DECLARE @EmpID AS SMALLINT DECLARE @SQLQuery AS NVARCHAR(500) DECLARE @ParameterDefinition AS NVARCHAR(100) /* set the parameter value */ SET @EmpID = 1001 /* Build Transact-SQL String by including the parameter */ SET @SQLQuery = 'SELECT * FROM tblEmployees WHERE EmployeeID = @EmpID' /* Specify Parameter Format */ SET @ParameterDefinition = '@EmpID SMALLINT' /* Execute Transact-SQL String */ EXECUTE sp_executesql @SQLQuery, @ParameterDefinition, @EmpID ``` Now I want to pass the `TABLE NAME` dynamically using a parameter, but I've failed to do that. Please guide me.
Table names cannot be supplied as parameters, so you'll have to construct the SQL string manually like this: ``` SET @SQLQuery = 'SELECT * FROM ' + @TableName + ' WHERE EmployeeID = @EmpID' ``` However, make sure that your application does not allow a user to directly enter the value of `@TableName`, as this would make your query susceptible to SQL injection. For one possible solution to this, see [this answer](https://stackoverflow.com/a/29082755/1594867).
To help guard against SQL injection, I normally try to use functions wherever possible. In this case, you could do: ``` ... SET @TableName = '<[db].><[schema].>tblEmployees' SET @TableID = OBJECT_ID(@TableName) --won't resolve if malformed/injected. ... SET @SQLQuery = 'SELECT * FROM ' + QUOTENAME(OBJECT_NAME(@TableID)) + ' WHERE EmployeeID = @EmpID' ```
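The catalog-lookup idea generalises beyond T-SQL. Here is a minimal sketch in Python with sqlite3 (a stand-in for the real environment; `select_from` and the sample table are made up for illustration): the table name is resolved against the database catalog, playing the role of `OBJECT_ID()`, before it is ever spliced into SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblEmployees (EmployeeID INTEGER, Name TEXT)")
conn.execute("INSERT INTO tblEmployees VALUES (1001, 'Alice')")

def select_from(conn, table_name, emp_id):
    # Resolve the name against the catalog first: an injected string
    # simply will not resolve to a known table.
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table_name,)).fetchone()
    if row is None:
        raise ValueError("unknown table: " + table_name)
    # Now it is safe to splice: the name came back from the catalog itself,
    # and the value predicate still uses a bound parameter.
    return conn.execute(
        'SELECT * FROM "%s" WHERE EmployeeID = ?' % row[0],
        (emp_id,)).fetchall()

print(select_from(conn, "tblEmployees", 1001))   # [(1001, 'Alice')]
```

An attempted injection such as `"tblEmployees; DROP TABLE tblEmployees--"` fails the catalog lookup and raises instead of executing.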
How to set table name in dynamic SQL query?
[ "", "sql", "sql-server", "sql-server-2008", "parameters", "dynamicquery", "" ]
I have a system that holds a large amount of data. The database used is SQL Server. One of the tables has around 300,000 rows, and there are quite a few tables of this size. Regular updates happen on this table; we call this the "transactional database", where the transactions happen. Now we need to implement reporting functionality. Some of the architects are proposing a separate database that is a copy of this database plus some additional tables for reporting. They propose this because they do not want to disrupt the transactional database. For this, data has to be moved to the reporting database frequently. My question is: is a second database really required for this purpose? Can we use the transactional database itself for reporting? Since the data has to be moved to a different database, there will be latency involved, which is not the case if the transactional database itself is used for reporting. Expecting some expert advice.
You need to do some research into ETLs, data warehousing and reporting databases, as I think your architects may be addressing this in a good way. Since you don't give details of the actual reports, I'll try to answer the general case. (Disclaimer: I work in this field and we have products geared to this.) Transactional databases are optimised for a good balance between read/update/insert, and the indexes and table normalisation are geared to this effect. Reporting databases are tuned to be very efficient for read access above all other things. This means that the "normal" normalisation rules one would apply to a transactional database won't apply; in fact, a high degree of de-normalisation may be in place to make the report queries far more efficient and simpler to manage. Running complex queries (especially aggregations over extended date ranges such as historical time frames) on a transactional database may impact performance such that the key users of the database, the transaction generators, are negatively affected. Though a reporting database may not be required in your situation, you may find it simpler to keep the two use cases separate. Your concern about data latency is a real one. It can only be answered by the business users who will consume the reports. Often people say "We want real-time info" when in fact most, if not all, of their requirements are covered by non-real-time info. The acceptable degree of data staleness can only be decided by them. In fact, I'd suggest you take your research slightly further and look at multidimensional cubes for your reporting concerns, as opposed to just reporting databases. These are designed to abstract your reporting concerns to a whole new level.
I second Hubson's answer. I may not be a seasoned SQL Server developer myself, but I have dealt with big tables (around 1M rows), so I have some experience here. Referencing [this SE answer](http://ask.sqlservercentral.com/questions/28306/two-databases-on-two-drives-performance-increase.html), I can say that multiple DBs on the same hard disk won't give a performance boost, because of the I/O capacity of the disk. If you can put the reporting DB on a different hard disk, then you gain the benefit of having one disk doing `I/O`-intensive work and the other doing `read only` work. And if both databases exist in the same instance, they share the same `memory` and `tempdb`, which gives no benefit to performance or I/O cost at all. Moreover, 300k rows is not a big deal, unless the table is joined with three other 300k tables, or queried with something very complex that requires data cleanup, etc. It is different, though, if your **data growth rate** will increase fast in the future. What can you do to increase report performance without impacting the operational DB? 1. Proper indexing. Besides requiring some storage, proper indexing can lead to faster data processing, and you will be amazed at how much it speeds things up. 2. Proper locking. `NOLOCK` is, imho, the best choice for reporting, unless you use a locking strategy other than the serialized one in the database. Some skew in report results caused by uncommitted transactions usually doesn't matter much. 3. Summarize data. A scheduled process that generates summarized data can be used to avoid re-calculation every time a report is read. Edit: So, what is the benefit of having the second database? It can still be worth having, even though it does not directly improve performance. A second database can be used to keep the transactional DB clean and separated from reporting activity. Its benefits: 1. Keeping the materialized data. For example, a summary of total profit generated each month can be stored in a table belonging to this specific DB. 2. Keeping the reporting logic. 3. You can grant access to specific people, separately from the transactional DB. 4. The database files are separate from the transactional ones, which makes backup/restore easier and makes it easy to move the reporting DB to a different hard disk later. **In short**, adding another normal database in this situation will not give much performance benefit unless it is done right (separate hard disk, separate server, etc.). However, a second database does give benefits in maintainability and security.
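The "summarize data" suggestion can be sketched quickly. Assuming a hypothetical `sales` table and a `monthly_profit` summary table (sqlite3 here purely as a stand-in for SQL Server), a scheduled job would simply rebuild the small table that reports read:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (sold_on TEXT, amount REAL);
    CREATE TABLE monthly_profit (month TEXT PRIMARY KEY, total REAL);
    INSERT INTO sales VALUES ('2014-01-05', 100.0), ('2014-01-20', 50.0),
                             ('2014-02-02', 75.0);
""")

# The scheduled job: rebuild the summary so reports read a tiny,
# pre-aggregated table instead of scanning the transactional one.
conn.executescript("""
    DELETE FROM monthly_profit;
    INSERT INTO monthly_profit
    SELECT strftime('%Y-%m', sold_on), SUM(amount)
    FROM sales
    GROUP BY strftime('%Y-%m', sold_on);
""")

print(conn.execute("SELECT * FROM monthly_profit ORDER BY month").fetchall())
# [('2014-01', 150.0), ('2014-02', 75.0)]
```

Reports then only ever touch `monthly_profit`, so their cost no longer grows with the size of the transactional table.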
Databases for reporting and daily transactions
[ "", "sql", "sql-server", "database", "architecture", "" ]
I attempted to copy a database using the wizard and it failed halfway through. Then the original database disappeared from my SQL Server Management Studio explorer. I am fearful that the database was dropped, but I know that Copy Database does not drop; it just detaches and then reattaches. I am assuming it failed to reattach. I attempted a restore, which also failed, but I can see the database listed, so I am sure it still exists. Please tell me the Copy wizard did not drop my database!!!
Assuming that you used the faster Detach->Copy->Attach method, your database will still be out there. However, because the wizard failed partway through, the database may never have been re-attached. You need to know where your data and log file(s) are. From there you open SSMS, right-click "Databases" and choose "Attach" ![attach](https://i.stack.imgur.com/MKtCq.png) then you need to select the proper data file ![files](https://i.stack.imgur.com/50Mol.png) It should also find your log ![log](https://i.stack.imgur.com/Q1E1J.png) but if it doesn't, you'll need to search for that as well.
I don't know the answer, but here are some further suggestions to keep in mind: 1: Turn the computer off and on again to ensure SQL Server Management Studio is no longer running in the background (in some corrupted state). 2: Download some other database tool (such as Toad) and see if you can access the database. 3: Alternatively, write a Java program that opens the database and reads a table to verify that Java JDBC can access it. 4: Reinstall SQL Server Management Studio. 5: If SQL Server Management Studio partly copied your database to another location and that copy is corrupted, it may hang on accessing that copy. See if you can locate and delete the copy. You can find it by looking for a file with the same extension as your database file, dated today (assuming the corruption happened today). Also check whether SQL Server Management Studio created other supporting files, and remove them too.
SQL Copy Database failed. Now cannot see Database anymore
[ "", "sql", "sql-server", "" ]
I have a list of members of a club together with the datetimes at which they attended the club. They can attend the club several times in a single day. I need to know how many Sundays each member attended over a given period (regardless of how many times within a single Sunday). I have a table that lists each attendance, made up of a member number and the attendance datetime. E.g. in this example, 13/1 and 20/1 are Sundays: ``` MEMBER ATTENDANCE 12345 13/1/13 09:00 12345 13/1/13 15:00 12345 14/1/13 08:00 56789 13/1/13 10:00 56789 13/1/13 15:00 56789 13/1/13 21:00 56789 14/1/13 10:00 56789 20/1/13 09:00 24680 14/1/13 08:00 24680 15/1/13 07:00 ``` Ideally I would like to see this returned: ``` MEMBER # OF SUNDAYS 12345 1 56789 2 24680 0 ```
I think you need this: ``` select Member, count(distinct dateadd(day, datediff(day, 0, Attendance), 0)) as NumberOfSundays from t where datepart(dw, Attendance) = 6 group by Member ; ``` The complicated count is really doing: ``` count(distinct cast(Attendance as date)) ``` but the `date` data type is not supported in SQL Server 2005. EDIT: Instead of `datepart(dw, Attendance) = 6`, you can use `datename(dw, Attendance) = 'Sunday'`, which does not depend on the server's `DATEFIRST` setting (the numeric value returned by `datepart(dw, ...)` does).
**Try this** ``` SELECT MEMBER, SUM(CASE DATENAME(dw,ATTENDANCE) WHEN 'Sunday' THEN 1 ELSE 0 END) As [# OF SUNDAYS] FROM MemberTable WHERE ATTENDANCE between [YourStartDateAndTime] AND [YourEndDateAndTime] -- replace [YourStartDateAndTime] AND [YourEndDateAndTime] with your value or variable GROUP BY MEMBER ```
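Both answers above can be checked against the question's sample data. This sketch uses Python's sqlite3 as a stand-in (SQLite's `strftime('%w', ...)` returns `'0'` for Sunday); counting a `DISTINCT` date inside a `CASE` keeps members with zero Sundays in the output, which a plain `WHERE` filter on the weekday would drop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendance (member INTEGER, attended TEXT)")
conn.executemany("INSERT INTO attendance VALUES (?, ?)", [
    (12345, '2013-01-13 09:00'), (12345, '2013-01-13 15:00'),
    (12345, '2013-01-14 08:00'),
    (56789, '2013-01-13 10:00'), (56789, '2013-01-13 15:00'),
    (56789, '2013-01-13 21:00'), (56789, '2013-01-14 10:00'),
    (56789, '2013-01-20 09:00'),
    (24680, '2013-01-14 08:00'), (24680, '2013-01-15 07:00'),
])

# CASE yields the calendar date only for Sundays, NULL otherwise;
# COUNT(DISTINCT ...) ignores the NULLs, so multiple visits on the
# same Sunday count once and non-Sunday members still show up as 0.
result = conn.execute("""
    SELECT member,
           COUNT(DISTINCT CASE WHEN strftime('%w', attended) = '0'
                               THEN date(attended) END) AS sundays
    FROM attendance
    GROUP BY member
    ORDER BY member
""").fetchall()
print(result)   # [(12345, 1), (24680, 0), (56789, 2)]
```

This matches the expected output in the question, including the zero row for member 24680.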
Count day of week occurances
[ "", "sql", "sql-server-2005", "" ]
I have 3 tables that are currently being joined using inner joins. The tables are: invoice, contract and meter. Some simplified sample data: ``` //invoice id | contract_id 1 | 123 //contract id | meter_id | supplier | end_date 123 | 100 | British Gas | 2013-12-20 456 | 100 | nPower | 2014-03-03 //meter id | meter-id 1 | 100 ``` My aim is to join the tables but retrieve only the latest (MAX) end\_date and get the supplier. Normally this wouldn't be a problem, but I only have contract 123 to join on, not contract 456. As shown, they both share the same meter\_id. ``` //Current query SELECT contract.supplier AS supplierName FROM invoice INNER JOIN contract ON contract.id=invoice.contract_id INNER JOIN meter ON meter.id=contract.meter_id ``` How do I do this? Is it via a nested select or something?? Thanks
Get the latest `end_date` per meter in a derived table, then join back to `contract` to pick up the supplier of that latest contract. Group by `meter_id` only: including `supplier` in the `GROUP BY` would give one MAX per supplier and defeat the purpose. ``` SELECT latest.supplier FROM invoice i JOIN contract c ON c.id = i.contract_id JOIN ( SELECT meter_id, MAX(end_date) AS end_date FROM contract GROUP BY meter_id ) m ON m.meter_id = c.meter_id JOIN contract latest ON latest.meter_id = m.meter_id AND latest.end_date = m.end_date ```
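A quick check of the greatest-per-meter idea against the question's sample data (sqlite3 as a stand-in for MySQL, with the schema trimmed to the columns involved): the derived table finds each meter's latest `end_date`, and a second join to `contract` recovers that contract's supplier, even though the invoice only points at the older contract.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoice  (id INTEGER, contract_id INTEGER);
    CREATE TABLE contract (id INTEGER, meter_id INTEGER,
                           supplier TEXT, end_date TEXT);
    INSERT INTO invoice  VALUES (1, 123);
    INSERT INTO contract VALUES (123, 100, 'British Gas', '2013-12-20'),
                                (456, 100, 'nPower',      '2014-03-03');
""")

# invoice -> contract 123 -> meter 100 -> latest contract on meter 100 (456)
row = conn.execute("""
    SELECT latest.supplier
    FROM invoice i
    JOIN contract c ON c.id = i.contract_id
    JOIN ( SELECT meter_id, MAX(end_date) AS end_date
           FROM contract
           GROUP BY meter_id ) m ON m.meter_id = c.meter_id
    JOIN contract latest ON latest.meter_id = m.meter_id
                        AND latest.end_date = m.end_date
""").fetchone()
print(row[0])   # nPower
```

The invoice references contract 123 (British Gas), but the query correctly surfaces nPower, whose contract on the same meter ends latest.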
It should be like : ``` SELECT contract.supplier AS supplierName FROM invoice INNER JOIN contract ON contract.id=invoice.contract_id INNER JOIN meter ON meter.meter_id=contract.meter_id order by end_date DESC limit 1 ```
MySQL - nested select statement for JOIN?
[ "", "mysql", "sql", "join", "" ]
I have little experience with T-SQL and I have to write a stored procedure. This is my stored procedure: ``` USE myDatabase GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE PROCEDURE [dbo].[myStored] ( @myPar1 INT, @myPar2 SMALLDATETIME ) AS BEGIN SET NOCOUNT ON IF EXISTS ( SELECT 1 FROM myTable1 WHERE myPar1 = @myPar1 AND myPar2 = @myPar2 ) BEGIN DELETE FROM myTable1 WHERE myPar1 = @myPar1 AND myPar2 = @myPar2 END ELSE IF EXISTS ( SELECT 1 FROM myTable2 WHERE myPar2 = @myPar2 ) BEGIN INSERT INTO myTable1 (myField1, myField2, myField3, myField4) VALUES (@myPar1, @myPar2, '', 1) END ELSE IF EXISTS ( SELECT 1 FROM myTable3 WHERE myPar2 = @myPar2 ) BEGIN INSERT INTO myTable1 (myField1, myField2, myField3, myField4) VALUES (@myPar1, @myPar2, '', 1) END END ``` And these are my questions: 1 - Are there macroscopic errors? 2 - Some people suggest using "SELECT CASE", others "IF ... ELSE"; what's the difference, and what is the best option for my stored procedure? 3 - I'm not sure about the use of the "BEGIN ... END" statement, in particular in combination with the "IF ... ELSE" statement. What does it mean? Is it necessary to put "BEGIN ... END" inside the "IF ... ELSE" statement, even for executing a single instruction?
**For a single IF statement** ``` IF (Some Condition) --<-- If the condition is true, control enters the BEGIN -- BEGIN..END block and executes the code inside /* Your Code Here */ END ``` Each standalone IF statement checks its condition independently. **One IF with one ELSE** ``` IF (Some Condition) --<-- If the condition is true, control enters the BEGIN -- BEGIN..END block and executes the code inside /* Your Code Here */ -- If not true, control jumps to the ELSE block END ELSE --<-- You don't mention any condition here BEGIN /* Your Code Here */ END ``` Only one block of code will execute: the first block if the condition is true, otherwise the ELSE block. **Multiple IFs and ELSE** ``` IF (Some Condition) --<-- 1) If the condition is true, control enters the BEGIN -- BEGIN..END block and executes the code inside /* Your Code Here */ -- If not true, control checks the next ELSE IF END ELSE IF (Some Condition) --<-- 2) This condition will be checked BEGIN /* Your Code Here */ END ELSE IF (Some Condition) --<-- 3) This condition will be checked BEGIN /* Your Code Here */ END ELSE --<-- No condition is given here; executes if none of BEGIN -- the previous IFs were true, like a default case /* Your Code Here */ END ``` Only the first block whose condition is true will be executed; the rest are ignored. **BEGIN..END block** After any IF, ELSE IF or ELSE, if you are executing more than one statement you MUST wrap the statements in a `BEGIN..END` block. It is not necessary if you are executing only one statement, BUT it is good practice to always use a BEGIN..END block; it makes your code easier to read. **Your procedure** I have taken out the ELSE statements so that every IF statement checks its condition independently. Now that you have some idea how to deal with IF and ELSE, give it a go yourself, as I don't know exactly what logic you are trying to apply here.
``` CREATE PROCEDURE [dbo].[myStored] ( @myPar1 INT, @myPar2 SMALLDATETIME ) AS BEGIN SET NOCOUNT ON IF EXISTS (SELECT 1 FROM myTable1 WHERE myPar1 = @myPar1 AND myPar2 = @myPar2) BEGIN DELETE FROM myTable1 WHERE myPar1 = @myPar1 AND myPar2 = @myPar2 END IF EXISTS (SELECT 1 FROM myTable2 WHERE myPar2 = @myPar2) BEGIN INSERT INTO myTable1(myField1, myField2, myField3, myField4) VALUES(@myPar1, @myPar2, '', 1) END IF EXISTS (SELECT 1 FROM myTable3 WHERE myPar2 = @myPar2) BEGIN INSERT INTO myTable1(myField1, myField2, myField3, myField4) VALUES(@myPar1, @myPar2, '', 1) END END ```
1. I don't see any macroscopic error. 2. The **IF ELSE** statement is the one to use in your case, as you insert or delete data depending on the result of your IF clause. The **CASE** expression is useful to compute a result expression from data inside a **SELECT** statement, but not to choose which statements to run depending on data. 3. See the **BEGIN END** statement like the curly brackets in code **{ *code* }**. It is not mandatory to use **BEGIN END** around a single statement in T-SQL. In my opinion it's better to use it because it clearly shows where your block starts and ends. Moreover, if someone has to work on your code in the future, it will be more understandable with BEGIN END, and it will be easier for them to see the logic behind your code.
t sql "select case" vs "if ... else" and explaination about "begin"
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "switch-statement", "" ]
I'm trying to combine the `GROUP BY` function with a MAX in Oracle. I have read a lot of docs trying to figure out how to format my request, but Oracle always returns: > ORA-00979: "not a GROUP BY expression" Here is my request: ``` SELECT A.T_ID, B.T, MAX(A.V) FROM bdd.LOG A, bdd.T_B B WHERE B.T_ID = A.T_ID GROUP BY A.T_ID HAVING MAX(A.V) < '1.00'; ``` Any tips? **EDIT** It seems the tricky part is the datatype of my fields: * `T_ID` is `VARCHAR2` * `A.V` is `VARCHAR2` * `B.T` is `CLOB`
After some fixes it seems that the major issue was in the `GROUP BY`: you have to list the same non-aggregated columns in the `SELECT` and in the `GROUP BY`. I also kept the CLOB column out of the `GROUP BY` by doing the aggregation in a subquery first. The working query is: ``` SELECT TABLE_A.ID, TABLE_A.VA, B.TRACE FROM (SELECT A.T_ID ID, MAX(A.V) VA FROM BDD.LOG A GROUP BY A.T_ID HAVING MAX(A.V) <= '1.00') TABLE_A, BDD.T B WHERE TABLE_A.ID = B.T_id; ```
I'm very familiar with the phenomenon of writing queries against a table designed by someone else to do something almost completely different from what you want. When I've had this same problem, I've used: ``` GROUP BY TO_CHAR(theclob) ``` and then of course you have to `TO_CHAR` the clob in your outputs too. Note that there are two levels of this problem. The first is that you have a clob column that didn't need to be a clob; it only holds some smallish strings that would fit in a `VARCHAR2`. My workaround applies to this. The second level is that you actually *want* to group by a column that contains large strings. In that case the `TO_CHAR` probably won't help.
How to use GROUP BY on a CLOB column with Oracle?
[ "", "sql", "oracle", "group-by", "max", "" ]
Assume the two tables: ``` Table A: A1, A2, A_Other Table B: B1, B2, B_Other ``` *In the following examples,* `is something` *is a condition checked against a fixed value, e.g.* `= 'ABC'` *or* `< 45`. I wrote a query like the following **(1)**: ``` Select * from A Where A1 IN ( Select Distinct B1 from B Where B2 is something And A2 is something ); ``` What I really meant to write was **(2)**: ``` Select * from A Where A1 IN ( Select Distinct B1 from B Where B2 is something ) And A2 is something; ``` **Strangely,** both queries returned the same result. When looking at the *explain plan* of query **1**, it looked like when the subquery was executed, because the condition `A2 is something` was not applicable to the subquery, it was **deferred** for use as a filter on the main query results. I would normally expect query **1** to fail because the subquery *by itself* would fail: ``` Select Distinct B1 from B Where B2 is something And A2 is something; --- ERROR: column "A2" does not exist ``` But I find this is not the case, and Postgres defers inapplicable subquery conditions to the main query. Is this standard behaviour or a Postgres anomaly? **Where is this documented, and what is this feature called?** Also, I find that if I add a column `A2` in table `B`, only query **2** works as originally intended. In this case the reference `A2` in query **2** would still refer to `A.A2`, but the reference in query **1** would refer to the new column `B.A2` because it is now applicable directly in the subquery.
Excellent question, and something that a lot of people come across but don't stop to look at. What you are writing is a subquery in the `WHERE` clause, not an inline view in the `FROM` clause; there's the difference. When you write a subquery in the `SELECT` or `WHERE` clause, you can access the tables that are in the `FROM` clause of the main query; such a subquery is called a *correlated subquery*. This doesn't happen only in Postgres: it is standard behaviour and can be observed in all the leading RDBMSes, including Oracle, SQL Server and MySQL. When you run the first query, the optimizer looks at your entire query and determines when to check which conditions. It is because of this behaviour of the optimizer that you see the condition being **deferred** to the main query: the optimizer figures out that it is faster to evaluate this condition in the main query itself without affecting the end result. If you run just the subquery, commenting out the main query, it is bound to return an error at the position you mentioned, as the column being referred to is not found. In your last paragraph, you mentioned that you added a column `A2` to table `B`. What you observed is right, and it happens because of how names are resolved: if you don't qualify a column with a table alias, the database engine looks for the column first in the tables in the `FROM` clause of the subquery. Only if the column is not found there is a reference made to the tables of the main query. If you use the following query, it would still return the same result: ``` Select * from A aa -- Check the alias Where A1 IN ( Select Distinct B1 from B bb Where B2 is something And aa.A2 is something -- Check the reference ); ``` Perhaps you can find more information in Korth's book on relational databases, but I'm not sure. I have answered your question based on my observations: I know this happens and why; I just can't provide you with further references.
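The scoping rule is easy to demonstrate. In this sqlite3 sketch (a stand-in; the tables are trimmed to two columns each), `A2` does not exist in `B`, so inside the subquery it silently resolves to the outer query's `A.A2`, turning the subquery into a correlated one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (A1 INTEGER, A2 TEXT);
    CREATE TABLE B (B1 INTEGER, B2 TEXT);
    INSERT INTO A VALUES (1, 'x'), (2, 'y');
    INSERT INTO B VALUES (1, 'ok'), (2, 'ok');
""")

# A2 is not a column of B, so inside the subquery it falls through to
# the outer A.A2 -- no error, just a correlated subquery.
rows = conn.execute("""
    SELECT * FROM A
    WHERE A1 IN (SELECT B1 FROM B WHERE B2 = 'ok' AND A2 = 'x')
""").fetchall()
print(rows)   # [(1, 'x')]
```

For the row `(1, 'x')` the correlated condition `A2 = 'x'` holds and the `IN` list is non-empty; for `(2, 'y')` the subquery returns nothing, so that row is filtered out, exactly the "deferred condition" behaviour described above.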
**Correlated subquery:** if the outcome of a subquery depends on the value of a column of its parent query's table, then the subquery is called a correlated subquery. This is standard behaviour, not an error. The column on which the correlated subquery depends does not need to be included in the selected column list of the parent query. ``` Select * from A Where A1 IN ( Select Distinct B1 from B Where B2 is something And A2 is something ); ``` A2 is a column of table A, and the parent query is on table A. That means A2 can be referenced in the subquery. The above query might run slower than the following one, because A2 from the parent query is referenced inside the loop: ``` Select * from A Where A2 is something And A1 IN ( Select Distinct B1 from B Where B2 is something ); ``` Whether you need the correlation depends on the condition for the data to be fetched. If the subquery is something like ``` Select Distinct B1 from B Where B2 is A2 ``` then we have to reference the parent query's column. Alternatively, we can use joins.
IN subquery's WHERE condition affects main query - Is this a feature or a bug?
[ "", "sql", "postgresql", "subquery", "postgresql-9.1", "in-subquery", "" ]
So, as the title says, I need a query that finds the table that contains the most rows in my database. I can show all my tables with this query: ``` select * from sys.tables ``` Or: ``` select * from sysobjects where xtype = 'U' order by name ``` And all the indexes with this query: ``` select * from sys.indexes ``` But how do I find the table with the most rows in the whole database? Kind regards, Chris
I usually use this query to sort all tables by row count: ``` USE DATABASENAME SELECT t.NAME AS TableName, SUM(p.rows) AS RowCounts FROM sys.tables t INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id WHERE t.NAME NOT LIKE 'dt%' AND i.OBJECT_ID > 255 AND i.index_id <= 1 GROUP BY t.NAME, i.object_id, i.index_id, i.name ORDER BY SUM(p.rows) desc ``` If you want only the first, just add `TOP 1` after `SELECT`. --in reply to your comment---- ``` WHERE t.NAME NOT LIKE 'dt%' AND --exclude Database Diagram tables like dtProperties i.OBJECT_ID > 255 AND --exclude system-level tables i.index_id <= 1 -- count only the heap or clustered index, avoiding nonclustered ones ```
Using the answer to [this question](https://stackoverflow.com/questions/2221555/how-to-fetch-the-row-count-for-all-tables-in-a-sql-server-database), you can run the following to see the table with the highest row count: ``` CREATE TABLE #counts ( table_name varchar(255), row_count int ) EXEC sp_MSForEachTable @command1='INSERT #counts (table_name, row_count) SELECT ''?'', COUNT(*) FROM ?' SELECT TOP 1 table_name, row_count FROM #counts ORDER BY row_count DESC DROP TABLE #counts ```
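The iterate-every-table idea behind `sp_MSForEachTable` works in any engine with a queryable catalog. A sqlite3 sketch (purely a stand-in; table names are read from the catalog, then each table is counted and the largest picked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE small (x);
    CREATE TABLE big (x);
    INSERT INTO small VALUES (1);
    INSERT INTO big VALUES (1), (2), (3);
""")

# Enumerate user tables from the catalog, count each, keep the largest.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
counts = {t: conn.execute('SELECT COUNT(*) FROM "%s"' % t).fetchone()[0]
          for t in tables}
biggest = max(counts, key=counts.get)
print(biggest, counts[biggest])   # big 3
```

Interpolating the table name is safe here for the same reason as in the temp-table answer: the names come from the catalog, not from user input.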
SQL-query that finds the table that contains most rows in the database
[ "", "sql", "sql-server", "" ]
I am trying to get this query to work ``` INSERT INTO [ImportedDateRange] ([DataSet],[DateRange]) 'webshop', (select DISTINCT cast([OrderCreatedDate] as DATE) from webshop) ``` I basically want it to look something like this: ``` DataSet DateRange webshop 01/10/2013 webshop 02/10/2013 webshop 03/10/2013 webshop 03/10/2013 ``` where `webshop` is entered each time and each date is copied over into a new row. I also want to check whether the DateRange records for DataSet webshop already exist. Thanks for any help and advice.
To insert only the DateRange records for DataSet webshop that are not already present in [ImportedDateRange], write: ``` INSERT INTO [ImportedDateRange] ([DataSet],[DateRange]) select DISTINCT 'webshop', cast(T2.[OrderCreatedDate] as DATE) from webshop T2 WHERE NOT EXISTS (select 1 from [ImportedDateRange] T1 where T1.[DateRange] = cast(T2.[OrderCreatedDate] as DATE)) ``` Note the cast inside the `NOT EXISTS` as well: comparing the `DATE` column against the raw datetime would miss existing rows whenever the datetime carries a time-of-day part.
``` INSERT INTO [ImportedDateRange] ([DataSet],[DateRange]) SELECT DISTINCT 'webshop', CAST([OrderCreatedDate] as DATE) FROM webshop ```
Insert many rows in to one column with same string in another column
[ "", "sql", "sql-server", "" ]
My current SQL statement is: ``` SELECT distinct [Position] FROM [Drive List] ORDER BY [Position] ASC ``` And the output is ordered as seen below: ``` 1_A_0_0_0_0_0 1_A_0_0_0_0_1 1_A_0_0_0_0_10 1_A_0_0_0_0_11 1_A_0_0_0_0_12 1_A_0_0_0_0_13 ``` and so on up to 1_A_0_0_0_0_24, and only then the values ending _2 through _9. The field type is Text in a Microsoft Access database. Why is the order jumbled, and is there any way of correctly sorting the values?
If you want sorting which incorporates the numerical values of those substrings, you can cast them to numbers. In the simplest case, you're concerned with only the digit(s) after the 12th character. That case would be fairly easy. ``` SELECT sub.Position, Left(sub.Position, 12) AS sort_1, Val(Mid(sub.Position, 13)) AS sort_2 FROM ( SELECT DISTINCT [Position] FROM [Drive List] ) AS sub ORDER BY 2, 3; ``` Or if you want to display only the `Position` field, you could do it this way ... ``` SELECT sub.Position FROM ( SELECT DISTINCT [Position] FROM [Drive List] ) AS sub ORDER BY Left(sub.Position, 12), Val(Mid(sub.Position, 13)); ``` However, your actual situation could be much more challenging ... perhaps the initial substring (everything up to and including the final `_` character) is not consistently 12 characters long, and/or includes digits which you also want sorted numerically. You could then use a mix of `InStr()`, `Mid()`,and `Val()` expressions to parse out the values to sort. But that task could get scary bad real fast! It could be less effort to alter the stored values so they sort correctly in character order as @Justin suggested.
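The split-and-cast approach is easy to verify. In this sqlite3 stand-in, `substr` plays the role of Access's `Left`/`Mid` and `CAST` the role of `Val`, relying on the same assumption that the prefix up to the final underscore is always 12 characters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE drives (Position TEXT)")
conn.executemany("INSERT INTO drives VALUES (?)",
                 [("1_A_0_0_0_0_%d" % n,) for n in (0, 1, 2, 10, 11, 24)])

# Sort the 12-character prefix as text and the trailing digits as a number.
rows = conn.execute("""
    SELECT Position FROM drives
    ORDER BY substr(Position, 1, 12),
             CAST(substr(Position, 13) AS INTEGER)
""").fetchall()
print([r[0] for r in rows])
# ['1_A_0_0_0_0_0', '1_A_0_0_0_0_1', '1_A_0_0_0_0_2',
#  '1_A_0_0_0_0_10', '1_A_0_0_0_0_11', '1_A_0_0_0_0_24']
```

A plain text sort of the same rows would put `_10` before `_2`, which is exactly the jumbled order in the question.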
> "Why the order is jumbled": The order is only jumbled because you are compiling it with your human brain and are applying more value than the computer does because of your symbolic understand of what the values represent. Parse the output as though you could only understand it as an array of character strings, and you were trying to determine which string is the greatest, all the while knowing nothing about the symbolic value of each character. You will find that the output your query generated is perfectly logical and not at all jumbled. > "Any way of correctly sorting the values" This is a design issue and it should be addressed if it really is a problem. ``` Change 1_A_0_0_0_0_0 to 1_A_0_0_0_0_00 Change 1_A_0_0_0_0_1 to 1_A_0_0_0_0_01 Change 1_A_0_0_0_0_2 to 1_A_0_0_0_0_02 etc ``` This will make the problem go away. Use these two separate queries: ``` SELECT distinct [Position] FROM [Drive List] WHERE [Position] LIKE '1_A_0_0_0_0_?' ORDER BY [Position] ASC SELECT distinct [Position] FROM [Drive List] WHERE [Position] LIKE '1_A_0_0_0_0_??' ORDER BY [Position] ASC ``` ...add to a temp table and append to get the results to display properly.
How to order by a text column that contains int in SQL?
[ "", "sql", "sorting", "ms-access", "" ]
I have a problem: I have a table with 20000+ records and I need to update the DateServiceStart column with a date, but I don't want to set a single date for all 20k records. I want to spread the dates out over, say, 5 days; when you get to row 6 in the table I want to loop back and reuse the starting date. I already have the update statement, I'm just not sure how to loop through. Any help appreciated! ``` RowNum | DateServiceStart 1 | 01/01/2014 2 | 02/01/2014 3 | 03/01/2014 4 | 04/01/2014 5 | 05/01/2014 ... 6 | 01/01/2014 7 | 02/01/2014 ```
If there is an `ID` key field in the table (or you can change it to `RowNum` if that exists in your table) then try this query: ``` with CTE as (SELECT id, DateServiceStart, ROW_NUMBER() OVER (ORDER BY id) as rn FROM t ) UPDATE CTE SET DateServiceStart = CAST('01/01/2014' as Datetime) + (rn-1)%5 ``` `SQLFiddle demo`
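The `(rn-1)%5` trick can be seen in miniature with sqlite3 (a stand-in; the date arithmetic is spelled with SQLite's `date()` modifiers instead of T-SQL's datetime `+`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (RowNum INTEGER, DateServiceStart TEXT)")
conn.executemany("INSERT INTO t (RowNum) VALUES (?)",
                 [(n,) for n in range(1, 8)])

# (RowNum - 1) % 5 cycles 0..4, so the assigned date repeats every 5 rows.
conn.execute("""
    UPDATE t
    SET DateServiceStart = date('2014-01-01',
                                ((RowNum - 1) % 5) || ' days')
""")

print(conn.execute("SELECT RowNum, DateServiceStart FROM t").fetchall())
# rows 1 and 6 both get '2014-01-01', rows 2 and 7 both get '2014-01-02'
```

Subtracting 1 before the modulo is what makes row 1 land on the start date; a plain `RowNum % 5` would give rows 1 to 5 the offsets 1, 2, 3, 4, 0.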
If the `RowNum` column is sequential, something like this would work: ``` UPDATE yourTable SET DateServiceStart = DATEADD(day, (RowNum - 1) % 5, '20140101') ``` (`RowNum % 5` on its own would give rows 1-5 the offsets 1,2,3,4,0; subtracting 1 first makes row 1 land on the start date.) If not, you will need a cursor or a while loop.
SQL Server loop update in batches of 20
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I work as an ASP.NET developer using C#. I receive text like this from the client: ``` > &lt;p&gt;&lt;a > href=&quot;http://www.vogue.co.uk/person/kate-winslet&quot;&gt;KATE > WINSLET&lt;/a&gt;&amp;nbsp;has given birth to a 9lb baby boy. The > Oscar-winning actress welcomed the baby with her husband Ned Rocknroll > at a hospital in Sussex.&lt;/p&gt; > > &lt;p&gt;&amp;quot;Kate had &amp;#39;Baby Boy Winslet&amp;#39; on > Saturday at an NHS Hospital,&amp;quot; Winslet&amp;#39;s spokeswoman > said, adding that the family were &amp;quot;thrilled to > bits&amp;quot;.&lt;/p&gt; > > &lt;p&gt;The announcement suggests that the child might bear his > mother&amp;#39;s surname, rather than his father&amp;#39;s slightly > more unusual moniker.&lt;/p&gt; > > &lt;p&gt;The baby is Winslet&amp;#39;s third - she is already mother > to Mia, 13, and Joe, eight, &amp;nbsp;from previous relationships - > and her husband&amp;#39;s first. They met on Necker Island, owned by > Rocknroll&amp;#39;s uncle, Richard Branson, and&lt;a > href=&quot;http://www.vogue.co.uk/news/2013/kate-winslet-married-to-ned-rocknroller---wedding-details&quot;&gt;married almost a year ago&lt;/a&gt;&amp;nbsp;in New York.&lt;/p&gt; ``` I need a way to extract the real text, without tags and special characters, using SQL Server 2008 or above.
The best I can suggest is to use a .NET HTML parser wrapped in a SQL CLR function, or to wrap the regex in SQL CLR if you want. Note the limitations of regex for this: <http://www.codinghorror.com/blog/2008/06/regular-expressions-now-you-have-two-problems.html> Raw SQL won't do it: it is not a string (or HTML) processing language.
I recently had the same requirement (to remove HTML tags and entities) so developed this function in SQL Server. ``` CREATE FUNCTION CTU_FN_StripHTML (@dirtyText NVARCHAR(MAX)) RETURNS NVARCHAR(MAX) AS BEGIN -- Cleaned Text DECLARE @cleanText NVARCHAR(MAX)=RTRIM(LTRIM(@dirtyText)); -- HTML Tags DECLARE @tagStart SMALLINT =PATINDEX('%<%>%', @cleanText); DECLARE @tagEnd SMALLINT; DECLARE @tagLength SMALLINT; -- HTML Entities DECLARE @entityStart SMALLINT =PATINDEX('%&%;%', @cleanText); DECLARE @entityEnd SMALLINT; DECLARE @entityLength SMALLINT; WHILE @tagStart > 0 OR @entityStart > 0 BEGIN -- Remove HTML Tag SET @tagStart=PATINDEX('%<%>%', @cleanText); IF @tagStart > 0 BEGIN SET @tagEnd=CHARINDEX('>', @cleanText, @tagStart); SET @tagLength=(@tagEnd - @tagStart) + 1; SET @cleanText=STUFF(@cleanText, @tagStart, @tagLength, ''); END; -- Remove HTML Entity SET @entityStart=PATINDEX('%&%;%', @cleanText); IF @entityStart > 0 BEGIN SET @entityEnd=CHARINDEX(';', @cleanText, @entityStart); SET @entityLength=(@entityEnd - @entityStart) + 1; SET @cleanText=STUFF(@cleanText, @entityStart, @entityLength, ''); END; END; SET @cleanText = RTRIM(LTRIM(@cleanText)) RETURN @cleanText; END; ```
how to strip all html tags and special characters from string using sql server
[ "", "sql", "sql-server", "" ]
``` SELECT table1.productid, product.weight FROM table1 INNER JOIN product ON table1.productid=product.productid where table1.box = '55555'; ``` Basically it's an inner join that looks at a list of products in a box, and then the weights of those products. So I'll have 2 columns in my results: the products, and then the weights of each product. Is there an easy way to get the SUM of the weights that are listed in this query? Thanks
``` SELECT table1.productid, SUM(product.weight) weight FROM table1 INNER JOIN product ON table1.productid=product.productid where table1.box = '55555' Group By table1.productid ```
This will give you the total weight for each distinct `productid`. ``` SELECT table1.productid, SUM(product.weight) AS [Weight] FROM table1 INNER JOIN product ON table1.productid = product.productid WHERE table1.box = '55555' GROUP BY table1.productid ```
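Both answers hinge on the same pattern: join, filter, then GROUP BY the product and SUM the weights. A self-contained check of that pattern (SQLite via Python, with invented sample data; the original question targets SQL Server but the query here is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (productid INTEGER, box TEXT);
    CREATE TABLE product (productid INTEGER, weight REAL);
    INSERT INTO table1 VALUES (1, '55555'), (2, '55555'), (1, '55555');
    INSERT INTO product VALUES (1, 2.5), (2, 4.0);
""")
totals = conn.execute("""
    SELECT table1.productid, SUM(product.weight) AS weight
    FROM table1
    INNER JOIN product ON table1.productid = product.productid
    WHERE table1.box = '55555'
    GROUP BY table1.productid
    ORDER BY table1.productid
""").fetchall()
# Product 1 appears twice in the box, so its weight is counted twice.
print(totals)
```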
Retrieve Count from SQL Join query
[ "", "sql", "sql-server", "" ]
I am stuck with the question how to check if a time span overlaps other time spans in the database. For example, my table looks like this: ``` id start end 1 11:00:00 13:00:00 2 14:30:00 16:00:00 ``` Now I try to make a query that checks if a timespan overlaps with one of these timespans (and it should return any time spans that overlaps). * When I try `14:00:00 - 15:00:00` it should return the second row. * When I try `13:30:00 - 14:15:00` it shouldn't return anything. * When I try `10:00:00 - 15:00:00` it should return both rows. It's hard for me to explain, but I hope someone understands me enough to help me.
When checking time span overlaps, all you need is a query like this (replace @Start and @End with your values): For non-overlaps ``` SELECT * FROM tbl WHERE @End < tbl.start OR @Start > tbl.end ``` Thus, reversing the logic, for overlaps ``` SELECT * FROM tbl WHERE @End >= tbl.start AND @Start <= tbl.end ```
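The condition generalizes to any ordered type, since two spans overlap exactly when neither ends before the other begins. A quick check of that logic against the question's three examples (plain Python; "HH:MM:SS" strings compare correctly lexicographically):

```python
def overlaps(q_start, q_end, row_start, row_end):
    # Two spans overlap iff neither one ends before the other begins.
    return q_end >= row_start and q_start <= row_end

spans = {1: ("11:00:00", "13:00:00"), 2: ("14:30:00", "16:00:00")}

def matching_rows(q_start, q_end):
    return sorted(i for i, (s, e) in spans.items() if overlaps(q_start, q_end, s, e))

print(matching_rows("14:00:00", "15:00:00"))  # [2]
print(matching_rows("13:30:00", "14:15:00"))  # []
print(matching_rows("10:00:00", "15:00:00"))  # [1, 2]
```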
Assuming your timespan is denoted by two values - `lower_bound` and `upper_bound`, just make sure they both fall between the start and end: ``` SELECT * FROM my_table WHERE start <= :lower_bound AND end >= :upper_bound ```
How to check if a time span overlaps other time spans
[ "", "mysql", "sql", "time", "" ]
I have recently changed my database from Access to a .mdf and now I am having problems getting my code to work. One of the problems I'm having is this error: "incorrect syntax near ','". I have tried different ways to fix this (for example putting brackets in, moving the comma, putting spaces in, taking spaces out) but I just can't get it. I would be so grateful if anyone could help me. My code is: ``` SqlStr = "INSERT INTO UserTimeStamp ('username', 'id') SELECT ('username', 'id') FROM Staff WHERE password = '" & passwordTB.Text & "'" ```
Assuming you're looking for the username and id columns, that's not proper SQL syntax. The main issues are that your column names are enclosed in single quotes, and in parentheses in your SELECT. Try changing it to this: ``` SqlStr = "INSERT INTO UserTimeStamp (username, id) SELECT username, id FROM Staff WHERE password = '" & passwordTB.Text & "'" ``` That will get sent off to SQL like this: ``` INSERT INTO UserTimeStamp (username, id) SELECT username, id FROM Staff WHERE password = 'some password' ```
Try wrapping column names in square brackets like so: ``` INSERT INTO employee ([FirstName],[LastName]) SELECT [FirstName],[LastName] FROM Employee where [id] = 1 ``` Edit: Also drop the parentheses surrounding the selected fields.
incorrect syntax error near ,
[ "", "sql", "syntax", "mdf", "" ]
I've got two tables: experiments and pairings. ``` experiments: -experimentId -user pairings: -experimentId -tone -color ``` Each experiment consists of seven pairings. A pairing consists of matching a color against a tone. And the experiment is repeated multiple times by a single user. Now I'm trying to find out how to get the highest number of equal pairing per tone. Example: ``` user | tone | color | number of equal pairings user1 | b4 | red | 5 user1 | c4 | blue | 4 user2 | b4 | green | 4 … ``` So far I can get **all** the equal pairings with the following query: ``` SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color ORDER BY toneCounter DESC, user ASC ``` Which would look like this for example: ``` user | tone | color | number of equal pairings user1 | b4 | red | 5 user1 | b4 | blue | 2 user1 | c4 | blue | 4 user1 | c4 | red | 1 user1 | c4 | green | 2 user2 | b4 | green | 4 … ``` Yet I'm not sure how to only get the **top** equal pairings only. So in the above example I would want to get rid of the other entries for b4 and c4 for user1, and only display b4 red and c4 blue. I tried it with the following query, but apparently that is not valid SQL: ``` SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color HAVING toneCounter = (select max(COUNT(tone)) as tc from pairings as p where p.tone = pairings.tone) ORDER BY toneCounter DESC, user ASC ``` How can I do this?
2 SQL statements; the 2nd should do it... ``` SELECT AA.user, AA.tone, AA.color, MAX(AA.toneCounter) as toneCounter FROM ( SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color ) AA Group by AA.user, AA.tone ``` ... my first answer did not satisfy me, so I double-checked it. I think the next query is more adequate (and it even runs on non-MySQL databases): ``` SELECT AAA.user, AAA.tone, BBB.color, AAA.toneCounter FROM ( SELECT AA.user, AA.tone, MAX(AA.toneCounter) as toneCounter FROM ( SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color ) AA Group by AA.user, AA.tone ) AAA join ( SELECT BB.user, BB.tone, BB.color, MAX(BB.toneCounter) as toneCounter FROM ( SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color ) BB Group by BB.user, BB.tone, BB.color ) BBB ON BBB.user = AAA.user AND BBB.tone = AAA.tone AND BBB.toneCounter = AAA.toneCounter ```
If I understand your question correctly, I would first retrieve the maximum tone counter of each tone for each user from the result table you got, and then use that info to left join with the same result table to get the final result. ``` SELECT OriRef.* FROM ( SELECT user, tone, MAX(toneCounter) AS maxToneCounter FROM ( SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color ) AS Ref ) AS MaxRef LEFT JOIN ( SELECT user, tone, color, COUNT(tone) as toneCounter FROM experiments LEFT JOIN pairings ON experiments.experimentId = pairings.experimentId GROUP BY user, tone, color ) AS OriRef ON MaxRef.user = OriRef.user AND MaxRef.tone = OriRef.tone AND MaxRef.maxToneCounter = OriRef.toneCounter ``` Please correct me if I'm wrong.
MySQL: Getting the highest number of a combination of two fields
[ "", "mysql", "sql", "" ]
I have the following query which gives me a row for each day's earning for each employee. Now I want to show those date rows as columns. My current query and its output are as follows. ``` declare @StartDate datetime,@EndDate datetime,@CompanyId int set @StartDate='01/01/2013' set @EndDate='01/31/2013' set @CompanyId=3 ;with d(date) as ( select cast(@StartDate as datetime) union all select date+1 from d where date < @EndDate ) select distinct d.date CDate,E.EmployeeId,Earning.EarningDescription,Earning.EarningId ,E.FirstName + ' ' + E.MiddleName + ' ' + E.LastName AS EmployeeName from d,Employee as E inner join Earning on E.CompanyId=Earning.CompanyId where E.CompanyId=@CompanyId and Earning.IsOnTimeCard=1 and Earning.IsHourly=1 order by EmployeeId,CDate,EarningId ``` ![current query output](https://i.stack.imgur.com/EEnWn.png) This output needs to be converted using PIVOT. I have tried by looking into some examples of PIVOT. As per the suggested answers, I now have this query, and it's giving me an error: ``` declare @StartDate datetime,@EndDate datetime,@CompanyId int,@cols AS NVARCHAR(MAX),@query AS NVARCHAR(MAX) set @StartDate='01/01/2013' set @EndDate='01/31/2013' set @CompanyId=3 declare @WorkingDays Table ( WDate smalldatetime ) ;with d(date) as ( select cast(@StartDate as datetime) union all select date+1 from d where date < @EndDate ) insert into @WorkingDays select d.date from d SET @cols = STUFF((SELECT distinct ',' + QUOTENAME(wd.WDate) FROM @WorkingDays wd FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') PRINT @cols set @query = ' SELECT * FROM ( select distinct WDate CDate,E.EmployeeId,Earning.EarningDescription,Earning.EarningId from @WorkingDays ,Employee as E inner join Earning on E.CompanyId=Earning.CompanyId where E.CompanyId=@CompanyId and Earning.IsOnTimeCard=1 and Earning.IsHourly=1 ) src PIVOT ( MIN(src.EarningId) FOR src.CDate IN ('+@cols+') ) AS PivotedView ' PRINT (@query) execute(@query) ``` The error is: **`Must declare the table variable "@WorkingDays".`**
The SQL Server PIVOT clause does not support dynamic columns, and it looks like you require a dynamic column list. The only way to do this using PIVOT is to construct a dynamic SQL statement, pivoting the list of dates required at that time, and then execute this SQL. A similar solution is presented in [Pivot Dynamic Columns, no Aggregation](https://stackoverflow.com/questions/11985796/sql-server-pivot-dynamic-columns-no-aggregation)
``` declare @StartDate datetime,@EndDate datetime,@CompanyId int,@cols AS NVARCHAR(MAX),@query AS NVARCHAR(MAX) set @StartDate='01/01/2013' set @EndDate='01/31/2013' set @CompanyId=3 Create table #t ( WDate smalldatetime ) ;with d(date) as ( select cast(@StartDate as datetime) union all select date+1 from d where date < @EndDate ) insert into #t select d.date from d SET @cols = STUFF((SELECT distinct ',' + QUOTENAME(wd.WDate) FROM #t wd FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') PRINT @cols set @query = ' SELECT * FROM ( select distinct WDate CDate,E.EmployeeId,Earning.EarningDescription,Earning.EarningId ,E.FirstName +' + '''' + ' ''' + '+ E.MiddleName +' + '''' + ' ''' + '+ E.LastName AS EmployeeName from #t ,Employee as E inner join Earning on E.CompanyId=Earning.CompanyId where E.CompanyId='+ CAST (@CompanyId as nvarchar(50)) +' and Earning.IsOnTimeCard=1 and Earning.IsHourly=1 ) src PIVOT ( MIN(EarningDescription) FOR src.CDate IN ('+@cols+') ) AS PivotedView order by EmployeeId,EarningId ' PRINT (@query) execute(@query) drop table #t ```
pivot table to make each date as column with out any aggregate column
[ "", "sql", "sql-server", "pivot", "" ]
I am having a hell of a time figuring out how to sum the totals of these two queries: ``` select t.processed as title, count(t.processed) as ckos from circ_longterm_history clh, title t where t.bib# = clh.bib# and clh.cko_location = 'dic' group by t.processed order by ckos DESC select t.processed as title, count(t.processed) as ckos from circ_history ch, title t, item i where i.item# = ch.item# and t.bib# = i.bib# and ch.cko_location = 'dic' group by t.processed order by ckos DESC ``` Basically I want a result set with one column as t.processed and the other column as the sum of the first count plus the second count. Any ideas?
I believe the following should work (although I have no sample data to test it against...): ``` SELECT t.processed as title, COALESCE(SUM(clh.count), 0) + COALESCE(SUM(ch.count), 0) as ckos FROM Title t LEFT JOIN (SELECT bib#, COUNT(*) as count FROM Circ_Longterm_History WHERE cko_location = 'dic' GROUP BY bib#) clh ON clh.bib# = t.bib# LEFT JOIN (SELECT i.bib#, COUNT(*) as count FROM Item i JOIN Circ_History ch ON ch.item# = i.item# WHERE ch.cko_location = 'dic' GROUP BY i.bib#) ch ON ch.bib# = t.bib# GROUP BY t.processed ORDER BY ckos DESC ```
``` ; WITH CTE AS ( SELECT T.PROCESSED AS TITLE, T.PROCESSED AS CKOS FROM dbo.CIRC_LONGTERM_HISTORY CLH INNER JOIN dbo.TITLE T ON T.[BIB#] = CLH.[BIB#] WHERE CLH.CKO_LOCATION = 'DIC' UNION ALL SELECT T.PROCESSED AS TITLE, T.PROCESSED AS CKOS FROM dbo.CIRC_HISTORY CH INNER JOIN dbo.ITEM I ON I.[ITEM#] = CH.[ITEM#] INNER JOIN dbo.TITLE T ON T.[BIB#] = I.[BIB#] WHERE CH.CKO_LOCATION = 'DIC' ) SELECT TITLE, COUNT(*) AS CKOS FROM CTE GROUP BY TITLE ORDER BY CKOS DESC ```
Summing a count (with group by)
[ "", "sql", "sql-server", "count", "sum", "" ]
``` select distinct ani_digit, ani_business_line from cta_tq_matrix_exp limit 5 ``` I want to select the top five rows from my result set. If I use the above query, I get a syntax error.
You'll need to use `DISTINCT` *before* you select the "top 5": ``` SELECT * FROM (SELECT DISTINCT ani_digit, ani_business_line FROM cta_tq_matrix_exp) A WHERE rownum <= 5 ```
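`ROWNUM` is Oracle-specific, but the shape of this answer (deduplicate in a subquery, then cap the outer query) can be sketched in SQLite via Python, where `LIMIT` plays `ROWNUM`'s role; the sample data here is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cta_tq_matrix_exp (ani_digit INTEGER, ani_business_line TEXT)")
conn.executemany(
    "INSERT INTO cta_tq_matrix_exp VALUES (?, ?)",
    [(1, 'a'), (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e'), (6, 'f')],
)
# Deduplicate first, then cap the result set.
rows = conn.execute("""
    SELECT * FROM (
        SELECT DISTINCT ani_digit, ani_business_line
        FROM cta_tq_matrix_exp
    ) LIMIT 5
""").fetchall()
print(rows)
```

The duplicate `(1, 'a')` row collapses before the cap is applied, so exactly five distinct rows come back.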
``` select distinct ani_digit, ani_business_line from cta_tq_matrix_exp where rownum<=5; ```
How to select top five or 'N' rows in Oracle 11g
[ "", "sql", "oracle", "oracle11g", "syntax-error", "" ]
I have a stored function that pulls all employee clock-in information. I'm trying to pull an exception report to audit lunches. My current query builds all info 1 segment at a time. ``` SELECT ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName, ftc.dtTimeIn, ftc.dtTimeOut, ftc.TotalHours, ftc.PunchedIn, ftc.Edited FROM dbo.fTimeCard(@StartDate, @EndDate, @DeptList, @iActive, @EmployeeList) AS ftc LEFT OUTER JOIN Employees AS e ON ftc.lEmployeeID = e.lEmployeeID WHERE (ftc.TotalHours >= 0) AND (ftc.DID IS NOT NULL) OR (ftc.DID IS NOT NULL) AND (ftc.dtTimeOut IS NULL) ``` The output for this looks like this: ``` 24 Bob bibby 8/2/2013 11:55:23 AM 8/2/2013 3:36:44 PM 3.68 24 bob bibby 8/2/2013 4:10:46 PM 8/2/2013 8:14:30 PM 4.07 39 rob blah 8/2/2013 8:01:57 AM 8/2/2013 5:01:40 PM 9.01 41 john doe 8/2/2013 10:09:58 AM 8/2/2013 1:33:38 PM 3.4 41 john doe 8/2/2013 1:55:56 PM 8/2/2013 6:10:15 PM 4.25 ``` I need the query to do 2 things: 1) group the segments together for each day, and 2) report the "break time" in a new column. After I have that info I need to check the hours of each segment and make sure 2 things happen: 1) if they worked over a total of 6 hours, did they get a 30 minute break? 2) if they took a break, was it at least 30 minutes? You see that Bob punched in at 11:55 AM and punched out for lunch at 3:36. He punched back in from lunch at 4:10 and punched out at 8:14. He worked a total of 7.75 hours, and took over a 34 minute break. He was OK here, and I don't want to report an exception. John worked a total of 7.65 hours. However, when he punched out, he only took a 22 minute lunch. I need to report "John only took a 22 minute lunch". You will also see Rob worked 9 hours without a break. I need to report "Rob worked over 6 hours and did not take a break". I think if I can accomplish grouping the 2 segments, then I can handle the reporting aspect. **UPDATE** I changed the query to try to accomplish this. Below is my current query: ``` SELECT ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName, ftc.TotalHours, DATEDIFF(mi, MIN(ftc.dtTimeOut), MAX(ftc.dtTimeIn)) AS Break_Time_Minutes FROM dbo.fTimeCard(@StartDate, @EndDate, @DeptList, @iActive, @EmployeeList) AS ftc LEFT OUTER JOIN Employees AS e ON ftc.lEmployeeID = e.lEmployeeID WHERE (ftc.TotalHours >= 0) AND (ftc.DID IS NOT NULL) OR (ftc.DID IS NOT NULL) AND (ftc.dtTimeOut IS NULL) GROUP BY ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName, ftc.TotalHours ``` My output currently looks like this: ``` 24 Bob bibby 3.68 -221 24 bob bibby 4.07 -244 39 rob blah 0.05 -3 39 rob blah 2.63 -158 41 john doe 3.4 -204 41 john doe 4.25 -255 ``` As you can see, it's not combining the segments by date, and the break time is displaying negative minutes. It's also not combining the days. Bob's time should be on 1 line, showing 7.75 hours and a 34 minute break time.
I believe if you want to combine both segments you need to take TotalHours out of the GROUP BY and SUM it instead. Based on the results, the reporting can check total hours and break hours; you can add CASE statements if you want to flag them. ``` SELECT ftc.lEmployeeID ,ftc.sFirstName ,ftc.sLastName ,SUM(ftc.TotalHours) AS TotalHours ,DATEDIFF(mi, MIN(ftc.dtTimeOut), MAX(ftc.dtTimeIn)) AS BreakTimeMinutes FROM dbo.fTimeCard(@StartDate, @EndDate, @DeptList, @iActive, @EmployeeList) AS ftc WHERE (ftc.TotalHours >= 0) AND (ftc.DID IS NOT NULL) OR (ftc.DID IS NOT NULL) AND (ftc.dtTimeOut IS NULL) GROUP BY ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName ``` I made this quick test in SQL and it appears to work the way you want. Did you add something to the GROUP BY? ``` declare @table table (emp_id int,name varchar(4), tin time,tout time); insert into @table VALUES (1,'d','8:30:00','11:35:00'), (1,'d','13:00:00','17:00:00'); SELECT t.emp_id ,t.name ,SUM(DATEDIFF(mi, tin,tout))/60 as hours ,DATEDIFF(mi, MIN(tout), MAX(tin)) AS BreakTimeMinutes FROM @table t GROUP BY t.emp_id, t.name ```
Using the pertinent pieces of your sample SQL, I created an SQL Fiddle showing how this could be done. You can view it here: <http://sqlfiddle.com/#!6/f05ce/3> ``` SELECT EmployeeId, Num_Hours, CASE WHEN tmp.Break_Time_Minutes < 0 Then 0 Else Break_Time_Minutes END As Break_Time_Minutes, CASE WHEN tmp.Break_Time_Minutes < 0 Then 1 Else 0 END As SkippedBreak FROM ( SELECT EmployeeId, Round(SUM(DATEDIFF(second, TimeIn, TimeOut) / 60.0 / 60.0),1) As NUM_Hours, DateDiff(mi, Min(TimeOut), Max(TimeIn)) As Break_Time_Minutes FROM Employee GROUP BY EmployeeId, CAST(TimeIn As Date) ) as tmp WHERE tmp.Num_Hours > 6 AND Break_Time_Minutes < 30 ```
How to Group time segments and check break time
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2005", "" ]
I create the following table on <http://sqlfiddle.com> in `PostgreSQL 9.3.1` mode: ``` CREATE TABLE t ( id serial primary key, m varchar(1), d varchar(1), c int ); INSERT INTO t (m, d, c) VALUES ('A', '1', 101), ('A', '2', 102), ('A', '3', 103), ('B', '1', 104), ('B', '3', 105); ``` table: ``` | ID | M | D | C | |----|---|---|-----| | 1 | A | 1 | 101 | | 2 | A | 2 | 102 | | 3 | A | 3 | 103 | | 4 | B | 1 | 104 | | 5 | B | 3 | 105 | ``` From this I want to generate such a table: ``` | M | D | ID | C | |---|---|--------|--------| | A | 1 | 1 | 101 | | A | 2 | 2 | 102 | | A | 3 | 3 | 103 | | B | 1 | 4 | 104 | | B | 2 | (null) | (null) | | B | 3 | 5 | 105 | ``` but with my current statement ``` select * from (select * from (select distinct m from t) as dummy1, (select distinct d from t) as dummy2) as combi full outer join t on combi.d = t.d and combi.m = t.m ``` I only get the following ``` | M | D | ID | C | |---|---|--------|--------| | A | 1 | 1 | 101 | | B | 1 | 4 | 104 | | A | 2 | 2 | 102 | | A | 3 | 3 | 103 | | B | 3 | 5 | 105 | | B | 2 | (null) | (null) | ``` Attempts to order it by m,d fail so far: ``` select * from (select * from (select * from (select * from (select distinct m from t) as dummy1, (select distinct d from t) as dummy2) as kombi full outer join t on kombi.d = t.d and kombi.m = t.m) as result) order by result.m ``` Error message: ``` ERROR: subquery in FROM must have an alias: select * from (select * from (select * from (select * from (select distinct m from t) as dummy1, (select distinct d from t) as dummy2) as kombi full outer join t on kombi.d = t.d and kombi.m = t.m) as result) order by result.m ``` It would be cool if somebody could point out to me what I am doing wrong and perhaps show the correct statement.
I think your problem is just the ordering, which you can solve with an ORDER BY clause: ``` select * from (select * from (select distinct m from t) as dummy1, (select distinct d from t) as dummy2) as combi full outer join t on combi.d = t.d and combi.m = t.m order by combi.m, combi.d ``` You need to specify which table's columns you would like to order by. In this case the rows come back keyed on the combi table, so qualify them as combi.m and combi.d. <http://sqlfiddle.com/#!15/ddc0e/17>
``` select * from (select kombi.m, kombi.d, t.id, t.c from (select * from (select distinct m from t) as dummy1, (select distinct d from t) as dummy2) as kombi full outer join t on kombi.d = t.d and kombi.m = t.m) as result order by result.m, result.d ```
order by after full outer join
[ "", "sql", "postgresql", "sqlfiddle", "" ]
I know that changing a table with fixed width rows to have variable width rows (by changing a CHAR column to a VARCHAR) has performance implications. However my question is, given a preexisting table with variable width rows (due to many VARCHAR columns), and thus with that performance penalty already paid, would adding another variable length column further impact performance? My hunch is that it wouldn't, the biggest performance penalty would be switching from fixed width rows to variable width rows and that adding another variable width column would have a negligible impact.
Yes and no. It is true that variable width character columns are *slightly* slower than fixed width character columns. But the "penalty" (or performance cost) is cumulative and per column. So, every column you add to your query in general (fixed width or otherwise) is going to impact performance (as you query more data, it takes longer to fetch all of the data).
Each variable length column you add to the table makes it more expensive to retrieve the data. Another consideration is whether the variable length columns are part of the query (filter/WHERE clause) and whether you are going to be using them in indexes. Variable length fields in an index also add to the index overhead. For details, you will need to look at the documentation of the particular database you are using, e.g. <http://dev.mysql.com/doc/refman/5.6/en/innodb-table-and-index.html>
Does adding another variable length column to a table with variable length rows further impact performance?
[ "", "mysql", "sql", "" ]
I have a large number of columns in my table, like 20-30. I want to select all except 3-4 of the columns. Is there a way to do `SELECT * EVERYTHING BUT COLUMNS (Address, Name, etc...) FROM MyTable`? If not, would such a functionality be useful?
In SSMS, open Object Explorer, right-click on table -> "Script Table As" -> "SELECT To" -> "New Query Editor Window" Delete the columns you don't want and run the query. It does all the typing for you.
-- Just change the table name, and list the columns to exclude in the `not in` clause: ``` DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @cols = STUFF((SELECT N',' + QUOTENAME(c.name) FROM sys.tables AS t INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID WHERE t.name = 'tagCloudLibrary' and c.name not in ('langID') FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = N'SELECT ' + @cols + N' from tagCloudLibrary' execute sp_executesql @query; ```
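The same catalog-driven idea (read the column list from metadata, drop the unwanted names, build the SELECT) works outside SQL Server too. A sketch against SQLite via Python, where the catalog query is `PRAGMA table_info`; the table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tagCloudLibrary (id INTEGER, tag TEXT, langID INTEGER)")
conn.execute("INSERT INTO tagCloudLibrary VALUES (1, 'sql', 7)")

exclude = {"langID"}
cols = [
    row[1]  # row[1] is the column name in PRAGMA table_info output
    for row in conn.execute("PRAGMA table_info(tagCloudLibrary)")
    if row[1] not in exclude
]
query = "SELECT {} FROM tagCloudLibrary".format(", ".join(cols))
print(query)
print(conn.execute(query).fetchall())
```

As in the T-SQL version, only column names come from the catalog, so no user input is interpolated into the generated SQL.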
Select all columns except some columns
[ "", "sql", "sql-server", "sql-server-2005", "" ]
Let me clarify first, in case the heading is not sufficient. I have a table like this: ``` CREATE TABLE `test` ( `id` int(10) NOT NULL DEFAULT '0', `senderid` int(10) DEFAULT NULL, `recid` int(10) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ``` And sample data: ``` id senderid recid 1 1 2 2 1 3 3 3 1 4 6 1 5 4 1 6 4 2 7 2 4 ``` Now I want to know all transactions from senderid 1 to any recid, and from any senderid to recid 1. **EDIT** I have made this: ``` SELECT GROUP_CONCAT(recid SEPARATOR ', ') AS trans FROM test WHERE senderid =1 UNION SELECT GROUP_CONCAT(senderid SEPARATOR ', ') AS trans FROM test WHERE recid =1 GROUP BY recid; ``` And the result is: ``` trans 2, 3 3, 6, 4 ``` This comes back as two rows; how can I combine both into one row with distinct values? Is it possible to achieve this using SQL? Thank you.
Yes, with a bit of logic: ``` SELECT GROUP_CONCAT((case when senderid = 1 then recid else senderid end) SEPARATOR ', ' ) AS trans FROM test WHERE senderid = 1 or recid = 1 ; ```
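The CASE-inside-GROUP_CONCAT trick can be verified directly; SQLite's `group_concat` behaves like MySQL's here (run via Python, using the question's sample rows). Note that duplicates survive (the 3 appears twice, once as a recid and once as a senderid) unless a DISTINCT is added:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, senderid INTEGER, recid INTEGER)")
conn.executemany(
    "INSERT INTO test VALUES (?, ?, ?)",
    [(1, 1, 2), (2, 1, 3), (3, 3, 1), (4, 6, 1), (5, 4, 1), (6, 4, 2), (7, 2, 4)],
)
# For each row touching user 1, pick "the other party" and concatenate.
(trans,) = conn.execute("""
    SELECT GROUP_CONCAT(CASE WHEN senderid = 1 THEN recid ELSE senderid END, ', ')
    FROM test
    WHERE senderid = 1 OR recid = 1
""").fetchone()
print(trans)
```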
You can simply use `union all` to get your desired result. The first query will return all transactions which have senderid=1, the second query will return rows for recid=1, then combine using union all to get all transactions: ``` select * from test where senderid=1 union all select * from test where recid =1 ```
How to get distinct results in one row from two column of same table?
[ "", "mysql", "sql", "select", "" ]
Does anyone know how to create a query to find out if the data in one column contains (like function) the data in another column? For example: ``` ID||First_Name || Last_Name ------------------------ 1 ||Matt || Doe ------------------------ 2 ||Smith || John Doe ------------------------ 3 ||John || John Smith ``` Find all rows where Last_name contains First_name. The answer is ID 3. Thanks in advance.
Here's one way to do it (note the argument order: INSTR searches its first argument for its second, so the haystack column comes first): ``` Select * from TABLE where instr(last_name, first_name) >= 1; ```
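Worth double-checking the argument order: in both Oracle and SQLite, `INSTR(haystack, needle)` searches the first argument for the second, so testing whether last_name contains first_name means passing last_name first. A runnable check with the question's rows (SQLite via Python; the table name is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (ID INTEGER, First_Name TEXT, Last_Name TEXT)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [(1, 'Matt', 'Doe'), (2, 'Smith', 'John Doe'), (3, 'John', 'John Smith')],
)
# instr returns the 1-based position of the match, or 0 when there is none.
hits = conn.execute(
    "SELECT ID FROM people WHERE instr(Last_Name, First_Name) >= 1"
).fetchall()
print(hits)  # only row 3: 'John Smith' contains 'John'
```

With the arguments reversed, no row in this sample would match, since no First_Name contains its Last_Name.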
Try this: ``` select * from TABLE where last_name LIKE '%' + first_name + '%' ```
find if one column contains another column
[ "", "mysql", "sql", "oracle", "" ]
Here's my Oracle (11g) table: ``` -------------------------- |MyTable | -------------------------- |UserID |Date | -------------------------- |1 |4/29/2011 | |1 |6/13/2013 | |2 |5/3/2001 | |2 |2/3/2011 | |3 |12/3/2009 | |3 |4/3/2011 | -------------------------- ``` If I perform the following SQL: ``` SELECT MAX(Date) AS upd_dt, UserID FROM MyTable GROUP BY upd_dt, UserID ``` I get: ``` -------------------------- |User ID |Date | -------------------------- |1 |6/13/2013 | |2 |2/3/2011 | |3 |4/3/2011 | -------------------------- ``` Which I understand. I now want to perform a SELECT on these results and get the row with the most recent date and its userID. Is there a way to SELECT from a SELECT? Something like: ``` SELECT MAX(upd_dt) AS maxdt, UserID FROM ( SELECT MAX(Date) AS upd_dt, UserID FROM MyTable GROUP BY upd_dt, UserID ) GROUP BY maxdt, UserID ```
I would say that your first query should be more like: ``` SELECT MAX(Date) AS upd_dt, UserID FROM MyTable GROUP BY UserID ``` For your second query, yes, you can use subqueries. And I think you don't need to aggregate: ``` SELECT * FROM ( SELECT Date, UserID FROM MyTable ORDER BY Date DESC ) WHERE ROWNUM < 2; ``` Note that you need to put the `ORDER BY` in the *inner* query and then filter with `ROWNUM` in the *outer* query. Otherwise what you are doing is `SELECT`ing the first retrieved row (whichever that may be) and then `ORDER`ing that single row. Note also that `ROWNUM` will in general **not** work as you expect unless you restrict filtering to less-than (`<`).
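The "order in the inner query, cap in the outer" shape can be sanity-checked outside Oracle. In SQLite (via Python) `LIMIT 1` takes `ROWNUM < 2`'s place; this uses the question's data, with the Date column renamed `d` for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (UserID INTEGER, d TEXT)")
conn.executemany(
    "INSERT INTO MyTable VALUES (?, ?)",
    [(1, '2011-04-29'), (1, '2013-06-13'), (2, '2001-05-03'),
     (2, '2011-02-03'), (3, '2009-12-03'), (3, '2011-04-03')],
)
# Order first, then take one row -- the moral equivalent of
# ORDER BY inside the subquery, ROWNUM < 2 outside it.
latest = conn.execute(
    "SELECT UserID, d FROM MyTable ORDER BY d DESC LIMIT 1"
).fetchone()
print(latest)
```

ISO-formatted date strings sort correctly as text, which is why the ORDER BY works here without a date type.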
I think you can do it without subquery: ``` SELECT MAX(Date) AS upd_dt, MAX(UserID) keep(dense_rank last order by Date) as UserID FROM MyTable; ``` To clarify this part: `MAX(UserID)`. Consider having two rows with the same max `Date` and different `UserID`. ``` -------------------------- |MyTable | -------------------------- |UserID Date | -------------------------- |1 |6/13/2013 | |2 |6/13/2013 | -------------------------- ``` So you have to decide which one to pick. With that aggregate `MAX(UserID)` or maybe `MIN(UserID)` you can vary the result.
Oracle: Select From a Select
[ "", "sql", "oracle", "" ]
Let's say that I have a field named "control". If "control" is null, then I have to update fields "control", "f1", "f2", "f3", "f4", "f5". If "control" is NOT null, I only have to update "f4" and "f5". How am I supposed to achieve this goal? I tried something like: ``` UPDATE table SET control = IF(control IS NULL, 1, do_nothing), f1 = IF(control IS NULL, value1, do_nothing), f2 = IF(control IS NULL, value2, do_nothing), f3 = IF(control IS NULL, value3, do_nothing), f4 = value4, f5 = value5 WHERE id = XX ``` but "control", once set to 1, is not null anymore, so the other updates (all but f4 and f5) are not processed. Moreover, how do I tell the IF statement to "do_nothing" on the ELSE branch? Getting confused. I thought of making a select and a nested update, but got many errors. Thanks everyone.
Done! Thanks for the advice. For whomever it may be useful: ``` DELIMITER | CREATE PROCEDURE `setLastLogin` ( IN `ip_user` varchar(255), IN `user_id` int ) BEGIN /* Procedure text */ SELECT control INTO @con FROM tbl_users WHERE id = user_id; IF (@con IS NULL) THEN UPDATE tbl_users SET control = 1, date_control = NOW(), ip_control = ip_user, date_last_login = NOW(), ip_last_login = ip_user WHERE id = user_id; ELSE UPDATE tbl_users SET date_last_login = NOW(), ip_last_login = ip_user WHERE id = user_id; END IF; END| DELIMITER ; ```
You can use a stored procedure or this can be achieved with two statements. ``` UPDATE table SET control = 1, f1 = value1, f2 = value2, f3 = value3, f4 = value4, f5 = value5 WHERE id = XX and control IS NULL ``` and ``` UPDATE table SET f4 = value4, f5 = value5 WHERE id = XX and control IS NOT NULL ```
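A minimal demonstration of the two-statement approach (SQLite via Python; cut down to three data columns, which is an invention for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, control INTEGER, f1 TEXT, f4 TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?, ?)",
    [(1, None, None, None), (2, 1, 'kept', 'old')],
)

# Rows whose control is still NULL get the full set of fields...
conn.execute("UPDATE t SET control = 1, f1 = 'init', f4 = 'now' WHERE control IS NULL")
# ...and rows already initialised only refresh the "last seen" fields.
# (Re-touching row 1 here is harmless: it writes the same f4 value again.)
conn.execute("UPDATE t SET f4 = 'now' WHERE control IS NOT NULL")

rows = conn.execute("SELECT id, control, f1, f4 FROM t ORDER BY id").fetchall()
print(rows)
```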
mysql multiple updates only if one field is null
[ "", "mysql", "sql", "if-statement", "sql-update", "" ]
I have 2 tables: - Table user: ID (primary key), Name, PhoneNumber. - Table class: ID (primary key), Subject (primary key). I want to select ID, Name, PhoneNumber from table user for the IDs that have a record in table class, without duplicate IDs. For example: ``` ID Name PhoneNumber 1 a 012312 2 b 345678 3 c 232321 ID Subject 2 abc 3 def 2 def 3 abc ``` The result will be ``` ID Name PhoneNumber 2 b 345678 3 c 232321 ``` Any help would be great.
``` SELECT DISTINCT id,name,phonenumber FROM user JOIN class on user.id = class.ID ``` or ``` SELECT id,name,phonenumber FROM user WHERE id IN (SELECT id FROM class) ``` or ``` SELECT id,name,phonenumber FROM user WHERE EXISTS (select 1 from class where user.id = class.id) ```
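All three forms return the same rows; a quick equivalence check with the question's data (SQLite via Python, with ORDER BY added so the results compare deterministically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (ID INTEGER PRIMARY KEY, Name TEXT, PhoneNumber TEXT);
    CREATE TABLE class (ID INTEGER, Subject TEXT);
    INSERT INTO user VALUES (1, 'a', '012312'), (2, 'b', '345678'), (3, 'c', '232321');
    INSERT INTO class VALUES (2, 'abc'), (3, 'def'), (2, 'def'), (3, 'abc');
""")
q_join = ("SELECT DISTINCT user.ID, Name, PhoneNumber FROM user "
          "JOIN class ON user.ID = class.ID ORDER BY user.ID")
q_in = ("SELECT ID, Name, PhoneNumber FROM user "
        "WHERE ID IN (SELECT ID FROM class) ORDER BY ID")
q_exists = ("SELECT ID, Name, PhoneNumber FROM user "
            "WHERE EXISTS (SELECT 1 FROM class WHERE user.ID = class.ID) ORDER BY ID")
results = [conn.execute(q).fetchall() for q in (q_join, q_in, q_exists)]
print(results[0])
```

The DISTINCT is only needed in the join form, because a user with several class rows would otherwise appear once per match.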
``` SELECT distinct ID, Name, PhoneNumber FROM User, Class WHERE User.ID = Class.ID ```
Select unique record from 2 table
[ "", "sql", "sql-server", "" ]
I am using PostgreSQL 9.3. I have 2 tables like the following: ``` create table table1 (col1 int, col2 int); create table table2 (col2 int, col4 int); insert into table1 values (1,2),(3,4); insert into table2 values (10,11),(30,40),(50,60); ``` My expected result set is as follows: ``` COL1 table1_COL2 table2_COL2 COL4 1 2 10 11 3 4 30 40 (null) (null) 50 60 ``` I have tried to use `with` and `join` but am not getting the expected result. I am not intending to join these two tables; I only want the results to come back in one result set so that I don't need to query the database twice.
[moved from a comment to an answer, at OP's request] The best way to do this is — not to do it. It's a bad idea. You have two separate and unrelated queries, you should just run one and then the other. (If you really feel strongly about running them simultaneously and fake-combining the results, you can use subqueries with `row_number()` and then `FULL OUTER JOIN` on that. So you can. But you shouldn't.)
Since you have not provided any join condition, you can do it like this. ``` SELECT table1.col1 AS table1_COL1 , table1.col2 AS table1_COL2 , table2.col1 AS table2_COL3 , table2.col2 AS table2.col4 FROM table1,table2 ```
Combine 2 different tables data using select query
[ "", "sql", "postgresql", "select", "" ]
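If the two result sets really must be lined up positionally in one output, doing it client-side (as the accepted answer above recommends) is straightforward. A sketch in Python, with sqlite3 standing in for PostgreSQL and using the question's sample data:

```python
import sqlite3
from itertools import zip_longest

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (col1 INTEGER, col2 INTEGER);
    CREATE TABLE table2 (col2 INTEGER, col4 INTEGER);
    INSERT INTO table1 VALUES (1, 2), (3, 4);
    INSERT INTO table2 VALUES (10, 11), (30, 40), (50, 60);
""")

rows1 = conn.execute("SELECT col1, col2 FROM table1 ORDER BY col1").fetchall()
rows2 = conn.execute("SELECT col2, col4 FROM table2 ORDER BY col2").fetchall()

# Pair the two result sets row-by-row; positions with no partner row
# become (None, None), matching the NULLs in the question's expected output.
combined = [
    tuple(a) + tuple(b)
    for a, b in zip_longest(rows1, rows2, fillvalue=(None, None))
]
print(combined)
```

This avoids the awkward `row_number()` plus `FULL OUTER JOIN` trick entirely, while still issuing the two queries in one round of work.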
I've been developing a database in which one of the tables has the following design: <http://imageshack.us/scaled/landing/843/3z08.jpg> I've tried to execute the following UPDATE command: ``` UPDATE TrainerPokemon SET PokemonID = 2, lvl = 55, AbilityID = 2, MoveSlot1 = 8, MoveSlot2 = 9, MoveSlot3 = 6, MoveSlot4 = 7 WHERE ID = 48 ``` This was the data representation before the execution of the UPDATE command: <http://imageshack.us/scaled/landing/853/01aa.jpg> And this was the data representation after the execution: <http://imageshack.us/scaled/landing/841/ul5j.jpg> As the image above shows, it is clear that the UPDATE command behaved like an INSERT command. Honestly, I've never seen this kind of behavior in all the years that I've worked with databases and the SQL language. What could have happened here?
If you are truly sending this statement, as it's shown, to the server then the only artifact that could change the behavior is a trigger.
Unless it's a `TRIGGER`, I'd wager some test environment is erroneously pointing to this database.
SQL Update command performs an insertion instead of an update
[ "", "sql", "sql-server", "sql-update", "sql-insert", "" ]
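Both answers above point at a trigger. As a hedged illustration only (SQLite with a hypothetical trigger and simplified columns, not the asker's actual SQL Server schema), here is how a trigger can make a plain UPDATE appear to insert a row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TrainerPokemon (ID INTEGER, PokemonID INTEGER, Lvl INTEGER);
    INSERT INTO TrainerPokemon VALUES (48, 1, 50);

    -- A trigger like this makes every UPDATE also insert a new row,
    -- which is exactly the "UPDATE behaves like INSERT" symptom.
    CREATE TRIGGER trainer_audit AFTER UPDATE ON TrainerPokemon
    BEGIN
        INSERT INTO TrainerPokemon VALUES (NEW.ID + 1, NEW.PokemonID, NEW.Lvl);
    END;
""")

conn.execute("UPDATE TrainerPokemon SET PokemonID = 2, Lvl = 55 WHERE ID = 48")
rows = conn.execute(
    "SELECT ID, PokemonID, Lvl FROM TrainerPokemon ORDER BY ID"
).fetchall()
print(rows)
```

After the single UPDATE, the table holds both the updated row and a new one, so from the outside the UPDATE looks like it performed an insertion.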
Folks, I have researched this question first and came up with nothing for my specific issue. I found `SUM/CASE`, which is neat but not exactly what I need. Here is my situation: I have been asked to report back the total number of people who meet **5 out of 8 conditions**. I am having trouble coming up with the best way of doing this. It must be something to do with having a `counter` for each condition and then adding the counters at the end, returning the count of people who met 5 of the 8 conditions (call them conditions a - h). So can you do a count of a count? Something like ``` if exists (code for condition A) 1 ELSE 0 if exists (code for condition B) 1 ELSE 0 etc sum(count) ``` Thank you
I ended up completing this by using a WITH statement, something like this: WITH ( select statement for first condition AS blah, select statement for second condition AS blah, select statement for third condition AS blah, select statement for fourth condition AS blah, select statement for fifth condition AS blah, select statement for sixth condition AS blah, select statement for seventh condition AS blah, select statement for eighth condition AS blah ) SELECT CASE WHEN ... (8 cases based on the 8 selects above). I just put the results in a spreadsheet and did all the math in Excel.
Since the conditions are spread across rows, you can do this by combining `MAX()` and a `CASE` statement in a `HAVING` clause: ``` SELECT person_ID FROM YourTable GROUP BY Person_ID HAVING MAX(CASE WHEN ConditionA THEN 1 END) + MAX(CASE WHEN ConditionB THEN 1 END) + MAX(CASE WHEN ConditionC THEN 1 END) + MAX(CASE WHEN ConditionD THEN 1 END) + MAX(CASE WHEN ConditionE THEN 1 END) + MAX(CASE WHEN ConditionF THEN 1 END) + MAX(CASE WHEN ConditionG THEN 1 END) + MAX(CASE WHEN ConditionH THEN 1 END) >= 5 ```
SQL Server counting unrelated conditions
[ "", "sql", "sql-server", "sql-server-2008", "count", "conditional-statements", "" ]
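The `MAX(CASE ...)` counting pattern from the second answer above, sketched in SQLite with made-up data: conditions are stored one row per (person, condition), and we keep people meeting at least 5 of 8. An `ELSE 0` is added here (the original answer omitted it) so that unmet conditions count as zero rather than NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person_conditions (person_id INTEGER, cond TEXT);
    -- person 1 meets conditions a-e (5 of 8); person 2 meets only a and b
    INSERT INTO person_conditions VALUES
        (1, 'a'), (1, 'b'), (1, 'c'), (1, 'd'), (1, 'e'),
        (2, 'a'), (2, 'b');
""")

qualifying = conn.execute("""
    SELECT person_id
    FROM person_conditions
    GROUP BY person_id
    HAVING MAX(CASE WHEN cond = 'a' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'b' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'c' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'd' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'e' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'f' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'g' THEN 1 ELSE 0 END)
         + MAX(CASE WHEN cond = 'h' THEN 1 ELSE 0 END) >= 5
""").fetchall()
print(qualifying)
```

Only person 1 survives the HAVING clause, which is the "count of counts" the asker was after.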
I googled it enough but for some reason, mine doesn't work. The column name is **CD**, its type is **VARCHAR2(10Byte)**, and the table name is **TB\_POT\_ECD\_CD**. I want to change the size of the column to **VARCHAR2(100Byte)**. ``` ALTER TABLE TB_POT_ECD_CD MODIFY(CD VARCHAR2(100)) ``` didn't work. Can anyone look at it?
You may `remove` that column and `add` it again with new size ``` ALTER TABLE TB_POT_ECD_CD DROP COLUMN CD ALTER TABLE TB_POT_ECD_CD ADD CD VARCHAR2(100) ```
It is perfectly possible to modify a column containing data, including changing its size; the one exception is that we cannot make a column smaller than the largest value existing in the column **(1)**. This is the syntax ... ``` alter table TB_POT_ECD_CD modify cd varchar2(100 byte) / ``` ... and here is a [SQL fiddle](http://sqlfiddle.com/#!4/5082d/1) too. **(1)** This is true of 11gR2 and perhaps earlier versions; in older versions of Oracle we could only shrink empty columns. Thanks to @StanMcgeek for pointing this out to me. --- > "I get ORA-00942. Table or View does not exist." That is a problem with your SQL. Probably you've misspelled the table, or you're trying to run the query from the wrong schema.
<SQL>How to change the size of VARCHAR2 of the table
[ "", "sql", "oracle", "" ]
I have 2 large tables, with around 25 columns each. They both contain the same sort of data but the columns are in different orders. How can I combine these tables without having to re-arrange my query code? I'd rather not do that for a ~600 line script. If needed, I can give an example of what it looks like but I don't really see the need. I have so far tried; ``` SELECT * FROM [guest].table1 UNION ALL SELECT * FROM [guest].table2; SELECT * FROM [guest].table1, [guest].table2; ``` I also tried inserting the data like so; ``` SET IDENTITY_INSERT [guest].table1 ON; SET IDENTITY_INSERT [guest].table2 ON; INSERT INTO [guest].table1 id, short_name, name, invention_title, reference, client_id, client_ref, date_case_opened, date_case_closed, case_type, notes, fee_earner, created, last_updated, file_location, foreign_attorney_id, foreign_attorney_ref, country_code, application_number, filing_date, publication_number, invoice_currency, publication_date, status, pct_application_number, pct_case_id, national_phase_entry, base_number, base_country, base_date, base_case_id, divisional_date_lodged, illustrative_image, parent_case_id, parent_application_number) select id, short_name, name, invention_title, reference, client_id, client_ref, date_case_opened, date_case_closed, case_type, notes, fee_earner, created, last_updated, file_location, foreign_attorney_id, foreign_attorney_ref, country_code, application_number, filing_date, publication_number, invoice_currency, publication_date, status, pct_application_number, pct_case_id, national_phase_entry, base_number, base_country, base_date, base_case_id, divisional_date_lodged, illustrative_image, parent_case_id, parent_application_number FROM [guest].table2; ``` But that gives me this error: > SQL Error: Cannot insert explicit value for identity column in table 'exported\_cases' when IDENTITY\_INSERT is set to OFF. Any help would be greatly appreciated! 
To clarify, I am exporting a database that currently exists in 2 portions (table1 and table2). It already has the PK generated, and I am not inserting into it until it's completely exported, at which point the PK will be set to a number higher than any current PK. (I need to keep the ID field preserved.)
As everyone is focusing on the insert, let me answer the select part :) For the UNION to work, make sure you explicitly select all the columns in the same order: ``` SELECT id, short_name, name, invention_title, reference, client_id, client_ref, date_case_opened, date_case_closed, case_type, notes, fee_earner, created, last_updated, file_location, foreign_attorney_id, foreign_attorney_ref, country_code, application_number, filing_date, publication_number, invoice_currency, publication_date, status, pct_application_number, pct_case_id, national_phase_entry, base_number, base_country, base_date, base_case_id, divisional_date_lodged, illustrative_image, parent_case_id, parent_application_number FROM [guest].table1 UNION ALL SELECT id, short_name, name, invention_title, reference, client_id, client_ref, date_case_opened, date_case_closed, case_type, notes, fee_earner, created, last_updated, file_location, foreign_attorney_id, foreign_attorney_ref, country_code, application_number, filing_date, publication_number, invoice_currency, publication_date, status, pct_application_number, pct_case_id, national_phase_entry, base_number, base_country, base_date, base_case_id, divisional_date_lodged, illustrative_image, parent_case_id, parent_application_number FROM [guest].table2 ``` This result set you can use to query, insert into a new table or whatever :)
You should not insert the identity element; it is generated automatically. Just try inserting everything except id (which is your identity column). Please try this: ``` SET IDENTITY_INSERT [guest].table1 ON; SET IDENTITY_INSERT [guest].table2 ON; INSERT INTO [guest].table1 id, short_name, name, invention_title, reference, client_id, client_ref, date_case_opened, date_case_closed, case_type, notes, fee_earner, created, last_updated, file_location, foreign_attorney_id, foreign_attorney_ref, country_code, application_number, filing_date, publication_number, invoice_currency, publication_date, status, pct_application_number, pct_case_id, national_phase_entry, base_number, base_country, base_date, base_case_id, divisional_date_lodged, illustrative_image, parent_case_id, parent_application_number) select id, short_name, name, invention_title, reference, client_id, client_ref, date_case_opened, date_case_closed, case_type, notes, fee_earner, created, last_updated, file_location, foreign_attorney_id, foreign_attorney_ref, country_code, application_number, filing_date, publication_number, invoice_currency, publication_date, status, pct_application_number, pct_case_id, national_phase_entry, base_number, base_country, base_date, base_case_id, divisional_date_lodged, illustrative_image, parent_case_id, parent_application_number FROM [guest].table2; SET IDENTITY_INSERT [guest].table1 OFF; SET IDENTITY_INSERT [guest].table2 OFF; ```
SQL Server : combine two tables
[ "", "sql", "sql-server", "" ]
I'm trying to make a copy of a database in Azure. This requires that the user logged in be the db\_owner of the database. Unfortunately the person who created the database has left and we don't have their login credentials and they don't remember them either. Is there any way to change the db\_owner in Azure databases? Note: Doing `EXEC sp_addrolemember 'db_owner', 'login1User'` doesn't work for this as the actual owner account is needed to copy a database in Azure.
You probably want to reset the password on the server (not the database). When you click on the "SQL Databases" tab on the portal, you'll go to a list of your databases. There will be a "Server" column there; the entries in that column are hyperlinks. Click on the server you don't know the password for. On the Dashboard page for that server you'll see a "Reset Administrator Password" link on the right under "quick glance". Once you do that, you can log into the management console for the database and change the logins with ALTER LOGIN.
``` ALTER AUTHORIZATION ON DATABASE::<YourDatabaseName> to [NewOwner]; ```
How do you change the owner of an Azure database
[ "", "sql", "database", "azure", "copy", "" ]
From example@gmail.com to ``` exam***@gm***.com ``` Can anyone mask the email using a SQL query, as shown above? That is, take `example@gmail.com` and convert it to ``` exam***@gm***.com ```
``` DECLARE @String VARCHAR(100) = 'example@gmail.com' SELECT LEFT(@String, 3) + '*****@' + REVERSE(LEFT(RIGHT(REVERSE(@String) , CHARINDEX('@', @String) +2), 2)) + '******' + RIGHT(@String, 4) ``` **Result** ``` exa*****@gm******.com ``` Just thought of another simpler solution ``` SELECT LEFT(@String, 3) + '*****@' + SUBSTRING(@String, CHARINDEX('@',@String)+1,2) + '*******' + RIGHT(@String, 4) ``` This will also give you the exact same results.
While inserting, you can do the same from C# code. Try this with another example: ``` Declare @input varchar(50)='example@gmail.com ' select left(@input,4)+replicate('*',len(substring(@input,5,charindex('@',@input)-5))) +substring(@input,charindex('@',@input),3) +replicate('*',len(substring(@input,charindex('@',@input)+3,len(@input)-charindex('.',@input)))) +substring(@input,charindex('.',@input),len(@input)) ```
Replacing certain character in email addresses with '*' in an SQL query
[ "", "sql", "sql-server", "sql-server-2008", "" ]
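A rough SQLite translation of the simpler expression in the accepted answer above, for anyone not on SQL Server: `substr` and `instr` replace T-SQL's `LEFT`/`CHARINDEX`/`RIGHT`. Note it yields `exa...` rather than the `exam...` asked for in the question, just like the original answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite equivalents: LEFT(s, n)  -> substr(s, 1, n)
#                     CHARINDEX   -> instr
#                     RIGHT(s, n) -> substr(s, -n)
masked, = conn.execute("""
    SELECT substr(email, 1, 3) || '*****@'
        || substr(email, instr(email, '@') + 1, 2) || '*******'
        || substr(email, -4)
    FROM (SELECT 'example@gmail.com' AS email)
""").fetchone()
print(masked)
```

The expression keeps the first three characters, the first two characters of the domain, and the last four characters, masking everything in between.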
If I have the following tables tbl1: ``` +------+-----------+ | sex | type | +------+-----------+ | m | rolls | | f | acom | +------+-----------+ ``` tbl2: ``` +------+-----------+ | age | type | +------+-----------+ | 12 | rolls | | 25 | acom | +------+-----------+ ``` How do I create a view that shows the two tables merged, with sex and age becoming category? ``` +-----------+-----------+ | category | type | +-----------+-----------+ | m | rolls | | f | acom | | 12 | rolls | | 25 | acom | +-----------+-----------+ ``` Thanks.
You can use a `union` statement ``` SELECT `sex` as `category`,`type` FROM `tbl1` UNION SELECT `age` as `category`,`type` FROM `tbl2` ```
you can that by using a union ``` select sex as category, type from tbl1 union select age as category, type from tbl2 ```
Joining tables MySQL
[ "", "mysql", "sql", "join", "" ]
I have a database diagram and need to create a Database with different tables. ![enter image description here](https://i.stack.imgur.com/kPdC5.png) This is my code: ``` use FirmaLieferungen; drop table liefert; drop table rabatt; drop table artikel; drop table firma; SET DATEFORMAT dmy; create table firma ( fnr integer primary key, name char(10), jahrgruendung integer, -- Gründungsjahr land char(3) ); insert into firma values (101,'Schwer' ,1890,'A' ); insert into firma values (102,'Schmal' ,1901,'CH' ); insert into firma values (103,'Tief' ,1945,'I' ); insert into firma values (104,'Breit' ,1950,'A' ); insert into firma values (105,'Leicht' ,1945,'F' ); insert into firma values (106,'Hoch' ,1920,'CH' ); insert into firma values (107,'Hell' ,1900,'A' ); create table artikel ( fnr integer, lfdnr integer, bezeichnung char(10), preis decimal(6,2), einheit char(3), land char(3), primary key(fnr, lfdnr), foreign key(fnr) references firma ); insert into artikel values (101,1,'Schaufel' ,12.30,'Stk','A' ); insert into artikel values (101,2,'Hacke' ,15.20,'Stk','F' ); insert into artikel values (102,1,'Spaten' ,13.00,'Stk','A' ); insert into artikel values (103,1,'Schere' , 8.00,'Stk','A' ); insert into artikel values (103,2,'Messer' ,10.60,'Stk','F' ); insert into artikel values (103,3,'Schnur' , 1.10,'m' ,'D' ); insert into artikel values (105,1,'Schnur' , 0.40,'m' ,'D' ); insert into artikel values (106,1,'Hacke' ,20.70,'Stk','CH' ); insert into artikel values (106,2,'Draht' , 0.60,'m' ,'CH' ); create table liefert ( fnrvon integer, fnran integer, fnr integer, lfdnr integer, datum date, menge decimal(8,2) primary key(fnrvon, fnran, fnr, lfdnr, datum), foreign key(fnr, lfdnr) references artikel, foreign key(fnr) references firma ); insert into liefert values (101,102,101,1,'01.02.1999', 3.00); insert into liefert values (101,102,101,1,'02.01.2000', 2.00); insert into liefert values (101,104,101,2,'13.02.2000', 11.00); insert into liefert values 
(101,104,101,1,'24.11.1999', 19.00); insert into liefert values (101,105,103,3,'31.03.2001', 1553.00); insert into liefert values (102,101,102,1,'21.04.1999', 28.00); insert into liefert values (102,101,101,1,'11.12.1999', 1.00); insert into liefert values (102,104,101,1,'04.07.2000', 63.00); insert into liefert values (103,101,103,3,'21.04.1999', 3.25); insert into liefert values (103,104,101,1,'08.02.1998', 17.00); insert into liefert values (104,102,105,1,'19.11.2001', 132.50); insert into liefert values (104,106,101,1,'04.07.2000', 22.00); insert into liefert values (106,102,101,1,'07.08.2002', 81.00); insert into liefert values (106,102,106,2,'01.06.2002', 21.30); insert into liefert values (106,104,101,1,'26.09.2001', 2.00); create table rabatt ( fnrvon integer, fnran integer, prozent decimal (5,2), primary key (fnrvon, fnran), foreign key (fnrvon, fnran) references firma ); insert into rabatt values (101,102, 5.25); insert into rabatt values (102,101, 5.50); insert into rabatt values (101,103,15.75); insert into rabatt values (103,102, 7.50); insert into rabatt values (102,103,10.50); insert into rabatt values (105,106, 5.25); insert into rabatt values (104,101, 7.50); select * from rabatt; select * from firma; select * from liefert; select * from artikel; ``` But there's an error in the 'rabatt' creation, it says that the last command is invalid. > foreign key (fnrvon, fnran) references firma This is somehow wrong, but I don't know why... Is the diagram wrong? There are also two keys going from 'liefert' to 'firma' how do I do this? Please help me! Thanks! (I'm using Microsoft SQL Server 2008)
When you reference another table, you should specify which column(s) you are referencing. So, for example: ``` create table liefert ( fnrvon integer, fnran integer, fnr integer, lfdnr integer, datum date, menge decimal(8,2) primary key(fnrvon, fnran, fnr, lfdnr, datum), foreign key(fnr, lfdnr) references artikel (fnr, lfdnr), foreign key(fnr) references firma (fnr) ); ```
Mureinik is right; he just answered one second before me. For the sake of the question I created a Fiddle. Improve it for further questions: <http://sqlfiddle.com/#!6/6a1b7>
SQL: 2 Foreign Keys referencing one Primary Key
[ "", "sql", "" ]
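The fix in the accepted answer above (naming the referenced columns explicitly) can be exercised in SQLite too, with foreign-key enforcement switched on. This is a trimmed-down sketch of the question's schema, not the full DDL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE firma (fnr INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE artikel (
        fnr INTEGER,
        lfdnr INTEGER,
        PRIMARY KEY (fnr, lfdnr),
        FOREIGN KEY (fnr) REFERENCES firma (fnr)
    );
    CREATE TABLE liefert (
        fnr INTEGER,
        lfdnr INTEGER,
        -- the composite FK names the referenced columns explicitly
        FOREIGN KEY (fnr, lfdnr) REFERENCES artikel (fnr, lfdnr)
    );
    INSERT INTO firma VALUES (101, 'Schwer');
    INSERT INTO artikel VALUES (101, 1);
""")

conn.execute("INSERT INTO liefert VALUES (101, 1)")  # OK: parent row exists
try:
    conn.execute("INSERT INTO liefert VALUES (999, 9)")  # no such artikel row
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)
```

The second insert fails, demonstrating that the composite foreign key is actually enforced once the referenced columns are spelled out.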
I am trying to have two WHERE conditions. The SQL statement works before I add ``` AND fborders.date="+todayDate); ``` So is there an error in my SQL statement, because it does not work? ``` DateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy"); Date date = new Date(); String todayDate = dateFormat.format(date); (" SELECT Id,Name,quantity,date,time, FROM orders WHERE status='pending' AND date="+todayDate ); ```
You missed the single quote signs; the date needs quotes around it. Here you go: ``` AND fborders.date='" + todayDate + "'" ```
The date needs to be in quotes too. Change it to this: ``` AND date='" + todayDate + "'" ```
WHERE AND SQL statement
[ "", "mysql", "sql", "" ]
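Both answers above patch the string concatenation by adding quotes. A safer sketch avoids concatenation entirely with parameter placeholders (shown here in Python with sqlite3 and made-up data, though the same idea applies to Java's PreparedStatement against MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, name TEXT, quantity INTEGER,
                         date TEXT, status TEXT);
    INSERT INTO orders VALUES (1, 'tea', 2, '2014-05-01', 'pending');
    INSERT INTO orders VALUES (2, 'rice', 1, '2014-05-02', 'shipped');
""")

today = '2014-05-01'  # in real code this would come from the date API
# Placeholders handle quoting for you, so the missing-quotes bug from
# the question cannot happen (and SQL injection is prevented too).
rows = conn.execute(
    "SELECT id, name, quantity FROM orders WHERE status = ? AND date = ?",
    ("pending", today),
).fetchall()
print(rows)
```

Only the pending order from today's date comes back, with no quoting done by hand.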
Please help me to build a SQL select to assign (software development) tasks to a software release. Actually this is a fictive example to solve my real business-specific problem. I have a relation Tasks: ``` ID Effort_In_Days 3 3 1 2 6 2 2 1 4 1 5 1 ``` I want to distribute the Tasks to releases which are at most 2 days long (tasks longer than 2 shall still be put into one release). In my real problem I have many more "days" available to distribute "tasks" to. Expected output: ``` Release Task_ID 1 3 2 1 3 6 4 2 4 4 5 5 ``` I think I need to use analytic functions, something with `sum(effort_in_days) over` and so on, to get the result. But I haven't used analytic functions much and didn't find an example that's close enough to my specific problem. I need to build groups (releases) when a sum (>= 2) is reached.
I would do something like: ``` with data as ( select 3 ID, 3 Effort_In_Days from dual union all select 1 ID, 2 Effort_In_Days from dual union all select 6 ID, 2 Effort_In_Days from dual union all select 2 ID, 1 Effort_In_Days from dual union all select 4 ID, 1 Effort_In_Days from dual union all select 5 ID, 1 Effort_In_Days from dual ) select id, effort_in_days, tmp, ceil(tmp/2) release from ( select id, effort_in_days, sum(least(effort_in_days, 2)) over (order by effort_in_days desc rows unbounded preceding) tmp from data ); ``` Which results in: ``` ID EFFORT_IN_DAYS TMP RELEASE ---------- -------------- ---------- ---------- 3 3 2 1 1 2 4 2 6 2 6 3 2 1 7 4 4 1 8 4 5 1 9 5 ``` Basically, I am using least() to convert everything over 2 down to 2. Then I am putting all rows in descending order by that value and starting to assign releases. Since they are in descending order with a max value of 2, I know I need to assign a new release every time when I get to a multiple of 2. Note that if you had fractional values, you could end up with releases that do not have a full 2 days assigned (as opposed to having over 2 days assigned), which may or may not meet your needs. Also note that I am only showing all columns in my output to make it easier to see what the code is actually doing.
This is an example of a bin-packing problem (see [here](http://en.wikipedia.org/wiki/Bin_packing_problem)). There is not an optimal solution in SQL, that I am aware of, except in some boundary cases. For instance, if all the tasks have the same length or if all the tasks are >= 2, then there is an easy-to-find optimal solution. A greedy algorithm works pretty well. This is to put a given record in the first bin where it fits, probably going through the list in descending size order. If your problem is really as you state it, then the greedy algorithm will work to produce an optimal solution. That is, if the maximum value is 2 and the efforts are integers. There might even be a way to calculate the solution in SQL in this case. Otherwise, you will need pl/sql code to achieve an approximate solution.
Select to build groups by (analytic) sum
[ "", "sql", "oracle", "select", "" ]
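The accepted answer's logic is easy to restate outside SQL. A plain-Python sketch using the question's data (with an assumed release capacity of 2 days) reproduces the expected assignment: cap each effort at the capacity, walk the tasks in descending effort order, and start a new release each time the running total crosses a multiple of the capacity.

```python
import math

tasks = {3: 3, 1: 2, 6: 2, 2: 1, 4: 1, 5: 1}  # task id -> effort_in_days
capacity = 2

running = 0
assignment = {}
# Sort by effort descending (stable, so ties keep their input order),
# mirroring the "rows unbounded preceding" running sum in the SQL answer.
for task_id, effort in sorted(tasks.items(), key=lambda kv: -kv[1]):
    running += min(effort, capacity)       # least(effort_in_days, 2)
    assignment[task_id] = math.ceil(running / capacity)  # ceil(tmp/2)
print(assignment)
```

This yields exactly the Release/Task_ID pairs from the question's expected output, and it makes clear why the greedy approach is only optimal in special cases like this one (integer efforts capped at the bin size).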
This is a generic question. I have a query to get the names of students and their roll numbers from a table. In some cases two or more students have the same name. In that case alone I want to display their names along with their roll numbers. In other cases I want to display their names alone. How can the query be written for that? Is that possible in SQL, or do I have to perform the checking operation in the code-behind? Sql Query: ``` Select Name as Student_Name,RollNo from Students ```
To do this in SQL you need to do something like the following: ``` select case when m.Name is null then s.Name else s.Name + ' ' + s.RollNo end as Name from Students s left join ( select Name from Students group by Name having count(*) > 1 ) m on s.name = m.name ``` However I'm going to attempt a controversial answer because I feel this is UI logic and should be kept in the UI (you've tagged the question with ASP.NET so I can only assume you have a UI). I'm assuming C#. Inside your Student class I'd do the following: ``` public override ToString() { return this.Student_Name; } public string GetDisplayName(IEnumerable<Student> otherStudentsInList) { if(otherStudentsInList.Contains(this.ToString()) { return string.Concat(this.ToString(), " ", this.RollNo); } else { return this.ToString(); } } ```
Write a query with a CASE, and in the case condition GROUP BY name and if HAVING COUNT > 1, select name and roll number. In the ELSE statement just query for names.
Differentiating values with common names in a query
[ "", "asp.net", "sql", "" ]
I'm working on an Oracle database that has every table exposed in a view. I have made each view's name contain the name of the table it exposes, for clarity. Now I would like to find out if there are tables whose name is not contained in any view name, meaning that I forgot to create a view exposing them. If I query all_tables and all_views, let's assume that I get the following result: ``` SELECT table_name FROM all_tables: TABLE1 TABLE2 TABLE3 SELECT view_name FROM all_views: TABLE1_VIEW TABLE2_VIEW ``` I now would like to query the two results and obtain all the table names that are included in some view name and those that are not. In my example, TABLE1 and TABLE2 are included, but TABLE3 is not. I have tried looping over all the table names and looking for each one of them in the view names result set. I was wondering if there is a more direct approach.
Ok, I am not sure how clean this solution is going to be, but it's a quick-fix dirty approach at the least. Assumption is that, you have created views by appending a string at the end of the table name (which is unaltered) This will give you all tables, based on which a view is present ``` SELECT table_name FROM user_tables a WHERE EXISTS (SELECT 1 FROM user_views b WHERE REGEXP_LIKE(b.view_name, a.table_name)); ``` To get the tables, for which no views are present, just use `MINUS` ``` SELECT table_name FROM user_tables MINUS SELECT table_name FROM user_tables a WHERE EXISTS (SELECT 1 FROM user_views b WHERE REGEXP_LIKE(b.view_name, a.table_name)); ```
> Now I would like to find out if there are tables whose name is not contained in any view To find such tables we can get all table names we are interested in by querying `[dba|all|user]_tables` data dictionary view and `minus` those table names that are already *part of a view* by querying `[dba | all | user ]_dependencies` data dictionary view: ``` /* A couple of test tables */ SQL> create table t1( 2 col number 3 ) 4 ; Table created SQL> create table t2( 2 col number 3 ) 4 ; Table created /* a view */ SQL> create or replace view v_1 as 2 select * 3 from t1; ``` And here are tables that are not included in any view in current schema. In this example user\_\* data dictionary views have been used, but you are free to use `dba_*` or `all_*` or `user_*` data dictionary view. ``` SQL> column table_name format a10 SQL> select q.table_name 2 from user_tables q 3 minus 4 select t.referenced_name 5 from user_dependencies t 6 where type = 'VIEW' 7 and t.referenced_type = 'TABLE' 8 ; ``` Result: ``` TABLE_NAME ---------- T2 ```
Oracle - Searching for strings in one table that are contained in another table
[ "", "sql", "oracle", "" ]
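The same set-difference idea from the answers above works in any engine that exposes its catalog. A sketch against SQLite's `sqlite_master` (instead of Oracle's dictionary views), with hypothetical tables matching the question's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (col INTEGER);
    CREATE TABLE table2 (col INTEGER);
    CREATE TABLE table3 (col INTEGER);
    CREATE VIEW table1_view AS SELECT * FROM table1;
    CREATE VIEW table2_view AS SELECT * FROM table2;
""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
views = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'view'")]

# A table is "covered" if its name appears inside some view name;
# the uncovered ones are the views the author forgot to create.
uncovered = [t for t in tables if not any(t in v for v in views)]
print(uncovered)
```

Only `table3` comes back uncovered, mirroring the MINUS-based Oracle queries in the answers.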
I used to think that a foreign key and a secondary key are the same thing. After Googling, the results are even more confusing: some consider them to be the same, while others say that a secondary key is an index that doesn't have to be unique and allows faster access to data than the primary key. Can someone explain the difference? Or is it indeed a case of mixed terminology? Does it maybe differ per database type?
The definition in [wiki/Foreign\_key](http://en.wikipedia.org/wiki/Foreign_key) states that: > In the context of relational databases, a foreign key is a field (or > collection of fields) in one table that uniquely identifies a row of > another table. In other words, a foreign key is a column or a > combination of columns that is used to establish and enforce a link > between two tables. > > The table containing the foreign key is called the referencing or > child table, and the table containing the candidate key is called the > referenced or parent table. Take this example: A customer may place 0, 1 or more orders. From the point of view of the business, each customer is identified by a unique id (Primary Key), and instead of repeating the customer information with each order, we place a reference, or a pointer, to that unique customer id (the Customer's Primary Key) in the order table. By looking at any order, we can tell who placed it using the unique customer id. The relationship between the parent (Customer table) and the child table (Order table) is established when you set the value of the FK in the Order table after the Customer row has been inserted. Also, deleting a child row may affect the parent depending on your Referential Integrity settings (Cascading Rules) established when the FK was created. FKs help establish integrity in a relational database system. As for the "Secondary Key", the term refers to a structure of 1 or more columns that together help retrieve 1 or more rows of the same table. The word 'key' is somewhat misleading to some. The Secondary Key does not have to be unique (unlike the PK). It is not the Primary Key of the table. It is used to locate rows in the same table it is defined within (unlike the FK). Its enforcement is only through an index (either unique or not) and its implementation is optional. A table could have 0, 1 or more Secondary Key(s).
For example, in an Employee table, you may use an auto generated column as a primary key. Alternatively, you may decide to use the Employee Number or SSN to retrieve employee(s) information. Sometimes people mix the term "Secondary Key" with the term "Candidate Key" or "Alternate Key" (usually appears in Normalization context) but they are all different.
A foreign key is a key that references an index on some other table. For example, if you have a table of customers, one of the columns on that table may be a country column which would just contain an ID number, which would match the ID of that country in a separate Country table. That country column in the customer table would be a foreign key. A secondary key on the other hand is just a different column in the table that you have used to create an index (which is used to speed up queries). Foreign keys have nothing to do with improving query speeds.
Foreign keys vs secondary keys
[ "", "sql", "" ]
Sometimes my coworkers give me queries written like this: ``` SEL c.col1, b.col2, a.somecol, a.somemore FROM db.tbl1 A, db.tbl2 B, db.tbl3 C WHERE A.id = B.id and B.otherId = C.otherID GROUP BY 1,2 ORDER BY 1 ``` I just wanted to clarify what this actually means. Is it the same thing as this: ``` SELECT c.col1 b.col2 a.somecol a.somemore FROM db.tbl1 AS A INNER JOIN db.tbl2 AS B ON B.id = A.id INNER JOIN db.tbl3 AS C ON C.otherId = B.otherID GROUP BY c.col1, b.col2 ORDER BY c.col1 ``` I'm just trying to make things more clear and readible, but wanted to make sure I wasn't misinterpreting anything from a performance standpoint or something. I'm also replacing the aliases with more legible names, etc...
Someone else stated there are no benefits one way or the other, but that is incorrect. There are benefits to using explicit joins. There are no benefits to using implicit joins unless your database is so old it won't use explicit joins. Many accidental cross joins are prevented by using explicit joins. Intent is clear when you actually want a cross join. Implicit joins are harder to maintain, especially if you need to change to a left join, as mixing the two (and some databases do not support an implicit left join syntax) will often result in incorrect results. Unless you are working on a database version so old that it only accepts implicit joins, it is a bad choice to write one more than 20 years after something better came out. If I see any, they fail code review. Implicit joins are a SQL anti-pattern and there is no excuse to use one on a modern database.
That is the same as JOIN. SQL was standardized in 1986 but didn't have JOIN until the 3rd revision in 1992. As the commenters said, I have no idea what the GROUP BY's are doing. They're used with aggregates.
SQL queries without specific inner joins
[ "", "sql", "sql-server", "" ]
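A small demonstration of the equivalence of the two styles, and of the accidental cross-join hazard the first answer warns about, using sqlite3 with made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, v TEXT);
    CREATE TABLE b (id INTEGER, w TEXT);
    INSERT INTO a VALUES (1, 'x'), (2, 'y');
    INSERT INTO b VALUES (1, 'p'), (3, 'q');
""")

# Old comma-style (implicit) join with the filter in WHERE...
implicit = conn.execute(
    "SELECT a.v, b.w FROM a, b WHERE a.id = b.id").fetchall()

# ...is equivalent to the explicit INNER JOIN form.
explicit = conn.execute(
    "SELECT a.v, b.w FROM a INNER JOIN b ON b.id = a.id").fetchall()

# Forgetting the WHERE clause in the implicit style silently
# produces a cross join -- the main hazard of the comma syntax.
accidental_cross = conn.execute("SELECT a.v, b.w FROM a, b").fetchall()

print(implicit, len(accidental_cross))
```

The two join forms return identical rows, while the filterless comma join quietly returns every combination of rows from both tables.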
Because there are many tables with many columns, I need a query to search for column names in a specific table. example: ``` select column_name from table where column_name like '%ID%' ```
For Oracle ``` select COLUMN_NAME from ALL_TAB_COLUMNS where TABLE_NAME='mytable' and COLUMN_NAME like '%ID%'; ```
Assuming you are on an Oracle database, try this: ``` select column_name from user_tab_columns where table_name = 'MY_TABLE' and column_name like '%ID%'; ``` If the table is in a different schema, you can use: ``` select column_name from all_tab_columns where owner = 'TABLE_OWNER' and table_name = 'MY_TABLE' and column_name like '%ID%'; ```
search for column names in oracle
[ "", "sql", "oracle", "" ]
I have this query that does not work and I do not understand why. Each SELECT statement should return its results in descending order, but they're ordered ascending. Why? ``` (SELECT * FROM table WHERE deleted_at = 0 ORDER BY id DESC) UNION ALL (SELECT * FROM table WHERE deleted_at <> 0 ORDER BY id DESC) LIMIT 0,30 ``` I have to say, this query does not generate any error and the results are what I expect. They are just not well ordered.
There is no guarantee of ordering when using subqueries. If you want the results ordered by id descending, then use: ``` (SELECT * FROM table WHERE deleted_at = 0) UNION ALL (SELECT * FROM table WHERE deleted_at <> 0) order by id desc LIMIT 0,30; ``` However, I think the query you really want is: ``` select * from table order by deleted_at = 0 desc, id desc limit 0, 30; ``` This puts the `deleted_at = 0` rows first and then fills out the data with the rest of the data. Note: if `deleted_at` can be NULL and you want to filter them out too, then add a `where` clause for this filtering.
[from the manual](http://dev.mysql.com/doc/refman/5.5/en/union.html): > To apply ORDER BY or LIMIT to an individual SELECT, place the clause inside the parentheses that enclose the SELECT: > (SELECT a FROM t1 WHERE a=10 AND B=1 ORDER BY a LIMIT 10) > UNION > (SELECT a FROM t2 WHERE a=11 AND B=2 ORDER BY a LIMIT 10); > > However, use of ORDER BY for individual SELECT statements implies nothing about the order in which the rows appear in the final result because UNION by default produces an unordered set of rows. Therefore, the use of ORDER BY in this context is typically in conjunction with LIMIT, so that it is used to determine the subset of the selected rows to retrieve for the SELECT, even though it does not necessarily affect the order of those rows in the final UNION result. If ORDER BY appears without LIMIT in a SELECT, it is optimized away because it will have no effect anyway. If you need it sorted in total: ``` SELECT * FROM table WHERE deleted_at = 0 UNION ALL SELECT * FROM table WHERE deleted_at <> 0 ORDER BY deleted_at = 0 DESC, id DESC LIMIT 0,30 ``` But this is of source the same as: ``` SELECT * FROM table ORDER BY deleted_at = 0 DESC, id DESC LIMIT 0,30 ```
UNION ALL and ORDER BY at the same time
[ "", "mysql", "sql", "" ]
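The rule quoted above — a trailing `ORDER BY` sorts the whole compound result, not just the last branch — is easy to check outside MySQL. A minimal sketch using Python's built-in `sqlite3`, with a made-up table and rows standing in for the question's:

```python
import sqlite3

# Hypothetical rows standing in for the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, deleted_at INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 0), (2, 5), (3, 0), (4, 7)])

# The ORDER BY written after the second SELECT applies to the whole
# UNION ALL result, not only to the second branch.
rows = conn.execute("""
    SELECT id, deleted_at FROM t WHERE deleted_at = 0
    UNION ALL
    SELECT id, deleted_at FROM t WHERE deleted_at <> 0
    ORDER BY id DESC
    LIMIT 30
""").fetchall()
print(rows)  # every row, ordered by id descending
```

The deleted and non-deleted rows come back interleaved in one descending sequence, confirming that the sort was applied to the combined set.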
I'm trying to figure out how to query for readmissions on Server 2008r2. Here is the basic structure of the visit table. There are other fields but none that I thought would be helpful. One issue is that some of these may be transfers instead of discharges which I have no easy way to deduce but that issue can be ignored for now. I tried my hand at this but I guess my understanding of SQL needs more work. I tried to find any info I could online but none of the queries lead me to a useful conclusion or I just didn't understand. Any suggestions would be appreciated. EDIT: Readmission is if a patient returns within 30 days of previous discharge. ``` +---------+--------+-----------------+-----------------+ | VisitID | UID | AdmitDT | DischargeDT | +---------+--------+-----------------+-----------------+ | 12 | 2 | 6/17/2013 6:51 | 6/17/2013 6:51 | | 16 | 3 | 6/19/2013 4:48 | 6/21/2013 13:35 | | 18 | 3 | 6/11/2013 12:08 | 6/11/2013 12:08 | | 21 | 3 | 6/12/2013 14:40 | 6/12/2013 14:40 | | 22 | 3 | 6/13/2013 10:00 | 6/14/2013 12:00 | | 25 | 2 | 6/11/2013 16:13 | 6/11/2013 16:13 | | 30 | 1 | 6/20/2013 8:35 | 6/20/2013 8:35 | | 31 | 7 | 6/13/2013 6:12 | 6/13/2013 6:12 | | 34 | 3 | 6/12/2013 8:40 | NULL | | 35 | 1 | 6/12/2013 8:52 | NULL | | 38 | 2 | 6/12/2013 10:10 | 6/12/2013 10:10 | +---------+--------+-----------------+-----------------+ ``` Attempt at Code: ``` SELECT N2.* FROM visitTable AS N1 INNER JOIN visitTable AS N2 ON N1.UID = N2.UID WHERE N1.EncounterID <> N2.EncounterID AND ( N2.AdmitDT BETWEEN N1.DischargeDT and DATEADD(DD,30, N1.DischargeDT)) ```
Here's a start: [sqlfiddle](http://sqlfiddle.com/#!3/e14e5/25/0) [new fiddle](http://sqlfiddle.com/#!3/e14e5/27/0) It gets each visit for each UID in order of admitDT, then pairs each visit with the next visit in that result. If the current admit date is between the last discharge date and 30 days from then, select it. There are some weird points though - UID 1 is shown to have been admitted on 6/12/2012 and never discharged, but then admitted again on 6/20/2013 and discharged the same day. edit: restructured a bit to reduce the number of joins ``` WITH cte AS ( SELECT visitid,uid,dischargedt,admitdt, row_number()over(partition BY uid ORDER BY admitdt) AS r FROM t ) SELECT c1.visitid AS v1, c2.visitid AS v2, c1.uid, c1.dischargedt as [Discharged from first visit], c2.admitdt as [Admitted to next visit] FROM cte c1 INNER JOIN cte c2 ON c1.uid=c2.uid WHERE c1.visitid<>c2.visitid AND c1.r+1=c2.r AND c2.admitdt BETWEEN c1.dischargedt AND dateadd(d,30,c1.dischargedt ) ORDER BY c1.uid ``` **[Results](http://sqlfiddle.com/#!3/e14e5/27/0)**: ``` | V1 | V2 | UID | DISCHARGED FROM FIRST VISIT | ADMITTED TO NEXT VISIT | |----|----|-----|-----------------------------|-----------------------------| | 25 | 38 | 2 | June, 11 2013 16:13:00+0000 | June, 12 2013 10:10:00+0000 | | 38 | 12 | 2 | June, 12 2013 10:10:00+0000 | June, 17 2013 06:51:00+0000 | | 18 | 34 | 3 | June, 11 2013 12:08:00+0000 | June, 12 2013 08:40:00+0000 | | 21 | 22 | 3 | June, 12 2013 14:40:00+0000 | June, 13 2013 10:00:00+0000 | | 22 | 16 | 3 | June, 14 2013 12:00:00+0000 | June, 19 2013 04:48:00+0000 | ```
try this: (Show me the visits where the admission date is after discharge for another earlier visit by the same patient) ``` Select * From visits v Where Exists (Select * From Visits Where uid = v.uid and v.AdmitDT > DischargeDT) ```
SQL - How do I query for re-admissions in TSQL?
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I don't understand where the error is. I've done this a million times before, but for some reason, this error keeps coming up. Here are my Create Table Statements: ``` CREATE TABLE chf.Transaction ( TransactionID INT IDENTITY(1,1) PRIMARY KEY, AmtDue MONEY, DiscountPercent DECIMAL(5,2), AmtPaid MONEY, Date DATETIME, ) GO CREATE TABLE chf.Agent ( AgentID INT IDENTITY(1,1) PRIMARY KEY, Name VARCHAR(50), TransactionID INT, constraint Agent_T_FK foreign key (TransactionID) REFERENCES chf.Transaction(TransactionID), ) GO ```
`Transaction` is a reserved keyword, so it is not allowed as a bare table name. If you want to use `Transaction` as a table name then quote it like this: `CREATE TABLE chf.[Transaction]` But I strongly recommend **not using keywords / reserved words** as object names.
Transaction is a reserved word. put [] around it like so ``` CREATE TABLE chf.[Transaction] ( TransactionID INT IDENTITY(1,1) PRIMARY KEY, AmtDue MONEY, DiscountPercent DECIMAL(5,2), AmtPaid MONEY, Date DATETIME, ) GO ```
Incorrect Syntax near keyword 'Transaction' SQL Insertion Statements
[ "", "sql", "sql-server", "" ]
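For what it's worth, the bracket-quoting fix carries over to other engines as well. A minimal sketch with Python's `sqlite3` (SQLite accepts SQL Server-style `[brackets]` in addition to standard double quotes for identifiers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Quoting the identifier avoids the clash with the reserved word.
# SQLite accepts [brackets] as well as standard "double quotes".
conn.execute("CREATE TABLE [Transaction] "
             "(TransactionID INTEGER PRIMARY KEY, AmtDue REAL)")
conn.execute('INSERT INTO "Transaction" (AmtDue) VALUES (9.99)')

amt = conn.execute("SELECT AmtDue FROM [Transaction]").fetchone()[0]
print(amt)
```

Every later reference to the table needs the same quoting, which is one more reason to avoid reserved words as names in the first place.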
For some performance improvements, I am looking at using a temporary table rather than a table variable I am currently putting 100,000s or rows into a table variable using `INSERT INTO @table EXECUTE sp_executesql @SQLString` (where @SQLString returns a string 'SELECT 'INSERT INTO LiveTable Values('x','y','z') build by dynamic SQL so that the x,y,z values are from the real records) The `INSERT INTO` takes a bit of time and I was wondering if, having read about how much better `SELECT * INTO #tempTable` is, can you do a `SELECT * INTO` with another SELECT as the source? So something like ``` SELECT * INTO #tempTable FROM (SELECT * FROM Table2) ```
The problem with your query is that derived tables (subqueries in the `FROM` clause) need a table alias in SQL: ``` SELECT * INTO #tempTable FROM (SELECT * FROM Table2) t; ```
Short answer is yes (I believe I have done this before, awhile ago, but I don't recall any issues). You can get some more information from this post on msdn: <http://social.msdn.microsoft.com/Forums/sqlserver/en-US/92e5fdf0-e2ad-4f1c-ac35-6ab1c8eec642/select-into-localvarname-from-select-subquery>
Is it possible to apply SELECT INTO a temporary table from another SELECT?
[ "", "sql", "sql-server", "" ]
I am trying to create an T-SQL query t that only shows data from the current quarter. My data table looks as follows. NAME , DATE Alice , 1-1-2013 Bob , 12-1-2013 Charles , 12-15-2013 At any given time when I run the report I'd like to be able to see only data from the current quarter I'm in. So if Today is 12-18-14 then when I run the report, I'd like to only see entries with dates from 10-1-14 through 12-31-14 so Bob, 12-1-2013 Charles, 12-15-2013 I welcome any hints. I have searched thoroughly but am just not experienced enough to piece together the partial answers using CASE statements. Thanks. MC
If I understand correctly what you're trying to do, I don't think you need a CASE statement: ``` SELECT * FROM YourTable WHERE Date >= DATEADD(qq,DATEDIFF(qq,0,GETDATE()),0) AND Date < DATEADD(dd,1,DATEADD(qq,DATEDIFF(qq,-1,GETDATE()),-1)) ``` [sqlfiddle](http://sqlfiddle.com/#!6/1884f/1)
Another easy way to get the desired result. ``` SELECT * FROM TableName WHERE DATEPART(QUARTER,DateColumnName) = DATEPART(QUARTER,GETDATE()) AND YEAR(DateColumnName) = YEAR(GETDATE()) ```
SSRS query for only data from current quarter?
[ "", "sql", "t-sql", "reporting-services", "" ]
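The `DATEADD(qq, DATEDIFF(qq, 0, GETDATE()), 0)` idiom computes the first day of the current quarter; the upper bound is the first day of the next quarter. The same boundary arithmetic, sketched in Python for checking (the names and sample rows are made up for illustration — note the question's narrative uses 2014 dates, so this sketch does too):

```python
import datetime

def quarter_bounds(today):
    """Return (first day of current quarter, first day of next quarter).

    Same idea as DATEADD(qq, DATEDIFF(qq, 0, GETDATE()), 0): truncate
    to the quarter boundary, then step forward one quarter.
    """
    q_start_month = 3 * ((today.month - 1) // 3) + 1
    start = datetime.date(today.year, q_start_month, 1)
    if q_start_month == 10:
        next_start = datetime.date(today.year + 1, 1, 1)
    else:
        next_start = datetime.date(today.year, q_start_month + 3, 1)
    return start, next_start

start, next_start = quarter_bounds(datetime.date(2014, 12, 18))
print(start, next_start)

# A row belongs to the current quarter when start <= d < next_start,
# mirroring the half-open range in the accepted answer's WHERE clause.
rows = [("Alice", datetime.date(2013, 1, 1)),
        ("Bob", datetime.date(2014, 12, 1)),
        ("Charles", datetime.date(2014, 12, 15))]
in_quarter = [name for name, d in rows if start <= d < next_start]
print(in_quarter)
```

Using a half-open range (`>= start AND < next_start`) avoids the usual edge cases around the last instant of the quarter.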
I've been trying to get this right for some time now with no use. I have a table in mssql database and I want to insert new row using stored procedure ``` CREATE TABLE "Customers" ( "CustomerID" NCHAR(5) NOT NULL, "CompanyName" NVARCHAR(40) NOT NULL, "ContactName" NVARCHAR(30) NULL, "ContactTitle" NVARCHAR(30) NULL, "Address" NVARCHAR(60) NULL, "City" NVARCHAR(15) NULL, "Region" NVARCHAR(15) NULL, "PostalCode" NVARCHAR(10) NULL, "Country" NVARCHAR(15) NULL, "Phone" NVARCHAR(24) NULL, "Fax" NVARCHAR(24) NULL, PRIMARY KEY ("CustomerID") ); ``` The problem is CustomerID field which contains unique string for each record (ALFKI, BERGS, BERGS, etc.) I want to make a stored procedure which will insert a row with new data and create an unique CustomerID. Build in functions are out of a question as I need the string to be 5 chars long. I have a procedure which generates 5 chars ID as follows ``` begin declare @chars char(26) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' declare @i int = 0 declare @id varchar(max) = '' while @i < 5 begin set @id = @id + substring(@chars, cast(ceiling(rand() * 26) as int), 1) set @i = @i + 1 end Select (cast(@id as nvarchar(400))) end ``` And the one that I tried to make work with no use. 
It is supposed to select an unique id (set @id = 'ANATR' is there on purpose to make it go into the loop ``` begin declare @randID varchar(5) = '' declare @selectID varchar(20) = '' declare @chars char(26) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' declare @i int = 0 declare @id varchar(10) = '' while @i < 5 begin set @id = @id + substring(@chars, cast(ceiling(rand() * 26) as int), 1) set @i = @i + 1 end select @id set @id = 'ANATR' SET @selectID = (SELECT CustomerID FROM CUSTOMERS WHERE CustomerID = @id) while @selectID <> 'NULL' begin set @id = '' while @i < 5 begin set @id = @id + substring(@chars, cast(ceiling(rand() * 26) as int), 1) set @i = @i + 1 end SET @selectID = (SELECT CustomerID FROM CUSTOMERS WHERE CustomerID = @id) SELECT @id end end ``` Here is the insert procedure I have at the moment ``` CREATE PROCEDURE [dbo].[InsertCustomers] ( @CustomerID nchar(5), @CompanyName nvarchar(40), @ContactName nvarchar(30) = NULL, @ContactTitle nvarchar(30) = NULL, @Address nvarchar(60) = NULL, @City nvarchar(15) = NULL, @Region nvarchar(15) = NULL, @PostalCode nvarchar(10) = NULL, @Country nvarchar(15) = NULL, @Phone nvarchar(24) = NULL, @Fax nvarchar(24) = NULL ) AS SET NOCOUNT OFF; INSERT INTO [dbo].[Customers] ([CustomerID], [CompanyName], [ContactName], [ContactTitle], [Address], [City], [Region], [PostalCode], [Country], [Phone], [Fax]) VALUES (@CustomerID, @CompanyName, @ContactName, @ContactTitle, @Address, @City, @Region, @PostalCode, @Country, @Phone, @Fax); ```
I believe you can do something like this to make sure you all get a unique id ``` begin declare @chars char(26) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' declare @i int = 0 declare @id varchar(max) = '' while (1=1) begin set @id = @id + substring(@chars, cast(ceiling(rand() * 26) as int), 1) set @i = @i + 1 IF (NOT EXISTS(SELECT * FROM Customers WHERE CustomerID = @id) AND LEN(@id) = 5) BREAK ELSE CONTINUE end Select (cast(@id as nvarchar(400))) end ``` Set the while condition to be always true and break out of while loop only when both of your requirements are TRUE i.e `Length of new ID is 5` and it `does not exist in the customers table already`.
The main problem here is that the incremental cost of detecting collisions from the generated string, and try again, increases as you generate more and more strings (since you have to read *all* of those strings to make sure you didn't generate a duplicate). At the same time, the odds of hitting a duplicate goes up, meaning the bigger the table gets, the slower this process will get. Why do you need to generate the unique string at runtime? Build them all in advance. [This article](http://www.mssqltips.com/sqlservertip/3055/generating-random-numbers-in-sql-server-without-collisions/?utm_source=AaronBertrand) and [this post](http://www.sqlperformance.com/2013/09/t-sql-queries/random-collisions) are about random numbers, but the basic concept is the same. You build up a set of unique strings and pull one off the stack when you need one. Your chance of collisions stays constant at 0% throughout the lifetime of the application (provided you build up a stack of enough unique values). Pay for the cost of collisions up front, in your own setup, instead of incrementally over time (and at the cost of a user waiting for those attempts to finally yield a unique number). This will generate 100,000 unique 5-character strings, at the low, one-time cost of about 1 second (on my machine): ``` ;WITH a(a) AS ( SELECT TOP (26) number + 65 FROM master..spt_values WHERE type = N'P' ORDER BY number ), b(a) AS ( SELECT TOP (10) a FROM a ORDER BY NEWID() ) SELECT DISTINCT CHAR(b.a) + CHAR(c.a) + CHAR(d.a) + CHAR(e.a) + CHAR(f.a) FROM b, b AS c, b AS d, b AS e, b AS f; ``` That's not enough? You can generate about 1.12 million unique values by changing `TOP (10)` to `TOP (20)`. This took 18 seconds. Still not enough? `TOP (24)` will give you just under 8 million in about 2 minutes. It will get exponentially more expensive as you generate more strings, because that `DISTINCT` has to do the same duplicate checking you want to do *every* **single** time you add a customer. 
So, create a table: ``` CREATE TABLE dbo.StringStack ( ID INT IDENTITY(1,1) PRIMARY KEY, String CHAR(5) NOT NULL UNIQUE ); ``` Insert that set: ``` ;WITH a(a) AS ( SELECT TOP (26) number + 65 FROM master..spt_values WHERE type = N'P' ORDER BY number ), b(a) AS ( SELECT TOP (10) a FROM a ORDER BY NEWID() ) INSERT dbo.StringStack(String) SELECT DISTINCT CHAR(b.a) + CHAR(c.a) + CHAR(d.a) + CHAR(e.a) + CHAR(f.a) FROM b, b AS c, b AS d, b AS e, b AS f; ``` And then just create a procedure that pops one off the stack when you need it: ``` CREATE PROCEDURE dbo.AddCustomer @CustomerName VARCHAR(64) /* , other params */ AS BEGIN SET NOCOUNT ON; DELETE TOP (1) dbo.StringStack OUTPUT deleted.String, @CustomerName /* , other params */ INTO dbo.Customers(CustomerID, CustomerName /*, ...other columns... */); END GO ``` No silly looping, no needing to check if the `CustomerID` you generated just exists, etc. The only additional thing you'll want to build is some type of check that notifies you when you're getting low. As an aside, these are terrible identifiers for a CustomerID. What is wrong with a sequential surrogate key, like an IDENTITY column? How is a 5-digit random string with all this effort involved, any better than a unique number the system can generate for you much more easily?
t-sql string unique ID (Northwind database)
[ "", "sql", "sql-server", "stored-procedures", "unique-id", "" ]
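The "generate the whole pool up front, then pop one per insert" idea from the accepted answer can be sketched outside the database too. This hypothetical Python version (names and sizes are illustrative, not from the original) pays the collision-checking cost once, the same way the `StringStack` table does:

```python
import random
import string

def build_id_pool(size, length=5, seed=None):
    """Pre-generate `size` unique random uppercase codes of `length` chars.

    A set absorbs duplicates as they occur, so collisions are handled
    once, up front, instead of on every customer insert.
    """
    rng = random.Random(seed)
    pool = set()
    while len(pool) < size:
        pool.add("".join(rng.choice(string.ascii_uppercase)
                         for _ in range(length)))
    return list(pool)

pool = build_id_pool(1000, seed=42)
assert len(pool) == len(set(pool)) == 1000
assert all(len(code) == 5 and code.isupper() for code in pool)

# Popping one off the stack when a customer is added:
new_customer_id = pool.pop()
print(new_customer_id, len(pool))
```

As the answer notes, the remaining work is monitoring the pool so it can be topped up before it runs dry.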
I am using mysql. I have `orders` table with column `customer_id, order_id, order_date(datetime)` now I want to find all orders on Dec 20, 2013, which are from repeat customers( not new customer, i.e. customers has placed some order before as well) in a single query. `Orders` table has other typical columns as well not mentioned here. Let me know I can provide more data. UPDATE: Can we do it without subquery? If yes how? (Just curious)
``` select customer_id, order_id, order_date from orders where order_date between '2013-12-20 00:00:00' and '2013-12-20 23:59:59' AND customer_id IN (SELECT customer_id FROM orders where order_date < '2013-12-20') ```
Assuming that order\_id is the primary key in the table orders and customer\_id is a foreign key, you can use the following query with self join to pull the list of all order\_id corresponding to repeat customers: ``` select order_id, customer_id from orders a, orders b where a.customer_id = b.customer_id and order_date = '20131220' ```
How to find orders from repeat/old customers using sql query?
[ "", "mysql", "sql", "" ]
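A self-contained check of the `IN`-subquery approach, using Python's `sqlite3` with made-up rows (dates are stored here as ISO-format text so string comparison matches date order; the real column is a datetime):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders "
             "(customer_id INTEGER, order_id INTEGER, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 10, "2013-12-01"), (1, 11, "2013-12-20"),   # repeat
                  (2, 12, "2013-12-20"),                          # first order
                  (3, 13, "2013-11-05"), (3, 14, "2013-12-20")])  # repeat

# Orders on the target day whose customer already ordered before it.
rows = conn.execute("""
    SELECT customer_id, order_id FROM orders
    WHERE order_date = '2013-12-20'
      AND customer_id IN (SELECT customer_id FROM orders
                          WHERE order_date < '2013-12-20')
    ORDER BY order_id
""").fetchall()
print(rows)
```

Customer 2's order is excluded because it has no earlier order, which is exactly the "repeat customer" filter the question asks for.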
I have that `sql`: ``` SELECT DISTINCT count(KTT) FROM TRA.EVENT; ``` it returns me a number of 1901335. Now I want to expand the sql with a `join` like this: ``` SELECT DISTINCT count(E.KTT) FROM TRA.EVENT E LEFT JOIN TRA.TMP_BNAME TBN ON E.KTT = TBN.KTT_DEF; ``` But here I have a result of 1942376. I dont understand why? I expect also a result of 1901335. I thought I easily `join` the values from `TBN` based on the entries of `EVENT`? **EDIT** ``` SELECT DISTINCT E.KTT, TB.B_BEZEICHNER FROM TRA.EVENT E LEFT JOIN TRA.TMP_BNAME TBN ON E.KTT = TBN.KTT_DEF LEFT JOIN TRA.TMP_B TB ON TBN.B_ID = TB.B_ID; ``` What I am doing wrong? Thx for your help. Stefan
You have not provided full details, so treat these comments as general ones. When you join two tables, the join can create "duplicate" rows from one table. In your instance, there may be more than one record with the same `KTT_DEF` in the `TRA.TMP_BNAME` table. When you join that to the `TRA.EVENT` table, it creates more than one record for each original record in `TRA.EVENT`. You may choose to count the distinct values of `KTT` from `TRA.EVENT` with the `DISTINCT` keyword, but you need to put it inside the `COUNT`: `SELECT COUNT(DISTINCT E.KTT)`. This will match your first query's result provided that the values are actually unique; if they are not, the count will be different from the first query.
You want to count the distinct KTT? Then your code is wrong. You have to use: ``` SELECT count(DISTINCT KTT) FROM TRA.EVENT; ``` You get a different count because you count every row, not just the distinct ones. And because the join adds more rows to the query, you get a bigger number.
Different count results with join
[ "", "sql", "select", "join", "count", "distinct", "" ]
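The difference between the two `COUNT` forms discussed above holds in any engine. A small sketch with Python's `sqlite3` (illustrative table and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (ktt INTEGER)")
conn.executemany("INSERT INTO event VALUES (?)",
                 [(1,), (1,), (2,), (3,), (3,)])

# DISTINCT COUNT(...) counts every row first, then deduplicates the
# single resulting number -- so the DISTINCT does nothing.
distinct_count = conn.execute(
    "SELECT DISTINCT COUNT(ktt) FROM event").fetchone()[0]

# COUNT(DISTINCT ...) counts the distinct values, which is usually
# what is actually meant.
count_distinct = conn.execute(
    "SELECT COUNT(DISTINCT ktt) FROM event").fetchone()[0]

print(distinct_count, count_distinct)
```

Five rows but only three distinct values — the two queries return different numbers, which is exactly the mismatch the question ran into.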
I have a table where dates are stored in as text because this field can also store something else. I need extract the rows with a date in that field and this date has to be within now + 2 months. My query is: ``` ;WITH sel AS (SELECT t1.ItemUserID, t3.Omschrijving AS Hoofditem, t4.Omschrijving AS Subitem, t2.Omschrijving AS SubItemonderdeel, CONVERT(DATE,t1.subitemwaarde) AS subitemwaarde, CONVERT(DATE,DATEADD(month, 2, CONVERT(DATE,GETDATE(),101))) as now_plus_2 FROM MainDataTable AS t1 LEFT JOIN SubItemsOnderdelen AS t2 ON t1.SubItemonderdeel=t2.Code LEFT JOIN HoofdItemOnderdelen AS t3 ON t1.Hoofditem = t3.Code LEFT JOIN ItemData AS t4 ON t1.subitem=t4.itemcode WHERE t2.Omschrijving LIKE 'expires on' and ISDATE(t1.SubItemWaarde) = 1) SELECT * FROM sel AS t2 ``` If I run this, I get a table with all the data, the last two columns are showing dates and are date columns. When I add WHERE t2.subitemwaarde< t2.now\_plus\_2 to the last SELECT statement, I get the error: "Conversion failed when converting date and/or time from character string" why? What am I doing wrong? rg. Eric
Exclude the non-dates in the `JOIN` clause itself, prior to filtering with `WHERE`: ``` ON t1.SubItemonderdeel=t2.Code and ISDATE(t1.SubItemWaarde) = 1 ``` You should also consider storing these values in columns of the appropriate data type.
`t2.subitemwaarde< t2.now_plus_2` compares a `String` with a `Date`. Convert `t2.subitemwaarde` to a Date before comparing to solve the problem: ``` CONVERT(DATE,t2.subitemwaarde) < t2.now_plus_2 ``` This will cause errors if `t2.subitemwaarde` is not a date, so you may need to add the filter `ISDATE(t2.subitemwaarde) = 1`.
two date comparison gives Conversion failed when converting date and/or time from character string
[ "", "sql", "sql-server-2008", "" ]
This is my scenario. I have a stored procedure that has a select query. ``` Select M ,X , Y, dbo.function(parameters) as Z from SomeTable where condition1 and condition2 and Z > SomeValue ``` the Z in the where clause is the calculated value from the function. The dbo.function returns a single value always. I dont want to use the function in the where clause. I want to set the value to a variable and use that variable in the where clause. How should this be done? kindly help.
You would have to do something like ``` DECLARE @Z TYPE_GOES_HERE = dbo.function(parameters) Select M ,X , Y, @Z as Z from SomeTable where condition1 and condition2 and @Z > SomeValue ``` This will only work if the parameters passed to the function are not columns from the table. If the function depends on the table's columns, you could use a CTE. Something like ``` ;WITH Vals AS ( Select M ,X , Y, dbo.function(parameters) as Z from SomeTable ) SELECT * FROM Vals where condition1 and condition2 and Z > SomeValue ```
For your requirements, you can use a select inside a select =) ``` SELECT * FROM ( Select M ,X , Y, dbo.function(parameters) as Z from SomeTable where condition1 and condition2 ) T1 WHERE Z > SomeValue ```
Using SET statement in a select query
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I am trying to self educate myself in SQL in order to better use databases at work. For this purpose I am using Oracle Application Express. This if my first time using the COUNT function and I am having some difficulties integrating it within my query. I have done a great deal of research and read quite a bit of literature but I just can't seem to get it right. My goal is to display the `channel_name` and `channel_number` columns (from the channel table) for each channel along with a count of the number of customers that have that channel as a favorite channel (`survey_result` column from the survey table). Please see below for code: ``` SELECT channel.channel_number, channel.channel_name, survey.survey_result, count(SELECT survey.survey_result FROM survey) From Channel, survey WHERE survey.channel_number = channel.channel_number ``` Currently I am getting the error message: > ORA-00936: missing expression.
Try this: Below query gives you only those channels which have minimum 1 customer. ``` SELECT C.channel_number, C.channel_name, COUNT(S.survey_result) NoOfCustomers FROM Channel C INNER JOIN survey S ON S.channel_number = C.channel_number GROUP BY C.channel_number, C.channel_name; ``` And below query gives you all channels whether it has customer or not. ``` SELECT C.channel_number, C.channel_name, COUNT(S.survey_result) NoOfCustomers FROM Channel C LEFT JOIN survey S ON S.channel_number = C.channel_number GROUP BY C.channel_number, C.channel_name; ```
Either of these may work for you ``` SELECT channel.channel_number, channel.channel_name, count(survey.survey_result) From Channel, survey WHERE survey.channel_number = channel.channel_number GROUP BY channel.channel_number, channel.channel_name ``` or ``` SELECT channel.channel_number, channel.channel_name, survey.survey_result, (SELECT count(survey_result) FROM survey) From Channel, survey WHERE survey.channel_number = channel.channel_number ```
Trouble with integrating COUNT function in SQL query using Oracle
[ "", "sql", "database", "oracle", "count", "" ]
I've a table with 5 million of rows ``` CREATE TABLE dummy_table ( num VARCHAR(16) NOT NULL DEFAULT 0, rsid VARCHAR(16) NOT NULL, list VARCHAR(128) NOT NULL, PRIMARY KEY (num, rsid) ); ``` **`num` field in the table is the prefix of some dynamic number('123457467890' in above query).** Now I need to fetch `list` column based on `num` and `rsid` and in this `num` must be the longest prefix match of incoming number. For getting `list` I've a below query: ``` select list from dummy_table where '123457467890' like CONCAT(num, '%') and rsid = '123' order by LENGTH(num) desc LIMIT 1; ``` **NOTE: 123457467890`: this number will be different each time we throw a query** Now the problem is for executing this query, MYSQL is taking about 0.80 seconds which is very high in my case. I need to throw more than 1000 query in a second. Is there any way to optimize this query to this extent. Can anyone help to achieve this result?
My first optimisation would be: - Add another column "length" - Add an index on `(rsid, length DESC, num)` Now your query is only very slightly different: - `SELECT list FROM dummy_table WHERE '123457467890' LIKE CONCAT(num, '%') AND rsid = '123' ORDER BY length DESC LIMIT 1;` But, by including the length in the index, the query *should* be able to stop at the first hit. ***However...*** This is always going to be a costly process. The worst-case scenario is that you find no matches - and so have to scan the full set of (rsid = '123') records no matter what. The optimisation above isn't going to help the worst-case scenarios, only the best-case scenarios. *(It will help more the longer the match is, but won't help as much for shorter matches.)* What you MAY be forced to do is something like... 1. Create a temporary table 2. Insert '1234567890' into it 3. Insert '123456789' into it 4. Insert '12345678' into it . . . n. Insert '1' into it At this point you have every possible match for your search string in your temporary table. Then your query can instead potentially use an index seek: potentially finding 10 matches (in this case) and then taking the longest of those. ``` -- Index now needs to be (rsid, num, length) SELECT * FROM dummy_table INNER JOIN your_search_table ON dummy_table.num = your_search_table.num WHERE rsid = '123' ORDER BY dummy_table.length DESC LIMIT 1 ```
The test `'123457467890' like CONCAT(num, '%')` cannot be optimized with an index. However, it's equivalent to: ``` num IN ('1', '12', '123', '1234', '12345', '123456', '1234567', '12345678', '123456789', '1234567890') ``` which *can* be optimized if you have an index on the `num` column. If you're generating the query from a programming language, it should be relatively simple to convert it to this format. For instance, in PHP it would be: ``` $nums = array(); for ($i = 1; $i <= strlen($number); $i++) { $nums[] = "'" . substr($number, 0, $i) . "'"; } $where = 'num IN (' . implode(', ', $nums) . ')' ```
Optimal MYSQL query for longest prefix matching in a table with 5 million rows
[ "", "mysql", "sql", "" ]
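The prefix-expansion idea from the second answer — turn the incoming number into all of its prefixes so the query becomes an indexable `IN` list instead of a `LIKE` scan — can be sketched end to end with Python's `sqlite3`. The rows here are made up; only three are needed to see the "longest match wins" behaviour:

```python
import sqlite3

def longest_prefix_match(conn, number, rsid):
    """Find the `list` value whose `num` is the longest prefix of `number`.

    Expands the search number into all of its prefixes and binds them
    as parameters, so the engine can seek on the (num, rsid) key.
    """
    prefixes = [number[:i] for i in range(1, len(number) + 1)]
    placeholders = ",".join("?" * len(prefixes))
    row = conn.execute(
        f"SELECT list FROM dummy_table "
        f"WHERE rsid = ? AND num IN ({placeholders}) "
        f"ORDER BY LENGTH(num) DESC LIMIT 1",
        [rsid, *prefixes]).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dummy_table "
             "(num TEXT, rsid TEXT, list TEXT, PRIMARY KEY (num, rsid))")
conn.executemany("INSERT INTO dummy_table VALUES (?, ?, ?)",
                 [("1234", "123", "short"), ("123457", "123", "long"),
                  ("99", "123", "other")])

print(longest_prefix_match(conn, "123457467890", "123"))
```

Both "1234" and "123457" are prefixes of the search number; ordering the matches by length picks the longer one.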
I have table like this: ``` id products ------ ---------- 5 1,2,3 6 2,4,5 9 1,4,7 17 4,6,7 18 1,6,8 19 2,3,6 ``` I have to select only that rows, which row's `products` column contains one of `(2,3)` values. In this case query must return: ``` id products ------ ---------- 5 1,2,3 6 2,4,5 19 2,3,6 ``` But I don't understand how to make construction of this query. Any ideas? Thanks in advance.
Would you mind trying this one, please? ``` select * from TABLE_NAME where products regexp "(^|,)[23](,|$)"; ``` It matches either 2 or 3 at the beginning of the string, at the end, or in between the commas.
``` SELECT id,products FROM yourTable WHERE FIND_IN_SET('2',products)>0 OR FIND_IN_SET('3',products)>0 ``` [sqlFiddle](http://sqlfiddle.com/#!2/5ec51/1/0)
MySQL where condition when intersecting 2 values (comma delimited)
[ "", "mysql", "sql", "database", "select", "" ]
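The regexp from the first answer can be verified outside MySQL, since Python's `re` uses the same alternation syntax. The rows below are the question's sample data, hard-coded for the check:

```python
import re

rows = {5: "1,2,3", 6: "2,4,5", 9: "1,4,7", 17: "4,6,7",
        18: "1,6,8", 19: "2,3,6"}

# A row matches when 2 or 3 appears as a whole list element:
# anchored at the start/end of the string or between commas.
pattern = re.compile(r"(^|,)[23](,|$)")

matched = sorted(id_ for id_, products in rows.items()
                 if pattern.search(products))
print(matched)
```

One caveat of this particular pattern: the character class `[23]` only works for single-digit values. For multi-digit values, MySQL's `FIND_IN_SET` (the second answer) is the more robust choice — and, as the answers imply, storing one value per row in a child table would make the whole problem disappear.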
Following is the data that i have in a table: ``` Date Original Estimated Actual 2013-04-14 141.44323 NULL 384.875 2013-04-14 31.184295 NULL 200.375 2013-04-14 0 NULL 54.75 2013-04-14 0 NULL 0.625 2013-04-15 5.4326204 NULL 0 2013-04-15 45.795869 NULL -0.375 2013-04-15 86.57694 NULL 11.875 2013-04-15 186.219 NULL 58.875 ``` i want to sum values of column **Actual** but if its value is zero or null then for that specific row value should be taken from **Estimated** column and if **Estimated** column also have zero value then value will be taken from **Original** column. and i want the result to be grouped by Month and year. Please help
``` SELECT DATEPART(YEAR, Date) as [Year], DATEPART(MONTH, Date) as [Month], SUM(COALESCE(NULLIF(Actual,0), NULLIF(Estimated, 0), Original)) FROM YourTable GROUP BY DATEPART(YEAR, Date), DATEPART(MONTH, Date); ```
I believe you want [`COALESCE`](http://msdn.microsoft.com/en-us/library/ms190349.aspx) ~~SELECT DATEPART(YEAR, Date), DATEPART(MONTH, Date), SUM(Coalesce(Actual, Estimated, Original)) FROM MyTable GROUP BY DATEPART(YEAR, Date), DATEPART(MONTH, Date);~~ **Edit** As @MartinSmith pointed out, the `NULL OR Zero` will need to be catered for. This can be done by projecting zero back to NULL with a CTE before running it through the `COALESCE`, like so: ``` WITH CTE AS ( SELECT CASE WHEN Actual = 0 THEN NULL ELSE Actual END AS Actual, CASE WHEN Estimated = 0 THEN NULL ELSE Estimated END AS Estimated, CASE WHEN Original = 0 THEN NULL ELSE Original END AS Original FROM MyTable ) SELECT DATEPART(YEAR, Date), DATEPART(MONTH, Date), SUM(Coalesce(Actual, Estimated, Original)) FROM CTE GROUP BY DATEPART(YEAR, Date), DATEPART(MONTH, Date); ```
SQL sum values from different columns based on condition
[ "", "sql", "sql-server", "t-sql", "" ]
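The `COALESCE(NULLIF(...))` chain is portable, so it can be exercised with Python's `sqlite3` too. A sketch with three illustrative rows, one per fallback case (column names shortened; `strftime` stands in for `DATEPART`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales "
             "(d TEXT, original REAL, estimated REAL, actual REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?, ?)",
    [("2013-04-14", 141.0, None, 384.875),  # actual wins
     ("2013-04-15", 5.0,   None, 0),        # actual 0 -> estimated NULL -> original
     ("2013-04-15", 45.0,  11.0, None)])    # actual NULL -> estimated

# NULLIF(x, 0) turns zeros into NULLs so COALESCE can skip them;
# the fallback chain actual -> estimated -> original then applies.
rows = conn.execute("""
    SELECT strftime('%Y-%m', d) AS ym,
           SUM(COALESCE(NULLIF(actual, 0), NULLIF(estimated, 0), original))
    FROM sales
    GROUP BY ym
""").fetchall()
print(rows)
```

The month total is 384.875 + 5.0 + 11.0 = 400.875, i.e. each row contributed its first non-NULL, non-zero value.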
I am new for development. I have a table in Oracle named `tbl_sale`. I have placed the table structure below and i also placed the exact report what i want.Please help me to get this. I want to show the record where parent=0; **tbl\_sale** ``` ID AMOUNT PARENT 1 100 0 2 125.26 1 3 11 1 4 200 0 5 500 4 6 250 4 7 100 4 8 29 0 ``` Desired output: ``` ID AMOUNT PARENT 1 236.26 0 4 1050 0 8 29 0 ```
You can do it this way both in SQL Server and Oracle by leveraging a recursive CTE ``` WITH q(id, amount, parent, top_most) AS ( SELECT id, amount, parent, id FROM tbl_sale WHERE parent = 0 UNION ALL SELECT t.id, t.amount, t.parent, q.id FROM tbl_sale t JOIN q ON t.parent = q.id ) SELECT top_most id, SUM(amount), MIN(parent) parent FROM q GROUP BY top_most ``` Sample output: ``` | ID | SUM(AMOUNT) | PARENT | |----|-------------|--------| | 1 | 236.26 | 0 | | 4 | 1050 | 0 | | 8 | 29 | 0 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!4/fd5f7/1)** demo (**Oracle**) Here is **[SQLFiddle](http://sqlfiddle.com/#!6/1078a/1)** demo (**SQL Server**)
This is what aggregate function and a `GROUP BY` are designed for: ``` SELECT id, SUM(amount) AS amount, parent FROM tbl_sale WHERE parent = 0 GROUP BY id, parent ```
Total summary from a sales table
[ "", "sql", "sql-server", "oracle", "" ]
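The recursive CTE from the accepted answer also runs almost verbatim on SQLite (which spells it `WITH RECURSIVE`), so it can be checked against the question's exact data with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_sale (id INTEGER, amount REAL, parent INTEGER)")
conn.executemany("INSERT INTO tbl_sale VALUES (?, ?, ?)",
                 [(1, 100, 0), (2, 125.26, 1), (3, 11, 1),
                  (4, 200, 0), (5, 500, 4), (6, 250, 4),
                  (7, 100, 4), (8, 29, 0)])

# Walk each tree from its root (parent = 0), carrying the root id
# along so the amounts can be grouped per top-level sale.
rows = conn.execute("""
    WITH RECURSIVE q(id, amount, top_most) AS (
        SELECT id, amount, id FROM tbl_sale WHERE parent = 0
        UNION ALL
        SELECT t.id, t.amount, q.top_most
        FROM tbl_sale t JOIN q ON t.parent = q.id
    )
    SELECT top_most, SUM(amount) FROM q
    GROUP BY top_most ORDER BY top_most
""").fetchall()
print(rows)
```

The output reproduces the desired result from the question: 236.26 for root 1, 1050 for root 4, and 29 for root 8. Note this sample data is only one level deep, but the recursion would also roll up grandchildren and deeper descendants.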
I want to automatically generate unique id with per-defined code attach to it. ex: ``` UID12345678 CUSID5000 ``` I tried `uniqueidentifier` data type but it generate a id which is not suitable for a user id. Any one have suggestions?
The only viable solution in my opinion is to use * an `ID INT IDENTITY(1,1)` column to get SQL Server to handle the automatic increment of your numeric value * a **computed, persisted** column to convert that numeric value to the value you need So try this: ``` CREATE TABLE dbo.tblUsers (ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, UserID AS 'UID' + RIGHT('00000000' + CAST(ID AS VARCHAR(8)), 8) PERSISTED, .... your other columns here.... ) ``` Now, every time you insert a row into `tblUsers` without specifying values for `ID` or `UserID`: ``` INSERT INTO dbo.tblUsers (Col1, Col2, ..., ColN) VALUES (Val1, Val2, ....., ValN) ``` then SQL Server will **automatically and safely** increase your `ID` value, and `UserID` will contain values like `UID00000001`, `UID00000002`,...... and so on - automatically, safely, reliably, no duplicates. **Update:** the column `UserID` is **computed** - but it still **OF COURSE** has a **data type**, as a quick peek into the Object Explorer reveals: [![enter image description here](https://i.stack.imgur.com/8SaO5.jpg)](https://i.stack.imgur.com/8SaO5.jpg)
``` CREATE TABLE dbo.tblUsers ( ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, UserID AS 'UID' + RIGHT('00000000' + CAST(ID AS VARCHAR(8)), 8) PERSISTED, [Name] VARCHAR(50) NOT NULL, ) ``` ![marc_s's Answer Snap](https://i.stack.imgur.com/1QPKp.jpg) marc\_s's Answer Snap
How to automatically generate unique id in SQL like UID12345678?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "sql-server-2012", "" ]
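The computed column's expression is just a prefix plus zero-padding, so the format is easy to sanity-check outside SQL (a hypothetical helper, not part of the original answer):

```python
def make_user_id(n, prefix="UID", width=8):
    """Mirror the computed column: prefix + zero-padded identity value."""
    return f"{prefix}{n:0{width}d}"

ids = [make_user_id(n) for n in (1, 2, 12345678)]
print(ids)
```

One difference worth knowing: `RIGHT('00000000' + CAST(ID AS VARCHAR(8)), 8)` keeps only the last 8 digits once the identity value grows past 8 digits, whereas Python's format simply widens the string — so the SQL version silently reuses prefixes beyond ID 99,999,999.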
I have the following requirements: 1. A Project can be owned by either a company or a person. What would be a good solution to this design problem? First approach would be to include references to both Person and Company tables in the Project table. But the problems with join would be very difficult to manage. Second approach, that is used in my workplace is to use EntityType and EntityId where EntityType can be (1=Company or 2=Person) and EntityId refers to the record id in the respective table. I can see that join can still be an issue with this approach as it will need to do a union after join with these tables. As Project is one of my key entities, I can see this being a very expensive routine. I an interested to find out if there is a better approach to this design issue?
From a pure SQL perspective I'd go with the following: ``` Projects - ProjectID - OwnerID -> references ProjectOwners.OwnerID ProjectOwners - OwnerID (identity and PK) Company -> inherits from ProjectOwners - CompanyID (PK and FK to ProjectOwners) People -> inherits from ProjectOwners - PersonID (PK and FK to ProjectOwners) ``` This design allows you to keep full RI. The downside is that there are more tables involved in any joins which will impact performance.
Design a new table `OwnerType`. This can have rows `individual` or `company`. Now include a reference from table `OwnerType` to `ProjectOwner` table & include a composite key in your project table. ![Sample Schema](https://i.stack.imgur.com/V7Fqj.png)
Database Normalization scenario
[ "", "sql", "database-design", "" ]
All the mobile numbers have gone into the format 447XXXXXXXX; I would like the numbers to be in the format 07XXXXXXXXXX. I was going to try the following query: ``` UPDATE stack SET Telephone= REPLACE(Telephone,'447','07') ``` However, if a number contains 447 elsewhere this will also be replaced, so I was thinking of using wildcards to replace just the first two 4's of every column. What would be the best way to update all 49,000 entries? I have created a backup just in case. ***UPDATE*** It seems the numbers were stored in the wrong format, so the following query was used to put them in the correct format: ``` ALTER TABLE stack ALTER COLUMN Telephone VARCHAR(15) ``` Now ALL the numbers are showing up in a weird format: 4.47944e+011
Change your data type to `varchar` (number data types don't store leading zeros). To do that you need to change it to `bigint` first, to make the conversion work: ``` ALTER TABLE your_table ALTER COLUMN Telephone BIGINT; ALTER TABLE your_table ALTER COLUMN Telephone VARCHAR(15); ``` After that you can change your data like this ``` UPDATE your_table SET Telephone = '07' + substring(Telephone, 4,99) WHERE charindex('447', Telephone) = 1 ``` ## [SQLFiddle demo](http://sqlfiddle.com/#!6/b39ef/1)
Maybe change the source (and avoid conversion requirements in future?): ``` =IF(LEFT(A1,2)="44",0&MID(A1,3,15),LEFT(A1,15)) ``` This results in a string.
Sorting out mobile numbers on SQL database
[ "", "sql", "excel", "sql-server-2012", "excel-2010", "" ]
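The accepted fix above (make the column text, then rewrite only values that *start* with 447) can be sketched in miniature with Python's sqlite3 instead of SQL Server; `SUBSTRING`/`CHARINDEX` become `substr`/`LIKE` in SQLite, and the sample numbers are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stack (Telephone TEXT)")
conn.executemany("INSERT INTO stack VALUES (?)",
                 [("447944123456",), ("01614471234",)])  # second one merely *contains* 447

# Only rewrite numbers that start with 447, so a 447 elsewhere is untouched
conn.execute("UPDATE stack SET Telephone = '07' || substr(Telephone, 4) "
             "WHERE Telephone LIKE '447%'")
phones = [r[0] for r in conn.execute("SELECT Telephone FROM stack ORDER BY rowid")]
```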
I am using the following stored procedure to select data from a table which works fine so far. Now I want to create two different cases without duplicating the whole query: If the input @selection = "active" then select everything that DOES NOT equal statusX = "Published" else select everything that DOES equal statusX = "Published". Can anyone tell me how I can achieve this here ? **My stored procedure (reduced columns):** ``` ALTER PROCEDURE [dbo].[FetchRequests] @selection nvarchar(20) AS BEGIN SET NOCOUNT ON; SELECT col1, col2, col3, col4, col5, statusX, logStatus, logID FROM LogRequests WHERE logStatus = 'active' ORDER BY logID desc FOR XML PATH('cols'), ELEMENTS, TYPE, ROOT('ranks') END ``` Many thanks for any help with this, Tim.
You can just use a conditional *WHERE* clause: ``` ALTER PROCEDURE [dbo].[FetchRequests] @selection nvarchar(20) AS BEGIN SET NOCOUNT ON; SELECT col1, col2, col3, col4, col5, statusX, logStatus, logID FROM LogRequests WHERE logStatus = 'active' AND ((statusX <> 'Published' AND @selection = 'active') OR (statusX = 'Published' AND @selection <> 'active')) ORDER BY logID desc FOR XML PATH('cols'), ELEMENTS, TYPE, ROOT('ranks') END ```
Add an `AND` clause to your `WHERE`: ``` AND (@selection = 'active') = (StatusX <> 'Published') ``` (I see it's a combination of the two lines proposed by the other posters. Well, matter of taste I guess :) )
SQL Server: how to use if else statement to select different data from same table
[ "", "sql", "sql-server", "if-statement", "" ]
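A runnable sketch of the accepted conditional `WHERE`, using Python's sqlite3 in place of SQL Server and invented sample rows; the stored-procedure parameter `@selection` becomes an ordinary bound parameter here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LogRequests (logID INTEGER, statusX TEXT, logStatus TEXT)")
conn.executemany("INSERT INTO LogRequests VALUES (?,?,?)", [
    (1, "Published", "active"),
    (2, "Draft", "active"),
    (3, "Published", "active"),
])

def fetch_requests(selection):
    # Mirrors the conditional WHERE: 'active' -> everything NOT published,
    # anything else -> only published rows.
    sql = ("SELECT logID FROM LogRequests WHERE logStatus = 'active' AND "
           "((statusX <> 'Published' AND ? = 'active') OR "
           " (statusX = 'Published' AND ? <> 'active')) ORDER BY logID DESC")
    return [r[0] for r in conn.execute(sql, (selection, selection))]

active_ids = fetch_requests("active")
published_ids = fetch_requests("archived")
```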
The select statement looks like this right now ![The select statement looks like this right now](https://i.stack.imgur.com/QUk1m.png) Here I got the output in 2 lines for the same stg_edi835_id; I want to select the results in a single line for that stg_edi835_id. Output should look like this ![enter image description here](https://i.stack.imgur.com/wLkNo.png) Can someone please help me with this? Thanks in advance.
``` SELECT STG_EDI835_PLB_ID, STG_EDI835_ID, ADJUSTMENTREASONCODE1, ADJUSTMENTIDENTIFIER1, SUM(NTVE_ADJUSTMENTAMOUNT1_T1+NTVE_ADJUSTMENTAMOUNT1_T2) AS ADJUSTMENTAMOUNT1, SUM(PTVE_ADJUSTMENTAMOUNT1_T1+PTVE_ADJUSTMENTAMOUNT1_T2) AS ADJUSTMENTAMOUNT2, ADJUSTMENTREASONCODE2, ADJUSTMENTIDENTIFIER2 FROM ( SELECT T1.STG_EDI835_PLB_ID , T2.STG_EDI835_ID, T1.ADJUSTMENTREASONCODE1, T1.ADJUSTMENTIDENTIFIER1, (CASE WHEN T1.ADJUSTMENTAMOUNT1 < 0 THEN T1.ADJUSTMENTAMOUNT1 ELSE 0 END) AS NTVE_ADJUSTMENTAMOUNT1_T1, (CASE WHEN T2.ADJUSTMENTAMOUNT1 < 0 THEN T2.ADJUSTMENTAMOUNT1 ELSE 0 END) AS NTVE_ADJUSTMENTAMOUNT1_T2, (CASE WHEN T1.ADJUSTMENTAMOUNT1 >= 0 THEN T1.ADJUSTMENTAMOUNT1 ELSE 0 END) AS PTVE_ADJUSTMENTAMOUNT1_T1, (CASE WHEN T2.ADJUSTMENTAMOUNT1 >= 0 THEN T2.ADJUSTMENTAMOUNT1 ELSE 0 END) AS PTVE_ADJUSTMENTAMOUNT1_T2, COALESCE(T2.ADJUSTMENTREASONCODE1, 'NULL') AS ADJUSTMENTREASONCODE2, COALESCE(T2.ADJUSTMENTIDENTIFIER1, NULL) AS ADJUSTMENTIDENTIFIER2 FROM TABLE1 AS T1 INNER JOIN TABLES T2 ON T2.STG_EDI835_ID = T1.STG_EDI835_ID AND T2.STG_EDI835_PLB_ID = T1.STG_EDI835_PLB_ID ) A GROUP BY STG_EDI835_PLB_ID, STG_EDI835_ID, ADJUSTMENTREASONCODE1, ADJUSTMENTIDENTIFIER1, ADJUSTMENTREASONCODE2, ADJUSTMENTIDENTIFIER2 ```
Your question is somewhat incomplete (you should show your desired output), however, here is a sample of what you could do: * get rid of all unique values you don't need. (column 1, containing ID's, etc.) * Use aggregate functions on the rest. Example: ``` Select --column 1 removed MAX(column2) as ID, MAX(column3) as RefID, --column 4 removed --column 5 removed --column 6 removed SUM(column7) as Ad1, --column 8 removed --column 9 removed SUM(column10) as Ad2 From table ```
How to select two rows into a single row output
[ "", "sql", "sql-server", "" ]
Let's say I have a `user` table and the records are like this: ``` ---------------------- | User ID | Username | ---------------------- |US1 | Andy | |US2 | Boston | |US3 | Charlie | |US4 | Donnie | |US5 | Elmo | |US6 | Frank | |US7 | Garry | |US8 | Henry | |US9 | Ignatius | |US10 | John | ``` What I need is ``` US10 | John ``` Then I do: ``` SELECT MAX (UserId) FROM User; ``` The result was ``` US9 | Ignatius ``` because it's ordered as a string literally, so US10 is not the max. What should I do to accomplish that result? Thanks in advance.
NOTE: keep in mind that `user` is a reserved word in SQL, so it should be wrapped in brackets: `[user]`. There are many options: 1) ``` select top 1 Userid, CAST(SUBSTRING(UserId, PATINDEX('%[0-9]%', UserId), LEN(UserId)) AS INT) as Maxid, username from [user] Order By Maxid desc ``` Output: ``` Userid Maxid username ============================= US10 10 John ``` 2) ``` SELECT TOP 1 * FROM [User] ORDER BY CAST(RIGHT(UserId,LEN(UserId)-2) AS INT) DESC ``` Output: ``` userid username ===================== US10 John ```
If you have fixed format for data like numbers are at last in string then you can use PATINDEX() function to get number from string like ``` SELECT TOP 1 * FROM [User] ORDER BY CAST(SUBSTRING(UserId, PATINDEX('%[0-9]%', UserId), LEN(UserId)) AS INT) DESC; ``` **If you have not fixed format for number in string and you want to sort data based on numbers only then check more details in below link** **[SQL Server 2008 - order by strings with number numerically](https://stackoverflow.com/questions/20240313/sql-server-2008-order-by-strings-with-number-numerically/20240619#20240619)**
Data Ordering & Select Max SQL Server
[ "", "sql", "sql-server", "" ]
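Both answers above strip the fixed `US` prefix and compare the remainder numerically instead of as text. A tiny sqlite3 (not SQL Server) sketch of the same idea, with `TOP 1` replaced by `LIMIT 1`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "User" (UserId TEXT, Username TEXT)')
conn.executemany('INSERT INTO "User" VALUES (?,?)',
                 [("US1", "Andy"), ("US9", "Ignatius"), ("US10", "John")])

# Strip the fixed 'US' prefix and compare numerically instead of lexically
row = conn.execute(
    'SELECT UserId, Username FROM "User" '
    "ORDER BY CAST(substr(UserId, 3) AS INTEGER) DESC LIMIT 1"
).fetchone()
```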
I would like to measure the time it take to execute a Stored Procedure in SQL Server. But instead of just measuring the entire stored procedure execution I would also like to measure the time each operation inside the stored procedure takes, so I could find bottlenecks. In c# to accomplish such a thing when testing code execution time I would save the date before execution starts and when execution ends I would print the TimeSpan object that is given to me by subtracting the start time from the current time (after execution). I was wondering how I could achieve such thing with SQL Server inside a stored procedure where I could print the time span I measure between operation within the stored procedure. Thanks
``` SET STATISTICS TIME ON; SET STATISTICS IO ON; GO EXECUTE <yourSP>; GO ```
What worked for me was the suggestion posted here: <https://www.codeproject.com/tips/1151457/a-simple-method-to-measure-execution-time-in-sql-s> ``` Declare @StartTime DateTime2 SET @StartTime = SysUTCDateTime() -- SQL Execution Print 'Time taken was ' + cast(DateDiff(millisecond, @StartTime, SysUTCDateTime()) as varchar) + 'ms' SET @StartTime = SysUTCDateTime() -- Second SQL Execution Print 'Time taken was ' + cast(DateDiff(millisecond, @StartTime, SysUTCDateTime()) as varchar) + 'ms' ```
SQL measure stored procedure execution time
[ "", "sql", "sql-server", "stored-procedures", "" ]
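`SET STATISTICS TIME` works inside SQL Server itself; the client-side equivalent of the C# stopwatch approach the question describes is simply to wrap each statement in a timer. A hypothetical Python sketch with sqlite3 standing in for the database (the measured times depend entirely on the machine):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

# Time the insert step on its own
start = time.perf_counter()
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])
insert_ms = (time.perf_counter() - start) * 1000.0

# Time the select step on its own, to spot which operation is the bottleneck
start = time.perf_counter()
total = conn.execute("SELECT SUM(n) FROM t").fetchone()[0]
select_ms = (time.perf_counter() - start) * 1000.0
```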
``` CREATE TABLE interview (uniqueID int identity(1,1), date datetime, recordtype int, amount numeric(18, 4)) INSERT INTO interview values('6/30/13', 1, 27.95) INSERT INTO interview values('5/20/13', 1, 21.85) INSERT INTO interview values('5/22/13', 2, 27.90) INSERT INTO interview values('12/11/12', 2, 23.95) INSERT INTO interview values('6/13/13', 3, 24.90) INSERT INTO interview values('6/30/13', 2, 27.95) INSERT INTO interview values('5/20/13', 2, 21.85) INSERT INTO interview values('5/22/13', 1, 27.90) INSERT INTO interview values('12/11/12',1, 23.95) INSERT INTO interview values('6/13/13', 3, 24.90) INSERT INTO interview values('6/30/13', 3, 27.95) INSERT INTO interview values('5/20/13', 3, 21.85) INSERT INTO interview values('5/22/13', 2, 27.90) INSERT INTO interview values('12/11/12', 1, 23.95) INSERT INTO interview values('6/13/13', 1, 24.90) ``` How to get the following result? What would the query look like? ![Query Result](https://i.stack.imgur.com/SzY0x.jpg) I was only able to get a partial to work, but my answer is not correct. I need to somehow join the queries. ``` select distinct date, count(RecordType)as Count_unique1 from interview where RecordType = '1' group by date select distinct date, count(RecordType)as Count_unique2 from interview where RecordType = '2' group by date select distinct date, count(RecordType)as Count_unique3 from interview where RecordType = '3' group by date ```
``` select date, sum(case when RecordType = '1' then 1 else 0 end) as Count_unique1, sum(case when RecordType = '2' then 1 else 0 end) as Count_unique2, sum(case when RecordType = '3' then 1 else 0 end) as Count_unique3 from interview group by date ```
Also in MSSQL you can use [PIVOT](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) ``` SELECT date, [1] AS Count_unique1, [2] AS Count_unique2, [3] AS Count_unique3 FROM (SELECT date,recordtype,amount FROM interview) p PIVOT ( COUNT (amount) FOR recordtype IN ([1], [2], [3]) ) AS pvt ORDER BY pvt.date; ``` `SQLFiddle demo`
SQL Joining Query How To
[ "", "sql", "sql-server", "" ]
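The accepted conditional-aggregation answer ports directly to most engines. A cut-down, runnable sqlite3 version with a subset of the question's rows (the column is renamed `d` here to avoid the reserved-looking `date`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interview (d TEXT, recordtype INTEGER, amount REAL)")
conn.executemany("INSERT INTO interview VALUES (?,?,?)", [
    ("2013-06-30", 1, 27.95), ("2013-06-30", 2, 27.95), ("2013-06-30", 3, 27.95),
    ("2013-05-20", 1, 21.85), ("2013-05-20", 2, 21.85), ("2013-05-20", 3, 21.85),
    ("2013-06-13", 3, 24.90), ("2013-06-13", 3, 24.90), ("2013-06-13", 1, 24.90),
])

# One output column per record type; each CASE counts only its own type
pivot = conn.execute("""
    SELECT d,
           SUM(CASE WHEN recordtype = 1 THEN 1 ELSE 0 END) AS c1,
           SUM(CASE WHEN recordtype = 2 THEN 1 ELSE 0 END) AS c2,
           SUM(CASE WHEN recordtype = 3 THEN 1 ELSE 0 END) AS c3
    FROM interview GROUP BY d ORDER BY d
""").fetchall()
```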
I can easily select the sum of one column using the `SUM()` function in MySQL like so: ``` SELECT SUM(`someColumn`) FROM `table` ``` Is it also possible to return the sum of a second column with the same query? Similar to this: ``` SELECT SUM (`someColumn`, `anotherColumn`) FROM `table` ``` Running such a query returns me the following access violation: ``` SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' `anotherColumn`) FROM `table`' at line 1 ```
Try with: ``` SELECT SUM(COALESCE(`someColumn`, 0)+ COALESCE(`anotherColumn`, 0)) as TotalSum FROM `table` ``` or if you want these separately: ``` SELECT SUM(`someColumn`) as Sum1, SUM(`anotherColumn`) as Sum2 FROM `table` ```
``` SELECT SUM(`someColumn`) + SUM(`anotherColumn`) as TotalSum FROM `table` ```
Selecting two columnar sums in MySQL?
[ "", "mysql", "sql", "" ]
I have three tables office, manager,staff. Below are the contents of these tables. **Office** ``` office_id office_location office_telephone office_fax 100 brisbane 01438263 789 101 newyork 01457899 978 102 chicago 01457989 789 ``` **Manager** ``` office_id manager_id manager_name manager_phone 100 200 wayne 9879664878 101 201 tom 9745997669 102 202 harry 9789979799 ``` **Staff** ``` manager_id staff_id salary 200 300 3000 201 301 4000 200 302 5000 200 303 7856 201 304 4000 202 305 7856 202 306 6000 ``` Now, I need a query to display the total number of staff and their manager at each office. Here is the sample output ``` office_id office_location manager_id count(staff_id) 100 brisbane 200 3 101 newyork 201 2 102 chicago 202 2 ``` Till now I have tried is displaying the manager\_id and staff working for them. ``` SELECT manager_id,count(staff_id) from staff group by manager_id; ``` But I am having trouble to display the office\_id as well as it's location. Can anyone help me please?
``` SELECT o.office_id, o.office_location, m.manager_id, count(*) staff_count from staff s inner join manager m on m.manager_id = s.manager_id inner join office o on o.office_id = m.office_id group by o.office_id, o.office_location, m.manager_id; ```
Inner query gets count and total salary per single manager first... THEN, work down the chain to get the rest of the details ``` SELECT s.manager_id, m.manager_name, m.manager_phone, o.office_location, o.office_telephone, o.office_fax, s.NumOfEmployees, s.AllSalary from ( select s1.manager_id, count(*) as NumOfEmployees, SUM( s1.salary ) as AllSalary from staff s1 group by s1.manager_id ) s join manager m ON s.manager_id = m.manager_id join office o ON m.office_id = o.office_id ```
Difficulty in sql select query
[ "", "sql", "oracle", "select", "" ]
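The accepted three-table join can be reproduced against the question's own sample data; here it is run with Python's sqlite3 instead of Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE office  (office_id INTEGER, office_location TEXT);
    CREATE TABLE manager (office_id INTEGER, manager_id INTEGER);
    CREATE TABLE staff   (manager_id INTEGER, staff_id INTEGER);
    INSERT INTO office  VALUES (100,'brisbane'),(101,'newyork'),(102,'chicago');
    INSERT INTO manager VALUES (100,200),(101,201),(102,202);
    INSERT INTO staff   VALUES (200,300),(201,301),(200,302),(200,303),
                               (201,304),(202,305),(202,306);
""")

# Join staff -> manager -> office, then count staff per office/manager pair
result = conn.execute("""
    SELECT o.office_id, o.office_location, m.manager_id, COUNT(*)
    FROM staff s
    JOIN manager m ON m.manager_id = s.manager_id
    JOIN office  o ON o.office_id  = m.office_id
    GROUP BY o.office_id, o.office_location, m.manager_id
    ORDER BY o.office_id
""").fetchall()
```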
I have a table that looks like this. ``` CREATE TABLE tshirt ( id serial, sku character varying(255) NOT NULL ); ``` I want to delete only one row with my wanted sku, but `DELETE FROM tshirt WHERE sku='%s';` deletes all entries with that sku. How can I do this ?
Not the best way, but you could do this: ``` DELETE FROM tshirt WHERE id IN ( SELECT id FROM tshirt WHERE sku='%s' LIMIT 1 ) ```
This SQL Server DELETE TOP example would delete the first record from the employees table where the last\_name is 'Anderson'. If there are other records in the employees table that have a last\_name of 'Anderson', they will not be deleted by the DELETE TOP statement. ``` DELETE TOP(1) FROM employees WHERE last_name = 'Anderson'; ```
How to delete only one row if several found?
[ "", "sql", "postgresql", "" ]
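The accepted workaround (pick one matching primary key in a subquery, then delete by key) in a runnable sqlite3 sketch with invented SKUs; like PostgreSQL, stock SQLite has no `DELETE ... LIMIT`, which is why the key-based detour is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tshirt (id INTEGER PRIMARY KEY, sku TEXT)")
conn.executemany("INSERT INTO tshirt (sku) VALUES (?)",
                 [("ABC",), ("ABC",), ("XYZ",)])

# Delete only one of the duplicate rows by targeting a single primary key
conn.execute("DELETE FROM tshirt WHERE id IN "
             "(SELECT id FROM tshirt WHERE sku = ? LIMIT 1)", ("ABC",))
remaining = conn.execute(
    "SELECT COUNT(*) FROM tshirt WHERE sku = 'ABC'").fetchone()[0]
```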
Given the following schema, where members can share ownership of posts: ``` member: id, name, ... post_owner: member_id, post_id post: id, created, ... ``` What's the most efficient query to find users who haven't made a post in the last 90 days. I thought of this, but it seems inefficient: ``` select * from member where member_id not in ( select member_id from post p join post_owner po on po.post_id = p.id where created > subdate(now(), 90)) ``` Assume there is an index on `post(created)` and all foreign keys.
``` SELECT m.id, m.name FROM member m LEFT JOIN post_owner po ON m.id = po.member_id LEFT JOIN post p ON po.post_id = p.id AND p.created >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY) WHERE p.id IS NULL ``` Hint: compare `p.created` to a constant so as to make use of indexes.
What about this? ``` SELECT m.id, m.name FROM post p INNER JOIN post_owner po ON po.post_id = p.id RIGHT JOIN member m ON m.id = po.member_id WHERE DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY) <= created ```
Efficient way to find missing rows across multiple joins
[ "", "mysql", "sql", "select", "join", "left-join", "" ]
I have a database A. I have taken a backup of database A called A.bak. I created a new database B. Now, I right click and Restore B from A.bak. In the Restore Dialog, I checked overwrite existing database and change the LogicalFileName from `C:\Program Files\Microsoft SQL Server\MSSQL11.SQLSERVER2012\MSSQL\DATA\A.mdf` to `C:\Program Files\Microsoft SQL Server\MSSQL11.SQLSERVER2012\MSSQL\DATA\B.mdf` and did the same with ldf file. But I am getting `Exclusive access could not be obtained because the database is in use`. Also tried, ``` ALTER DATABASE [B] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; ``` Also sp\_who2, there was no existing connection of [B]
A cause for the attempt to get exclusive access comes from the options page of the restore dialog in SQL Server 2012 Management Studio. It turns on the 'tail-log backup' and 'leave source database in the restoring state' options for the SOURCE database. So, it will try to gain exclusive access to the source database (in this case A) in order to perform this action. If you turn off the tail-log option, you will find that the operation works much more smoothly.
The answer was very simple. Run this command to grab the LogicalNames: ``` RESTORE FILELISTONLY FROM DISK = 'C:\Users\MyUSer\Desktop\A.bak' ``` Then just put the LogicalName in below: ``` RESTORE DATABASE B FROM DISK = 'C:\Users\MyUSer\Desktop\A.bak' WITH MOVE 'LogicalName' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLSERVER2012\MSSQL\Data\B.mdf', MOVE 'LogicalName_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLSERVER2012\MSSQL\Data\B.ldf' GO ``` Note you might need to change the path. Helpful links: [How to restore to a different database in sql server?](https://stackoverflow.com/questions/6267273/how-to-restore-to-a-different-database-in-sql-server) <http://technet.microsoft.com/en-us/library/ms186390.aspx>
Backup/Restore from different database causing Restore failed exclusive access could not be obtained
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I'm trying to get conversations from my private messages table with a single elegant SQL query. Here's a simplified version of my table: ![Sample table](https://i.stack.imgur.com/iIi6z.png) Let's say I want to retrieve all James's conversations. The lines highlighted in green are the ones I want him to have in his inbox. In other words, I want to select the latest messages james has received, grouped by sender. I've tried the following query : ``` SELECT * FROM "messages" WHERE To = "James" ORDER BY Id DESC GROUP BY `From` ``` But MySQL returns me these rows : ![Sample table](https://i.stack.imgur.com/uUV7N.png) So how could I fix this request ? Thanks ! **EDIT**: I Shouldn't have used "From" as a field name, I won't change it now to avoid breaking the answers but sorry about that.
``` SELECT M.Id, M.`From`, M.`To`, M.message FROM messages M INNER JOIN (SELECT `from`, max(Id) as maxId FROM messages WHERE `to` = "James" GROUP BY `from`)T ON M.Id = T.maxId ``` You say you want messages that 'James' has received, but you compare `from = 'James'`. Is that a mistake? Edit: @Lotharyx wanted to know if there's a way to do it without a subquery, so here's one way; see the second query in this [sqlFiddle](http://sqlfiddle.com/#!2/7ca076/5/1) ``` SELECT IF(@prevFrom IS NULL OR @prevFrom != M.`From`,@row:=1,@row:=@row +1) as row, @prevFrom:=M.`From`, M.id, M.`From`, M.`To`, M.message FROM messages M WHERE `to` = 'James' HAVING row = 1 ORDER BY M.`From`, M.Id DESC; ```
From is a reserved word, enclose it with backticks. <http://dev.mysql.com/doc/refman/5.5/en/reserved-words.html>
SQL query to retrieve conversations from messages?
[ "", "mysql", "sql", "" ]
I have three tables **customer** ``` id | name 1 | john ``` **orders** ``` id | customer_id | date 1 | 1 | 2013-01-01 2 | 1 | 2013-02-01 3 | 2 | 2013-03-01 ``` **order_details** ``` id | order_id | qty | cost 1 | 1 | 2 | 10 2 | 1 | 5 | 10 3 | 2 | 2 | 10 4 | 2 | 2 | 15 5 | 3 | 3 | 15 6 | 3 | 3 | 15 ``` I need to select data so I can get, for each order_id, the summary of the order; sample output below. I will query the database with a specific customer id **output** ``` date | amount | qty | order_id 2013-01-01 | 70 | 7 | 1 2013-02-01 | 50 | 4 | 2 ``` This is what I tried ``` SELECT orders.id, orders.date, SUM(order_details.qty * order_details.cost) AS amount, SUM(order_details.qty) AS qty FROM orders LEFT OUTER JOIN order_details ON order_details.order_id=orders.id AND orders.customer_id = 1 GROUP BY orders.date ``` but this returns the same rows for all customers, only the qty and cost don't have values
Maybe ``` SELECT orders.id, orders.date, SUM(order_details.qty * order_details.cost) AS amount, SUM(order_details.qty) AS qty FROM orders LEFT JOIN order_details ON order_details.order_id=orders.id AND orders.customer_id = 1 GROUP BY orders.date HAVING amount is not null AND qty is not null ``` [**SQL Fiddle**](http://www.sqlfiddle.com/#!2/0da8a7/10)
**NOTE:** *In the following query, it is assumed that the dates are stored in the database as a string in the format specified in the OP. If they are actually stored as some type of date with time then you'll want to modify this query such that the time is truncated from the date so the date represents the whole day. You can use the* [date](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date) *or* [date\_format](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-format) *functions. But then you'll need to make sure that you modify the query appropriately so the group by and select clauses still work. **I added this modification as comments inside the query**.* ``` select o.date -- or date(o.date) as date , sum(odtc.total_cost) as amount , sum(odtc.qty) as qty , o.order_id from orders o inner join ( select od.id , od.order_id , od.qty , od.qty * od.cost as total_cost from order_details od inner join orders _o on _o.id = od.order_id where _o.customer_id = :customer_id group by od.id , od.order_id , od.qty , od.cost ) odtc on odtc.order_id = o.id where o.customer_id = :customer_id group by o.date -- or date(o.date) , o.order_id ; ```
Select and summarize data from three tables
[ "", "mysql", "sql", "" ]
I have 3 tables: `Entry`, `User`, `Comment`. I am showing the comment count and the user's (sender's) full name for each entry via an SQL view. But when I am using a `where` clause it's giving an error. **My SQL View is like:** ``` CREATE VIEW [dbo].Entry_View AS SELECT E.Id AS [Id], Convert(varchar(10), E.Date, 104) AS [Date], E.Subject AS [Subject], E.Content AS [Content], E.Faculty AS [Faculty], E.Category AS [Category], (U.Firstname + ' ' + U.Lastname) AS [User], E.Department AS [Department], E.ViewCount AS [View], E.SupportCount AS [Support], Count(C.Entry_Id) AS [Comment] FROM (Entry E INNER JOIN User U ON U.Id = E.User_Id) LEFT JOIN Comment C on C.Entry_Id=E.Id GROUP BY E.Id, E.Date, E.Subject, E.Content, E.Faculty, E.Category, (U.Firstname + ' ' + U.Lastname), E.Department, E.ViewCount, E.SupportCount ``` **When I am calling:** `SELECT * FROM Entry_View WHERE E.Department = 'Administration'` **I am getting an error:** `The multi-part identifier "E.Department" could not be bound.` I couldn't solve this problem and need help. Thanks.
You select E.Department AS Department, so: ``` SELECT * FROM Entry_View WHERE Department = 'Administration' ```
The aliases used in the view definition are not available when queryting the view. So this: ``` SELECT * FROM Entry_View WHERE E.Department = 'Administration' ``` has to become this: ``` SELECT * FROM Entry_View E WHERE E.Department = 'Administration' ``` or this: ``` SELECT * FROM Entry_View WHERE Department = 'Administration' ```
Using SQL VIEW with WHERE Clause (ERROR)
[ "", "mysql", "sql", "" ]
My goal is to get Users and the ONE latest entry each user has created. I've tried to figure it out on Google but I failed, so you're my last chance. One of the things I tried: ``` DB::select('SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at, e.id as e_id, e.title FROM users u LEFT JOIN ( SELECT e.* FROM entry e LIMIT 1 ) entry AS e ON e.user_id = u.id ORDER BY u.user_rating DESC LIMIT 10 ``` I'm using MySQL **EDIT:** with this it doesn't give me an error, but it doesn't take any entry from the join: ``` DB::select('SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at, e.id as e_id, e.title FROM users u LEFT JOIN ( SELECT e.* FROM entry e ORDER BY e.id DESC LIMIT 1 ) AS e ON e.user_id = u.id ORDER BY u.user_rating DESC LIMIT 10'); ``` The result: ``` object(stdClass)#437 (7) { ["id"]=> int(5002) ["username"]=> string(6) "Nelosh" ["user_rating"]=> int(77763) ["avatar"]=> string(3) "its" ["created_at"]=> string(19) "2013-12-16 19:20:23" ["e_id"]=> NULL ["title"]=> NULL } ```
Try this: ``` SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at, e.id AS e_id, e.title FROM users u LEFT JOIN (SELECT * FROM (SELECT * FROM entry e ORDER BY e.user_id, e.id DESC) AS e GROUP BY e.user_id) AS e ON e.user_id = u.id ORDER BY u.user_rating DESC LIMIT 10; ``` **OR** ``` SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at, e.id AS e_id, e.title FROM users u LEFT JOIN (SELECT * FROM entry e ORDER BY e.user_id, e.id DESC) AS e ON e.user_id = u.id GROUP BY u.id ORDER BY u.user_rating DESC LIMIT 10; ``` **EDIT:::** ``` SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at, e.id AS e_id, e.title FROM (SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at FROM users u ORDER BY u.user_rating DESC LIMIT 10) AS u LEFT JOIN (SELECT e.id, e.user_id, e.title FROM entry e ORDER BY e.user_id, e.id DESC) AS e ON e.user_id = u.id GROUP BY u.id; ```
Use this: ``` DB::select('SELECT u.id, u.username, u.user_rating, u.avatar, u.created_at, e.id as e_id, e.title FROM users u LEFT JOIN ( SELECT e.* FROM entry e ORDER BY e.created_date DESC -- add this condition also LIMIT 1 ) AS e ON e.user_id = u.id ORDER BY u.user_rating DESC LIMIT 10 ```
How to select 1 row when joining table
[ "", "mysql", "sql", "select", "group-by", "sql-order-by", "" ]
``` +----+------+ | id | name | +----+------+ | 1 | Foo1 | | 2 | Foo2 | | 3 | Foo3 | +----+------+ ``` "id" is the primary key here. My query: ``` SELECT id FROM tablename where name='Foo1'; ``` MySQL shows only the column name but no values.
Assuming cases match, there must be some leading and/or trailing whitespace in the data that needs to be trimmed in the query: ``` SELECT id FROM `table` WHERE TRIM(name)='Foo1'; ``` Otherwise, use UPPER on both sides of the = to make it work.
try ``` select id from `table` where name = 'Foo1'; ``` TABLE is a reserved word, to use it as a table name enclose it in backticks. Your original query is throwing an error, thus the empty result.
How to fetch primarykey from table based on a condition?
[ "", "mysql", "sql", "primary-key", "" ]
I am new in learning stored procedure.Now I am trying to make a procedure which will generate a random number and store it in a variable and then insert the value in another table. ``` create procedure getRand() begin delcare rc varchar(255); SELECT CHAR(ROUND(65 + FLOOR( RAND() * 57 ))) into rc; insert into Rand_loader (snum,rnum) values (rc,rc); end// ``` But when I tried to run this on mysql then it gives this error ``` ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'rc varchar(255); SELECT CHAR(ROUND(65 + FLOOR( RAND() * 57 ))) into rc; insert' at line 3 ``` I am unable to understand this error.Ill appreciate your help Please help me to solve this
You just need to replace the misspelled `delcare` with `DECLARE`, and always try to use capital letters while writing SQL commands.
Try below code. It works for me ``` DELIMITER $$ CREATE /*[DEFINER = { user | CURRENT_USER }]*/ PROCEDURE `yourdatabasename`.`getRand`() /*LANGUAGE SQL | [NOT] DETERMINISTIC | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA } | SQL SECURITY { DEFINER | INVOKER } | COMMENT 'string'*/ BEGIN DECLARE rc VARCHAR(255); SELECT CHAR(ROUND(65 + FLOOR( RAND() * 57 ))) INTO rc; INSERT INTO Rand_loader (snum,rnum) VALUES (rc,rc); END$$ DELIMITER ; ``` You can download `SQLyog` from here ``` https://code.google.com/p/sqlyog/downloads/list ``` to create tables and stored procedure with visual tools
mysql stored procedure error in creation
[ "", "mysql", "sql", "stored-procedures", "" ]
I have to create a query that checks whether a column is not null and joins using it, but if it's null I must join on another column. If it were possible, it would look something like this: ``` SELECT A FROM DBTEST IF ( NOT B = NULL) INNER JOIN DBTEST2.B = DBTEST.B ELSE INNER JOIN DBTEST2.C = DBTEST.B ``` Any idea?
``` SELECT a FROM dbtest JOIN dbtest2 ON dbtest.b = dbtest2.b UNION ALL SELECT a FROM dbtest JOIN dbtest2 ON dbtest.b = dbtest2.c WHERE dbtest2.b IS NULL ``` This will probably perform better than using a case expression inside the join condition, as the optimiser will be able to choose appropriate indexes if they exist, or at least to choose a join method based on the table structures.
This should work ``` SELECT A FROM DBTEST Inner join DBTEST2 on DBTEST.B = case when DBTEST2.B is null then DBTEST2.C else DBTEST2.B end ```
Conditional SQL Join
[ "", "sql", "sql-server", "join", "conditional-statements", "" ]
I have two (2) tables, **`users`** and **`warnings`**. In **`users`**, I have three (3) columns: `uid`, `rank`, and `language`. In **`warnings`**, I also have three (3) columns: `id`, `warnings`, and `warn_active`. So my question is: How can I display all results in **phpmyadmin** where: * `rank` = '2' * `language` = 'en' * `warn_active` = 'yes' In both tables `uid`/`id` stands for the user's unique ID. Thanks in advance for your help!
Try this ``` Select u.*,w.* from users u inner join warnings w on u.uid = w.id where u.rank=2 and u.language='en' and w.warn_active='yes' ```
What kind of relationship do you have between the two tables? If you have many-to-many, then create a third table as follows ``` create table user_warnings(uid integer REFERENCES users(uid),id integer REFERENCES warnings(id)); ``` Insert into user_warnings the pairs of ids from the two tables that are related to each other, then fire the below query ``` Select * from users,warnings,user_warnings where users.uid=user_warnings.uid and warnings.id=user_warnings.id and users.rank=2 and users.language='en' and warnings.warn_active='yes'; ``` Let me know if you encounter any error.
Retrieving data from two tables.
[ "", "sql", "select", "" ]
Currently my User model has the following code: ``` has_many :notifications has_many :friendships has_many :friends, :through => :friendships, :conditions => { status: 'accepted' } has_many :requested_friends, :through => :friendships, :source => :friend, :conditions => { status: 'requested' } has_many :pending_friends, :through => :friendships, :source => :friend, :conditions => { status: 'pending' } ``` And my Friendship model is as follows: ``` belongs_to :user belongs_to :friend, :class_name => "User" def self.request(user, friend) unless user == friend or Friendship.exists?(user_id: user, friend_id: friend) transaction do Friendship.create(:user => user, :friend => friend, :status => 'pending') Friendship.create(:user => friend, :friend => user, :status => 'requested') end else return "failed" end end def self.accept(user, friend) unless user == friend or Friendship.exists?(user_id: user, friend_id: friend) transaction do accepted_at = Time.now accept_one_side(user, friend, accepted_at) accept_one_side(friend, user, accepted_at) end else return "failed" end end def self.accept_one_side(user, friend, accepted_at) request = find_by_user_id_and_friend_id(user, friend) request.status = 'accepted' request.accepted_at = accepted_at request.save! end ``` When I try to run my cucumber tests, however, I am getting this error: ``` SQLite3::SQLException: no such column: users.status: SELECT "users".* FROM "users" INNER JOIN "friendships" ON "users"."id" = "friendships"."friend_id" WHERE "users"."status" = 'accepted' AND "friendships"."user_id" = ? (ActionView::Template::Error) ``` I think this means that, for example for pending_friends, it is trying to include only users which have the attribute status = "pending", when it should actually be including users who belong to friendships which have the attribute status = "pending". Is this right? How would I go about fixing this?
I have updated to the following and this works: ``` has_many :notifications has_many :friendships has_many :accepted_friendships, :class_name => "Friendship", :conditions => {status: 'accepted'} has_many :requested_friendships, :class_name => "Friendship", :conditions => {status: 'requested'} has_many :pending_friendships, :class_name => "Friendship", :conditions => {status: 'pending'} has_many :friends, :through => :accepted_friendships has_many :requested_friends, :through => :requested_friendships, :source => :friend has_many :pending_friends, :through => :pending_friendships, :source => :friend ``` If anyone has a different approach without having to create accepted_friendships, requested_friendships, and pending_friendships, however, I would love to hear it!
`status` is a column for `friendships` table. So when you are writing the code, then mention the table name or else it will take the table of the Current model. ``` has_many :friends, -> { where "friendships.status = 'accepted'" }, :through => :friendships ```
Friendship has_many through model with multiple status'
[ "", "sql", "ruby-on-rails", "has-many-through", "" ]
I'm trying to prevent mysql from casting a string to an int in a where clause. The following query returns the correct row for the order `SELECT delivery_name FROM orders WHERE orders_id = '985225'` The following query also returns the same row as it is casting my string to an int, but I'd like to prevent it from returning the row at all (ie, not cast the string given to an int and effectively say, this cannot accurately be cast to an int so don't). `SELECT delivery_name FROM orders WHERE orders_id = '985225a'` I hope that makes sense, any help would be greatly appreciated!
You should be able to avoid this scenario altogether, but if you wanted a workaround you could concat a character to the front of the string, something like: ``` SELECT delivery_name FROM orders WHERE CONCAT('a',orders_id) = CONCAT('a','985225a') ```
Step 1 - remove the single quotes from the sql string. In other words, change this: ``` WHERE orders_id = '985225' ``` to this: ``` WHERE orders_id = 985225 ``` Step 2 - as per @Marc B's comment, ensure you actually have an integer. Step 3 - Use a query parameter. MySql accepts them as far as I know.
Prevent mysql casting string to int in where clause
[ "", "mysql", "sql", "casting", "" ]
I have an `INSERT INTO ... SELECT` statement that copies data from one table to another. The trouble is, the AutoNumber column in the second table continued from the last number in the first one: the first table's count is 2000, so the second table started from 2001. Using an Access database, how do I reset this value?
You can execute an Access DDL statement from ADO to reset the autonumber seed value. Here is an example Immediate window session: ``` strDdl = "ALTER TABLE Dummy ALTER COLUMN ID COUNTER(1, 1);" CurrentProject.Connection.Execute strDdl ``` The statement must be executed from ADO. It will fail if you try it with DAO (such as `CurrentDb.Execute strDdl`), or from the Access query designer. The example succeeded because `CurrentProject.Connection` is an ADO object. The two values following `COUNTER` are *seed* and *increment*. So if I wanted the autonumber to start from 1000 and increment by 2, I could use `COUNTER(1000, 2)` If the table contains data, the seed value must be greater than the maximum stored value. If the table is empty when you execute the statement, that will not be an issue.
Looks like your only option is to move the data into a new table. The following link has some information about how to do it based on your version of access. Note: be careful if you have relationships to other tables as those would need to be recreated. <http://support.microsoft.com/kb/812718>
How to reset an Access table's AutoNumber field? (it didn't start from 1)
[ "", "sql", "ms-access", "sql-insert", "autonumber", "" ]
What is the correct method of using BETWEEN? Treehouse and my teacher have different answers for this question: select all movies from the years 1999 to 2004. Treehouse's answer: ``` SELECT * FROM movies WHERE year BETWEEN 1999 AND 2004; ``` Teacher's answer: ``` SELECT * FROM movies WHERE year >= 1999 AND year <= 2004; ``` Which method is correct?
Both are technically correct. However, there are reasons to avoid using `between` as your teacher suggests. The simplest is that in English, the word is ambiguous and can include the end points or not. In fact, if you say that a point on a line is between two other points, the meaning is generally that the point does not coincide either end point. This is a minor issue for integers. It becomes much more important with other data types. A particular issue is with date times. If you have a datetime field in the database, then the following does not work the way you expect: ``` where datetimefield between '2013-01-01' and '2013-01-02' ``` This will return all datetime values from 2013-01-01. Plus, it will return a datetime value that is at exactly midnight between the two dates. It will not return any other value from 2013-01-02. I am heartened that your teacher recognizes the shortcomings of `between`. I hope s/he also explains why the explicit comparison method is better. EDIT: By the way, something similar happens for strings as well. So: ``` where charfield between 'a' and 'b' ``` will return all values that start with 'a' and exactly 'b'. But not 'ba' or 'b1'. One way to write it is: ``` where charfield >= 'a' and charfield < 'c' ``` The point is. `between` is a perfectly valid SQL construct. It works "correctly" for all data types. However, what is correct for SQL may not be intuitive for most people. Explicit comparisons come closer to avoiding this problem.
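A minimal sketch of the endpoint pitfall described above, run through SQLite from Python rather than the SQL Server the answer discusses (the table and timestamps are invented). With TEXT timestamps, SQLite's string comparison drops even the midnight row; SQL Server's datetime would keep midnight but still lose the rest of the last day. Either way, the half-open range is the safe form:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (ts TEXT)")
con.executemany("INSERT INTO events VALUES (?)", [
    ("2013-01-01 09:30:00",),
    # midnight: SQL Server's datetime BETWEEN would match this row,
    # SQLite's text comparison does not; both miss the rest of Jan 2
    ("2013-01-02 00:00:00",),
    ("2013-01-02 14:00:00",),
])
between_rows = con.execute(
    "SELECT ts FROM events WHERE ts BETWEEN '2013-01-01' AND '2013-01-02'"
).fetchall()
# The explicit half-open range picks up the whole of 2013-01-02:
range_rows = con.execute(
    "SELECT ts FROM events WHERE ts >= '2013-01-01' AND ts < '2013-01-03'"
).fetchall()
```

Here `between_rows` keeps only the morning of Jan 1, while `range_rows` returns all three rows.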
This is an answer to the comment, "Are they the only 2 methods? Or is their alot more? " Here are a few other ways to do it. ``` where year in (1999, 2000, 2001, 2002, 2003, 2004) where year > 1998 and year < 2005 ``` as suggested in Gordon's answer ``` where date >= {d '1999-01-01'} and date < {d '2005-01-01'} ``` All the answers are logically equivalent.
Correct method of BETWEEN in SQL
[ "", "sql", "between", "" ]
I store all the shipping info for a customer in MyTable - CustomerId, ShippingID, ItemID, ShipDate. A shipment/shippingID can contain one or more Items/ItemIDs. A shipping id can have only one shipdate. Here is a sample of the table - ``` CustomerID, ShippingID, ItemID, ShipDate C1, A1, I200, today C1, A1, I88, today C1, A2, I7, tomorrow C1, B1, I955, yesterday C2, B2.....et cetra ``` For a customerID, I want to display the distinct shippingIDs, number of items in a shippingID, ShipDate. Expected output - ``` C1, A1, 2, today C1, A2, 1, tomorrow C1, B1, 1, yesterday ...etc ``` I tried - ``` select distinct shippingid, count(*) over() itemid, orderdate from mytable where customerID = 'C1' ``` Output - ``` C1, A1, 4, today C1, A2, 4, tomorrow ``` Problem is that it counts all the items for C1. I want to count only the items in a shippingID of a customer. How do I do this ? **EDIT - Right now, I don't need it for all customers. Just for one. So group by is not necessary.** Thanks.
``` SELECT shippingid, COUNT(*) Items, ShipDate FROM mytable WHERE customerID = 'C1' GROUP BY shippingid, ShipDate ```
Why not use a `Group By`? It's the easiest way... [SqlFiddle](http://sqlfiddle.com/#!3/22b49/3/0) ``` SELECT CustomerID, ShippingID, count(1) FROM mytable GROUP BY CustomerID, ShippingID ```
COUNT with DISTINCT
[ "", "sql", "sql-server", "" ]
I want to do the following query, how can I implement it? ``` SELECT * FROM table WHERE routeNum LIKE 'N10%' -->if no rows return, WHERE clause change to routeName LIKE '&something%' ```
I think this will work for you: ``` WITH CTE AS ( SELECT 2 AS 'INDEX', * FROM table WHERE routeNum LIKE 'N10%' UNION ALL SELECT 1 AS 'INDEX', * FROM table WHERE routeNum LIKE '&something%' ) SELECT * FROM CTE WHERE CTE.INDEX = (SELECT MAX(INDEX) FROM CTE) ``` As you have to choose between two result sets, you will need two queries, which will return two different sets and then, based on the 'index' which corresponds to the query that has run, you choose how to display your results. Here is a [SQLFiddle](http://sqlfiddle.com/#!3/63895a/9) demo.
A possible solution ``` WITH cte AS ( SELECT * FROM routes WHERE routeNum LIKE 'N10%' ) SELECT * FROM cte UNION ALL SELECT * FROM routes WHERE routeNum LIKE 'something else%' AND NOT EXISTS ( SELECT * FROM cte ) ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!3/508c3/4)** demo
SQL if statement within WHERE clause
[ "", "sql", "sql-server", "where-clause", "" ]
In my websites authors can upload their books. I display statistics on each page and one of the stats I display is the average rating of the author's books. I take the sum of all the ratings, and divide it by the number of books: ``` select sum(BookRate)/count(BookID) as AvrageRate from Books where Author = "Some Author" ``` The problems appear when the user has not yet uploaded any books and an obvious error message is returned: ``` Divide by zero error encountered. ``` Is there a way to avoid this situation within the SQL query or do I have to first check the book count with one query, and then run another query for the stats? Thanks world.
You can use `CASE` to cater for the sum of zero: ``` select case sum (BookID) when 0 then 0 else sum (BookRate)/sum (BookID) end as AvrageRate from Books where Author="Some Author" ``` BTW, don't you want to divide by the count, rather than sum to get the average?: ``` select case count(BookID) when 0 then 0 else sum (BookRate)/count(BookID) end as AvrageRate from Books where Author="Some Author" ```
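A quick sketch of the `CASE` guard above, using SQLite via Python with an invented `Books` table (not the asker's real schema). Note that SQLite happens to return NULL on integer division by zero instead of raising, but the guard is the portable fix either way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Books (Author TEXT, BookID INTEGER, BookRate INTEGER)")
con.executemany("INSERT INTO Books VALUES (?, ?, ?)",
                [("A", 1, 4), ("A", 2, 2)])

guarded = ("SELECT CASE COUNT(BookID) WHEN 0 THEN 0 "
           "ELSE SUM(BookRate) / COUNT(BookID) END "
           "FROM Books WHERE Author = ?")
# Author with books: (4 + 2) / 2 = 3
avg_known = con.execute(guarded, ("A",)).fetchone()[0]
# Author with no books: COUNT is 0, so the CASE short-circuits to 0
avg_missing = con.execute(guarded, ("ZZZ",)).fetchone()[0]
```

The aggregate query still returns one row when no books match, which is exactly why the `WHEN 0` branch fires instead of a division error.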
You can use NULLIF() if you are using Oracle ``` NULLIF( 0, 0 ) ``` In SQL Server also you can try this: ``` Select dividend / nullif(divisor, 0) ``` So in your case it could be like this: ``` select sum (BookRate)/nullif(sum (BookID),0) as AvrageRate from Books where Author="Some Author" ```
How to avoid dividing by zero in SQL query?
[ "", "sql", "" ]
I have data in the following form: ``` Country City1 City2 City3 AverageTemperature UK London Glasgow Manchester 15 Italy Rome Naples Venice 25 Germany Munich Berlin 20 USA New York 25 ``` With either SQL or a SAS data step, I would like to get the data in this form: ``` Country City AverageTemperature UK London 15 UK Glasgow 15 UK Manchester 15 Italy Rome 25 Italy Naples 25 Italy Venice 25 Germany Munich 20 Germany Berlin 20 USA New York 25 ``` So that I have the data across individual rows. I have thought about doing this by looping over the three city columns where the city is not blank, but I'm not sure how to confidently do this - is it easily done with either SQL or SAS? Just a pointer would be greatly appreciated.
``` SELECT COUNTRY, City1, AverageTemperature FROM Table_Name UNION ALL SELECT COUNTRY, City2, AverageTemperature FROM Table_Name UNION ALL SELECT COUNTRY, City3, AverageTemperature FROM Table_Name ``` To get rows where City column is not null you can do something like this ``` SELECT COUNTRY, City1, AverageTemperature FROM Table_Name WHERE City1 IS NOT NULL UNION ALL SELECT COUNTRY, City2, AverageTemperature FROM Table_Name WHERE City2 IS NOT NULL UNION ALL SELECT COUNTRY, City3, AverageTemperature FROM Table_Name WHERE City3 IS NOT NULL ```
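The `UNION ALL` unpivot above can be checked end to end in SQLite from Python; the table and temperatures below are invented stand-ins for the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Country TEXT, City1 TEXT, City2 TEXT, "
            "City3 TEXT, Temp INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?, ?)", [
    ("UK", "London", "Glasgow", "Manchester", 15),
    ("Germany", "Munich", "Berlin", None, 20),
    ("USA", "New York", None, None, 25),
])
# One SELECT per city column, filtering out the NULL slots:
rows = con.execute(
    "SELECT Country, City1 AS City, Temp FROM t WHERE City1 IS NOT NULL "
    "UNION ALL "
    "SELECT Country, City2, Temp FROM t WHERE City2 IS NOT NULL "
    "UNION ALL "
    "SELECT Country, City3, Temp FROM t WHERE City3 IS NOT NULL"
).fetchall()
```

Three countries with 3, 2, and 1 populated city columns yield six rows, one city per row, with the temperature repeated per country.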
Simple in a SAS Data Step ``` data out; set in; array cities[3] city1-city3; format city $12.; do i=1 to 3; if compress(cities[i]) ^= "" then do; city = cities[i]; output; end; end; keep country city AverageTemperature; run; ```
Split data over multiple columns into rows (in SQL or SAS)
[ "", "sql", "sas", "proc-sql", "" ]
*How can I create a table with an existing table's structure, without iterating row by row like this, in Oracle?* Thanks in advance. ``` CREATE TABLE new_table AS (SELECT * FROM old_table WHERE 1=2); ```
If you are worried about iterating through the table: ``` CREATE TABLE new_table AS (SELECT * FROM (select * from old_table where rownum = 1) t WHERE 1=2 ); ```
I have already read about this.. Hope it gives a Detailed explanation to you.. *What we ended up doing in this clients case was to replace the “WHERE 1=2” with a clause that equated the primary key of the table with an impossible value for that key, in this case the ID was being passed in as a GUID (a hexadecimal value) so we use a “WHERE KEY=HEX(00)” and got a low cost unique index lookup instead of a costly full table scan.* <http://www.dba-oracle.com/oracle_tips_ault_where_1_equals_2_parallel_.htm> Thanks to `Burleson Consulting`
How can I create a SQL table from another table without copying any values from the old table
[ "", "sql", "oracle", "" ]
I have a table with about 50 million records. The table structure is something like below, and both **callerid** and **call\_start** fields are indexed. > id -- callerid -- call\_start I want to select all records whose call\_start is greater than '2013-12-22' and whose callerid does not appear before '2013-12-22' anywhere in the table. I used something like this: ``` SELECT DISTINCT ca.`callerid` FROM call_archives AS ca WHERE ca.`call_start` >= '2013-12-22' AND ca.`callerid` NOT IN (SELECT DISTINCT ca.`callerid` FROM call_archives AS ca WHERE ca.`call_start` < '2013-12-21') ``` but this is extremely slow; any suggestion is really appreciated.
Just curious if this query works fast or not on your table: ``` SELECT ca.`callerid` FROM call_archives AS ca GROUP BY ca.`callerid` HAVING MIN(ca.`call_start`) >= '2013-12-22' ```
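The `HAVING MIN(...)` idea can be sanity-checked in SQLite from Python (toy data, not the asker's 50-million-row table): a caller qualifies only if its *earliest* call is on or after the cutoff, which is exactly "not duplicated before the date":

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE call_archives (callerid TEXT, call_start TEXT)")
con.executemany("INSERT INTO call_archives VALUES (?, ?)", [
    ("111", "2013-12-25"),                          # first seen after cutoff
    ("222", "2013-12-10"), ("222", "2013-12-23"),   # also seen before cutoff
])
fresh = con.execute(
    "SELECT callerid FROM call_archives "
    "GROUP BY callerid HAVING MIN(call_start) >= '2013-12-22'"
).fetchall()
```

Caller 222 is excluded because its minimum call date precedes the cutoff, leaving only 111.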
Try with `NOT EXISTS` ``` SELECT DISTINCT ca.`callerid` FROM call_archives AS ca WHERE ca.`call_start` >= '2013-12-22' AND NOT EXISTS (SELECT 1 FROM call_archives AS cb WHERE ca.`callerid` = cb.`callerid` AND cb.`call_start` < '2013-12-21') ```
select non-duplicated records
[ "", "mysql", "sql", "performance", "select", "not-exists", "" ]
Given two users, ``` NAME | JOB | CITY John | null | Washington D.C. Sandy | gardener| Nashville ``` If my SQL query reads: ``` "SELECT * FROM users WHERE job != 'gardener'", ``` my results are 0, where I would like them to be 1: ``` John | null | Washington D.C. ``` This is a much simpler example of a larger query I'm trying to make. If I default Job to '0', I get the correct results, but I'd rather have a null in there, for the sake of the rest of the application. UPDATE: Thanks for all of the suggestions, but I face another problem. Let's say my SQL query is: ``` "SELECT * FROM users WHERE city != 'Washington D.C.' and job != 'gardener' OR job IS NULL" ``` Unfortunately, this returns John, whereas I don't want John because he lives in Washington D.C. I want to return 0 rows in this case.
``` SELECT name,job,city FROM users WHERE IFNULL(job,'') != 'gardener'; ``` if you wanted to use `OR job IS NULL` in your updated query, as Strawberry mentioned, you must put parenthesis around your JOB conditions like below ``` SELECT * FROM users WHERE city != 'Washington D.C.' and (job != 'gardener' OR job IS NULL) ```
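Both points above (NULL makes `!=` evaluate to UNKNOWN, and the parentheses around the job conditions) are easy to see in a small SQLite session from Python, using the question's two sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, job TEXT, city TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    ("John", None, "Washington D.C."),
    ("Sandy", "gardener", "Nashville"),
])
# John's NULL job makes `job != 'gardener'` UNKNOWN, so he is filtered out:
naive = con.execute(
    "SELECT name FROM users WHERE job != 'gardener'"
).fetchall()
# Adding the IS NULL branch brings him back:
fixed = con.execute(
    "SELECT name FROM users WHERE job != 'gardener' OR job IS NULL"
).fetchall()
```

The naive form returns no rows at all; the fixed form returns exactly John.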
``` SELECT * FROM users WHERE job != 'gardener' OR job IS NULL ```
SQL column value null, search filter mistaken
[ "", "mysql", "sql", "" ]
``` SELECT name FROM clients; ``` Table clients ``` id | name | 1 John 2 John Bravo 3 John Alves 4 Jo ``` In Postgres, how can I select only names with more than one word? For example, this should be the output: ``` John Bravo John Alves ```
I think just test whether the name contains a space: `where name like '% %'` But this will give you problems if the name can contain a space character before or after it, like ' JOHN' or 'Luck '.
If **word** means to you a space delimited token, then the following would do the trick: ``` SELECT name FROM clients WHERE name LIKE '% %'; ``` But this will also give you those that have empty names made out of spaces only. Also performance-wise, this will be a costly query.
Select only when the field has more than one word
[ "", "sql", "postgresql", "" ]
I usually don't ask for "scripts" but for mechanisms, but I think that in this case, if I see an example, I will understand the principle. I have three tables as shown below: ![enter image description here](https://i.stack.imgur.com/dyChE.png) and I want to get the columns from all three, plus a count of the number of episodes in each series, to get a result like this: ![enter image description here](https://i.stack.imgur.com/kbno9.png) Currently, I am opening multiple DB threads and I am afraid that as I get more visitors on my site it will eventually respond really slowly. Any ideas? Thanks a lot!
First join all the tables together to get the columns. Then, to get a count, use a window function: ``` SELECT count(*) over (partition by seriesID) as NumEpisodesInSeries, st.SeriesId, st.SeriesName, et.episodeID, et.episodeName, ct.createdID, ct.CreatorName FROM series_table st join episode_table et ON et.ofSeries = st.seriesID join creator_table ct ON ct.creatorID = st.byCreator; ```
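The window-function count can be sketched in SQLite from Python (window functions need SQLite 3.25 or later, which ships with any recent Python build); the joins are dropped here and the episode data is invented, so this only illustrates the `COUNT(*) OVER (PARTITION BY ...)` part:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE episodes (seriesID INTEGER, episodeName TEXT)")
con.executemany("INSERT INTO episodes VALUES (?, ?)",
                [(1, "a"), (1, "b"), (1, "c"), (2, "x")])
# Each row carries the episode count of its own series, with no GROUP BY:
rows = con.execute(
    "SELECT COUNT(*) OVER (PARTITION BY seriesID) AS n, "
    "seriesID, episodeName FROM episodes"
).fetchall()
```

Every episode row of series 1 carries n = 3, while series 2's single row carries n = 1, which is exactly the `NumEpisodesInSeries` column in the answer's query.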
Try this: <http://www.sqlfiddle.com/#!3/5f938/17> ``` select min(ec.num) as NumEpisodes,s.Id,S.Name, Ep.ID as EpisodeID,Ep.name as EpisodeName, C.ID as CreatorID,C.Name as CreatorName from Episodes ep join Series s on s.Id=ep.SeriesID join Creators c on c.Id=s.CreatorID join (select seriesId,count(*) as Num from Episodes group by seriesId) ec on s.id=ec.seriesID group by s.Id,S.Name,Ep.ID,Ep.name,C.ID,C.Name ``` Thanks Gordon
how to do get multiple columns + count in a single query?
[ "", "sql", "sql-server-2008", "" ]
I want to retrieve some column values from a table with these conditions. * If the value is NULL (or) an empty string, return some user-defined value * Otherwise, return its value. How can I figure it out? Here is my table query: ``` CREATE TABLE AUCTION_CAR_BID( bid_seq bigint NOT NULL AUTO_INCREMENT, auction_car_seq bigint NOT NULL, bid_group_seq bigint NOT NULL, bid_price int DEFAULT 0 NOT NULL, over_bid_price int DEFAULT -1 NOT NULL, result_id int DEFAULT 0 NOT NULL, remark varchar(500), PRIMARY KEY (bid_seq)) ENGINE = InnoDB DEFAULT CHARACTER SET utf8; ``` Here are my attempts to get it: ``` SELECT COALESCE(OVER_BID_PRICE, -1) FROM AUCTION_CAR_BID WHERE BID_SEQ = 2354435345; ``` **Another:** ``` SELECT CASE WHEN OVER_BID_PRICE IS NULL OR TRIM(OVER_BID_PRICE) = '' THEN -1 ELSE OVER_BID_PRICE END OVER_BID_PRICE FROM AUCTION_CAR_BID WHERE BID_SEQ = 2354435345; ``` But I always get an empty string value (not -1) if the given **id** is not in my table. Any suggestions would be really appreciated!
If you write this: ``` SELECT COALESCE(OVER_BID_PRICE, -1) FROM AUCTION_CAR_BID WHERE BID_SEQ = 2354435345; ``` the result can be one of two cases. First case: your query returns no rows! Your WHERE condition is not satisfied, so you'll read NULL. Second case: your query returns rows but the value of your field is NULL; your COALESCE works fine in this case. To resolve the first case you can try this: ``` SELECT COALESCE( (SELECT COALESCE(OVER_BID_PRICE, -1) FROM AUCTION_CAR_BID WHERE BID_SEQ = 2354435345) ,-1); ``` Tell me if it's OK.
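The nested-COALESCE trick can be exercised against SQLite from Python (MySQL in the question, but the scalar-subquery behavior is the same; the rows below are invented). The inner COALESCE covers a NULL column, and the outer one covers a subquery that matches no row at all:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE AUCTION_CAR_BID "
            "(BID_SEQ INTEGER, OVER_BID_PRICE INTEGER)")
con.executemany("INSERT INTO AUCTION_CAR_BID VALUES (?, ?)",
                [(1, None), (2, 500)])
q = ("SELECT COALESCE((SELECT COALESCE(OVER_BID_PRICE, -1) "
     "FROM AUCTION_CAR_BID WHERE BID_SEQ = ?), -1)")
null_price = con.execute(q, (1,)).fetchone()[0]   # row exists, column NULL
real_price = con.execute(q, (2,)).fetchone()[0]   # row exists, column set
missing = con.execute(q, (999,)).fetchone()[0]    # no row: subquery is NULL
```

All three cases come back as a usable value: -1, 500, and -1 respectively, with no empty result to special-case in application code.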
How about this: ``` select case when price is null or id <> 1 then -1 else price end price from mytable ```
How to achieve default value if column value is NULL?
[ "", "mysql", "sql", "" ]
I am writing a stored procedure for SQL Server. I have a working version but I'm looking to optimize it a little bit if possible so that I don't need to waste two calls to the database. Here is my version : ``` If Exists (Select * From webapi.dbo.Users Where Username=@Username) Begin Select * From webapi.dbo.Users Where Username=@Username End Else Begin Set @error_message = 'Username ' + @Username + ' does not exist.' Raiserror (@error_message, 16, 1) End ``` I want something like : ``` Set @User = (Select * From webapi.dbo.Users Where Username=@Username) If Count(@User) = Begin Set @error_message = 'Username ' + @Username + ' does not exist.' Raiserror (@error_message, 16, 1) End Else Begin return @User End ``` Is this possible or a smart thing to do? I clearly make 2 of the same calls and would like to remove one of them in order to create the fastest possible system and for my own knowledge of SQL. **I Ended up using:** ``` Select * From webapi.dbo.Users Where Username=@Username If @@ROWCOUNT = 0 Begin Set @error_message = 'Username ' + @Username + ' does not exist.' Raiserror (@error_message, 16, 1) End ```
You cannot store your select result in a classical variable because it's a row, not a field. For your information if you only need to get 1 field, you could do what you thought with `@@ROWCOUNT`: ``` DECLARE @var INT; SELECT @var = id FROM tableName WHERE condition...; ``` After this select ask for `@@ROWCOUNT` ``` IF @@ROWCOUNT = 0 BEGIN RAISERROR... END ```
What about something like this? ``` declare @Username varchar(50) = 'johnny' if not exists (select 1 from users where username = @username) raiserror ('%s does not exist',16,1,@username) ``` `T-SQL` example
SQL Server Stored Procedure - If Count( ) Statement
[ "", "sql", "sql-server", "" ]
I have written a query: ``` SELECT `name` , COUNT( * ) FROM `daily` GROUP BY `Name` UNION SELECT `name` , COUNT( * ) FROM `monday` GROUP BY `Name` ``` I am getting this result ``` Name | Count(*) ------------------- Person 1 | 10 | Person 2 | 05 | Person 3 | 00 | Person 1 | 08 | Person 2 | 10 | ``` I simply want to get this result: ``` Name | Count(*) ------------------- Person 1 | 18 | Person 2 | 15 | Person 3 | 00 | ``` I want to add the two values from the two tables against the same 'name'. What join command do I use here ?
You want to do the `union` *before* the aggregation: ``` select name, count(*) from (select name from daily union all select name from monday) t group by name; ``` You can also pre-aggregate the values, although I'm not sure if there is a performance gain in MySQL to do this: ``` select name, sum(cnt) from ((select name, count(*) as cnt from daily group by name) union all (select name, count(*) as cnt from monday group by name) ) t group by name; ```
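The union-before-aggregation approach can be reproduced in SQLite from Python with the question's counts (10 + 8 for Person 1, 5 + 10 for Person 2; table contents invented to match):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daily (name TEXT)")
con.execute("CREATE TABLE monday (name TEXT)")
con.executemany("INSERT INTO daily VALUES (?)",
                [("P1",)] * 10 + [("P2",)] * 5)
con.executemany("INSERT INTO monday VALUES (?)",
                [("P1",)] * 8 + [("P2",)] * 10)
# UNION ALL first (keeping duplicates), then a single GROUP BY:
totals = dict(con.execute(
    "SELECT name, COUNT(*) FROM "
    "(SELECT name FROM daily UNION ALL SELECT name FROM monday) t "
    "GROUP BY name"
).fetchall())
```

`UNION ALL` matters here: a plain `UNION` would deduplicate rows before counting and ruin the totals.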
This is not tested ``` select name, SUM(total) from ( SELECT name , COUNT( * ) as total FROM daily GROUP BY Name UNION SELECT name ,COUNT( * ) as total FROM monday GROUP BY Name ) GROUP BY name; ``` Hope this works :)
SQL Adding the number of occurrences from different tables
[ "", "mysql", "sql", "addition", "" ]
Is it possible to do this in SQL: for example, I have a period `where @s_date = '20130101' and @e_date = '20130601'` and I want to select the last day of every month in this period. This is an example of the result: ``` 20130131 20130228 20130331 20130430 20130531 ``` Thanks.
The easiest option is to have a [calendar table](http://blog.jontav.com/post/9380766884/calendar-tables-are-incredibly-useful-in-sql), with a last day of the month flag, so your query would simply be: ``` SELECT * FROM dbo.Calendar WHERE Date >= @StartDate AND Date <= @EndDate AND EndOfMonth = 1; ``` Assuming of course that you don't have a calendar table you can generate a list of dates on the fly:' ``` DECLARE @s_date DATE = '20130101', @e_date DATE = '20130601'; SELECT Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date) FROM sys.all_objects; ``` Then once you have your dates you can limit them to where the date is the last day of the month (where adding one day makes it the first of the month): ``` DECLARE @s_date DATE = '20130101', @e_date DATE = '20130601'; WITH Dates AS ( SELECT Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date) FROM sys.all_objects ) SELECT * FROM Dates WHERE Date <= @e_Date AND DATEPART(DAY, DATEADD(DAY, 1, Date)) = 1; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!6/d41d8/13234)**
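For comparison, the same month-end calculation in plain Python (a stdlib sketch, not the T-SQL above): `calendar.monthrange` gives the length of each month, and the end date of June itself falls outside the question's range, so it is dropped:

```python
import calendar
from datetime import date

def month_ends(start, end):
    """Yield the last day of every month that falls inside [start, end]."""
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        d = date(y, m, calendar.monthrange(y, m)[1])
        if start <= d <= end:
            yield d
        m += 1
        if m == 13:
            y, m = y + 1, 1

ends = list(month_ends(date(2013, 1, 1), date(2013, 6, 1)))
```

This reproduces the question's expected output: the five month-ends from 2013-01-31 through 2013-05-31 (2013-06-30 is after the end date, so it is excluded).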
You can run the following query and then adjust it by using your table details: ``` declare @s_date as datetime= '20130101' declare @e_date as datetime= '20131020' SELECT DateAdd(m, number, '1990-01-31') FROM master.dbo.spt_values WHERE 'P' = type AND DateAdd(m, number, @s_date) < @e_date ```
Calculate last days of months for given period in SQL Server
[ "", "sql", "sql-server-2008", "" ]
I want to get a random number between 65 and 122. I am using this query ``` SELECT (FLOOR(65 + RAND() * 122)); ``` after getting help from [this question](https://stackoverflow.com/questions/14865632/mysql-update-with-random-number-between-1-3), but it is giving me numbers outside the range of 65-122. Please help. Thanks
``` SELECT (FLOOR(65 + RAND() * 57)); ``` RAND() provides a number between 0 and 1, like 0.75. So if you want to get a number between 65 and 122: ``` 65 = 65+0*57 122 = 65+1*57 ``` (Strictly speaking, RAND() returns values in [0, 1), so this yields 65-121; use `RAND() * 58` if 122 itself must be reachable.)
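A quick simulation of the formula's bounds in Python; `random.random()`, like MySQL's `RAND()`, returns a float in [0, 1), so with a width of 58 the expression covers 65 through 122 and never strays outside that range:

```python
import random

# 65 + floor(r * 58) for r in [0, 1) always lands in 65..122
samples = [65 + int(random.random() * 58) for _ in range(10_000)]
lo, hi = min(samples), max(samples)
```

With the original `* 122` width, the same simulation would produce values up to 186, which is the out-of-range behavior the question reports.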
I think what you want is 65 + FLOOR( RAND() \* 57 ) a minimum of 65 PLUS random of 0-57, so at the high end, 65 + 57 = 122
Get random number between 65 to 122
[ "", "mysql", "sql", "random", "" ]