Not sure if this is the best way to do this, but I'll try to give an example to explain what I am trying to accomplish. I have about 4 or 5 different tables that each contain a `TOTAL` field. One table contains a `CUSTOMER_ID` (each of the 4 or 5 other tables contains a foreign key that links its records to the parent `CUSTOMER` table). I want to group by `CUSTOMER_ID` in one column in my query while each of the other columns contains the overall total for the respective table. Does this make sense? I'm looking for the most efficient and properly designed query. It sounds like I would need sub-queries rather than a bunch of left outer joins?
```
SELECT C.CUSTOMER_ID,
       T1.TOTAL AS TOTAL_T1,
       T2.TOTAL AS TOTAL_T2,
       T3.TOTAL AS TOTAL_T3,
       T4.TOTAL AS TOTAL_T4,
       T5.TOTAL AS TOTAL_T5
FROM CUSTOMER C
LEFT JOIN (SELECT CUSTOMER_ID, SUM(TOTAL) AS TOTAL
           FROM TABLE1
           GROUP BY CUSTOMER_ID) T1 ON C.CUSTOMER_ID = T1.CUSTOMER_ID
LEFT JOIN (SELECT CUSTOMER_ID, SUM(TOTAL) AS TOTAL
           FROM TABLE2
           GROUP BY CUSTOMER_ID) T2 ON C.CUSTOMER_ID = T2.CUSTOMER_ID
LEFT JOIN (SELECT CUSTOMER_ID, SUM(TOTAL) AS TOTAL
           FROM TABLE3
           GROUP BY CUSTOMER_ID) T3 ON C.CUSTOMER_ID = T3.CUSTOMER_ID
LEFT JOIN (SELECT CUSTOMER_ID, SUM(TOTAL) AS TOTAL
           FROM TABLE4
           GROUP BY CUSTOMER_ID) T4 ON C.CUSTOMER_ID = T4.CUSTOMER_ID
LEFT JOIN (SELECT CUSTOMER_ID, SUM(TOTAL) AS TOTAL
           FROM TABLE5
           GROUP BY CUSTOMER_ID) T5 ON C.CUSTOMER_ID = T5.CUSTOMER_ID
```
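The pre-aggregation matters beyond style: joining the raw tables and then summing would multiply totals whenever a customer has multiple rows in more than one table. A sketch of both behaviours using Python's `sqlite3` as a stand-in engine; the tables and values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY);
CREATE TABLE table1 (customer_id INTEGER, total REAL);
CREATE TABLE table2 (customer_id INTEGER, total REAL);
INSERT INTO customer VALUES (1);
INSERT INTO table1 VALUES (1, 10), (1, 20);   -- two rows: true total 30
INSERT INTO table2 VALUES (1, 5),  (1, 5);    -- two rows: true total 10
""")

# Naive join: each table1 row pairs with each table2 row, inflating the sums.
naive = cur.execute("""
    SELECT SUM(t1.total), SUM(t2.total)
    FROM table1 t1 JOIN table2 t2 ON t1.customer_id = t2.customer_id
""").fetchone()
print(naive)  # (60.0, 20.0) -- both doubled

# Pre-aggregated subqueries, as in the answer: correct totals.
correct = cur.execute("""
    SELECT c.customer_id, t1.total, t2.total
    FROM customer c
    LEFT JOIN (SELECT customer_id, SUM(total) AS total
               FROM table1 GROUP BY customer_id) t1
           ON c.customer_id = t1.customer_id
    LEFT JOIN (SELECT customer_id, SUM(total) AS total
               FROM table2 GROUP BY customer_id) t2
           ON c.customer_id = t2.customer_id
""").fetchone()
print(correct)  # (1, 30.0, 10.0)
```

The `LEFT JOIN` also means customers with no rows in a given table still appear, with `NULL` totals.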
Not sure if I understand correctly what you are asking, but I think you can accomplish this with simple joins. For two tables: ``` select table1.customerId, sum(table1.total) as total1, sum(table2.total) as total2 FROM table1, table2 where table1.customerId=table2.customerId group by table1.customerId; ``` and you can extend it to as many tables as you want.
SQL SUM of Totals from different tables in single query
[ "", "sql", "sql-server", "performance", "" ]
I am a beginner with SQL, so I struggle with the MSDN description for creating a linked server in Management Studio. I want to link one SQL Server into another so I can use everything from ServerB on ServerA, e.g. to provide one location other systems can connect to. Both servers are in the same domain and both have several databases. When I start creating a linked server on ServerA, in the General tab I select a name for the linked server and select SQL Server as the server type. But I struggle with the Security tab. I have sa privileges on both servers, so what should be set here? Or which role should I take/create for this connection? My plan is to create views in a certain DB on ServerA which also include content from ServerB. These views will be consumed by a certain AD service user. I already added this service user to the security on ServerA where the views are stored. Do I also have to add this user somewhere on the linked ServerB?
**1.** In Server Objects => right click New Linked Server ![enter image description here](https://i.stack.imgur.com/Xnfmp.jpg) **2.** The "New Linked Server" dialog appears (see below). ![enter image description here](https://i.stack.imgur.com/oVJ3L.jpg) **3.** For "Server Type" make sure "Other Data Source" is selected. (The SQL Server option will force you to specify the literal SQL Server name.) * Type in a friendly name that describes your linked server (without spaces), and select "Microsoft OLE DB Provider for SQL Server" * Product Name – type: SQLSERVER (with no spaces) * Datasource – type the actual server name and instance name using this convention: SERVERNAME\INSTANCENAME * ProviderString – Blank * Catalog – Optional (If entered, use the default database you will be using) * Prior to exiting, continue to the next section (defining security) ![enter image description here](https://i.stack.imgur.com/Q1Sxw.jpg) * Click OK, and the new linked server is created
I would recommend that you use Windows Authentication. Activate [Security Delegation](http://msdn.microsoft.com/en-us/library/ms189580.aspx). In the Security tab, choose "Add". Select your Windows user and check "Impersonate". As a quick and dirty solution, you can choose "Be made using this security context" from the options list and enter a SQL login which is valid on the remote server. Since quick and dirty solutions tend to last, I would strongly recommend spending some time on impersonation.
MS SQL: What is the easiest/nicest way to create a linked server with SSMS?
[ "", "sql", "sql-server", "" ]
I need to delete a subset of records from a self referencing table. The subset will always be self contained (that is, records will only have references to other records in the subset being deleted, not to any records that will still exist when the statement is complete). My understanding is that this ***might*** cause an error if one of the records is deleted before the record referencing it is deleted. **First question:** does postgres do this operation one-record-at-a-time, or as a whole transaction? Maybe I don't have to worry about this problem? **Second question:** is the order of deletion of records consistent or predictable? I am obviously able to write specific SQL to delete these records without any errors, but my ultimate goal is to write a regression test to show the next person after me why I wrote it that way. I want to set up the test data in such a way that a simplistic delete statement will consistently fail because of the records referencing the same table. That way if someone else messes with the SQL later, they'll get notified by the test suite that I wrote it that way for a reason. Anyone have any insight? **EDIT**: just to clarify, I'm not trying to work out how to delete the records safely (that's simple enough). I'm trying to figure out what set of circumstances will cause such a DELETE statement to consistently fail. **EDIT 2**: Abbreviated answer for future readers: this is not a problem. By default, postgres checks the constraints at the end of each ***statement*** (not per-record, not per-transaction). Confirmed in the docs here: <http://www.postgresql.org/docs/current/static/sql-set-constraints.html> And by the SQLFiddle here: <http://sqlfiddle.com/#!15/11b8d/1>
A single `DELETE` with a `WHERE` clause matching a set of records will delete those records in an implementation-defined order. This order may change based on query planner decisions, statistics, etc. No ordering guarantees are made. Just like `SELECT` without `ORDER BY`. The `DELETE` executes in its own transaction if not wrapped in an explicit transaction, so it'll succeed or fail as a unit. To force order of deletion in PostgreSQL you must do one `DELETE` per record. You can wrap them in an explicit transaction to reduce the overhead of doing this and to make sure they all happen or none happen. [PostgreSQL can check foreign keys at three different points](http://www.postgresql.org/docs/current/static/sql-set-constraints.html): * The default, `NOT DEFERRABLE`: checks for each row as the row is inserted/updated/deleted * `DEFERRABLE INITIALLY IMMEDIATE`: Same, but affected by `SET CONSTRAINTS DEFERRED` to instead check at end of transaction / `SET CONSTRAINTS IMMEDIATE` * `DEFERRABLE INITIALLY DEFERRED`: checks all rows at the end of the *transaction* In your case, I'd define your `FOREIGN KEY` constraint as `DEFERRABLE INITIALLY IMMEDIATE`, and do a `SET CONSTRAINTS DEFERRED` before deleting. (Actually if I vaguely recall correctly, despite the name `IMMEDIATE`, `DEFERRABLE INITIALLY IMMEDIATE` actually runs the check *at the end of the statement* instead of the default of after each row change. So if you delete the whole set in a single `DELETE` the checks will then succeed. I'll need to double check). (The mildly insane meaning of `DEFERRABLE` is IIRC defined by the SQL standard, along with gems like a `TIMESTAMP WITH TIME ZONE` that doesn't have a time zone).
In standard SQL, and I believe PostgreSQL follows this, each statement should be processed "as if" all changes occur at the same time, in parallel. So the following code works: ``` CREATE TABLE T (ID1 int not null primary key,ID2 int not null references T(ID1)); INSERT INTO T(ID1,ID2) VALUES (1,2),(2,1),(3,3); DELETE FROM T WHERE ID2 in (1,2); ``` Where we've got circular references involved in both the `INSERT` and the `DELETE`, and yet it works just fine. [fiddle](http://sqlfiddle.com/#!15/07fa5/1)
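The statement-level timing described above is PostgreSQL behaviour and is hard to reproduce in SQLite, whose immediate foreign keys are checked row by row. The `DEFERRABLE INITIALLY DEFERRED` variant from the other answer can, however, be sketched with Python's `sqlite3`, using the same toy data as the fiddle:

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can manage
# the transaction explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE t (
    id1 INTEGER NOT NULL PRIMARY KEY,
    id2 INTEGER NOT NULL REFERENCES t(id1) DEFERRABLE INITIALLY DEFERRED
);
INSERT INTO t VALUES (1, 2), (2, 1), (3, 3);
""")

# Inside an explicit transaction the deferred FK is only checked at COMMIT,
# so deleting the mutually-referencing pair in one statement succeeds.
conn.execute("BEGIN")
conn.execute("DELETE FROM t WHERE id2 IN (1, 2)")
conn.execute("COMMIT")
remaining = conn.execute("SELECT id1 FROM t").fetchall()
print(remaining)  # [(3,)]
```

Row `(3, 3)` survives and still satisfies its self-reference when the constraint is checked at commit.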
Ordered DELETE of records in self-referencing table
[ "", "sql", "postgresql", "" ]
I have a string field in an SQL database, representing a URL. Some URLs are short and some very long. I don't really know what's the longest URL I might encounter, so to be on the safe side I'll take a large value, such as 256 or 512. When I define the maximal string length (using SQLAlchemy for example): ``` url_field = Column(String(256)) ``` Does this take up space (storage) for each row, even if the actual string is shorter? I'm assuming this has to do with the implementation details. I'm using PostgreSQL, but am also interested in SQLite and MySQL.
Usually database storage engines can do many things you don't expect. But basically there are two kinds of text fields, whose names give a hint about what goes on internally: char and varchar. Char gives you a fixed-width column and, depending on the options in the SQL session, you may receive space-padded strings or not. Varchar is for text fields up to a certain maximum length. Varchar fields can be stored as a pointer outside the block, so that the block keeps a predictable size on queries; that is an implementation detail, though, and may vary from db to db.
In PostgreSQL `character(n)` is basically just `varchar` with space padding on input/output. It's clumsy and should be avoided. It consumes the same storage as a `varchar` or `text` field that's been padded out to the maximum length (see below). `char(n)` is a historical wart, and should be avoided - at least in PostgreSQL it offers no advantages and has some weird quirks with things like `left(...)`. `varchar(n)`, `varchar` and `text` all consume the same storage - the length of the string you supplied with no padding. It only uses the storage actually required for the characters, irrespective of the length limit. Also, if the string is null, PostgreSQL doesn't store a value for it at all (not even a length header), it just sets the null bit in the record's null bitmap. Qualified `varchar(n)` is basically the same as unqualified `varchar` with a `check` constraint on `length(colname) < n`. Despite what some other comments/answers are saying, `char(n)`, `varchar`, `varchar(n)` and `text` are all TOASTable types. They can all be stored out of line and/or compressed. To control storage use `ALTER TABLE ... ALTER COLUMN ... SET STORAGE`. If you don't know the max length you'll need, just use `text` or unqualified `varchar`. There's no space penalty. For more detail see [the documentation on character data types](http://www.postgresql.org/docs/current/static/datatype-character.html), and for some of the innards on how they're stored, see [database physical storage](http://www.postgresql.org/docs/current/static/storage.html) in particular [TOAST](http://www.postgresql.org/docs/current/static/storage-toast.html). 
Demo:

```
CREATE TABLE somechars(c10 char(10), vc10 varchar(10), vc varchar, t text);

insert into somechars(c10)  values ('  abcdef ');
insert into somechars(vc10) values ('  abcdef ');
insert into somechars(vc)   values ('  abcdef ');
insert into somechars(t)    values ('  abcdef ');
```

Output of this query (run for each column in turn):

```
SELECT 'c10', pg_column_size(c10), octet_length(c10), length(c10)
from somechars where c10 is not null;
```

is:

```
?column? | pg_column_size | octet_length | length
---------+----------------+--------------+-------
c10      |             11 |           10 |      8
vc10     |             10 |            9 |      9
vc       |             10 |            9 |      9
t        |             10 |            9 |      9
```

`pg_column_size` is the on-disk size of the datum in the field. `octet_length` is the uncompressed size without headers. `length` is the "logical" string length. So as you can see, the `char` field is padded. It wastes space and it also gives what should be a very surprising result for `length`, given that the input was 9 chars, not 8. That's because Pg can't tell the difference between trailing spaces you put in yourself and trailing spaces it added as padding. So, don't use `char(n)`. BTW, if I'm designing a database I *never* use `varchar(n)` or `char(n)`. I just use the `text` type and add appropriate `check` constraints if there are application requirements for the values. I think that `varchar(n)` is a bit of a wart in the standard, though I guess it's useful for DBs that have on-disk layouts where the size limit might affect storage.
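The "`varchar(n)` is just `text` plus a `check` constraint" point above can be sketched outside PostgreSQL too. Here with Python's `sqlite3`: SQLite ignores the `(n)` in `varchar(n)` entirely, so the length limit has to be written out as an explicit constraint. The table name and limit below are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A length-limited string column, expressed as TEXT + CHECK.
conn.execute("""
    CREATE TABLE urls (
        url TEXT CHECK (length(url) <= 10)
    )
""")
conn.execute("INSERT INTO urls VALUES ('short')")   # fits, stored as-is

try:
    conn.execute("INSERT INTO urls VALUES ('a-very-long-url-indeed')")
    failed = False
except sqlite3.IntegrityError:
    failed = True

count = conn.execute("SELECT COUNT(*) FROM urls").fetchone()[0]
print(failed, count)  # True 1 -- the CHECK rejected the over-long string
```

Short strings consume only the storage they need; the limit is purely a validation rule, not a space reservation.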
String field length in PostgreSQL
[ "", "sql", "postgresql", "sqlalchemy", "" ]
I need to find all Id's that have 20 or more days outside of date ranges, between the first StartDate and last EndDate. One Id has multiple start dates and end dates. In the following example, Id 1 has two gaps less than 20 days each. It should be considered as one range from 10/01/2012 to 10/30/2014 without any gap. ``` 1 10/01/2012 02/01/2013 1 01/01/2013 01/31/2013 1 02/10/2013 03/31/2013 1 04/15/2013 10/30/2014 ``` Id 2 has a gap more than 20 days between end date 01/30/2013 and start date 05/01/2013, therefore it has to be captured by the query. ``` 2 01/01/2013 01/30/2013 2 05/01/2013 06/30/2014 2 07/01/2013 02/01/2014 ``` Id 3 should be considered as one range from 01/01/2012 to 06/01/2014 without any gap. The gap between end date 02/28/2013 and start date 07/01/2013 should be ignored because range from 01/01/2012 to 01/01/2014 covers the gap. ``` 3 01/01/2012 01/01/2014 3 01/01/2013 02/28/2013 3 07/01/2013 06/01/2014 ``` A cursor can do it but it works extremely slow and is not acceptable. SQL fiddle: <http://sqlfiddle.com/#!3/27e3f/2/0>
With your fiddle schema, try this: ``` ;WITH naivegaps AS ( SELECT ROW_NUMBER() OVER (ORDER BY id, startdate, MAX(dr1.enddate)) AS rn, dr1.Id, dr1.startdate, MAX(dr1.enddate) as enddate FROM dateranges dr1 GROUP BY dr1.Id, dr1.startdate ) SELECT n1.id, n1.enddate as gap_start, n2.startdate AS gap_end, datediff(dd, n1.enddate, n2.startdate) as gap_width, n3.* FROM naivegaps n1 CROSS APPLY ( SELECT TOP 1 nx.id, nx.startdate FROM naivegaps nx WHERE n1.id = nx.id AND nx.rn > n1.rn ORDER BY nx.startdate ) n2 OUTER APPLY ( SELECT TOP 1 nx.id, nx.enddate FROM naivegaps nx WHERE n1.id = nx.id AND nx.rn < n1.rn ORDER BY nx.enddate DESC ) n3 WHERE datediff(dd, n1.enddate, n2.startdate) >= 20 AND (n3.enddate <= n1.enddate OR n3.enddate IS NULL) ``` The CTE at the top orders everything appropriately for the following checks, and adds a row number to facilitate ordering checks. The `CROSS APPLY` finds all gaps between the end of a sequence and the following beginning. The `OUTER APPLY` checks for ranges that completely surround the gap in question (that wouldn't have been sorted appropriately in the `CROSS APPLY`) EDIT: I compared the execution plan of this solution against the recursive CTE solution provided by Joe Farrell. They're significantly different plans, but the estimated efficiency is very close (mine is slightly better, about 4%). This may or may not translate to real-world performance on a large data set; I encourage you to test both approaches and use the one that works best in your scenario.
Here's a solution that doesn't use a cursor. I don't know how fast it will be on a large data set, so hopefully you can test it against your cursor-based approach and let me know how it holds up. A more detailed explanation of what's going on follows the code.

```
-- Get a list of all dates on which coverage starts or stops.
with [EventsCTE] as
(
    select [id], [startdate] as [date], 1 as [change]
    from dateranges
    union all
    select [id], [enddate] as [date], -1 as [change]
    from dateranges
),
-- Give each event a sequence number (by date) within its id.
[SequencedEventsCTE] as
(
    select row_number() over (partition by [id] order by [date]) as [seq], *
    from [EventsCTE]
),
-- Use the sequence number to construct a running total of the number of active
-- date ranges at each point in time.
[RunningTotalsCTE] as
(
    -- Base case: Get the first event for each id.
    select *, [change] as [rangesActive]
    from [SequencedEventsCTE]
    where [seq] = 1
    union all
    -- Recursive case: build a running total for subsequent events.
    select [this].*, [this].[change] + [prev].[rangesActive] as [rangesActive]
    from [SequencedEventsCTE] [this]
    inner join [RunningTotalsCTE] [prev]
        on [this].[Id] = [prev].[Id]
        and [this].[seq] = [prev].[seq] + 1
),
-- Join each event to its successor and look for dates on which no range was
-- active. This gives us a list of gaps and their sizes.
[GapsCTE] as
(
    select [gapStart].[Id],
           datediff(day, [gapStart].[date], [gapEnd].[date]) as [GapSize]
    from [RunningTotalsCTE] [gapStart]
    inner join [RunningTotalsCTE] [gapEnd]
        on [gapStart].[Id] = [gapEnd].[Id]
        and [gapStart].[seq] = [gapEnd].[seq] - 1
        and [gapStart].[rangesActive] = 0
)
-- Get the ids having gaps of 20 days or more.
select distinct [id]
from [GapsCTE]
where [GapSize] >= 20;
```

First, in `EventsCTE`, I split each of the rows from your original table into two "events", one denoting that a date range has begun (these records have `change = 1`), and one denoting that a date range has ended (`change = -1`).
Starting with this seemed necessary because you have overlapping ranges; I can't identify gaps by just comparing one record in the original table to the record that follows it. `SequencedEventsCTE` takes this expanded data set and adds a new column, `seq`, which gives the relative sequence of a particular event within each `id`. This allows me to easily match each event to the event that comes immediately before it in my next step. `RunningTotalsCTE` has the trick that makes this whole thing work: for each event, it computes a running total of the `change` values within each `id`. This running total, `rangesActive`, therefore gives the number of date ranges that were active as of each event date, which allows me to account for overlapping date ranges. For instance, if you select all of the records from `RunningTotalsCTE` where `id = 3`, you get the following:

```
seq  id  date                     change  rangesActive
1    3   2012-01-01 00:00:00.000   1      1
2    3   2013-01-01 00:00:00.000   1      2
3    3   2013-02-28 00:00:00.000  -1      1
4    3   2013-07-01 00:00:00.000   1      2
5    3   2014-01-01 00:00:00.000  -1      1
6    3   2014-06-01 00:00:00.000  -1      0
```

Finally, `GapsCTE` identifies all of the gaps by looking for records where `rangesActive = 0`, excluding the last event within each `id`. The size of the gap is the difference between such a record's event date and the event date of the record that follows it. The last step is to simply select the unique `id`s from this final CTE where there's a gap size of 20 days or more. I think this will do what you want, though as I said, I'm not sure how it will perform against a very large data set. Please comment if you have any specific questions about how this works.
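The event/running-total idea translates directly into plain code, which can double as a sanity check for the SQL. A sketch in Python using the question's three ids (dates copied from the question; only id 2 should qualify):

```python
from datetime import date

# Date ranges per id, from the question.
ranges = {
    1: [(date(2012, 10, 1), date(2013, 2, 1)), (date(2013, 1, 1), date(2013, 1, 31)),
        (date(2013, 2, 10), date(2013, 3, 31)), (date(2013, 4, 15), date(2014, 10, 30))],
    2: [(date(2013, 1, 1), date(2013, 1, 30)), (date(2013, 5, 1), date(2014, 6, 30)),
        (date(2013, 7, 1), date(2014, 2, 1))],
    3: [(date(2012, 1, 1), date(2014, 1, 1)), (date(2013, 1, 1), date(2013, 2, 28)),
        (date(2013, 7, 1), date(2014, 6, 1))],
}

def ids_with_gap(ranges, min_gap_days=20):
    """Mirror of the CTE chain: split ranges into +1/-1 events, keep a
    running total of active ranges, and report a gap wherever the total
    drops to zero for at least min_gap_days before the next event."""
    result = set()
    for id_, rs in ranges.items():
        events = sorted([(s, 1) for s, e in rs] + [(e, -1) for s, e in rs])
        active = 0
        for (d1, change), (d2, _) in zip(events, events[1:]):
            active += change
            if active == 0 and (d2 - d1).days >= min_gap_days:
                result.add(id_)
    return result

print(ids_with_gap(ranges))  # {2}
```

Id 1's gaps (9 and 15 days) fall under the threshold, and id 3's apparent gap is covered by the long 2012-2014 range, exactly as the question requires.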
Find Id's with start/end date range coverage gaps greater than 20 days
[ "", "sql", "sql-server-2008", "date", "" ]
I have written a stored procedure to check how transactions work in a stored procedure. Is this correct? How can I check whether this is correct or not? What I want is: if the second table's data is not deleted, neither table's data should be deleted.

```
CREATE PROCEDURE DeleteDepartment
(
    @DepartmentID int
)
AS
BEGIN TRANSACTION

DELETE FROM Employees
WHERE DepartmentID = @DepartmentID

IF @@ERROR <> 0
BEGIN
    -- Rollback the transaction
    ROLLBACK

    -- Raise an error and return
    RAISERROR ('Error in deleting employees in DeleteDepartment.', 16, 1)
    RETURN
END

DELETE FROM Departments
WHERE DepartmentID = @DepartmentID

IF @@ERROR <> 0
BEGIN
    -- Rollback the transaction
    ROLLBACK

    -- Raise an error and return
    RAISERROR ('Error in deleting department in DeleteDepartment.', 16, 1)
    RETURN
END

COMMIT
```
```
CREATE PROCEDURE DeleteDepartment
(
    @DepartmentID int
)
AS
BEGIN TRY
    BEGIN TRANSACTION

    DELETE FROM Employees
    WHERE DepartmentID = @DepartmentID

    --Test Code Start
    --For testing purposes, add an INSERT statement that supplies a value
    --for the identity column; it fails and forces control into the CATCH block.
    declare @table1 as table(ID int Identity(1,1), Test varchar(10))
    insert into @table1(ID, Test) Values(1, 'Failure Test')
    --Test Code End

    DELETE FROM Departments
    WHERE DepartmentID = @DepartmentID

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    SELECT ERROR_MESSAGE() AS ErrorMessage
    RETURN
END CATCH
```

First things first, `COMMIT TRANSACTION` appears ahead of `ROLLBACK TRANSACTION`. And to test whether the transaction works, add an `INSERT` statement between the two delete statements that supplies a value for an identity column, so that the first delete is successful but the transaction fails. Now you can check whether the first delete is reflected in the table or not.
COMMIT is supposed to come before ROLLBACK, and I advise using TRY/CATCH blocks. It should look something like this:

```
BEGIN TRY
    declare @errorNumber as int

    BEGIN TRANSACTION

    --do 1st statement
    IF @@ERROR <> 0
    BEGIN
        SET @errorNumber = 1
    END

    --do 2nd statement
    IF @@ERROR <> 0
    BEGIN
        SET @errorNumber = 2
    END

    COMMIT
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK
END CATCH
```
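Both answers hinge on the same guarantee: if any statement inside the transaction fails, the rollback must undo the earlier delete too. That behaviour can be sketched with Python's `sqlite3` as a stand-in engine; the schema and the simulated failure below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; manage txns manually
conn.executescript("""
CREATE TABLE departments (department_id INTEGER PRIMARY KEY);
CREATE TABLE employees (
    employee_id   INTEGER PRIMARY KEY,
    department_id INTEGER NOT NULL
);
INSERT INTO departments VALUES (1);
INSERT INTO employees VALUES (100, 1);
""")

def delete_department(dep_id, fail_second=False):
    """Delete a department and its employees atomically; on any error,
    roll back so that neither delete takes effect."""
    try:
        conn.execute("BEGIN")
        conn.execute("DELETE FROM employees WHERE department_id = ?", (dep_id,))
        if fail_second:  # stand-in for the second statement failing
            raise sqlite3.OperationalError("simulated failure")
        conn.execute("DELETE FROM departments WHERE department_id = ?", (dep_id,))
        conn.execute("COMMIT")
        return True
    except sqlite3.Error:
        conn.execute("ROLLBACK")
        return False

ok = delete_department(1, fail_second=True)
emp = conn.execute("SELECT COUNT(*) FROM employees").fetchone()[0]
print(ok, emp)  # False 1 -- the rollback restored the employee row
```

Running `delete_department(1)` again without the simulated failure commits, and both tables end up empty.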
SQL Transaction Handling in stored procedure
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a large table full of information on the different documents within my system. Currently the filesize is stored in bytes, but I need to do a query where they are converted to megabytes, and then ordered on those megabytes. The query is very slow thanks to the calculation that is going on and I was wondering if there is a way I could optimise it to run faster? ``` SELECT [DS_ID] ,[DS_Blob] ,[DS_Ext] ,[DS_FileSize] / 1048576 AS 'Size (in MB)' ,[DS_CV_ID] ,[DS_Filename] ,[DS_DataAccess] FROM [DS_DocumentStorage] WHERE DS_FileSize > 1048576 ORDER BY 'Size (in MB)' DESC ``` Thanks
Couple of options I can think of: 1. Add a column representing the size in Mb (with all the additional storage and keeping in-sync issues that brings). 2. Use a "computed column" with a function-based index: ``` CREATE TABLE DS_DocumentStorage ( ... DS_FileSizeMB AS [DS_FileSize] / 1048576 ); CREATE INDEX ix_DS_FileSizeMB ON DS_DocumentStorage(DS_FileSizeMB); ``` N.B. You should test the execution plan to see if this actually improves your situation.
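Option 1 above (a stored, derived column) can be sketched with Python's `sqlite3`; here the derived column is kept in sync by a trigger purely for illustration. Column and table names mirror the question, but the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ds_documentstorage (
    ds_id         INTEGER PRIMARY KEY,
    ds_filesize   INTEGER NOT NULL,
    ds_filesizemb INTEGER            -- derived: size in whole MB
);
-- Keep the derived column in sync on insert (updates would need a
-- similar trigger in a real schema).
CREATE TRIGGER trg_size_mb AFTER INSERT ON ds_documentstorage
BEGIN
    UPDATE ds_documentstorage
    SET ds_filesizemb = NEW.ds_filesize / 1048576
    WHERE ds_id = NEW.ds_id;
END;
CREATE INDEX ix_filesizemb ON ds_documentstorage(ds_filesizemb);
INSERT INTO ds_documentstorage (ds_id, ds_filesize) VALUES
    (1, 5 * 1048576), (2, 500), (3, 3 * 1048576);
""")
rows = conn.execute("""
    SELECT ds_id, ds_filesizemb FROM ds_documentstorage
    WHERE ds_filesizemb >= 1 ORDER BY ds_filesizemb DESC
""").fetchall()
print(rows)  # [(1, 5), (3, 3)] -- the 500-byte row rounds down to 0 MB
```

Because the filter and sort now hit a plain indexed column, no per-row arithmetic happens at query time.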
The greater-than condition means that a table scan is needed. This means any indexes you have are rendered useless. The order by also screws up your execution plan. If ordering the result outside SQL Server is not possible, then I suggest you put your query into an indexed view, since you are pulling data from just 1 table anyway. [Here](http://technet.microsoft.com/en-us/library/ms187864%28v=sql.105%29.aspx) is a link where you can learn about indexed views. An indexed view ensures your query execution plan performs an index seek instead of a scan, which will significantly improve the query execution plan and lessen the time it takes to pull data from the tables. But then again, if you have a huge pool of data (tens of millions of rows or so), it will still take a few minutes for the execution to finish.
Optimise database query that is converting bytes to megabytes and then ordering by desc
[ "", "sql", "sql-server", "performance", "" ]
How do you return one value per row that is the max of several columns? TableName [RefNumber, FirstVisitedDate, SecondVisitedDate, RecoveryDate, ActionDate] I want the max of these dates (FirstVisitedDate, SecondVisitedDate, RecoveryDate, ActionDate) for all rows in a single column, and I want another new column (Action) that depends on which column the max date came from. For example: if the max date is from FirstVisitedDate then it will be 'FirstVisited', or if the max date is from SecondVisitedDate then it will be 'SecondVisited'... The total result is like: Select RefNumber, MaxDate, Action From Table group by RefNumber
I wrote a custom function to do this:

```
CREATE FUNCTION [dbo].[MaxOf5]
(
    @D1 DateTime,
    @D2 DateTime,
    @D3 DateTime,
    @D4 DateTime,
    @D5 DateTime
)
RETURNS DateTime
AS
BEGIN
    DECLARE @Result DateTime

    SET @Result = COALESCE(@D1, @D2, @D3, @D4, @D5)

    IF @D2 IS NOT NULL AND @D2 > @Result SET @Result = @D2
    IF @D3 IS NOT NULL AND @D3 > @Result SET @Result = @D3
    IF @D4 IS NOT NULL AND @D4 > @Result SET @Result = @D4
    IF @D5 IS NOT NULL AND @D5 > @Result SET @Result = @D5

    RETURN @Result
END
```

To call this, and calculate your `Action` column, this should work:

```
SELECT MaxDate,
       CASE WHEN MaxDate = FirstVisitedDate THEN 'FirstVisited'
            WHEN MaxDate = SecondVisitedDate THEN 'SecondVisited'
            WHEN MaxDate = RecoveryDate THEN 'Recovery'
            WHEN MaxDate = ActionDate THEN 'Action'
       END AS [Action]
FROM
(
    SELECT RefNumber,
           dbo.MaxOf5(FirstVisitedDate, SecondVisitedDate,
                      RecoveryDate, ActionDate, NULL) AS MaxDate,
           FirstVisitedDate, SecondVisitedDate, RecoveryDate, ActionDate
    FROM table
) AS data
```

Note that it is possible for more than one of your dates to be tied for the max date. In this case, the order of your `WHEN` clauses determines which one wins.
``` SELECT RecordID, MaxDate FROM SourceTable CROSS APPLY (SELECT MAX(d) MaxDate FROM (VALUES (date1), (date2), (date3), (date4), (date5), (date6), (date7)) AS dates(d)) md ```
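SQLite happens to have a variadic scalar `max()`, which plays the same role as the `CROSS APPLY (VALUES ...)` trick; a sketch via Python's `sqlite3` with a single made-up row. (Caveat: SQLite's scalar `max()` returns NULL if any argument is NULL, so NULL-able columns would need `coalesce` guards.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE visits (
        refnumber TEXT,
        firstvisiteddate TEXT, secondvisiteddate TEXT,
        recoverydate TEXT, actiondate TEXT
    )
""")
# ISO-8601 date strings compare correctly as text.
conn.execute("""
    INSERT INTO visits VALUES
        ('R1', '2014-01-01', '2014-03-01', '2014-02-01', '2014-02-15')
""")

row = conn.execute("""
    SELECT refnumber,
           max(firstvisiteddate, secondvisiteddate,
               recoverydate, actiondate) AS maxdate,
           CASE max(firstvisiteddate, secondvisiteddate,
                    recoverydate, actiondate)
               WHEN firstvisiteddate  THEN 'FirstVisited'
               WHEN secondvisiteddate THEN 'SecondVisited'
               WHEN recoverydate      THEN 'Recovery'
               ELSE 'Action'
           END AS action
    FROM visits
""").fetchone()
print(row)  # ('R1', '2014-03-01', 'SecondVisited')
```

As with the T-SQL `CASE`, the order of the `WHEN` branches decides the winner on ties.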
MAX Date of multiple columns?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "reporting", "" ]
I am trying to add Integration Services to an existing SQL Server 2008 instance. I went to the SQL Server Installation Center and clicked the option to "New installation or add features to an existing installation." At this point a file system window pops up, and I am asked to browse for the SQL Server 2008 R2 installation media. I tried ***C:\Program Files\Microsoft SQL Server*** but got the error message that it was not accepted as a "valid installation folder." I went deeper into the Microsoft SQL Server folder and found ***\SetupBootstrap*** but this was not accepted either. It appears that the only way to proceed is to find the installation media folder, but I'm not exactly sure what it's asking for. How can I find the installation media folder? Alternatively, other methods for adding SSIS to an existing instance of SQL Server 2008 are welcome. Thanks.
To add features to an existing instance go to: 1. Control Panel -> Add or Remove Programs 2. Click the SQL Server instance you want to add features to and click Change. Click the **Add** button in the dialog 3. Browse to the SQL Server installation file (.exe file), and select the **Add features to an existing instance of SQL Server** option. 4. From the features list select **Integration Services** and finish the installation. You can find more detailed information here: [How to: Add Integration Services to an Existing Instance of SQL Server 2005](http://technet.microsoft.com/en-us/library/bb326043%28v=sql.90%29.aspx); it applies to SQL Server 2008 also. Hope this helps
If you've downloaded SQL from the Microsoft site, rename the file to a zip file and then you can extract the files inside to a folder, then choose that one when you "Browse for SQL server Installation Media" ``` SQLEXPRADV_x64_ENU.exe > SQLEXPRADV_x64_ENU.zip 7zip will open it (standard Windows zip doesn't work though) Extract to something like C:\SQLInstallMedia You will get folders like 1033_enu_lp, resources, x64 and a bunch of files. ``` Idea from this article: [SQL Server Installation - What is the Installation Media Folder?](https://stackoverflow.com/questions/2979425/sql-server-installation-what-is-the-installation-media-folder)
Add SSIS to existing SQL Server instance
[ "", "sql", "sql-server", "ssis", "installation", "program-files", "" ]
I have 3 tables with a column called `created`, typed as datetime. I am looking for a way to check all 3 `created` columns for a date (between today and today -7); if a date is found, the result should be 1, if not, 0. This [SQL FIDDLE](http://www.sqlfiddle.com/#!3/7554e/4) is what I have so far. It should return 1, but it is returning 0 instead.

```
SELECT CASE WHEN( ( table1.created BETWEEN DATEDIFF(dd, 7, GETDATE()) AND GETDATE()
                AND table2.created BETWEEN DATEDIFF(dd, 7, GETDATE()) AND GETDATE()
                AND table3.created BETWEEN DATEDIFF(dd, 7, GETDATE()) AND GETDATE() ) )
       THEN 1 ELSE 0 END AS FLAG
FROM table1, table2, table3
WHERE table1.cond1= 'A' and table2.cond1= 'A' and table3.cond1= 'A'
```
To check if a table has a row or rows matching a particular condition, you can use the [`EXISTS` predicate](http://msdn.microsoft.com/en-us/library/ms188336.aspx "EXISTS (Transact-SQL)"):

```
SELECT CASE WHEN EXISTS (SELECT *  -- syntactically, the select list is disregarded here,
                                   -- meaning you can replace the "*" with anything else
                         FROM tablename
                         WHERE ...
                        )
       THEN 1
       ELSE 0
       END
-- no FROM clause in the main SELECT
;
```

If you want to check several tables simultaneously, make that several predicates, like this:

```
SELECT CASE WHEN EXISTS (SELECT * FROM table1 WHERE ...)
             AND EXISTS (SELECT * FROM table2 WHERE ...)
             AND EXISTS (SELECT * FROM table3 WHERE ...)
       THEN 1
       ELSE 0
       END
;
```

Finally, what other answerers have said also applies, i.e. you should consider changing your date checking conditions from ``` created BETWEEN DATEDIFF(dd, 7, GETDATE()) AND GETDATE() ``` to ``` created BETWEEN DATEADD(dd, -7, GETDATE()) AND GETDATE() ``` The original form would work too in your case but it would rely on implicit conversion of `int` to `datetime`, which is not a good practice and would break if you changed the type of `created` from `datetime` to one of the newer types, `datetime2` or `datetimeoffset` or, perhaps, `date`.
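The accepted pattern can be exercised end to end with Python's `sqlite3` as a stand-in (table contents are made up; `date('now', '-7 days')` is SQLite's spelling of the date arithmetic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (cond1 TEXT, created TEXT);
CREATE TABLE table2 (cond1 TEXT, created TEXT);
INSERT INTO table1 VALUES ('A', date('now', '-2 days'));
INSERT INTO table2 VALUES ('A', date('now', '-30 days'));
""")

def flag():
    # 1 only if *every* table has a matching row in the last 7 days.
    return conn.execute("""
        SELECT CASE WHEN
                 EXISTS (SELECT * FROM table1
                         WHERE cond1 = 'A'
                           AND created BETWEEN date('now', '-7 days') AND date('now'))
             AND EXISTS (SELECT * FROM table2
                         WHERE cond1 = 'A'
                           AND created BETWEEN date('now', '-7 days') AND date('now'))
        THEN 1 ELSE 0 END
    """).fetchone()[0]

before = flag()
conn.execute("INSERT INTO table2 VALUES ('A', date('now', '-1 day'))")
after = flag()
print(before, after)  # 0 1 -- table2's only row was 30 days old at first
```

Swap the `AND` between the `EXISTS` predicates for `OR` to get the "any table has a recent row" variant from the other answer.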
Change `DATEDIFF` to `DATEADD` and `7` to `-7` and `AND` to `OR`: ``` SELECT CASE WHEN( ( table1.created BETWEEN DATEADD(dd, -7, GETDATE()) AND GETDATE() OR table2.created BETWEEN DATEADD(dd, -7, GETDATE()) AND GETDATE() OR table3.created BETWEEN DATEADD(dd, -7, GETDATE()) AND GETDATE() ) ) THEN 1 ELSE 0 END AS FLAG FROM table1, table2, table3 WHERE table1.cond1= 'A' and table2.cond1= 'A' and table3.cond1= 'A' ``` [SQLFiddle](http://www.sqlfiddle.com/#!3/736a1/1)
Check in multiple columns for a range date
[ "", "sql", "sql-server-2008", "" ]
I have a table named `tblHumanResources` from which I want to get all rows consisting of only 2 rows for each distinct value in the `effectiveDate` column (order by: ascending): `tblHumanResources` Table

```
| empID | effectiveDate | Company  | Description
| 0-123 | 2014-01-23    | DFD Comp | Analyst
| 0-234 | 2014-01-23    | ABC Comp | Manager
| 0-222 | 2012-02-19    | CDC Comp | Janitor
| 0-213 | 2012-03-13    | CBB Comp | Teller
| 0-223 | 2012-01-23    | CBB Comp | Teller
```

and so on. Any help would be much appreciated.
Try to use [ROW\_NUMBER()](http://msdn.microsoft.com/en-us/library/ms186734.aspx) function to get N rows per group: ``` SELECT * FROM ( SELECT t.*, ROW_NUMBER() OVER (PARTITION BY effectiveDate ORDER BY empID) as RowNum FROM tblHumanResources as t ) as t1 WHERE t1.RowNum<=2 ORDER BY effectiveDate ``` `SQLFiddle demo` Version without ROW\_NUMBER() function assuming that `EmpId` is unique during the day: ``` SELECT * FROM tblHumanResources as t WHERE t.EmpID IN (SELECT TOP 2 EmpID FROM tblHumanResources as t2 WHERE t2.effectiveDate=t.effectiveDate ORDER BY EmpID) ORDER BY effectiveDate ``` `SQLFiddle demo`
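The `ROW_NUMBER()` approach runs unchanged on any engine with window functions; a sketch using Python's `sqlite3` (needs SQLite 3.25+), with rows modelled on the question plus a made-up third '2014-01-23' employee so the 2-per-group cut is visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblhumanresources (empid TEXT, effectivedate TEXT);
INSERT INTO tblhumanresources VALUES
    ('0-123', '2014-01-23'), ('0-234', '2014-01-23'), ('0-300', '2014-01-23'),
    ('0-222', '2012-02-19'),
    ('0-213', '2012-03-13');
""")
rows = conn.execute("""
    SELECT empid, effectivedate FROM (
        SELECT t.*,
               ROW_NUMBER() OVER (PARTITION BY effectivedate
                                  ORDER BY empid) AS rownum
        FROM tblhumanresources AS t
    ) WHERE rownum <= 2
    ORDER BY effectivedate, empid
""").fetchall()
print(rows)
# Only two of the three '2014-01-23' rows survive:
# [('0-222', '2012-02-19'), ('0-213', '2012-03-13'),
#  ('0-123', '2014-01-23'), ('0-234', '2014-01-23')]
```

`PARTITION BY` restarts the numbering for each date, so `rownum <= 2` keeps exactly the first two employees (by `empid`) per date.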
``` SELECT TOP 2 * FROM ( SELECT * , ROW_NUMBER() OVER ( PARTITION BY effectiveDate ORDER BY effectiveDate ASC ) AS row_num FROM tblHumanResources ) AS rows WHERE row_num = 1 ```
Get top 2 rows from each distinct field in a column in Microsoft SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
My output right now is stacking one column on top of the other. How can I get the second column next to the first column, instead of below it? edit1: added HTML output. edit2: okay to elaborate, the do loop is operating on a hash of data. { site1 => 1, site2 =>1, site3 =>4} it's something like that. I want an output of: sites :: data site1 :: 3 site2 :: 2 site3 :: 5 ``` <table> <tr> <th>>7days</th> <th><7days</th> </tr> <tr> <% @ls7days.values.each do |ls7day| %> <td><%= ls7day %></td> </tr> <%end%> <tr> <% @gt7days.values.each do |gt7day| %> <td><%= gt7day %></td> </tr> <%end%> </table> ``` HTML output ``` <table> <tr> <th>>7days</th> <th>>7days</th </tr> <td>53</td> </tr> <td>13</td> </tr> <td>49</td> </tr> <td>8</td> </tr> <td>64</td> </tr> </table> ```
Alright, well it seems like you still have some formatting mistakes in your original code, so let's start by correcting those. ``` <table> <tr> <th>&gt;7days</th> <th>&lt;7days</th> </tr> <% @ls7days.values.each do |ls7day| %> <tr> <td><%= ls7day %></td> </tr> <%end%> <% @gt7days.values.each do |gt7day| %> <tr> <td><%= gt7day %></td> </tr> <%end%> </table> ``` Note that I replaced the > and < in the ths with `&gt;` and `&lt;` as well as moved the opens for your trs inside the each. Now that we have fixed the formatting errors we have some issues with the actual structure of the table. I think this step calls for some explanation as to what the different components of a table are for. * First, you have the `<table></table>` tags, these are pretty straight-forward as they signal a table. * Next, you have the `<tr></tr>` tags, these represent the rows of the table. These tags are known as Table Row tags. * Lastly, you have the `<td></td>` tags, these represent the cells of the table. These tags are known as Table Data tags, and are probably the most confusing part of a table. They represent the individual data cells and as such in a way represent your columns. All table rows in a table should have the same number of table datas. * Technically, there are some additional tags such as the `<th></th>` tags, but these can effectively be thought of as `<td><h3></h3></td>`. They are hardly different from a table data. Looking back at your table's structure we see that the first table row has two table datas in it. Then every table row after that has only a single td in it. As such, the behavior of your table is unstable and may display differently from one browser to another. In order to fix this you simply need to add another table data to each table row. My guess, given your comments, is that you actually were trying to use the each statements to generate the columns of your table. 
Because of the way that a table is structured this is not as simple as it might seem, but let's give it a shot. ``` <table> <tr> <td><h4>&gt;7days</h4></td> <td><h4>&lt;7days</h4></td> </tr> <% if @ls7days.values.size > @gt7days.values.size %> <% @ls7days.values.each_with_index do |ls7day, index| %> <tr> <td><%= @gt7days.values[index] unless index >= @gt7days.size %></td> <td><%= ls7day %></td> </tr> <% end %> <% else %> <% @gt7days.values.each_with_index do |gt7day, index| %> <tr> <td><%= gt7day %></td> <td><%= @ls7days.values[index] unless index >= @ls7days.size %></td> </tr> <% end %> <% end %> </table> ``` As you can see, we now iterate over the larger of the two arrays (gt7days and lt7days). As we do we put its value into a table data in a new table row as you were originally doing. However, we also put the value from the smaller array into another table data in that same table row. Which should work great, except it is probably possible (maybe even likely) that the two arrays are not the same size. So, we handle that by saying that we are going to grab the value from the smaller array unless the value we want won't exist (we know it won't because its place in the array is past the arrays size). It may not be the most elegant solution, but I think it will work how you are expecting.
Your th tag might be wrong. It should be like this: ``` <th>7days</th> <th>7days</th> ``` Hope I could help.
Two Columns of data in the same Column. How do I format this table correctly?
[ "", "sql", "ruby-on-rails", "format", "html-table", "" ]
I follow [the MS guidelines](http://msdn.microsoft.com/en-us/library/ms189775.aspx) and they give a specific example as follows. ``` ALTER ROLE Sales ADD MEMBER Barry; ``` However, when I perform the corresponding operation, I get an error telling me that "Incorrect syntax near the keyword 'ADD'" and a line under the keyword *ADD*. When I hover over it, I can see that the tooltip says "Expecting WITH". As far as I can see, *WITH* is used for alterations other than adding users to roles. So my question is twofold. 1. What can I do to add the user to the role? 2. What's up with the misguided tooltip? [Somewhere else on the internet](http://sqlserverplanet.com/security/add-user-to-role), I've seen that users are added to roles by calling a stored procedure instead. That begs the question of what the MSDN link is talking about - as if it's not possible to use that script. Suggestions are welcome on the subject as I'm less than experienced with DBs and feel like a moose on the ice. My config: Microsoft SQL Server Management Studio 10.50.4000.0 Microsoft Data Access Components (MDAC) 6.1.7601.17514 Microsoft MSXML 3.0 4.0 5.0 6.0 Microsoft Internet Explorer 9.0.8112.16421 Microsoft .NET Framework 2.0.50727.5466 Operating System 6.1.7601
What version of SQL Server are you using? The syntax you have will only work in 2012 onwards. Instead you should execute the sp\_addrolemember stored procedure: ``` EXEC sp_addrolemember 'Sales', 'Barry' ```
Try this... ``` EXECUTE sp_addrolemember 'Sales', 'Barry' ```
Can't add user to a role
[ "", "sql", "sql-server", "sql-server-express", "" ]
I have a table with values, it has an ID, date time seqno, lat long, event type, and event code with location (Google location). I am trying to pull data from the table on distinct ID (so only the latest datetime on the ID as value) but it keeps giving me all the rows not only the latest date time. I have tried to use `distinct` but it is not working. I can't ignore any of the data it must be all displayed. will this then result in using sub queries to get the values as required? --- The query I have used is: ``` SELECT Distinct[Unit_id] ,[TransDate] ,[event_id] ,[EventDesc] ,[GeoLocation] ,[Latitude] ,[Longitude] ,[MsgSeqNo] FROM [bd].[dbo].[table] ``` the result is the following: ``` Unit_id TransDate event_id EventDesc GeoLocation Latitude Longitude MsgSeqNo 68217 2014-03-10 17:22 345 Battery Disconnect T3, Ndola, Zambia -12.96938333 28.62473333 1597114648 68217 2014-03-17 15:01 345 Battery Disconnect 0|Position Invalid 0 0 1602601282 68217 2014-03-18 05:37 345 Battery Disconnect 0|Position Invalid 0 0 1602976336 69181 2014-04-03 11:02 345 Battery Disconnect Bowe Avenue, Mufulira, Zambia -12.53835 28.23953333 1616129719 24555 2014-04-21 20:49 345 Battery Disconnect North Rand, Lethabong, South Africa -26.08998333 28.17598333 1629881440 24555 2014-05-14 06:32 345 Battery Disconnect 17 Fountain Road, Lethabong (Ekurhuleni) 1609, South Africa -26.12955 28.15551667 1646144136 24555 2014-05-14 06:33 345 Battery Disconnect 17 Fountain Road, Edenvale 1609, South Africa -26.1296 28.15568333 1646146148 24555 2014-05-17 19:37 345 Battery Disconnect Mokgosi 1, Lobatse, Botswana -25.2317 25.66831667 1649274858 ```
The `DISTINCT` keyword checks for distinct rows across all the columns you specify, not just the first one directly after it. What you must do is partition rows by `[Unit_id]` and order each partition according to `[TransDate]`. The following will help you find the information you are looking for. Note that I have only added three of the columns for simplicity, but it should be easy for you to add the rest of the columns once you have all the column definitions and data. ``` DECLARE @Data TABLE ( [Unit_id] INT, [TransDate] DATETIME2(0), [event_id] INT ) INSERT INTO @Data VALUES ( 68217, '2014-03-10 17:22', 345 ), ( 68217, '2014-03-17 15:01', 345 ), ( 68217, '2014-03-18 05:37', 345 ), ( 69181, '2014-04-03 11:02', 345 ), ( 24555, '2014-04-21 20:49', 345 ), ( 24555, '2014-05-14 06:32', 345 ), ( 24555, '2014-05-14 06:33', 345 ), ( 24555, '2014-05-17 19:37', 345 ) ;WITH OrderedData AS ( SELECT [Unit_id], [TransDate], [event_id], ROW_NUMBER() OVER (PARTITION BY [Unit_id] ORDER BY [TransDate] DESC) AS [Order] FROM @Data ) SELECT [Unit_id], [TransDate], [event_id] FROM OrderedData WHERE [Order] = 1 ``` Note that when using the `WITH` keyword you must make sure that there is a statement separator `;` between the two statements.
I'm assuming the `Unit_id` is unique in the table. But there is probably another unique composite key in the table. I'll assume GeoLocation in which case [GeoLocation, TransDate] might be the unique key. Then you want to find all the records with the max date for the given GeoLocation: ``` SELECT Unit_id] ,[TransDate] ,[event_id] ,[EventDesc] ,[GeoLocation] ,[Latitude] ,[Longitude] ,[MsgSeqNo] FROM [Ibd].[dbo].[table] x WHERE TransDate = ( SELECT MAX(TransDate) FROM [Ibd].[dbo].[table] WHERE GeoLocation = x.GeoLocation ) ``` If the unique key is somehting different, then the join needs to be modified accordingly. **Update** Based on sample data and comment: ``` SELECT Unit_id] ,[TransDate] ,[event_id] ,[EventDesc] ,[GeoLocation] ,[Latitude] ,[Longitude] ,[MsgSeqNo] FROM [Ibd].[dbo].[table] x WHERE MsgSeqNo= ( SELECT MAX(MsgSeqNo) FROM [Ibd].[dbo].[table] WHERE Unit_id= x.Unit_id) ``` Just note that using the max sequence does not imply the most recent record, it just implies the highest sequence number associated with the Unit\_id. Consider carefully your structure and what you really want.
single ID value from multiple rows with ID's
[ "", "sql", "t-sql", "" ]
Below is the necessary info. Table: `Parts`: ``` pid, Color ``` Table: `Supplier` ``` sid, sname ``` Table: `Catalog` ``` pid, sid ``` **I am trying to find the pid in parts that have multiple distinct suppliers. I really don't know what command to use to do this. I know I will have to use INNER JOIN to connect Parts and Supplier but what command ensures that I only get pid that have multiple distinct suppliers?** *What about finding parts that have NO suppliers? I know DISTINCT or COUNT could somehow be used but not sure how this would work.*
This should work: ``` select * from parts where pid in (select pid from catalog group by pid having count(distinct sid) > 1) ``` Since you already have a table mapping a `pid` to one or more `sid`, you just retrieve the records in that table which have multiple `sid` values, and use the `HAVING` clause to implement this filter. For the `pid` values with no `sid` values mapped to them, do a `left join` like so: ``` select * from parts p left join catalog c on p.pid = c.pid where c.sid is null ``` The `is null` check ensures that only those `pid` values which do not have a mapped `sid` in the `Catalog` table are retrieved.
**Find Parts with more than 1 supplier :** ``` SELECT p.Color ,COUNT(DISTINCT s.sname) as nbrSupName FROM Parts p INNER JOIN Catalog c ON c.pid = p.pid INNER JOIN Supplier s ON s.sid = c.sid GROUP BY p.Color HAVING COUNT(DISTINCT s.sname) > 1 ``` Or : ``` SELECT p.Color ,s.sname FROM (SELECT p.pid ,COUNT(DISTINCT s.sname) as nbrSupName FROM Parts p INNER JOIN Catalog c ON c.pid = p.pid INNER JOIN Supplier s ON s.sid = c.sid GROUP BY p.Color) subquery INNER JOIN Catalog c ON c.pid = subquery.pid INNER JOIN Supplier s ON s.sid = c.sid GROUP BY p.Color ,s.sname WHERE subquery.nbrSupName > 1 ``` --- **Find Parts with NO supplier :** ``` SELECT p.Color FROM Parts p LEFT JOIN Catalog c ON c.pid = p.pid WHERE c.sid IS NULL GROUP BY p.Color ``` You can also use the 1st query with `COUNT(DISTINCT s.sname) = 0`
Finding Distinct results in SQL columns
[ "", "sql", "" ]
I was given a task to show the CPU usage trend as part of a build process which also does regression tests. Each individual test case run has a record in the table RegrCaseResult. The RegrCaseResult table looks something like this: ``` id projectName ProjectType returnCode startTime endTime totalMetrics 1 'first' 'someType' 16 'someTime' 'someOtherTime' 222 ``` The RegrCaseResult.totalMetrics is a special key which links to another table called ThreadMetrics through ThreadMetrics.id. Here is what ThreadMetrics looks like: ``` id componentType componentName cpuTime linkId 1 'Job Totals' 'Job Totals' 'totalTime' 34223 2 'parser1' 'parser1' 'time1' null 3 'parser2' 'generator1' 'time2' null 4 'generator1' 'generator1' 'time3' null ------------------------------------------------------ 5 'Job Totals' 'Job Totals' 'totalTime' 9899 ... ``` The rows with the componentName 'Job Totals' are what the totalMetrics from the RegrCaseResult table will link to, and the 'totalTime' is what I really want to get, given a certain projectType. The 'Job Totals' is actually a summation of the other records - in the above example, the summation of time1 through time3. The linkId at the end of table ThreadMetrics can link back to RegrCaseResult.id. The requirements also state I should have a way to enforce the condition which only includes those projects which have a consistent return code during a certain period. That's where my initial question comes from, as follows: I created the following simple table to show what I am trying to achieve: ``` id projectName returnCode 1 'first' 16 2 'second' 16 3 'third' 8 4 'first' 16 5 'second' 8 6 'first' 16 ``` Basically I want to get all the projects which have a consistent returnCode no matter what the returnCode values are. In the above sample, I should only get one project, which is "first". I think this would be simple but I am bad when it comes to databases. Any help would be great. I tried my best to make it clear. Hope I have achieved my goal.
Here is an easy way: ``` select projectname from table t group by projectname having min(returncode) = max(returncode); ``` If the `min()` and `max()` values are the same, then all the values are the same (unless you have `NULL` values). EDIT: To keep `'third'` out, you need some other rule, such as requiring more than one row per project. So, you can do this: ``` select projectname from table t group by projectname having min(returncode) = max(returncode) and count(*) > 1; ```
``` select projectName from projects group by projectName having count(distinct(returnCode)) = 1) ``` This would also return projects which has only one entry. How do you want to handle them? **Working example**: <http://www.sqlfiddle.com/#!2/e7338/8>
How to do this query against MySQL database table?
[ "", "mysql", "sql", "database", "" ]
More of a curiosity question .. Studying SQL and I want to know the maximum number of AND clauses: ``` WHERE condition1 AND condition2 AND condition3 AND condition4 ... AND condition? ... AND condition_n; ``` i.e. what is the biggest possible `n`? It would seem that since these could be trivial comparisons, the limit is high. How far can one go before reaching the limit? [src](http://www.techonthenet.com/sql/and.php)
Practically, there is no limit. Most tools will have some limit on the length of the SQL statement that they can deal with. If you want to get really deep into the weeds, though, you could use the `dbms_sql` package which accepts a collection of `varchar2(4000)` that comprise a single SQL statement. That would get you up to 2^32 \* 4000 bytes. If we assume that every condition is at least 10 bytes, that puts a reasonable upper limit of 400 \* 2^32 which is roughly 800 billion conditions. If you're getting anywhere close to that, you're doing something really wrong. Most tools will have limits that kick in well before that. Of course, if you did create the largest possible SQL statement using `dbms_sql`, that SQL statement would require ~16 trillion bytes. A single SQL statement that required 16 TB of storage would probably create other issues...
I put together a simple test case: ``` select * from dual where 1=1 and 1=1 ... ``` Using SQL\*Plus, I was able to run with 100,000 conditions (admittedly, very simple ones) without an issue. I'd find any use case that came even close to approaching that number to be highly suspect...
In Oracle SQL , what is the maximum number of AND clauses in a query?
[ "", "sql", "oracle", "oracle11g", "boundary", "" ]
I have a problem with using an alias in a where clause. I have tables like this: ![enter image description here](https://i.stack.imgur.com/6lXbm.png) I store users and they can send messages to each other. Message data is stored in the social\_messages table, with core\_users\_sender and core\_users\_receiver ids. Now when a user logs in to the system I want to show the list of only those users with whom he/she has had a conversation. (The logged-in core\_users.id is 6.) I use this query and get the ids of friends with whom the user had conversations without problem: ``` SELECT messages.id, messages.status, messages.send_date, IF( core_users_sender = 6, core_users_receiver, core_users_sender ) as friend_id FROM social_messages messages WHERE messages.core_users_sender = 6 OR messages.core_users_receiver = 6 GROUP BY friend_id ``` But the problem is that when I try to get data from the core\_users table with friend\_id and use this query: ``` SELECT messages.id, messages.status, messages.send_date, IF( core_users_sender = 6, core_users_receiver, core_users_sender ) as friend_id, users.fullname FROM social_messages messages, core_users users WHERE users.id = friend_id AND ( messages.core_users_sender = 6 OR messages.core_users_receiver = 6 ) GROUP BY friend_id ``` I get an error, because friend\_id can't be used in the where clause since it's calculated in the select.
You can't use aliases in the `where` clause. Use the original expression instead: ``` where users.id = IF( core_users_sender = 6, core_users_receiver, core_users_sender ) ``` > It is not allowable to refer to a column alias in a WHERE clause, because the column value might not yet be determined when the WHERE clause is executed.
I suggest you to reorder your query in this way : ``` SELECT messages.id, messages.status, messages.send_date, users.fullname, IF( messages.core_users_sender IS NOT NULL, messages.core_users_receiver, messages.core_users_sender ) as friend_id FROM social_messages messages left join core_users users_sender on users_sender.id = messages.core_users_sender and messages.core_users_sender = 6 left join core_users users_receiver on users_receiver.id = messages.core_users_receiver and messages.core_users_receiver = 6 WHERE users_sender.id IS NOT NULL OR users_receiver.id IS NOT NULL ```
Mysql select alias in where clause
[ "", "mysql", "sql", "" ]
I have a table with a column of dates which includes dates and `NULL` values. I am trying to figure out a way to find the MAX `Date`, per `ID`, or if there is a `NULL` value, then to return `NULL` instead. So for example: ``` ID Date 1 2014-01-01 1 2014-02-01 1 2014-03-01 2 2014-02-01 2 NULL 3 NULL 4 2014-03-01 ``` So what I am trying to yield is: ``` 1 = 2014-03-01 2 = NULL 3 = NULL 4 = 2014-03-01 ``` As of right now I am using something like this: ``` NULLIF(MAX(COALESCE(n.[SentDate], '12/16/9997')),'12/16/9997') AS [MaxSentDate] ``` I am 99% sure that no one will ever put in a date of `12/16/9997`, but I would like to come up with a proper solution rather than using a hackish one like this.
Try this : ``` SELECT [ID] , CASE WHEN MAX(CASE WHEN [Date] IS NULL THEN 1 ELSE 0 END) = 0 THEN MAX([Date]) END FROM YourTable GROUP BY [ID] ```
``` SELECT ID, [Date] FROM ( SELECT ID ,[DATE] ,ROW_NUMBER() OVER (PARTITION BY ID ORDER BY CASE WHEN [Date] IS NULL THEN '99991212' ELSE [Date] END DESC) RN FROM TABLE_NAME) A WHERE RN = 1 ``` ## [`Working SQL FIDDLE`](http://sqlfiddle.com/#!3/9b113/1)
How to find max value of column with NULL being included and considered the max
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have the following SQL query which is working great but is very slow to process (3 to 5 seconds). I have created indexes on the "slug" and "checksum" columns, but as the IN clause runs through 5000 to 10000 rows it's not enough to make it fast. I read that there was a way to improve it using temporary tables and/or joins but I can't find a way to make it work. DB engine is InnoDB on MySQL. Any help would be really appreciated. ``` SELECT name AS personName, slug AS personSlug, COUNT(slug) AS personCount FROM person WHERE checksum IN ( SELECT checksum FROM person WHERE slug = 'john-doe' ) AND NOT (slug = 'john-doe') GROUP BY personName ORDER BY personCount DESC ```
I am not fully understanding what your query is attempting to do without seeing some sample data. But it looks like you are trying to find all checksums that match the checksums associated with 'john-doe' but don't have slug = 'john-doe' - so a search for duplicates of some sort. The following self-join should do this for you. ``` SELECT p.name AS personName, p.slug AS personSlug, COUNT(p.slug) AS personCount FROM person AS p INNER JOIN person AS p2 ON p.checksum = p2.checksum WHERE p2.slug = 'john-doe' AND p.slug <> 'john-doe' GROUP BY personName ORDER BY personCount DESC ```
Often changing it to a `not exists` helps performance: ``` SELECT name AS personName, slug AS personSlug, COUNT(slug) AS personCount FROM person p WHERE EXISTS (SELECT 1 from person p2 WHERE p2.slug = 'john-doe' and p2.checksum = p.checksum ) AND NOT (slug = 'john-doe') GROUP BY personName ORDER BY personCount DESC; ``` For performance you want an index on `person(checksum, slug)`.
Slow IN() MySQL query optimization
[ "", "mysql", "sql", "database", "" ]
I am trying to get only the sixth previous month's data from the query, i.e. I have to restrict the results to just that one month. Suppose the current month is June; then I only want January's data, and no data from any other previous month. Can anyone help me with this? ``` SELECT so_date FROM RS_Sells_Invoice_Info_Master SIIM LEFT OUTER JOIN RS_Sell_Order_Master AS SM ON SM.sell_order_no = SIIM.sell_order_no LEFT OUTER JOIN RS_Sell_Order_Mapping AS SOM ON SOM.sell_order_no = SIIM.sell_order_no AND SIIM.product_id = SOM.product_id LEFT OUTER JOIN RS_Inventory_Master AS IM ON IM.product_id = SIIM.product_id where so_date between CAST(DATEADD(month, DATEDIFF(month, 0, so_date)-5, 0)AS DATE) and CAST(DATEADD(month, DATEDIFF(month, 0, so_date)-4, 0)AS DATE) ```
To get all the data for a specific month (6 months ago) use the following where clause. You need to compare both month and year to ensure you get the correct month, i.e. if the current month is May you want December from the previous year. ``` where datepart(Month, [so_date]) = datepart(Month, dateadd(month, -6,getdate())) and datepart(Year, [so_date]) = datepart(year, dateadd(month, -6,getdate())) ```
> Suppose current month is June then I only want January's data This would work ``` WHERE so_date >= DATEADD(mm, -6, LEFT(CONVERT(VARCHAR, GETDATE(), 120), 8) + '01') AND so_date < DATEADD(mm, -5, LEFT(CONVERT(VARCHAR, GETDATE(), 120), 8) + '01') ``` The `LEFT(CONVERT(VARCHAR, GETDATE(), 120), 8) + '01'` gives you the start of the current month in `YYYY-MM-DD` format. The rest is straight-forward.
How to get only a particular month's data in SQL
[ "", "sql", "sql-server", "" ]
I have to merge two 500M+ row tables. What is the best method to merge them? I just need to display the records from these two SQL-Server tables if somebody searches on my webpage. These are fixed tables, no one will ever change data in these tables once they are live. ``` create a view myview as select * from table1 union select * from table2 ``` Is there any harm using the above method? If I start merging 500M rows it will run for days, and if the machine reboots it will make the database go into recovery mode, and then I have to start from the beginning again. **Why am I merging these tables?** * I have a website which provides a search on the person table. * This table has columns like Name, Address, Age etc * We got 500 million rows from similar .txt files, which we loaded into another table. * Now we want the website search page to query both tables to see if a person exists in the table. * We get similar .txt files of 100 million or 20 million rows, which we load into this huge table. **How are we currently doing it?** * We import the .txt files into separate tables (some columns are different in the .txt) * Then we arrange the columns and do the data type conversions * Then we insert this staging table into the liveCopy huge table (in a test environment) We have SQL Server 2008 R2 * Can we use table partitioning for performance benefits? * Is it ok to create monthly small tables and create a view on top of them? * How can indexing be done in this case? We only load new data once a month and do the select Does **replication** help? **The biggest issue I am facing is managing huge tables.** I hope I explained the situation. Thanks & Regards
If your purpose is truly just to move the data from the two tables into one table, you will want to do it in batches - 100K records at a time, or something like that. I'd guess you crashed before because your T-Log got full, although that's just speculation. Make sure to throw in a checkpoint after each batch if you are in Full recovery mode. That said, I agree with all the comments that you should provide why you are doing this - it may not be necessary at all.
1) Usually developers, to achieve more performance, are splitting large tables into smaller ones and call this as partitioning (horizontal to be more precise, because there is also vertical one). Your view is a sample of such partitions joined. Of course, it is mostly used to split a large amount of data into range of values (for example, table1 contains records with column [col1] < 0, while table2 with [col1] >= 0). But even for unsorted data it is ok too, because you get more room for speed improvements. For example - parallel reads if put tables to different storages. So this is a good choice. 2) Another way is to use MERGE statement supported in SQL Server 2008 and higher - [http://msdn.microsoft.com/en-us/library/bb510625(v=sql.100).aspx](http://msdn.microsoft.com/en-us/library/bb510625%28v=sql.100%29.aspx). 3) Of course you can copy using INSERT+DELETE, but in this case or in case of MERGE command used do this in a small batches. Smth like: ``` SET ROWCOUNT 10000 DECLARE @Count [int] = 1 WHILE @Count > 0 BEGIN ... INSERT+DELETE/MERGE transcation... SET @Count = @@ROWCOUNT END ```
How to merge 500 million table with another 500 million table
[ "", "sql", "sql-server", "" ]
I have two different queries that work well on their own. The first gives me my useful result column `TOTALI` and the second query the column `RIMBORSATI`. So I need to combine the first query with the second so that the HAVING clause of the first query becomes an expression like HAVING `totali-rimborsati < professionisti.limite`. Thank you so much. **First Query:** ``` SELECT professionisti.*,COUNT(contatti_acquistati_addebito.email) AS totali FROM professionisti LEFT JOIN contatti_acquistati_addebito ON professionisti.email = contatti_acquistati_addebito.email AND contatti_acquistati_addebito.DATA BETWEEN ('2014-05-01') AND ('2014-05-31') WHERE professionisti.categoria LIKE '%0540%' AND professionisti.province LIKE '%MI%' AND professionisti.addebito='1' GROUP BY professionisti.email HAVING totali < professionisti.limite ORDER BY totali ASC LIMIT 4 ``` **Second Query:** ``` SELECT professionisti.*,COUNT(contatti_rimborsi.email) AS rimborsati FROM professionisti LEFT JOIN contatti_rimborsi ON professionisti.email = contatti_rimborsi.email AND contatti_rimborsi.DATA BETWEEN ('2014-05-01') AND ('2014-05-31') WHERE professionisti.categoria LIKE '%0540%' AND professionisti.province LIKE '%MI%' AND professionisti.addebito='1' GROUP BY professionisti.email ORDER BY totali ASC LIMIT 4 ```
``` SELECT p.*,m1.*,m2.*,IFNULL(m2.rimborsi, 0) as rimborsiok FROM professionisti p LEFT JOIN ( SELECT ca.email, COUNT(*) AS totali FROM contatti_acquistati_addebito ca WHERE ca.data between ('2014-06-01') AND ('2014-06-31') GROUP BY ca.email ) AS m1 ON p.email = m1.email LEFT JOIN ( SELECT cr.email, COUNT(*) AS rimborsi FROM contatti_rimborsi cr WHERE cr.data between ('2014-06-01') AND ('2014-06-31') GROUP BY cr.email ) AS m2 ON p.email = m2.email WHERE p.categoria LIKE '%0540%' AND p.province LIKE '%MI%' AND p.standby='0' AND p.addebito='1' HAVING m1.totali-rimborsiok<p.limite OR p.limite=0 ```
``` select t1.email,t1.limite,t1.totali,t2.rimborsati from ( SELECT professionisti.email, max(professionisti.limite) as limite, min(COUNT(contatti_acquistati_addebito.email) AS totali FROM professionisti LEFT JOIN contatti_acquistati_addebito ON professionisti.email = contatti_acquistati_addebito.email AND contatti_acquistati_addebito.DATA BETWEEN ('2014-05-01') AND ('2014-05-31') WHERE professionisti.categoria LIKE '%0540%' AND professionisti.province LIKE '%MI%' AND professionisti.addebito='1' GROUP BY professionisti.email -- Here that professionisti.limite does make sense to me it should be an aggregate function!? -- (are you sure this query works?) -- using max(professionisti.limite) and using the aggregate count for email HAVING COUNT(contatti_acquistati_addebito.email) < max(professionisti.limite) -- using aggregate more general sql (works better on other engines) -- removed see why below. -- ORDER BY COUNT(contatti_acquistati_addebito.email) ASC LIMIT 4 ) t1 left join ( SELECT professionisti.email,COUNT(contatti_rimborsi.email) AS rimborsati FROM professionisti LEFT JOIN contatti_rimborsi ON professionisti.email = contatti_rimborsi.email AND contatti_rimborsi.DATA BETWEEN ('2014-05-01') AND ('2014-05-31') WHERE professionisti.categoria LIKE '%0540%' AND professionisti.province LIKE '%MI%' AND professionisti.addebito='1' GROUP BY professionisti.email -- Here you cannot order by totali you do not have it so I am removing both order by -- alternativly put the same left join with contatti_acquistati_addebito as above! -- ORDER BY totali ASC LIMIT 4 ) t2 on t1.email=t2.email where ,t1.totali-t2.rimborsati < t1.limite ```
MYSQL query with two different join and count
[ "", "mysql", "sql", "join", "count", "" ]
I have a parent table Person and 3 child tables PersonA, PersonB and PersonC. ``` Person - ID int primary key not null, name varchar(50) null PersonA - ID int primary key not null references Person(ID), fname, lname, zip etc. PersonB - ID int primary key not null references Person(ID), fname, lname, zip etc. PersonC - ID int primary key not null references Person(ID), fname, lname, zip etc. ``` What I need to build is a constraint which will ensure that a person falls under only one of the three child tables. For example, if I have a row in Person with 234, JohnSmith, with my current design I can have the same 234 in PersonA, PersonB and PersonC. My goal is to have the 234 in ONLY one of the three child tables.
One way to handle this is by having three foreign key references in the `Person` table with a constraint: ``` PersonAID int references PersonA(ID), PersonBID int references PersonB(ID), PersonCID int references PersonC(ID), check ((PersonAID is not null and PersonBID is null and PersonCID is null) or (PersonBID is not null and PersonCID is null and PersonAID is null) or (PersonCID is not null and PersonAID is null and PersonBID is null) ) ``` Note: if you want to allow all three to be `NULL`: ``` check ((PersonAID is not null and PersonBID is null and PersonCID is null) or (PersonBID is not null and PersonCID is null and PersonAID is null) or (PersonCID is not null and PersonAID is null and PersonBID is null) or (PersonAID is null and PersonBID is null and PersonCID is null) ) ``` If you do this, then you may not need the reference from each of the subtables to the maintable.
Have you tried creating a view of the union of the three child tables and then creating a unique index on this view? Check out [MSDN](http://msdn.microsoft.com/en-us/library/ms191432%28v=sql.105%29.aspx) ``` Create View Persons as Select ID From PersonA union Select ID From PersonB union Select ID From PersonC --Create an index on the view. CREATE UNIQUE CLUSTERED INDEX IDX_V1 ON Persons (ID); ```
SQL constraint for unique child rows
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
My question title says exactly what I need help with. **Table definition & relation:** ![enter image description here](https://i.stack.imgur.com/CffMx.jpg) --- I tried the query below, but it does not delete the data "jl" from the measuring\_units table. I want to delete the row with id 62 in the "measuring\_units" table, because it is not being used in the "food\_units\_relation" table. This is what I tried: ``` DELETE t2 FROM food_units_relation t1 JOIN measureing_units t2 ON t1.measuring_unit_id = t2.measuremnt_id WHERE t1.foodUnit_relation_Id = 17 and t2.measuremnt_id NOT IN(t1.measuring_unit_id) and t2.creater_id=1; ```
Use a `LEFT JOIN` and keep the units that have no matching relation row — those are the ones to delete. Note that a filter such as `t1.foodUnit_relation_Id = 17` cannot be combined with the `IS NULL` test: for unmatched rows every `t1` column is NULL, so that condition would never be true and nothing would be deleted. ``` DELETE t2 FROM measureing_units t2 LEFT JOIN food_units_relation t1 ON t1.measuring_unit_id = t2.measuremnt_id WHERE t1.measuring_unit_id is null and t2.creater_id = 1; ```
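SQLite (used here via Python as a quick check) has no multi-table `DELETE ... JOIN`, so this hypothetical sketch uses the equivalent `NOT EXISTS` anti-join to exercise the same "delete unreferenced parents" logic; the table names follow the question (including its spelling) and the data values are invented:

```python
import sqlite3

# Unit 62 is not referenced by any relation row, so it should be deleted;
# unit 61 is referenced and unit 63 belongs to a different creator.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE measureing_units (measuremnt_id INTEGER, creater_id INTEGER);
CREATE TABLE food_units_relation (measuring_unit_id INTEGER);
INSERT INTO measureing_units VALUES (61, 1), (62, 1), (63, 2);
INSERT INTO food_units_relation VALUES (61);
""")
con.execute("""
DELETE FROM measureing_units
WHERE creater_id = 1
  AND NOT EXISTS (SELECT 1 FROM food_units_relation f
                  WHERE f.measuring_unit_id = measureing_units.measuremnt_id)
""")
remaining = [r[0] for r in con.execute(
    "SELECT measuremnt_id FROM measureing_units ORDER BY measuremnt_id")]
print(remaining)  # [61, 63] -- the unreferenced unit 62 is gone
```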
The issue here: you use an INNER JOIN, so you never even select a `measuremnt_id` that doesn't exist in `food_units_relation`. Instead, use a RIGHT JOIN (and drop the extra conditions on the `t1` table in the WHERE clause): ``` DELETE t2 FROM food_units_relation t1 RIGHT JOIN measureing_units t2 ON t1.measuring_unit_id = t2.measuremnt_id WHERE t1.measuring_unit_id IS NULL AND t2.creater_id=1; ``` Or just use NOT EXISTS: ``` DELETE FROM measureing_units t2 WHERE NOT EXISTS (SELECT * FROM food_units_relation t1 WHERE t1.measuring_unit_id = t2.measuremnt_id) AND t2.creater_id=1 ```
sql query to delete parent table rows which are not used in child table
[ "", "mysql", "sql", "" ]
Assume the following structure: **Items**: ``` ItemId Price ---------------- 1000 129.95 2000 49.95 3000 159.95 4000 12.95 ``` **Thresholds**: ``` PriceThreshold Reserve ------------------------ 19.95 10 100 5 150 1 -- PriceThreshold is the minimum price for that threshold/level ``` I'm using SQL Server 2008 to return the 'Reserve' based on where the item price falls between in 'PriceThreshold'. Example: ``` ItemId Reserve 1000 5 2000 10 3000 1 ``` --Price for ItemId 4000 isn't greater than the lowest price threshold so should be excluded from the results. Ideally I'd like to just be able to use some straight T-SQL, but if I need to create a stored procedure to create a temp table to store the values that would be fine. [Link to SQL Fiddle for schema](http://sqlfiddle.com/#!3/5e3fb) It's late and I think my brain shut off, so any help is appreciated. Thanks.
Interested in something like this: ``` select ItemId, (select top 1 Reserve from Threshold where Threshold.PriceThreshold < Items.Price order by PriceThreshold desc) as Reserve from Items where Price > (select min(PriceThreshold) from Threshold) ``` [SQLFiddle](http://sqlfiddle.com/#!3/5e3fb/10)
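The correlated-subquery approach can be sanity-checked outside SQL Server. In this hypothetical sketch (SQLite via Python), `TOP 1` becomes SQLite's `LIMIT 1`; the data is the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Items (ItemId INTEGER, Price REAL);
CREATE TABLE Threshold (PriceThreshold REAL, Reserve INTEGER);
INSERT INTO Items VALUES (1000,129.95),(2000,49.95),(3000,159.95),(4000,12.95);
INSERT INTO Threshold VALUES (19.95,10),(100,5),(150,1);
""")
# For each item, pick the highest threshold that is still below its price.
rows = con.execute("""
SELECT ItemId,
       (SELECT Reserve FROM Threshold
        WHERE Threshold.PriceThreshold < Items.Price
        ORDER BY PriceThreshold DESC LIMIT 1) AS Reserve
FROM Items
WHERE Price > (SELECT MIN(PriceThreshold) FROM Threshold)
ORDER BY ItemId
""").fetchall()
print(rows)  # [(1000, 5), (2000, 10), (3000, 1)] -- item 4000 is excluded
```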
One way to go about this is to use `PriceThreshold` as the lower boundary of the range, and use the `lead` analytic function to fetch the next row's lower boundary, which serves as the current row's top boundary. Once you've done this, it's simply a matter of joining with the condition that the price should be between both boundaries. Unfortunately, the `between` operator doesn't handle `null`s, so you'll need to use a somewhat clunky condition to handle the first and last rows: ``` SELECT [ItemId], [Reserve] FROM Items JOIN (SELECT [PriceThreshold] AS [Bottom], LEAD([PriceThreshold]) OVER (ORDER BY [PriceThreshold]) AS [Top], [Reserve] FROM [Threshold]) t ON [Price] Between [Bottom] AND [Top] OR ([Top] IS NULL AND [Price] > [Bottom]) OR ([Bottom] IS NULL AND [Price] < [Top]) ``` [SQLFiddle solution](http://sqlfiddle.com/#!6/5e3fb/6/0)
Find value from Table1 where value from Table2 is between rows in Table1
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table, say **Products**, with the following columns: **ID** **AdminID** **CategoryID** Some queries on this table will be only on **AdminID** (**Q1**) and some only on **CategoryID** (**Q2**). However, there will be a few queries on both, i.e. **AdminID** and **CategoryID** (**Q3**). It seems that in this case I need to create 2 indexes (excluding the one on ID): * Index on both **AdminID** and **CategoryID**, with **AdminID** in position 1 of the index. This should handle **Q1** and **Q3**. * Index on only **CategoryID**. This should handle **Q2**. Is the above a good design?
Short answer: Yes, that's fine. Creating indexes when needed is "good design", if the usage of the data warrants it. Longer answer: Yes, with a few considerations. First, the amount of data in the table should warrant the indexes — weigh the overhead of maintaining multiple indexes on updates, inserts and deletes against the cost of simply reading the whole table on each query. Basically, if the table will not contain "a lot of data", it might be better to avoid indexes on it altogether. Also, how is the distribution? If 80% (just an example number) of the rows share the same value in the column, an index might be of little use for selections, because the query optimizer will still have to touch most rows anyway, and the overhead of maintaining the index is then perhaps larger than the gain on select. Finally, without knowing the rest of your database design (related tables), it's impossible to say whether your structure is "optimal", whether you actually need those values in that table or they should be in another table, or whether your queries could possibly be changed.
You are correct. You only need to create two indexes: (AdminID, CategoryID) and (CategoryID). Since AdminID is the leading column of the composite index, that index will also be used for queries on AdminID alone whenever the optimizer deems it cost effective. There is no reason to create a separate AdminID index.
SQL creating both multiple column index and single column index on a Table
[ "", "mysql", "sql", "" ]
How can I insert values into two different tables from the same stored procedure?
yup, easily: ``` CREATE PROCEDURE [dbo].[InserIntoTwoTables] @arg1 INT, @arg2 INT, @arg3 INT, @arg4 INT AS BEGIN INSERT INTO Table1 (col1 ,col2) VALUES (@arg1 , @arg2) INSERT INTO Table2 (col3 ,col4) VALUES (@arg3 , @arg4) END GO ``` Frankly, it does not get anymore straightforward than that...
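SQLite has no stored procedures, so in this hypothetical sketch the "procedure" is just a Python function; the point it illustrates is that both INSERTs can run as one unit of work, so they succeed or fail together (worth considering for the T-SQL procedure too, e.g. by wrapping the two INSERTs in a transaction):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (col1 INTEGER, col2 INTEGER);
CREATE TABLE Table2 (col3 INTEGER, col4 INTEGER);
""")

def insert_into_two_tables(arg1, arg2, arg3, arg4):
    # The connection context manager commits both inserts together,
    # or rolls both back if either fails.
    with con:
        con.execute("INSERT INTO Table1 (col1, col2) VALUES (?, ?)", (arg1, arg2))
        con.execute("INSERT INTO Table2 (col3, col4) VALUES (?, ?)", (arg3, arg4))

insert_into_two_tables(1, 2, 3, 4)
counts = (con.execute("SELECT COUNT(*) FROM Table1").fetchone()[0],
          con.execute("SELECT COUNT(*) FROM Table2").fetchone()[0])
print(counts)  # (1, 1)
```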
That is the point of a Stored Procedure. Here is an example: ``` CREATE PROCEDURE <Procedure_Name, sysname, ProcedureName> @Param1 INT, @Param2 INT AS BEGIN SET NOCOUNT ON; INSERT INTO TAB1 (COL1) VALUES (@Param1) INSERT INTO TAB2 (COL1) VALUES (@Param2) END GO ```
Can I use one stored procedure to insert values into two tables
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a table like this ``` +----+------------+------------+---------+-----+------+------+ | id | is_deleted | sort_order | version | cid | pid | qid | +----+------------+------------+---------+-----+------+------+ | 1 | | 1 | 0 | 1 | 1 | 1 | | 2 | | 2 | 0 | 1 | 1 | 2 | | 3 | | 3 | 0 | 1 | 1 | 3 | | 4 | | 1 | 0 | 1 | 2 | 7 | | 5 | | 2 | 0 | 1 | 2 | 1 | | 6 | ☺ | 1 | 1 | 1 | 6 | 14 | | 7 | ☺ | 1 | 1 | 1 | 5 | 13 | | 8 | | 1 | 0 | 1 | 4 | 12 | | 9 | | 3 | 0 | 1 | 2 | 2 | | 10 | | 4 | 0 | 1 | 1 | 4 | | 11 | | 5 | 0 | 1 | 1 | 5 | +----+------------+------------+---------+-----+------+------+ ``` as you can see pid is repeated. Is it possible to get like below format ``` pid qid 1 1,2,3,4,5 2 7,1,2 6 14 5 13 4 12 ``` I tried like this but the output I got is ``` SELECT pid,GROUP_CONCAT(qid) FROM client_parent_question ------+--------------------------+ pid | GROUP_CONCAT(qid) | ------+--------------------------+ 1 | 1,2,3,7,1,14,13,12,2,4,5 | ------+--------------------------+ ```
You are missing a `GROUP BY` clause: ``` SELECT pid,GROUP_CONCAT(qid) FROM client_parent_question group by pid ```
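SQLite also has a `group_concat` aggregate, so the effect of adding `GROUP BY` is easy to verify with the question's data (the order of values inside each concatenated group is not guaranteed, so the check below compares them as sets):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE client_parent_question (pid INTEGER, qid INTEGER)")
con.executemany("INSERT INTO client_parent_question VALUES (?, ?)",
                [(1, 1), (1, 2), (1, 3), (2, 7), (2, 1), (6, 14),
                 (5, 13), (4, 12), (2, 2), (1, 4), (1, 5)])
# One row per pid, with the qids collapsed into a comma-separated string.
rows = con.execute("""
SELECT pid, group_concat(qid)
FROM client_parent_question
GROUP BY pid
ORDER BY pid
""").fetchall()
print(rows)
```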
**Use `GROUP BY`** ``` SELECT pid, GROUP_CONCAT(qid) FROM client_parent_question GROUP BY pid ```
how to get duplicate column values in comma separated
[ "", "mysql", "sql", "" ]
I have the following query: ``` SELECT DISTINCT [WL].[Id] ,[WL].[UserId] ,[WL].[DIF] ,[WL].[MW] ,[WL].[Notes] ,[WL].[WDate] ,[WL].[CB] ,[WL].[MPH] ,[U].[Id] ,[U].[UserName] ,[U].[We] ,[U].[SLength] ,[U].[UP] ,[U].[PU] ,[U].[ANumber] ,[U].[G_CK] FROM [Wsite].[dbo].[WLog] as WL INNER JOIN [Wsite].[dbo].[Users] AS U ON [U].[Id] = [WL].[UserId] WHERE [WL].[WDate] >= CONVERT(datetime, '2012-01-01 00:00:00', 120) AND [WL].[WDate] <= CONVERT(datetime, GETDATE(), 120) GROUP BY [WL].[UserId] ``` And the error I get is: > Column 'Wsite.dbo.WLog.Id' is invalid in the select > list because it is not contained in either an aggregate function or > the GROUP BY clause. What I want is to just combine the data if there are more than one `UserID` in the list. As an example: ``` Id | UserId | .... | Id | UserName | SLength | .... 5843| 99304 | .... | 99304| Bob Barker | 14 | .... 5844| 06300 | .... | 06300| Dean Martin | 104 | .... 5845| 99304 | .... | 99304| Bob Barker | 8 | .... 5846| 99304 | .... | 99304| Bob Barker | 11 | .... 5847| 7699 | .... | 7699 | John Doe | 0 | .... ``` So it should look like this: ``` Id | UserId | .... | Id | UserName | SLength | .... 5843| 99304 | .... | 99304| Bob Barker | 33 | .... 5844| 06300 | .... | 06300| Dean Martin | 104 | .... 5847| 7699 | .... | 7699 | John Doe | 0 | .... ``` Notice that *Bob Barker's* **SLength** was combined (14+8+11=33). Any help would be great! Thanks! 
**UPDATE** ``` SELECT DISTINCT MIN([WL].[Id]) AS [WLID] ,MIN([WL].[UserId]) AS [WLUD] ,SUM([WL].[DIF]) AS [DIF] ,SUM([WL].[MW]) AS [MW] ,[WL].[Notes] ,[WL].[WDate] ,SUM([WL].[CB]) AS [CB] ,SUM([WL].[MPH]) AS [MPH] ,MIN([U].[Id]) AS [UID] ,MIN([U].[UserName]) AS [UUserName] ,[U].[We] ,SUM([U].[SLength]) AS [SLength] ,[U].[UP] ,[U].[PU] ,[U].[ANumber] ,[U].[G_CK] FROM [Wsite].[dbo].[WLog] as WL INNER JOIN [Wsite].[dbo].[Users] AS U ON [U].[Id] = [WL].[UserId] WHERE [WL].[WDate] >= CONVERT(datetime, '2012-01-01 00:00:00', 120) AND [WL].[WDate] <= CONVERT(datetime, GETDATE(), 120) GROUP BY [WL].[Notes] ,[WL].[WDate] ,[U].[We] ,[U].[UP] ,[U].[PU] ,[U].[ANumber] ,[U].[G_CK] ORDER BY MIN([U].[UserName]) DESC ```
When you use GROUP BY, it means you want to group like elements together and present some sort of aggregate (COUNT, SUM, MIN, etc.) of the remaining items you want to display. Because of this, if you include [WL].[Id] in the select list, you'll basically get the full selection without any aggregation (since [WL].[Id] is unique). You'll need to do something like the following: ``` SELECT MIN([WL].[Id]) AS ID ,[WL].[UserId] ,[U].[Id] ,[U].[UserName] ,SUM([SLength]) AS SLength FROM [Wsite].[dbo].[WLog] as WL INNER JOIN [Wsite].[dbo].[Users] AS U ON [U].[Id] = [WL].[UserId] WHERE [WL].[WDate] >= CONVERT(datetime, '2012-01-01 00:00:00', 120) AND [WL].[WDate] <= CONVERT(datetime, GETDATE(), 120) GROUP BY [WL].[UserId] ,[U].[Id] ,[U].[UserName] ``` (For brevity, I did not include all the fields). Also note that since [WL].[Id] appears to be unique, it is meaningless in this context, unless you want to display its first occurrence (then you can add MIN([WL].[Id]) to the SELECT list).
If you use a `GROUP BY`, then any columns that are not in the grouping have to be aggregated in some way (`SUM`, `MIN`, `MAX`, etc) Try this: ``` SELECT [WL].[Id] ,[WL].[UserId] ,[WL].[DIF] ,[WL].[MW] ,[WL].[Notes] ,[WL].[WDate] ,[WL].[CB] ,[WL].[MPH] ,Sum ([U].[SLength]) SLength FROM [Wsite].[dbo].[WLog] as WL INNER JOIN [Wsite].[dbo].[Users] AS U ON [U].[Id] = [WL].[UserId] WHERE [WL].[WDate] >= CONVERT(datetime, '2012-01-01 00:00:00', 120) AND [WL].[WDate] <= CONVERT(datetime, GETDATE(), 120) GROUP BY [WL].[Id] ,[WL].[UserId] ,[WL].[DIF] ,[WL].[MW] ,[WL].[Notes] ,[WL].[WDate] ,[WL].[CB] ,[WL].[MPH] ``` Here are the aggregate functions you can use: <http://msdn.microsoft.com/en-us/library/ms173454.aspx>
SQL Server : combining like rows
[ "", "sql", "sql-server", "sql-server-2008", "group-by", "inner-join", "" ]
I know googling this question comes back a lot of solutions, but none of them apply to my situation. I have a table such that: ``` CREATE TABLE [Batch]( [batch_id] [int] NOT NULL, ...(more columns) [date] [datetime] NULL) CONSTRAINT [pk_Positions] PRIMARY KEY CLUSTERED ( [batch_id] ASC, ...(more columns) ) ``` 1. `batch_id` and `date` has a one-on-one relationship. I.e., for a given batch\_id, all dates are the same, for a given date, all batch\_id are the same. (I know it's poor design. If I were to design the table I would probably create a separate table for batch\_id and date) 2. there can be multiple records that have the same `batch_id` Now I want to get a list of all distinct `date`s. Since the table is very huge and `date` is not an indexed column, I don't want to try anything like: ``` select distinct date from Batch ``` And for similar reasons, I have ruled out the option of creating a non-clustered index on `date` Instead, I want to do something like: ``` select First(date) from Batch Group by batch_id ``` or ``` select Top 1 date from Batch Group by batch_id ``` but MS SQL doesn't provide `First()` function, and the latter one returns a "not in a aggregate function" error. As far as I see based on my research, I should use `Min()` or `Max()` as an alternative to `First()`, such as: ``` select Max(date) from Batch Group by batch_id ``` However, since there can be sometimes over 100k records with the same `batch_id`, using `Min()` or `Max()` is not as efficient as just returning the first record without any comparison. So how can I optimize the last query to achieve better performance?
If you create this function:- ``` CREATE FUNCTION [dbo].GetDateForBatch_id ( @batch_id int ) RETURNS datetime AS BEGIN RETURN (SELECT TOP 1 [date] FROM dbo.Batch WHERE batch_id=@batch_id) END go ``` and then run this query:- ``` select b.batch_id, dbo.GetDateForBatch_id(b.batch_id) AS [date] FROM (SELECT DISTINCT batch_id FROM Batch) b ``` You should get optimal performance with the index strategy you have in place.
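The same shape can be checked quickly in SQLite (via Python), where `TOP 1` becomes `LIMIT 1`. The dates below are invented, with one date per batch as the question guarantees, and the scalar function is inlined as a correlated subquery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Batch (batch_id INTEGER, dt TEXT)")
con.executemany("INSERT INTO Batch VALUES (?, ?)",
                [(1, "2024-01-01"), (1, "2024-01-01"),
                 (2, "2024-01-02"), (2, "2024-01-02")])
# One row per distinct batch_id; grab any row's date for that batch
# (safe because the date depends only on batch_id).
rows = con.execute("""
SELECT b.batch_id,
       (SELECT dt FROM Batch b2 WHERE b2.batch_id = b.batch_id LIMIT 1) AS dt
FROM (SELECT DISTINCT batch_id FROM Batch) b
ORDER BY b.batch_id
""").fetchall()
print(rows)  # [(1, '2024-01-01'), (2, '2024-01-02')]
```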
Since you say there's a one-to-one relationship between `batch_id` and `date` this will do the job: ``` SELECT DISTINCT batch_id, date FROM Batch ``` If it's not true, you can associate a row number to each record and retrieve only the first: ``` WITH BatchWithRowNum AS ( SELECT * , RowNum = ROW_NUMBER() OVER (PARTITION BY batch_id ORDER BY date) FROM Batch ) SELECT * FROM BatchWithRowNum WHERE RowNum = 1 ``` The third way of doing this which I expect to be faster than the row number approach is: ``` SELECT B.batch_id, T.MinDate AS date FROM Batch B INNER JOIN ( SELECT B2.batch_id, MIN(B2.date) AS MinDate FROM Batch B2 GROUP BY B2.batch_id ) T ON B.batch_id = T.batch_id GROUP BY B.batch_id, T.MinDate ``` The following is not generally an efficient solution, but may have a better performance in your case because it only relies on the already existing index on `batch_id`: ``` SELECT DISTINCT B.batch_id , date = (SELECT TOP 1 date FROM Batch B2 WHERE B2.batch_id = B.batch_id) FROM Batch B ``` If you have serious performance issues and adding index is not an option, none of the above will help you unless you narrow down the result-set with a `WHERE` clause. For example bring through a subset of batches with a certain set of `batch-id`s, or those in a specific `date` range.
select top 1 record per group in ms sql without using Max or Min
[ "", "sql", "sql-server-2008", "" ]
The title is clear enough, I created a new Filegroup "ArchiveFileGroup": ``` ALTER DATABASE MyDataBase ADD FILEGROUP ArchiveFileGroup; GO ``` I want to create a table called : arc\_myTable in order to store old data from this one : myTable I used the following query : ``` CREATE TABLE [dbo].acr_myTable( [Id] [bigint] NOT NULL, [label] [nvarchar](max) NOT NULL, )on ArchiveFileGroup ``` I'm not sure if it's the right way, I don't know where the FileGroup is created to check if it contains the table.
You can easily check with this [sql query](http://blog.sqlauthority.com/2009/06/01/sql-server-list-all-objects-created-on-all-filegroups-in-database/): ``` SELECT o.[name], o.[type], i.[name], i.[index_id], f.[name] FROM sys.indexes i INNER JOIN sys.filegroups f ON i.data_space_id = f.data_space_id INNER JOIN sys.all_objects o ON i.[object_id] = o.[object_id] WHERE i.data_space_id = f.data_space_id AND o.type = 'U' -- User Created Tables GO ``` Just add: ``` AND f.name = 'ArchiveFileGroup' ``` to see everything in your new filegroup, or: ``` AND o.name = 'acr_myTable' ``` to see where your table is located. If you never added a file to your filegroup, then I would expect an error, but you didn't include an error message or anything saying you did create a file. If you did not, I suggest starting at the [microsoft documentation](http://msdn.microsoft.com/en-us/library/bb386271%28v=vs.100%29.aspx) if needed. The OP found [this](http://forums.asp.net/t/1397838.aspx?The%20filegroup%20comn_data%20has%20no%20files%20assigned%20to%20it%20sql) helpful when trying to create a new file in his filegroup.
You can use [sys.filegroups](http://msdn.microsoft.com/en-us/library/ms187782.aspx) to see all the created file groups in your server like ``` SELECT * FROM sys.filegroups ``` See here for more information [List All Objects Created on All Filegroups](http://blog.sqlauthority.com/2009/06/01/sql-server-list-all-objects-created-on-all-filegroups-in-database/)
Create a table on a filegroup other than the default
[ "", "sql", "sql-server", "database", "filegroup", "" ]
This is my query, but it shows a result that is not what I expected. ``` SELECT a.PROJECT_NO, CASE WHEN a.PROJECT_NO = b.PROJECT_NO THEN 'TRUE' ELSE 'FALSE' END AS CONDITION FROM PROJECT a LEFT OUTER JOIN WORKSHEET b ON (b.PROJECT_NO = a.PROJECT_NO) Table a Table b Project_No Project_No 1111 1111 2222 3333 3333 4444 ``` The correct result should be ``` Project_No Condition 1111 True 2222 FALSE 3333 TRUE 4444 FALSE ``` My query result shows all condition statuses as true: ``` Project_No Condition 1111 TRUE 2222 TRUE 3333 TRUE 4444 TRUE ``` Can anyone help me fix it? Thanks
`a.PROJECT_NO = b.PROJECT_NO` evaluates to `NULL` (not `TRUE` or `FALSE`) whenever `b.PROJECT_NO` is `NULL`, which is exactly what a `LEFT JOIN` produces for unmatched rows, so the `CASE` expression does not behave as you expect. And when `b.PROJECT_NO` is not `NULL`, the join condition already guarantees the equality holds, so the comparison tells you nothing new. The easiest approach is to apply an `IS NULL` test to the `b.PROJECT_NO` column: ``` SELECT a.PROJECT_NO , (CASE WHEN b.PROJECT_NO IS NULL THEN 'TRUE' ELSE 'FALSE' END) AS CONDITION FROM PROJECT a LEFT OUTER JOIN WORKSHEET b ON b.PROJECT_NO = a.PROJECT_NO ```
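The root cause is three-valued logic: comparing anything to NULL yields NULL, not TRUE or FALSE. This small SQLite sketch shows both the raw comparison and an `IS NULL` test on the outer-join side (the TRUE/FALSE labels can of course be swapped to match whichever convention you need):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Equality against NULL is NULL, so a CASE branch testing it never fires.
eq_null = con.execute("SELECT 1 = NULL").fetchone()[0]
print(eq_null)  # None: the comparison result is NULL, not 0 or 1

# LEFT JOIN with an IS NULL test cleanly separates matched/unmatched rows.
flags = con.execute("""
SELECT a.n, CASE WHEN b.n IS NULL THEN 'FALSE' ELSE 'TRUE' END
FROM (SELECT 1 AS n UNION SELECT 2) a
LEFT JOIN (SELECT 1 AS n) b ON b.n = a.n
ORDER BY a.n
""").fetchall()
print(flags)  # [(1, 'TRUE'), (2, 'FALSE')]
```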
wrap it with `COALESCE` ``` COALESCE(CASE WHEN a.PROJECT_NO = b.PROJECT_NO THEN 'TRUE' END, 'FALSE') ```
sql Column 1 and Column 2 contain same value then condition A else B
[ "", "sql", "join", "case", "" ]
Having a problem opening any of my databases in phpMyadmin I tried deleting a lot of old, irrelevant databases and may have in the process deleted something I shouldn't have and was wondering what I could do to resolve the error > #1146 - Table 'phpmyadmin.pma\_\_tracking' doesn't exist
All the phpMyAdmin tables are defined in the SQL dump that comes with the package in [sql/create\_tables.sql](https://github.com/phpmyadmin/phpmyadmin/blob/master/sql/create_tables.sql). You can import that file in it's entirety (will also re-create any other tables you might have dropped) or just create the missing table by running this query: ``` CREATE TABLE IF NOT EXISTS `pma__tracking` ( `db_name` varchar(64) NOT NULL, `table_name` varchar(64) NOT NULL, `version` int(10) unsigned NOT NULL, `date_created` datetime NOT NULL, `date_updated` datetime NOT NULL, `schema_snapshot` text NOT NULL, `schema_sql` text, `data_sql` longtext, `tracking` set('UPDATE','REPLACE','INSERT','DELETE','TRUNCATE','CREATE DATABASE','ALTER DATABASE','DROP DATABASE','CREATE TABLE','ALTER TABLE','RENAME TABLE','DROP TABLE','CREATE INDEX','DROP INDEX','CREATE VIEW','ALTER VIEW','DROP VIEW') default NULL, `tracking_active` int(1) unsigned NOT NULL default '1', PRIMARY KEY (`db_name`,`table_name`,`version`) ) COMMENT='Database changes tracking for phpMyAdmin' DEFAULT CHARACTER SET utf8 COLLATE utf8_bin; ``` Switch to the phpmyadmin database. You can then use the "SQL" tab to execute this query directly on the database.
I had this problem after installed XAMPP. I did the following: > 1. In `/opt/lampp/bin1` use `./mysql_upgrade -u root` with option `-p` if you use a password. > 2. In `/opt/lampp/var/mysql/phpmyadmin` `rm` all `*.ibd` files. > 3. Import [create\_tables.sql](https://github.com/phpmyadmin/phpmyadmin/blob/master/sql/create_tables.sql) in phpMyAdmin GUI or run it in console.
#1146 - Table 'phpmyadmin.pma__tracking' doesn't exist
[ "", "sql", "phpmyadmin", "" ]
According to [Microsoft's documentation on `NEWSEQUENTIALID`](http://msdn.microsoft.com/en-us/library/ms189786.aspx), the output of NEWSEQUENTIALID is predictable. But how predictable is predictable? Say I have a GUID that was generated by `NEWSEQUENTIALID`, how hard would it be to: * Calculate the next value? * Calculate the previous value? * Calculate the first value? * Calculate the first value, even without knowing any GUID's at all? * Calculate the amount of rows? E.g. when using integers, `/order?id=842` tells me that there are 842 orders in the application. Below is some background information about what I am doing and what the various tradeoffs are. One of the *security* benefits of using GUID's over integers as primary keys is that GUID's are hard to guess. E.g. say a hacker sees a URL like `/user?id=845` he might try to access `/user?id=0`, since it is probable that the first user in the database is an administrative user. Moreover, a hacker can iterate over `/user?id=0..1..2` to quickly gather all users. Similarly, a *privacy* downside of integers is that they leak information. `/order?id=482` tells me that the web shop has had 482 orders since its implementation. Unfortunately, using GUID's as primary keys has well-known *performance* downsides. To this end, SQL Server introduced the `NEWSEQUENTIALID` function. In this question, I would like to learn how predictable the output of `NEWSEQUENTIALID` is.
The underlying OS function is [`UuidCreateSequential`](http://msdn.microsoft.com/en-us/library/windows/desktop/aa379322%28v=vs.85%29.aspx). The value is derived from one of your network cards MAC address and a [per-os-boot incremental value](http://ayende.com/blog/4628/implementing-createsequentialuuid). See [RFC4122](http://www.ietf.org/rfc/rfc4122.txt). SQL Server does some [byte-shuffling](http://blogs.msdn.com/b/dbrowne/archive/2012/07/03/how-to-generate-sequential-guids-for-sql-server-in-net.aspx) to make the result sort properly. So the value is highly predictable, in a sense. Specifically, if you know a value you can immediately predict a range of similar value. However one cannot predict the equivalent of `id=0`, nor can it predict that `52DE358F-45F1-E311-93EA-00269E58F20D` means the store sold at least 482 items. The only 'approved' random generation is [`CRYPT_GEN_RANDOM`](http://msdn.microsoft.com/en-us/library/cc627408.aspx) (which wraps [`CryptGenRandom`](http://msdn.microsoft.com/en-us/library/windows/desktop/aa379942%28v=vs.85%29.aspx)) but that is obviously a horrible key candidate.
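Python's `uuid.uuid1()` follows the same RFC 4122 time-based scheme as `UuidCreateSequential`, so it can roughly illustrate the kind of predictability described above (this is an analogy, not SQL Server itself, and it omits SQL Server's byte shuffling): the node (MAC) and clock sequence stay fixed within a run, and only the timestamp advances.

```python
import uuid

# Two consecutive time-based UUIDs from the same process.
a, b = uuid.uuid1(), uuid.uuid1()
print(a, b)

# The node (MAC-derived) part is identical, and the timestamp is
# monotonically increasing -- i.e. nearby values are easy to predict,
# but nothing about "the first value" or row counts leaks out.
same_node = (a.node == b.node)
monotonic_time = (b.time >= a.time)
print(same_node, monotonic_time)  # True True
```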
In most cases, the next `newsequentialid` can be predicted by taking the current value and adding one to the first hex pair. In other words: > **1E**29E599-45F1-E311-80CA-00155D008B1C is followed by > **1F**29E599-45F1-E311-80CA-00155D008B1C is followed by > **20**29E599-45F1-E311-80CA-00155D008B1C Occasionally, the sequence will restart from a new value. So, it's very predictable `NewSequentialID` is a wrapper around the windows function [`UuidCreateSequential`](http://msdn.microsoft.com/en-gb/library/windows/desktop/aa379322%28v=vs.85%29.aspx)
How predictable is NEWSEQUENTIALID?
[ "", "sql", "sql-server", "primary-key", "guid", "newsequentialid", "" ]
I would like to create a custom order in my sql query, just changing one rows position. This is my current sql results - ``` Age Category Female Male ------------------------------- 30-39 2772 3193 40-49 1587 2246 50-65 990 3718 Over 65 176 3487 Under 30 1359 1500 ``` I would like them to sort like this, with the 'under 30' at the top - ``` Age Category Female Male ------------------------------- Under 30 1359 1500 30-39 2772 3193 40-49 1587 2246 50-65 990 3718 Over 65 176 3487 ``` Here is my code - ``` SELECT DISTINCT CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN 'Under 30' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN '30-39' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN '40-49' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN '50-65' WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN 'Over 65' END as 'Age Category', CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) <= 30 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 30 AND 39 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 40 AND 49 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 50 AND 64 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) >= 65 and gender ='f' and status ='a' and member_type ='mm') END as 'Female', CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN (select count(*) 
from name n1 where datediff(YYYY,n1.birth_date,getdate()) <= 30 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 30 AND 39 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 40 AND 49 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 50 AND 64 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) >= 65 and gender ='m' and status ='a' and member_type ='mm') END as 'Male' FROM NAME N1 WHERE [STATUS] ='A' AND MEMBER_TYPE IN ('MM') AND ( CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN 'Under 30' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN '30-39' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN '40-49' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN '50-65' WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN 'Over 65' END ) IS NOT NULL group by datediff(YYYY,birth_date,getdate()), member_type ``` Much appreciated
Add a manually calculated SortOrder column, then order by that and [Age Category] ``` SELECT DISTINCT CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN 'Under 30' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN '30-39' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN '40-49' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN '50-65' WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN 'Over 65' END as 'Age Category', CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) <= 30 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 30 AND 39 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 40 AND 49 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 50 AND 64 and gender ='f' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) >= 65 and gender ='f' and status ='a' and member_type ='mm') END as 'Female', CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) <= 30 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate())BETWEEN 30 AND 39 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN (select count(*) from name n1 
where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 40 AND 49 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 50 AND 64 and gender ='m' and status ='a' and member_type ='mm') WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) >= 65 and gender ='m' and status ='a' and member_type ='mm') END as 'Male', -- Newly inserted code starts CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN 1 WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 65 THEN 2 WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN 3 END as 'SortOrder' -- Newly Inserted Code Ends FROM NAME N1 WHERE [STATUS] ='A' AND MEMBER_TYPE IN ('MM') AND ( CASE WHEN datediff(YYYY,birth_date,getdate()) <= 30 THEN 'Under 30' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 30 AND 39 THEN '30-39' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 40 AND 49 THEN '40-49' WHEN datediff(YYYY,birth_date,getdate()) BETWEEN 50 AND 65 THEN '50-65' WHEN datediff(YYYY,birth_date,getdate()) >= 65 THEN 'Over 65' END ) IS NOT NULL group by datediff(YYYY,birth_date,getdate()), member_type -- newly inserted code ORDER BY SortOrder, [Age Category] ```
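The core trick, isolated from the age-bucket boilerplate, is just a CASE expression feeding the ORDER BY: bucket 1 for "Under 30", bucket 2 for the middle ranges, bucket 3 for "Over 65". A hypothetical SQLite sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ages (category TEXT, age INTEGER)")
con.executemany("INSERT INTO ages VALUES (?, ?)",
                [("30-39", 35), ("40-49", 45), ("50-65", 60),
                 ("Over 65", 70), ("Under 30", 25)])
# Sort by the computed bucket first, then by the category label within it.
rows = con.execute("""
SELECT category FROM ages
ORDER BY CASE WHEN age < 30 THEN 1
              WHEN age BETWEEN 30 AND 65 THEN 2
              ELSE 3 END,
         category
""").fetchall()
print([r[0] for r in rows])
# ['Under 30', '30-39', '40-49', '50-65', 'Over 65']
```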
Isn't it enough to just add an order by? order by datediff(YYYY,birth_date,getdate()) Otherwise, maybe you could also change the age categories to "30 and under" and "65 and above". *Edit* This query may be easier / less repetitive. By the way, the BETWEEN clause is inclusive, so you should use < 30 instead of <= 30 (and > 65 instead of >= 65) to make sure those ages aren't counted twice. ``` SELECT 'Under 30' AS 'Age Category', (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) < 30 and gender ='f' and status ='a' and member_type ='mm') AS 'Female', (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) < 30 and gender ='m' and status ='a' and member_type ='mm') AS 'Male' UNION ALL SELECT '30-39', (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 30 AND 39 and gender ='f' and status ='a' and member_type ='mm'), (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 30 AND 39 and gender ='m' and status ='a' and member_type ='mm') UNION ALL SELECT '40-49', (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 40 AND 49 and gender ='f' and status ='a' and member_type ='mm'), (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 40 AND 49 and gender ='m' and status ='a' and member_type ='mm') UNION ALL SELECT '50-65', (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 50 AND 65 and gender ='f' and status ='a' and member_type ='mm'), (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) BETWEEN 50 AND 65 and gender ='m' and status ='a' and member_type ='mm') UNION ALL SELECT 'Over 65', (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) > 65 and gender ='f' and status ='a' and member_type ='mm'), (select count(*) from name n1 where datediff(YYYY,n1.birth_date,getdate()) > 65 and gender ='m' and status ='a' and member_type ='mm') ```
Custom SQL Order By Statement
[ "", "sql", "t-sql", "sql-order-by", "" ]
I have a table in SQL Server where every row is a mail to deliver. Every email belongs to a domain. I need to select the first email to send based on the Date column but I want to alternate them by domain starting from the last domain used in a Round Robin way Table: ``` CREATE TABLE Delivery (Date datetime, Domain int); INSERT INTO Delivery (Date, Domain) VALUES ('2014-01-07 11:24:31', 1), ('2014-04-07 13:21:42', 2), ('2014-02-07 14:48:58', 3), ('2014-03-07 15:58:01', 1), ('2014-06-07 15:58:01', 2), ('2014-01-07 12:58:01', 3), ('2014-01-07 19:58:01', 1) ; ``` With this query I can sort them as I need but I cannot find a way to sort them with a starting value: ``` SELECT [Date],[Domain] FROM ( SELECT [Date] ,[Domain], ROW_NUMBER() OVER (PARTITION BY [Domain] ORDER BY [Date]) AS recID FROM Delivery ) AS r ORDER BY recID, [domain] ``` [**SqlFiddle**](http://sqlfiddle.com/#!3/a6129/2) I need to say something like: ``` ORDER BY [domain] > @lastuseddomain ``` something similar to the [mysql FIELD() function](http://www.w3resource.com/mysql/string-functions/mysql-field-function.php) I need to run this query at timed intervals with `TOP 1` The expected result is to get the earliest row with domain > domain of previous row if exist or restart with domain=1. Like a circular sorting on the domain
Thanks guys for your efforts, but I think I found the solution:

```
SELECT TOP 1 [Date],[Domain]
FROM (
    SELECT [Date], [Domain],
           ROW_NUMBER() OVER (PARTITION BY [Domain] ORDER BY [Date]) AS recID
    FROM Delivery
) AS r
ORDER BY recID,
         (CASE WHEN domain > @LASTUSEDDOMAIN THEN domain
               ELSE domain + (select top 1 domain from delivery order by domain desc)
          END)
```
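The trick (wrapping domains at or below the last-used one past the maximum domain value, so the ordering becomes circular) can be checked outside SQL Server. A hedged sketch in Python's stdlib `sqlite3` with the question's sample rows; `rec` emulates `ROW_NUMBER()` with a correlated `COUNT` so it runs on any SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE delivery (date TEXT, domain INTEGER);
INSERT INTO delivery VALUES
 ('2014-01-07 11:24:31',1),('2014-04-07 13:21:42',2),
 ('2014-02-07 14:48:58',3),('2014-03-07 15:58:01',1),
 ('2014-06-07 15:58:01',2),('2014-01-07 12:58:01',3),
 ('2014-01-07 19:58:01',1);
""")

def next_delivery(last_domain):
    # Domains greater than last_domain sort first; the rest wrap
    # around by being pushed past the maximum domain value.
    return conn.execute("""
        SELECT date, domain FROM (
            SELECT date, domain,
                   (SELECT COUNT(*) FROM delivery d2
                     WHERE d2.domain = d.domain AND d2.date <= d.date) AS rec
            FROM delivery d)
        ORDER BY rec,
                 CASE WHEN domain > ? THEN domain
                      ELSE domain + (SELECT MAX(domain) FROM delivery) END
        LIMIT 1
    """, (last_domain,)).fetchone()
```

After domain 3 the pick wraps back to domain 1's earliest row, as the round-robin requires.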
You need to use a CTE. It allows you to use the ROW\_NUMBER() as a where condition:

```
with cte as (
    SELECT [Date], [Domain],
           ROW_NUMBER() OVER (PARTITION BY [Domain] ORDER BY [Date]) AS recID
    FROM Delivery
)
select * from cte where recID = 1 order by domain
```
Order by statement with start value in a Round Robin Select
[ "", "sql", "t-sql", "sql-order-by", "round-robin", "" ]
I have to concatenate `First_Name` and `Last_Name`, but when I do that it shows "incorrect syntax near `as`". My code:

```
select COUNT(*) as Leaves, Employee_Admission.Emp_Id, First_Name, Last_Name as Name
from LeaveAssign
join Employee_Admission on LeaveAssign.Emp_Id= Employee_Admission.Emp_Id
WHERE D_Id='3'
group by Employee_Admission.Emp_Id, First_Name, Last_Name as Name
union
select 0 as 'Leaves', Employee_Admission.Emp_Id, First_Name, Last_Name as Name
from Employee_Admission
where Emp_ID not in (Select Emp_ID from LeaveAssign) and D_Id='3'
```
You cannot use an alias in the GROUP BY clause; that is what is giving you the error. Remove the `as Name` from the GROUP BY statement and it won't give you the error. And if you want to concatenate First\_Name and Last\_Name, do it like

```
First_Name + ' ' + Last_Name as Name
```
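To see why the alias belongs only in the SELECT list, here is a small runnable sketch using Python's stdlib `sqlite3` as a stand-in (names are invented, and SQLite concatenates with `||` where T-SQL uses `+`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE leave_assign (first_name TEXT, last_name TEXT);
INSERT INTO leave_assign VALUES
 ('Ada','Lovelace'),('Ada','Lovelace'),('Alan','Turing');
""")

# The alias 'Name' appears in SELECT only; GROUP BY lists the raw columns.
rows = conn.execute("""
SELECT first_name || ' ' || last_name AS Name, COUNT(*) AS Leaves
FROM leave_assign
GROUP BY first_name, last_name
""").fetchall()
counts = dict(rows)
```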
`group by` does not contain `as` Keyword Just Try this ``` select COUNT(*) as Leaves, Employee_Admission.Emp_Id, First_Name, Last_Name as Name from LeaveAssign join Employee_Admission on LeaveAssign.Emp_Id= Employee_Admission.Emp_Id WHERE D_Id='3' group by Employee_Admission.Emp_Id, First_Name, Last_Name union select 0 as 'Leaves', Employee_Admission.Emp_Id, First_Name, Last_Name as Name from Employee_Admission where Emp_ID not in (Select Emp_ID from LeaveAssign) and D_Id='3' ```
Incorrect syntax near the keyword 'as' error is showing
[ "", "sql", "" ]
``` INSERT INTO purchase_record (purchased_from_id, purchase, paid, balance, bank, cheque_no, date, cheque_status,time) VALUES ( (SELECT purchased_from_id FROM purchased_from WHERE name = 'Hassan Izhar'), (SELECT total FROM purchase_receipt WHERE purchase_receipt_id = '0000001'), 10000, (SELECT balance FROM purchase_receipt WHERE purchase_receipt_id = '0000001'), 'UBL', '1234567', '10-JUN-1014', 'Cleared', '(SELECT SUBSTRING(convert(varchar, time,108), 1, 5) FROM purchase_receipt WHERE purchase_receipt_id = ''0000001'')'); ``` I tried this query but the following error showed up: ``` Msg 242, Level 16, State 3, Line 1 The conversion of a varchar data type to a datetime data type resulted in an out-of-range value. The statement has been terminated. ```
You need to construct a properly joined SQL SELECT statement and insert its result into your table, like this:

```
INSERT INTO purchase_record
  (purchased_from_id, purchase, paid, balance, bank, cheque_no, date, cheque_status, time)
SELECT f.purchased_from_id, r.total, 10000, r.balance, 'UBL', '1234567',
       '10-JUN-1014', 'Cleared', SUBSTRING(convert(varchar, r.time, 108), 1, 5)
FROM purchased_from f
INNER JOIN purchase_receipt r ON f.purchaser_id = r.purchaser_id
WHERE f.purchase_receipt_id = '0000001';
```

Please note that in the above statement you will need to define the joining condition properly. I have put a placeholder `ON f.purchaser_id = r.purchaser_id` (which is obviously incorrect, as I do not know your database schema). Please find out how your purchased\_from and purchase\_receipt tables are linked and use the proper column(s) for the join
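The `INSERT … SELECT` shape above can be exercised end-to-end on a toy schema. A sketch in Python's stdlib `sqlite3`; all table and column names are simplified stand-ins for the question's schema, and the literal `10000` plays the role of the `paid` amount:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchase_receipt (receipt_id TEXT, total REAL, balance REAL);
CREATE TABLE purchased_from  (receipt_id TEXT, from_id INTEGER);
CREATE TABLE purchase_record (from_id INTEGER, purchase REAL, paid REAL, balance REAL);
INSERT INTO purchase_receipt VALUES ('0000001', 60000, 50000);
INSERT INTO purchased_from  VALUES ('0000001', 7);
""")

# The SELECT's column order must line up with the INSERT's column list.
conn.execute("""
INSERT INTO purchase_record (from_id, purchase, paid, balance)
SELECT f.from_id, r.total, 10000, r.balance
FROM purchased_from f
JOIN purchase_receipt r ON r.receipt_id = f.receipt_id
WHERE f.receipt_id = '0000001'
""")
row = conn.execute("SELECT * FROM purchase_record").fetchone()
```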
What's the datatype of your date field? If it's **DATETIME**, then its date range is **January 1, 1753, through December 31, 9999**, and the date that you have passed is '10-JUN-1014', which is out of range for this datatype. If you want to store this date then you should use the **DATETIME2** datatype. Also remember the best format for passing a date in SQL is YYYY-MM-DD.
How can I insert time in table 2 that matches time in table 1
[ "", "sql", "sql-server-2008", "datetime", "" ]
For the below Table structure, I am not able to build the required output, not sure if I need to apply transpose, of which I don't have a strong knowledge. SQL script for table creation and data insertion is given towards the end. Output should be in below format as shown above and ID\_MAX\_VAL of output result should be the max of ID for each manufacturer-country combination in main source table. I need to fetch the max id for each manufacturer-country combination and display/use them to send out a report. **Output:** ``` MANUFACTURER COUNTRY ID_MAX_VAL -------------------------------------- NISSAN USA 10 NISSAN UK 30 HONDA USA 80 HONDA UK 70 ``` Note: This is a test data and table structure to simulate the actual business requirement. A view similar to the table I have mentioned is the only thing that we have access to and that works as our sole source. Have to work with that only. **SQL Script:** ``` CREATE TABLE TB_TEST_01 ( ID NUMBER(6) NOT NULL , PARAM_NM VARCHAR2(200) NOT NULL , PARAM_VAL VARCHAR2(200) ); / INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (10, 'MANUFACTURER', 'NISSAN'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (10, 'COUNTRY', 'USA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (30, 'MANUFACTURER', 'NISSAN'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (30, 'COUNTRY', 'UK'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (20, 'MANUFACTURER', 'NISSAN'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (20, 'COUNTRY', 'UK'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (50, 'MANUFACTURER', 'HONDA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (50, 'COUNTRY', 'USA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (60, 'MANUFACTURER', 'HONDA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (60, 'COUNTRY', 'USA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (80, 'MANUFACTURER', 'HONDA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, 
PARAM_VAL) VALUES (80, 'COUNTRY', 'USA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (70, 'MANUFACTURER', 'HONDA'); INSERT INTO TB_TEST_01 (ID, PARAM_NM, PARAM_VAL) VALUES (70, 'COUNTRY', 'UK') ; / COMMIT; ```
Since the design is not normalized, we can self-join the table, as in the query below.

```
SELECT T1.PARAM_VAL AS MANUFACTURER,
       T2.PARAM_VAL AS COUNTRY,
       MAX(T1.ID) AS MAX_ID_VAL
FROM TB_TEST_01 T1,TB_TEST_01 T2
WHERE T1.ID = T2.ID
AND T1.PARAM_NM='MANUFACTURER'
AND T2.PARAM_NM='COUNTRY'
GROUP BY T1.PARAM_VAL,T2.PARAM_VAL
```
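The self-join over the property/value (EAV) table is easy to verify. A minimal reproduction of the sample data using Python's stdlib `sqlite3` (lower-cased table and column names are stand-ins for the Oracle originals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tb_test (id INTEGER, param_nm TEXT, param_val TEXT);
INSERT INTO tb_test VALUES
 (10,'MANUFACTURER','NISSAN'),(10,'COUNTRY','USA'),
 (30,'MANUFACTURER','NISSAN'),(30,'COUNTRY','UK'),
 (20,'MANUFACTURER','NISSAN'),(20,'COUNTRY','UK'),
 (80,'MANUFACTURER','HONDA'),(80,'COUNTRY','USA'),
 (70,'MANUFACTURER','HONDA'),(70,'COUNTRY','UK');
""")

# Join the table to itself on id: one alias supplies the manufacturer
# row, the other the country row, then take MAX(id) per combination.
rows = conn.execute("""
SELECT m.param_val, c.param_val, MAX(m.id)
FROM tb_test m JOIN tb_test c ON m.id = c.id
WHERE m.param_nm = 'MANUFACTURER' AND c.param_nm = 'COUNTRY'
GROUP BY m.param_val, c.param_val
""").fetchall()
result = {(man, cty): mx for man, cty, mx in rows}
```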
You can use `PIVOT` function also: ``` select M_PARAM_VAL MANUFACTURER, C_PARAM_VAL COUNTRY, max(ID) ID_MAX_VAL from TB_TEST_01 pivot (min(PARAM_VAL) as PARAM_VAL for (PARAM_NM) in ('MANUFACTURER' as M, 'COUNTRY' as C)) group by M_PARAM_VAL, C_PARAM_VAL; ``` Output: ``` | MANUFACTURER | COUNTRY | ID_MAX_VAL | |--------------|---------|------------| | HONDA | UK | 70 | | NISSAN | USA | 10 | | NISSAN | UK | 30 | | HONDA | USA | 80 | ```
Oracle SQL Query - Help Required
[ "", "sql", "oracle", "oracle11g", "transpose", "" ]
We are struggling with a strange problem: a stored procedure has become extremely slow while the raw SQL executes fairly fast. We have

* SQL Server 2008 R2 Express Edition SP1 10.50.2500.0 with several databases on it.
* a database (its size is around 747 MB)
* a stored procedure which takes different parameters and does a select among multiple tables from the database.

Code:

```
ALTER Procedure [dbo].[spGetMovieShortDataList](
    @MediaID int = null,
    @Rfa nvarchar(8) = null,
    @LicenseWindow nvarchar(8) = null,
    @OwnerID uniqueidentifier = null,
    @LicenseType nvarchar(max) = null,
    @PriceGroupID uniqueidentifier = null,
    @Format nvarchar(max) = null,
    @GenreID uniqueidentifier = null,
    @Title nvarchar(max) = null,
    @Actor nvarchar(max) = null,
    @ProductionCountryID uniqueidentifier = null,
    @DontReturnMoviesWithNoLicense bit = 0,
    @DontReturnNotReadyMovies bit = 0,
    @take int = 10,
    @skip int = 0,
    @order nvarchar(max) = null,
    @asc bit = 1)
as
begin
declare @SQLString nvarchar(max);
declare @ascending nvarchar(5);
declare @ParmDefinition nvarchar(max);
declare @now DateTime;
set @ParmDefinition = '@MediaID int,
    @Rfa nvarchar(8),
    @LicenseWindow nvarchar(8),
    @OwnerID uniqueidentifier,
    @LicenseType nvarchar(max),
    @PriceGroupID uniqueidentifier,
    @Format nvarchar(max),
    @GenreID uniqueidentifier,
    @Title nvarchar(max),
    @Actor nvarchar(max),
    @ProductionCountryID uniqueidentifier,
    @DontReturnMoviesWithNoLicense bit = 0,
    @DontReturnNotReadyMovies bit = 0,
    @take int,
    @skip int,
    @now DateTime';
set @ascending = case when @asc = 1 then 'ASC' else 'DESC' end
set @now = GetDate();
set @SQLString = 'SELECT distinct m.ID, m.EpisodNo, m.MediaID, p.Dubbed, pf.Format, t.OriginalTitle
into #temp
FROM Media m
inner join Asset a1 on m.ID=a1.ID
inner join Asset a2 on a1.ParentID=a2.ID
inner join Asset a3 on a2.ParentID=a3.ID
inner join Title t on t.ID = a3.ID
inner join Product p on a2.ID = p.ID
left join AssetReady ar on ar.AssetID = a1.ID
left join License l on l.ProductID=p.ID
left join ProductFormat 
pf on pf.ID = p.Format ' + CASE WHEN @PriceGroupID IS NOT NULL THEN 'left join LicenseToPriceGroup lpg on lpg.LicenseID = l.ID ' ELSE '' END + CASE WHEN @Title IS NOT NULL THEN 'left join LanguageAsset la on la.AssetID = m.ID ' ELSE '' END + CASE WHEN @LicenseType IS NOT NULL THEN 'left join LicenseType lt on lt.ID=l.LicenseTypeID ' ELSE '' END + CASE WHEN @Actor IS NOT NULL THEN 'left join Cast c on c.AssetID = a1.ID ' ELSE '' END + CASE WHEN @GenreID IS NOT NULL THEN 'left join ListToCountryToAsset lca on lca.AssetID=a1.ID ' ELSE '' END + CASE WHEN @ProductionCountryID IS NOT NULL THEN 'left join ProductionCountryToAsset pca on pca.AssetID=t.ID ' ELSE '' END + 'where ( 1 = case when @Rfa = ''All'' then 1 when @Rfa = ''Ready'' then ar.Rfa when @Rfa = ''NotReady'' and (l.TbaWindowStart is null OR l.TbaWindowStart = 0) and ar.Rfa = 0 and ar.SkipRfa = 0 then 1 when @Rfa = ''Skipped'' and ar.SkipRfa = 1 then 1 end) ' + CASE WHEN @LicenseWindow IS NOT NULL THEN 'AND 1 = (case when (@LicenseWindow = 1 And (l.WindowEnd < @now and l.TbaWindowEnd = 0)) then 1 when (@LicenseWindow = 2 And (l.TbaWindowStart = 0 and l.WindowStart < @now and (l.TbaWindowEnd = 1 or l.WindowEnd > @now))) then 1 when (@LicenseWindow = 4 And ((l.TbaWindowStart = 1 or l.WindowStart > @now) and (l.TbaWindowEnd = 1 or l.WindowEnd > @now))) then 1 when (@LicenseWindow = 3 And ((l.WindowEnd < @now and l.TbaWindowEnd = 0) or (l.TbaWindowStart = 0 and l.WindowStart < @now and (l.TbaWindowEnd = 1 or l.WindowEnd > @now)))) then 1 when (@LicenseWindow = 5 And ((l.WindowEnd < @now and l.TbaWindowEnd = 0) or ((l.TbaWindowStart = 1 or l.WindowStart > @now) and (l.TbaWindowEnd = 1 or l.WindowEnd > @now)))) then 1 when (@LicenseWindow = 6 And ((l.TbaWindowStart = 0 and l.WindowStart < @now and (l.TbaWindowEnd = 1 or l.WindowEnd > @now)) or ((l.TbaWindowStart = 1 or l.WindowStart > @now) and (l.TbaWindowEnd = 1 or l.WindowEnd > @now)))) then 1 when ((@LicenseWindow = 7 Or @LicenseWindow = 0) And ((l.WindowEnd < 
@now and l.TbaWindowEnd = 0) or (l.TbaWindowStart = 0 and l.WindowStart < @now and (l.TbaWindowEnd = 1 or l.WindowEnd > @now)) or ((l.TbaWindowStart = 1 or l.WindowStart > @now) and (l.TbaWindowEnd = 1 or l.WindowEnd > @now)))) then 1 end) ' ELSE '' END + CASE WHEN @OwnerID IS NOT NULL THEN 'AND (l.OwnerID = @OwnerID) ' ELSE '' END + CASE WHEN @MediaID IS NOT NULL THEN 'AND (m.MediaID = @MediaID) ' ELSE '' END + CASE WHEN @LicenseType IS NOT NULL THEN 'AND (lt.Name = @LicenseType) ' ELSE '' END + CASE WHEN @PriceGroupID IS NOT NULL THEN 'AND (lpg.PriceGroupID = @PriceGroupID) ' ELSE '' END + CASE WHEN @Format IS NOT NULL THEN 'AND (pf.Format = @Format) ' ELSE '' END + CASE WHEN @GenreID IS NOT NULL THEN 'AND (lca.ListID = @GenreID) ' ELSE '' END + CASE WHEN @DontReturnMoviesWithNoLicense = 1 THEN 'AND (l.ID is not null) ' ELSE '' END + CASE WHEN @Title IS NOT NULL THEN 'AND (t.OriginalTitle like N''%' + @Title + '%'' OR la.LocalTitle like N''%' + @Title + '%'') ' ELSE '' END + CASE WHEN @Actor IS NOT NULL THEN 'AND (rtrim(ltrim(replace(c.FirstName + '' '' + c.MiddleName + '' '' + c.LastName, '' '', '' ''))) like ''%'' + rtrim(ltrim(replace(@Actor,'' '','' ''))) + ''%'') ' ELSE '' END + CASE WHEN @DontReturnNotReadyMovies = 1 THEN 'AND ((ar.ID is not null) AND (ar.Ready = 1) AND (ar.CountryID = l.CountryID))' ELSE '' END + CASE WHEN @ProductionCountryID IS NOT NULL THEN 'AND (pca.ProductionCountryID = @ProductionCountryID)' ELSE '' END + ' select #temp.* ,ROW_NUMBER() over (order by '; if @order = 'Title' begin set @SQLString = @SQLString + 'OriginalTitle'; end else if @order = 'MediaID' begin set @SQLString = @SQLString + 'MediaID'; end else begin set @SQLString = @SQLString + 'ID'; end set @SQLString = @SQLString + ' ' + @ascending + ' ) rn into #numbered from #temp declare @count int; select @count = MAX(#numbered.rn) from #numbered while (@skip >= @count ) begin set @skip = @skip - @take; end select ID, MediaID, EpisodNo, Dubbed, Format, OriginalTitle, @count 
TotalCount from #numbered where rn between @skip and @skip + @take drop table #temp drop table #numbered'; execute sp_executesql @SQLString,@ParmDefinition, @MediaID, @Rfa, @LicenseWindow, @OwnerID, @LicenseType, @PriceGroupID, @Format, @GenreID, @Title, @Actor, @ProductionCountryID, @DontReturnMoviesWithNoLicense,@DontReturnNotReadyMovies, @take, @skip, @now end ``` The stored procedure was working pretty good and fast (it's execution usually took 1-2 seconds). Example of call ``` DBCC FREEPROCCACHE EXEC value = [dbo].[spGetMovieShortDataList] @LicenseWindow =N'1', @Rfa = N'NotReady', @DontReturnMoviesWithNoLicense = False, @DontReturnNotReadyMovies = True, @take = 20, @skip = 0, @asc = False, @order = N'ID' ``` Basically during execution of the stored procedure the executed 3 SQL queries, the first `Select Into` query takes 99% of time. This query is ``` declare @now DateTime; set @now = GetDate(); SELECT DISTINCT m.ID, m.EpisodNo, m.MediaID, p.Dubbed, pf.Format, t.OriginalTitle FROM Media m INNER JOIN Asset a1 ON m.ID = a1.ID INNER JOIN Asset a2 ON a1.ParentID = a2.ID INNER JOIN Asset a3 ON a2.ParentID = a3.ID INNER JOIN Title t ON t.ID = a3.ID INNER JOIN Product p ON a2.ID = p.ID LEFT JOIN AssetReady ar ON ar.AssetID = a1.ID LEFT JOIN License l on l.ProductID = p.ID LEFT JOIN ProductFormat pf on pf.ID = p.Format WHERE ((l.TbaWindowStart is null OR l.TbaWindowStart = 0) and ar.Rfa = 0 and ar.SkipRfa = 0) And (l.WindowEnd < @now and l.TbaWindowEnd = 0 ) AND ((ar.ID is not null) AND (ar.Ready = 1) AND (ar.CountryID = l.CountryID)) ``` This stored procedure, after massive data update on the database (a lot tables and rows were affected by the update, however DB size was almost unchanged, now it is 752 ) become to work extremely slow. Now it takes from 20 to 90 seconds. If I take raw SQL query from the stored procedure - it is executed within 1-2 seconds. We've tried: 1. the stored procedure is created with parameters SET ANSI\_NULLS ON SET QUOTED\_IDENTIFIER ON 2. 
recreate the stored procedure with parameter `with recompile` 3. execute the stored procedure after purging prod cache `DBCC FREEPROCCACHE` 4. move part of where clauses into the join part 5. reindex tables 6. update statistics for the tables from the query using statements like `UPDATE STATISTICS Media WITH FULLSCAN` However the execution of the stored procedure is still >> 30 seconds. But if I run the SQL query which is generated by the SP - it is executed for less than 2 seconds. I've compared execution plans for SP and for the raw SQL - they are quite different. During execution of RAW SQL - the optimizer is using Merge Joins, but when we execute SP - it uses Hash Match (Inner Join), like there are no indexes. * [Execution Plan for RAW SQl - Fast](http://s000.tinyupload.com/?file_id=00083304655587795236) * [Execution Plan for SP - Slow](http://s000.tinyupload.com/?file_id=01823176161767684469) If someone knows what could it be - please help. Thanks in advance!
Try using the hint `OPTIMIZE FOR UNKNOWN`. If it works, this may be better than forcing a recompile every time. The problem is that the most efficient query plan depends on the actual value of the date parameter being supplied. When compiling the SP, SQL Server has to make a guess at what actual values will be supplied, and it is likely making the wrong guess here. `OPTIMIZE FOR UNKNOWN` is meant for this exact problem. At the end of your query, add

```
OPTION (OPTIMIZE FOR (@now UNKNOWN))
```

<http://blogs.msdn.com/b/sqlprogrammability/archive/2008/11/26/optimize-for-unknown-a-little-known-sql-server-2008-feature.aspx>
Since you are using `sp_executesql`, recompiling the procedure or clearing the cached plan for the procedure won't actually help: the query plan for the query executed via sp\_executesql is cached separately from the stored procedure. You either need to add the query hint `OPTION (RECOMPILE)` to the sql executed, or clear the cache for that specific sql before executing it:

```
DECLARE @PlanHandle VARBINARY(64);

SELECT @PlanHandle = cp.Plan_Handle
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE st.text LIKE '%' + @SQLString;

DBCC FREEPROCCACHE (@PlanHandle); -- CLEAR THE CACHE FOR THIS QUERY

EXECUTE sp_executesql @SQLString,@ParmDefinition, @MediaID, @Rfa, @LicenseWindow, @OwnerID,
    @LicenseType, @PriceGroupID, @Format, @GenreID, @Title, @Actor, @ProductionCountryID,
    @DontReturnMoviesWithNoLicense,@DontReturnNotReadyMovies, @take, @skip, @now;
```

This is of course irrelevant if, when you executed `DBCC FREEPROCCACHE`, you didn't pass any parameters and cleared the whole cache.
SQL Server: stored procedure become very slow, raw SQL query is still very fast
[ "", "sql", "sql-server", "performance", "stored-procedures", "" ]
The DateTime should be updated if a value is passed; otherwise it should keep the original value that is already saved. This update does not work:

```
DECLARE @FolderStatusDate DATETIME = NULL
SET @FolderStatusDate = '2012-07-04 14:09:04.043'

UPDATE CM.PfmFolder
SET FolderStatusDate = ISNULL(@FolderStatusDate, FolderStatusDate)
WHERE Id = @Id
```
Why don't you move the check for `NULL` to the `WHERE` clause? ``` DECLARE @FolderStatusDate DATETIME = NULL SET @FolderStatusDate = '2012-07-04 14:09:04.043' UPDATE CM.PfmFolder SET FolderStatusDate = @FolderStatusDate WHERE Id = @Id AND @FolderStatusDate IS NOT NULL ```
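Moving the NULL check into the WHERE clause makes the statement a no-op when no date is supplied. A small sketch using Python's stdlib `sqlite3` (dates stored as text, table and column names simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pfm_folder (id INTEGER, status_date TEXT);
INSERT INTO pfm_folder VALUES (1, '2012-01-01 00:00:00');
""")

def set_status_date(folder_id, new_date):
    # When new_date is None the predicate "? IS NOT NULL" is false,
    # so no row is touched and the stored value survives.
    conn.execute("""
        UPDATE pfm_folder SET status_date = ?
        WHERE id = ? AND ? IS NOT NULL
    """, (new_date, folder_id, new_date))

set_status_date(1, None)
unchanged = conn.execute(
    "SELECT status_date FROM pfm_folder WHERE id = 1").fetchone()[0]
set_status_date(1, '2012-07-04 14:09:04')
changed = conn.execute(
    "SELECT status_date FROM pfm_folder WHERE id = 1").fetchone()[0]
```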
A slight edit on **Hitesh Salian's** answer:

```
DECLARE @FolderStatusDate DATETIME = NULL
SET @FolderStatusDate = '2012-07-04 14:09:04.043'

UPDATE CM.PfmFolder
SET FolderStatusDate = CASE WHEN @FolderStatusDate IS NULL
                            THEN FolderStatusDate
                            ELSE @FolderStatusDate END
WHERE Id = @Id
```
Update query with ISNULL in SQL SERVER
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table called `Ads` and another Table called `AdDetails` to store the details of each Ad in a Property / Value style, Here is a simplified example with dummy code: ``` [AdDetailID], [AdID], [PropertyName], [PropertyValue] 2 28 Color Red 3 28 Speed 100 4 27 Color Red 5 28 Fuel Petrol 6 27 Speed 70 ``` How to select `Ads` that matches many combinations of PropertyName and PropertyValue, for example : ``` where PropertyName='Color' and PropertyValue='Red' And where PropertyName='Speed' and CAST(PropertyValue AS INT) > 60 ```
You are probably going to do stuff like this a lot, so I would start out by making a view that collapses all of the properties to a single row.

```
create view vDetails as
select AdID,
    max(case PropertyName when 'Color' then PropertyValue end) as Color,
    cast(max(case PropertyName when 'Speed' then PropertyValue end) as Int) as Speed,
    max(case PropertyName when 'Fuel' then PropertyValue end) as Fuel
from AdDetails
group by AdID
```

This approach also solves the problem with casting Speed to an int. Then if I `select * from vDetails`

![enter image description here](https://i.stack.imgur.com/0Gvc9.png)

This makes it easy to deal with when joined to the parent table. You said you needed a variable number of "matches" - note the where clause below. @MatchesNeeded would be the count of the number of variables that were not null.

```
select *
from Ads a
inner join vDetails v on a.AdID = v.AdID
where
    case when v.Color = @Color then 1 else 0 end +
    case when v.Speed > @Speed then 1 else 0 end +
    case when v.Fuel = @Fuel then 1 else 0 end
    = @MatchesNeeded
```
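The pivoted view can then be filtered however the caller likes. A hedged sketch of the same collapse-to-one-row idea using Python's stdlib `sqlite3`, here with a dynamically built filter instead of the match-count trick; the `search` helper and its parameters are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ad_details (ad_id INTEGER, name TEXT, value TEXT);
INSERT INTO ad_details VALUES
 (28,'Color','Red'),(28,'Speed','100'),(28,'Fuel','Petrol'),
 (27,'Color','Red'),(27,'Speed','70');
CREATE VIEW v_details AS
SELECT ad_id,
       MAX(CASE name WHEN 'Color' THEN value END) AS color,
       CAST(MAX(CASE name WHEN 'Speed' THEN value END) AS INTEGER) AS speed,
       MAX(CASE name WHEN 'Fuel' THEN value END) AS fuel
FROM ad_details GROUP BY ad_id;
""")

def search(color=None, min_speed=None, fuel=None):
    # Only the criteria the caller supplied become WHERE conditions.
    wanted = [(clause, v) for clause, v in
              (('color = ?', color), ('speed > ?', min_speed), ('fuel = ?', fuel))
              if v is not None]
    sql = "SELECT ad_id FROM v_details"
    if wanted:
        sql += " WHERE " + " AND ".join(c for c, _ in wanted)
    return [r[0] for r in conn.execute(sql, [v for _, v in wanted])]
```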
I think you have two main problems to solve here.

1) You need to be able to CAST varchar values to integers where some values won't be integers. If you were using SQL 2012, you could use TRY\_CAST() ( [sql server - check to see if cast is possible](https://stackoverflow.com/questions/14719760/sql-server-check-to-see-if-cast-is-possible) ). Since you are using SQL 2008, you will need a combination of CASE and ISNUMERIC().

2) You need an efficient way to check for the existence of multiple properties. I often see a combination of joins and where clauses for this, but I think this can quickly get messy as the number of properties that you check gets over... say one. Instead, using an EXISTS clause tends to be neater, and I think it provides better clues to the SQL optimizer.

```
SELECT AdID
FROM Ads
WHERE 1 = 1
AND EXISTS (
    SELECT 1
    FROM AdDetails
    WHERE AdID = Ads.AdID
    AND ( PropertyName='Color' and PropertyValue='Red' )
)
AND EXISTS (
    SELECT 1
    FROM AdDetails
    WHERE AdID = Ads.AdID
    AND PropertyName='Speed'
    AND ( CASE WHEN ISNUMERIC(PropertyValue) = 1 THEN CAST(PropertyValue AS INT) ELSE 0 END ) > 60
)
```

You can add as many EXISTS clauses as you need without the query getting particularly difficult to read.
Select column value that matches a combination of other columns values on the same table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I got this piece of sql server 2005 code: ``` SELECT DT, a.id_uf, hora FROM CALENDAR c LEFT OUTER JOIN CLIENT_MEDIDAS_COGEN a ON a.fecha_oferta = dt AND a.hora = c.H WHERE (id_tipo_medida IN (6, 1)) AND (id_magnitud = 1) AND (DT BETWEEN '10/01/2013' AND '10/01/2013') AND (id_tipo_fact = 3) ORDER BY DT, a.id_uf, a.hora ``` On calendar, I have got something like this: ![CALENDAR](https://i.stack.imgur.com/P7y5V.jpg) On CLIENT\_MEDIDAS\_COGEN I have got this: ![CLIENT_MEDIDAS_COGEN](https://i.stack.imgur.com/tAMwC.jpg) And finally, this is the result I am getting: ![Result](https://i.stack.imgur.com/ggJoi.jpg) The question is, how is it possible I'm not getting the rows for hours 1 and 2?? I'm using LEFT OUTER JOIN but it just doesn't seem to work properly. Thank you so much in advance for your help
That's because the WHERE filters make it work like an inner join. You can move the WHERE filters into the join conditions, like this:

```
SELECT c.ID, a.id_uf, c.H
FROM CALENDAR c
LEFT OUTER JOIN CLIENT_MEDIDAS_COGEN a
    ON a.fecha_oferta = dt
    AND a.hora = c.H
    AND (a.id_tipo_medida IN (6, 1))
    AND (a.id_magnitud = 1)
    AND (a.id_tipo_fact = 3)
WHERE (DT BETWEEN '10/01/2013' AND '10/01/2013')
ORDER BY DT, a.id_uf, a.hora
```

That will give the desired left-join effect.
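The ON-versus-WHERE distinction is easy to demonstrate. A minimal sketch using Python's stdlib `sqlite3` with invented tables: the same filter placed in WHERE drops the unmatched calendar rows (because their joined columns are NULL), while placed in ON it keeps them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calendar (h INTEGER);
CREATE TABLE readings (h INTEGER, magnitud INTEGER, val REAL);
INSERT INTO calendar VALUES (1),(2),(3);
INSERT INTO readings VALUES (3, 1, 9.5);
""")

# Filter in WHERE: NULL magnitud from unmatched hours fails the test,
# so the LEFT JOIN degenerates into an INNER JOIN.
in_where = conn.execute("""
SELECT c.h FROM calendar c
LEFT JOIN readings r ON r.h = c.h
WHERE r.magnitud = 1
""").fetchall()

# Filter in ON: unmatched hours survive with NULLs on the right side.
in_on = conn.execute("""
SELECT c.h FROM calendar c
LEFT JOIN readings r ON r.h = c.h AND r.magnitud = 1
""").fetchall()
```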
Your `WHERE` clause states that `(id_magnitud = 1)`. Any rows in `Calendar` that were not joined to `CLIENT_MEDIDAS_COGEN` will have `NULL` in the `id_magnitud` column.

```
SELECT c.DT, a.id_uf, c.hora
FROM CALENDAR c
LEFT OUTER JOIN CLIENT_MEDIDAS_COGEN a
    ON a.fecha_oferta = dt
    AND a.hora = c.H
WHERE (c.DT BETWEEN '10/01/2013' AND '10/01/2013')
AND (COALESCE(a.id_tipo_medida, 1) IN (6, 1))
AND (COALESCE(a.id_magnitud, 1) = 1)
AND (COALESCE(a.id_tipo_fact, 3) = 3)
ORDER BY c.DT, a.id_uf, a.hora
```

Also, don't just use the table alias for common columns. Use them for all columns so it's clear where each column resides. As per MatBaille's comment, moving the filters on `CLIENT_MEDIDAS_COGEN` to the join clause will perform better. The coalesce statements above just give an indication as to the reason for your issue. The below will likely perform better.

```
SELECT c.DT, a.id_uf, c.hora
FROM CALENDAR c
LEFT OUTER JOIN CLIENT_MEDIDAS_COGEN a
    ON a.fecha_oferta = dt
    AND a.hora = c.H
    AND (a.id_tipo_medida IN (6, 1))
    AND (a.id_magnitud = 1)
    AND (a.id_tipo_fact = 3)
WHERE (c.DT BETWEEN '10/01/2013' AND '10/01/2013')
ORDER BY c.DT, a.id_uf, a.hora
```
Left Outer Join doesn't seem to work properly
[ "", "sql", "sql-server-2005", "" ]
I am trying to call a procedure I made earlier in new procedure. I am getting error in same line no matter what I try. My original line was : ``` CALL "PAYROLLDBTEST"."ABS_GetEmployeeHistoryDetail"(:EmpID) ``` on this I got error "invalid name of function or procedure: ABS\_GETEMPLOYEEHISTORYDETAILS: " then I tried CALL "PAYROLLDBTEST/ABS\_GetEmployeeHistoryDetail"(EmpID) on this I got error "sql syntax error: incorrect syntax near "(":" So please let me know whats wrong. EDIT: Heres the whole procedure : ``` CREATE PROCEDURE "PAYROLLDBTEST".GetEmploymentHistoryFunction (IN EmpID integer, IN StartDate timestamp, IN EndDate timestamp,OUT RETURNVALUE NVARCHAR) LANGUAGE SQLSCRIPT AS BEGIN SELECT *, DAYS_BETWEEN("FromDate", "ToDate") + 1 AS "DaysCount" FROM (SELECT "Code", "Name", "U_LineID", "U_empID", "U_Status", CASE WHEN ("ToDate" < :StartDate) THEN NULL WHEN ("FromDate" > :EndDate) THEN NULL WHEN ("FromDate" < :StartDate AND ("ToDate" BETWEEN :StartDate AND :EndDate)) THEN :StartDate WHEN ("FromDate" < :StartDate AND "ToDate" > :EndDate) THEN :StartDate WHEN (("FromDate" BETWEEN :StartDate AND :EndDate) AND ("ToDate" BETWEEN :StartDate AND :EndDate)) THEN "FromDate" WHEN (("FromDate" BETWEEN :StartDate AND :EndDate) AND "ToDate" > :EndDate) THEN "FromDate" WHEN ("ToDate" IS NULL AND "FromDate" < :StartDate) THEN :StartDate WHEN ("ToDate" IS NULL AND ("FromDate" BETWEEN :StartDate AND :EndDate)) THEN "FromDate" END AS "FromDate", CASE WHEN ("ToDate" < :StartDate) THEN NULL WHEN ("FromDate" > :EndDate) THEN NULL WHEN ("FromDate" < :StartDate AND ("ToDate" BETWEEN :StartDate AND :EndDate)) THEN "ToDate" WHEN (("FromDate" BETWEEN :StartDate AND :EndDate) AND ("ToDate" BETWEEN :StartDate AND :EndDate)) THEN "ToDate" WHEN ("FromDate" < :StartDate AND "ToDate" > :EndDate) THEN :EndDate WHEN (("FromDate" BETWEEN :StartDate AND :EndDate) AND "ToDate" > :EndDate) THEN :EndDate WHEN ("ToDate" IS NULL AND "FromDate" < :StartDate) THEN :EndDate WHEN ("ToDate" IS NULL AND 
("FromDate" BETWEEN :StartDate AND :EndDate)) THEN :EndDate END AS "ToDate", "U_Position", "U_Project", "U_Sponsorship" FROM ( --select * from ABS_GetEmployeeHistoryDetails WHERE ("EmpID" IN (:EmpID)) --select * from "PAYROLLDBTEST"."ABS_GetEmployeeHistoryDetails" WHERE ("EmpID" IN (:EmpID)) CALL "PAYROLLDBTEST"."ABS_GetEmployeeHistoryDetails"(:EmpID,:RETURNVALUE); ) InnerQuery ) OuterQuery WHERE ("FromDate" between :StartDate and :EndDate OR "ToDate" between :StartDate and :EndDate); END; ``` Thanks
First of all, make sure that your procedure is really **located in the schema you are trying to call** (in your case *PAYROLLDBTEST*). You can check that by having a look in the Catalog in HANA Studio - open the Catalog, then your schema *PAYROLLDBTEST* and then the folder *Procedures*. Your procedure should be located in this folder. If not, try to refresh this folder. If the procedure is still not in there, it's definitely stored in another schema. Second, **be sure you call the procedure with the correct number of parameters**. The procedure you mentioned seems to have only one parameter, *EmpID*, which seems to be an input parameter. The name of your procedure is *GetEmployeeHistoryDetail*, so I assume you have at least one output parameter as the second parameter (to get the details of the employee history back). If that's the case, you should call the procedure like this (assuming you are inside another procedure and want to use the output of the inner procedure in a scalar or table variable):

```
declare someOutputVariable bigint;
CALL "PAYROLLDBTEST"."ABS_GetEmployeeHistoryDetail" (:EmpID, :someOutputVariable);
```

Or in case your out parameter is a table variable. *someTableVariable* could also be directly an output parameter of your outer procedure:

```
PROCEDURE "SOMESCHEMA"."yourProcedure" (
    in someInputParameter1 NVARCHAR(255),
    in someInputParameter2 BIGINT,
    out someOutputParameter1 BIGINT,
    out yourSubProcedureOutputParameter "SOMESCHEMA"."some_tabletype")
    LANGUAGE SQLSCRIPT
    SQL SECURITY INVOKER AS
BEGIN
    // ... (other code logic)
    CALL "PAYROLLDBTEST"."ABS_GetEmployeeHistoryDetail" (:EmpID, :yourSubProcedureOutputParameter);
END;
```
Procedures should be called from the \_SYS\_BIC schema, please try the following call: ``` call "_SYS_BIC"."PAYROLLDBTEST/ABS_GetEmployeeHistoryDetail"(:EmpID) ```
not able to call procedure within procedure in HANA Studio
[ "", "sql", "stored-procedures", "hana", "hana-studio", "" ]
How do I order a SQL result by the `name` column and, if no name is set, order by `email`?

```
+----+-------+-----------------+
| ID | name  | email           |
+----+-------+-----------------+
| 1  | John  | john@gmail.com  |
| 2  | ---   | linda@gmail.com |
| 3  | ---   | kikli@gmail.com |
| 4  | Peter | peter@gmail.com |
+----+-------+-----------------+
```

Result should look like this:

```
John, kikli@gmail.com, linda@gmail.com, Peter
```

Answer:

```
$users = $this->em->createQueryBuilder()
    ->select('a.id, coalesce(concat(a.firstName, concat(\' \', a.lastName)), a.email) as orderColumn')
    ->from('Company\User\Admin', 'a')
    ->orderBy('orderColumn','ASC')
    ->getQuery()
    ->getScalarResult();
```
``` select coalesce(lastName || ' ' || firstName, email) from t order by 1 ``` The `1` in the `order by` clause means the first column in the select list. A `case` expression is not indexable while the coalesce function is ``` create index t_index on t (coalesce(lastName || ' ' || firstName, email)); ```
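A quick check of the COALESCE ordering with the question's rows, using Python's stdlib `sqlite3` as a stand-in. `COLLATE NOCASE` is added because the expected order interleaves upper- and lower-case keys, which SQLite's default binary collation would not do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, name TEXT, email TEXT);
INSERT INTO users VALUES
 (1,'John','john@gmail.com'),
 (2,NULL,'linda@gmail.com'),
 (3,NULL,'kikli@gmail.com'),
 (4,'Peter','peter@gmail.com');
""")

# COALESCE picks the name when present, the email otherwise,
# and the whole expression is what we sort on.
rows = conn.execute("""
SELECT COALESCE(name, email) AS ord_key
FROM users
ORDER BY ord_key COLLATE NOCASE
""").fetchall()
order = [r[0] for r in rows]
```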
What means "not set name"? Is it `NULL` or `'---'`? However, you can use `CASE`: ``` SELECT CASE WHEN Name IS NULL THEN Email ELSE Name END AS User FROM dbo.TableName ORDER BY CASE WHEN Name IS NULL THEN Email ELSE Name END ASC, Email ASC ```
SQL - order by column if value exists, otherwise second column
[ "", "sql", "postgresql", "sql-order-by", "multiple-columns", "" ]
I have two tables, in the first table the course id is stored and in the second table the course id and different subject areas description are stored as shown below. Table PA\_CPNT: ``` CPNT_ID( Course ID) Course Title 06201826 AAAA ``` Table PA\_CPNT\_SUBJ ``` CPNT_ID SUBJ_ID SUBJ_DESC 06201826 PLNT_DEV Plant Enviroment & Safety 06201826 WRKS_COUN Works Council. ``` I have written the below select query ``` select * from PA_CPNT cp, PA_CPNT_SUBJ sb where cp.CPNT_ID = '06201826' and sb.CPNT_ID = cp.CPNT_ID(+) ``` My Output is ``` CPNT_ID COUrse Title SUBJ_ID SUBJ_DESC 06201826 AAAA PLNT_DEV Plant Enviroment & Safety 06201826 AAAA WRKS_COUN Works Council. ``` But my requirement is to show the below output. ``` CPNT_ID COUrse Title SUBJ_ID1 SUBJ_DESC1 SUBJ_ID2 SUBJ_DESC2 06201826 AAAA PLNT_DEV Plant Enviroment & Safety WRKS_COUN Works Council. ``` Kindly help me with a sample code to achieve the above desired output as I am totally new to Oracle.
Not sure why you are not happy with your current result, which is how a join works. AFAIK, you can't get the data that way, but I have tried emulating it using the query below. Find a DEMO here <http://sqlfiddle.com/#!2/4e683/10> and see if it helps.

```
select distinct CPNT_ID, "COURSE TITLE", SUBJ_ID1, SUBJ_DESC1, SUBJ_ID2, SUBJ_DESC2
from (
    select a.cpnt_id, a."Course Title",
           b.subj_id as subj_id1, b.subj_desc as subj_desc1,
           c.subj_id as subj_id2, c.subj_desc as subj_desc2
    from PA_CPNT a
    inner join PA_CPNT_SUBJ b on a.cpnt_id=b.cpnt_id
    inner join PA_CPNT_SUBJ c on a.cpnt_id=c.cpnt_id
) X
where SUBJ_ID1 != SUBJ_ID2;
```

Results in the below output:

![enter image description here](https://i.stack.imgur.com/O18vP.png)

Probably use a `LIMIT 1` to get only one row out of the result set.
I do not think you can do this generically as a "straight" query. My understanding is that all queries require a fixed set of columns in the result. A simple way to identify this problem is that if you cannot define your column headings before you know what data exists. e.g. In this case you don't know how many `SUBJ_IDx` `SUBJ_DESCx` pairs you will need until you have processed all the data. So to generate the report the way you'd like it you would normally use a programming language around the query like you have above. And you would then programatically aggregate the rows. Either caching all the results into some hash of arrays structure, or add an order by to your query and use "old-new" logic.
How to show a description in three different columns for one Course ID
[ "", "sql", "oracle", "" ]
This query is successfully pulling pairs of ids for rows in table taxi\_lines where a driver appears with more than one open assignment (eg in 2 taxis at once). My problem is that it pulls each pair of ids in both orders (eg 500 and 509 and then 509 and 500). ![data](https://i.stack.imgur.com/tnYDW.png) I tried adding `DISTINCT(CONCAT(tl1.id,tl2.id)),` and also `GROUP BY id1, id2` but these did not achieve unique pairs. Here is the working query that gets the data above. How do I refine to get unique pairs? ``` SELECT tl1.id AS id1, tl2.id AS id2 FROM taxi_lines tl1 JOIN taxi_lines tl2 ON tl1.driver = tl2.driver AND tl1.id != tl2.id AND tl1.driver > 0 WHERE tl1.end = '0000-00-00 00:00:00' AND tl2.end = '0000-00-00 00:00:00' ```
Replace `AND tl1.id != tl2.id` with `AND tl1.id > tl2.id`
Try something simple like: ``` AND tl1.id > tl2.id ``` This will only return rows where the first id is larger than the second. It will get rid of half your rows.
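A quick sanity check of the `tl1.id > tl2.id` trick, using Python's built-in sqlite3 (the sample data is invented, and the original question's `driver > 0` and `end` filters are dropped for brevity; the self-join logic is the same in MySQL):

```python
import sqlite3

# In-memory database with a toy version of the taxi_lines table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE taxi_lines (id INTEGER, driver INTEGER)")
con.executemany("INSERT INTO taxi_lines VALUES (?, ?)",
                [(500, 7), (509, 7), (600, 8)])

# Self-join; "tl1.id > tl2.id" keeps each pair exactly once.
rows = con.execute("""
    SELECT tl1.id, tl2.id
    FROM taxi_lines tl1
    JOIN taxi_lines tl2
      ON tl1.driver = tl2.driver
     AND tl1.id > tl2.id
""").fetchall()
print(rows)  # [(509, 500)]
```

Driver 7 appears twice, so only the single pair (509, 500) comes back; driver 8 has one row and produces no pair.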
Querying distinct pairs of ids
[ "", "mysql", "sql", "unique", "" ]
I'm facing the following situation and have not found a good solution to this problem. I am optimizing an API, so I am looking for the fastest possible solution. The following description is not exactly what I am doing, but I think it represents the problem well. Let's say I have a table of products: ``` +----+----------+ | id | name | +----+----------+ | 1 | product1 | | 2 | product2 | +----+----------+ ``` And I have a table of attachments to each product, separated by language: ``` +----+----------+------------+-----------------------+ | id | language | product_id | attachment_url | +----+----------+------------+-----------------------+ | 1 | bb | 1 | image1_bb.jpg | | 1 | en | 1 | image1_en.jpg | | 1 | pt | 1 | image1_pt.jpg | | 2 | bb | 1 | image2_bb.jpg | | 2 | pt | 1 | image2_pt.jpg | +----+----------+------------+-----------------------+ ``` What I intend to do is to get the correct attachment according to the language selected in the request. As you can see above, I can have several attachments for each product. We use Babel (`bb`) as a generic language, so every time I don't have an attachment in the right language, I should get the Babel version. It is also important to consider that the primary key of the attachments table is a composite of `id` + `language`. So, supposing I try to get all the data in `pt`, my first option for a SQL query was: ``` SELECT p.id, p.name, GROUP_CONCAT( '{',a.id,',',a.attachment_url, '}' ) as attachments_list FROM products p LEFT JOIN attachments a ON (a.product_id=p.id AND (a.language='pt' OR a.language='bb')) ``` The problem is that, with this query, I always get the `bb` data and I only want to get it when there is no attachment in the right language. I already tried a subquery, replacing `attachments` with: ``` (SELECT * FROM attachments GROUP BY id ORDER BY id ASC, language DESC) ``` but it doubles the time of the request. 
I also tried using `DISTINCT` inside the `GROUP_CONCAT`, but it only works if the whole result of each row is equal, so it does not work for me. Does anyone know of any other solution that I can apply directly in the query? **EDIT:** Combining the answers of @Vulcronos and @Barmar made the final solution at least 2x faster than the one I first suggested. Just to add some context for anybody else who is looking for it: I am using Phalcon. Because of it, I had a lot of trouble putting the pieces together, as Phalcon PHQL does not support subqueries, nor a lot of the other stuff I had to use. **For my scenario**, where I had to deliver approximately 1.2MB of JSON content, with more than 2100 objects, using custom queries made the total request time up to 3x faster than Phalcon's native relations management methods (`hasMany()`, `hasManyToMany()`, etc.) and 10x faster than my original solution (which used the `find()` method heavily).
Try doing two joins instead of one: ``` SELECT p.id, p.name, GROUP_CONCAT( '{',COALESCE(a.id, b.id),',',COALESCE(a.attachment_url, b.attachment_url), '}' ) as attachments_list FROM products p LEFT JOIN attachments a ON (a.product_id=p.id AND a.language='pt') LEFT JOIN attachments b ON (b.product_id=p.id AND b.language='bb') ``` and then using COALESCE to return b instead of a if a doesn't exist. You can also do it with a subselect if the above doesn't work.
`OR` conditions tend to make queries slow, because it's hard to optimize them with indexes. Try joining separately using the two different languages. ``` SELECT p.id, p.name, IFNULL(apt.attachment_url, abb.attachment_url) AS attachment_url FROM products AS p JOIN attachments AS abb ON abb.product_id = p.id LEFT JOIN attachments AS apt ON apt.product_id = p.id AND apt.language = 'pt' WHERE abb.language = 'bb' ``` This assumes that all products have a `bb` attachment, while `pt` is optional.
How to SELECT row B only if row A doesn't exist on GROUP BY
[ "", "mysql", "sql", "phalcon", "" ]
I have a table with multiple nullable string fields. For reporting I would like to combine them into a single string that is delimited with a space for every field. The right side must be trimmed. Test data: ``` DECLARE @test TABLE ( f1 NVARCHAR(100) , f2 NVARCHAR(100) , f3 NVARCHAR(100) ) INSERT INTO @test -- NULL values must be skipped SELECT NULL , NULL , NULL UNION SELECT NULL , NULL , 'foo' UNION SELECT NULL , 'foo', NULL UNION SELECT 'foo' , NULL , NULL UNION SELECT NULL , 'foo', 'bar' UNION SELECT 'foo' , 'bar', NULL UNION SELECT 'foo' , NULL , 'bar' UNION SELECT 'foo' , 'bar', 'baz' -- Empty string values must be skipped UNION SELECT 'foo' , '' , 'bar' UNION SELECT '' , '' , 'baz' -- Spaces in values must be kept UNION SELECT 'foo bar', '' , 'baz' -- The table should not contain values with surrounding space -- So this case is not important UNION SELECT ' foo ' , ' ' , 'baz ' ``` The expected results: ``` '' 'foo' 'foo' 'foo' 'foo bar' 'foo bar' 'foo bar' 'foo bar baz' 'foo bar' 'baz' 'foo bar baz' 'foo baz' ' foo baz' -- This is optional ``` My current code: ``` SELECT RTRIM( CASE WHEN ISNULL(f1, '') = '' THEN '' ELSE f1 + ' ' END + CASE WHEN ISNULL(f2, '') = '' THEN '' ELSE f2 + ' ' END + CASE WHEN ISNULL(f3, '') = '' THEN '' ELSE f3 + ' ' END ) FROM @test ``` This works, but I was wondering if there is a less verbose way to achieve this?
In SQL Server 2012, there is a CONCAT function. You can use the following as well: ``` SELECT RTRIM(CONCAT( NULLIF(f1, '') + ' ' , NULLIF(f2, '') + ' ' , NULLIF(f3, '') + ' ' )) FROM @test ```
Because `null` + a space is still `null`, you can use ``` select isnull(nullif(f1,'')+' ','') + isnull(nullif(f2,'')+' ','') + isnull(nullif(f3,'')+' ','') from @test ```
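The null-propagation trick from both answers can be checked with Python's built-in sqlite3 (SQLite uses `||` instead of `+` and `IFNULL` instead of `ISNULL`, but the `NULLIF`/concat-with-space idea is identical; the sample rows are a subset of the question's test data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (f1 TEXT, f2 TEXT, f3 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [("foo", None, "bar"), ("", "", "baz"), (None, None, None)])

# NULL || ' ' stays NULL, NULLIF turns '' into NULL,
# IFNULL swaps any NULL piece back to ''.
rows = con.execute("""
    SELECT RTRIM(
             IFNULL(NULLIF(f1, '') || ' ', '')
          || IFNULL(NULLIF(f2, '') || ' ', '')
          || IFNULL(NULLIF(f3, '') || ' ', ''))
    FROM t
    ORDER BY rowid
""").fetchall()
print([r[0] for r in rows])  # ['foo bar', 'baz', '']
```

Both NULLs and empty strings are skipped, and the trailing space is trimmed, matching the expected results in the question.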
How to concat strings with space delimiter where each string is nullable?
[ "", "sql", "sql-server", "null", "string-concatenation", "null-coalescing", "" ]
I have two tables, one for the main and one for the contents 'Main' table: ``` pkey - pkey2 - contents ABC1 - 11324 - 3 KJPO - 14124 - 4 PJKJ - 767172 - 5 ``` 'Contents' table ``` pkey - pkey2 - details ABC1 - 11324 - some random info here ABC1 - 11324 - some random info here ABC1 - 11324 - some random info here KJPO - 14124 - some random info here KJPO - 14124 - some random info here KJPO - 14124 - some random info here ``` The 'Main' table specifies the maximum number of contents that should be connected to the 'Main' table. What I would need to have in my query is to get the lines from 'Main' table where the corresponding 'Contents' is not yet complete i.e. the number of lines that matches the entries in main table has not reached (not equal) the number of contents specified in the 'Main' table. It should return (based on the above example) like this: 'Main' table: ``` pkey - pkey2 - contents - missing KJPO - 14124 - 4 - 1 PJKJ - 767172 - 5 - 5 ``` I've tried making an inner join but cannot find it to work.. I'm thinking of just doing it on the client side e.g. vb.NET but I know it is not a recommended approach. I hope someone can help me or at least lead me to another solution/work around, thanks in advance EDIT: I have added here a code from dav1dsmith: ``` select m.pkey, m.pkey2, m.contents, m.contents-isnull(c.actual,0) as missing from dbo.Main m left join ( select pkey, pkey2, count(*) as actual from dbo.Contents group by pkey, pkey2 ) c on c.pkey=m.pkey and c.pkey2=m.pkey2 where c.actual<>m.contents ``` the code works, but the result did not include entries from the 'main' table where there is no corresponding entry in the 'contents' table.. it shows lines that are incomplete but for only those that have at least one entry in the contents table.. as i said, the code is working, but im still trying to edit it to give what I needed
``` select m.pkey, m.pkey2, m.contents, m.contents-isnull(c.actual,0) as missing from dbo.Main m left join ( select pkey, pkey2, count(*) as actual from dbo.Contents group by pkey, pkey2 ) c on c.pkey=m.pkey and c.pkey2=m.pkey2 where isnull(c.actual,0)<>m.contents ``` The derived table `c` resolves a `count(*)` of the rows in `dbo.contents` for each combination of `pkey` and `pkey2`. Any combinations that don't exist will, obviously, not return a row in `c`. The `dbo.main` table is `left join`ed onto these results - so that every row in `main` is returned in the result set regardless of whether any rows exist in the summarized `contents` table - using the same key columns. The `where` clause filters out any rows that have the same number of rows in `contents` as predicted in the `main` table (I forgot the `isnull()` here initially - which was dropping rows from `main` where no corresponding `c` rows exist). The initial column list then carries out the mathematics to calculate the `missing` column values.
Please tell me if this query works: ``` select s1.pkey, s1.pkey2, s1.contents, (s1.contents - s2.cnt1) as missing from ( select pkey, pkey2, contents from Main ) as s1 left join ( select pkey, count(pkey2) as cnt1 from Contents group by pkey ) as s2 on s1.pkey = s2.pkey where (s1.contents - s2.cnt1) > 0 ```
How to check if the specified number of entries from one table matches the number of actual entries in another table?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
First, an example of my table: ``` id_object;time;value;status 1;2014-05-22 09:30:00;1234;1 1;2014-05-22 09:31:00;2341;2 1;2014-05-22 09:32:00;1234;1 ... 1;2014-06-01 00:00:00;4321;1 ... ``` Now I need to count all rows with status=1 and id_object=1 month-wise, for example. This is my query: ``` SELECT COUNT(*) FROM my_table WHERE id_object=1 AND status=1 AND extract(YEAR FROM time)=2014 GROUP BY extract(MONTH FROM time) ``` The result for this example is: ``` 2 1 ``` 2 for May and 1 for June, but I need an output with all 12 months, including months with no data. For this example I need this output: ``` 0 0 0 0 2 1 0 0 0 0 0 0 ``` Thanks for any help.
you can use [`generate_series()`](http://www.postgresql.org/docs/current/static/functions-srf.html) function like this: ``` select g.month, count(m) from generate_series(1, 12) as g(month) left outer join my_table as m on m.id_object = 1 and m.status = 1 and extract(year from m.time) = 2014 and extract(month from m.time) = g.month group by g.month order by g.month ``` **`sql fiddle demo`**
Rather than comparing with an extracted value, you'll want to use a range-table instead. Something that looks like this: ``` month startOfMonth nextMonth 1 '2014-01-01' '2014-02-01' 2 '2014-02-01' '2014-03-01' ...... 12 '2014-12-01' '2015-01-01' ``` As in @Roman's answer, we'll start with `generate_series()`, this time using it to generate the range table: ``` WITH Month_Range AS (SELECT EXTRACT(MONTH FROM month) AS month, month AS startOfMonth, month + INTERVAL '1 MONTH' AS nextMonth FROM generate_series(CAST('2014-01-01' AS DATE), CAST('2014-12-01' AS DATE), INTERVAL '1 month') AS mr(month)) SELECT Month_Range.month, COUNT(My_Table) FROM Month_Range LEFT JOIN My_Table ON My_Table.time >= Month_Range.startOfMonth AND My_Table.time < Month_Range.nextMonth AND my_table.id_object = 1 AND my_table.status = 1 GROUP BY Month_Range.month ORDER BY Month_Range.month ``` (As a side note, I'm now annoyed at how PostgreSQL handles intervals) **`SQL Fiddle Demo`** The use of the range will allow any index including `My_Table.time` to be used (although not if an index was built over an `EXTRACT`ed column. # EDIT: Modified query to take advantage of the fact that `generate_series(...)` will also handle date/time series.
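Both answers hinge on generating the full month list and left-joining the data onto it. A runnable sketch with Python's built-in sqlite3, where a recursive CTE plays the role of `generate_series(1, 12)` (the sample rows mirror the question's data; column handling via `strftime` is SQLite-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (id_object INTEGER, time TEXT, status INTEGER)")
con.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                [(1, "2014-05-22 09:30:00", 1),
                 (1, "2014-05-22 09:31:00", 2),   # status 2: must not be counted
                 (1, "2014-05-22 09:32:00", 1),
                 (1, "2014-06-01 00:00:00", 1)])

rows = con.execute("""
    WITH RECURSIVE months(m) AS (
        SELECT 1 UNION ALL SELECT m + 1 FROM months WHERE m < 12
    )
    SELECT months.m, COUNT(my_table.id_object)
    FROM months
    LEFT JOIN my_table
      ON my_table.id_object = 1
     AND my_table.status = 1
     AND strftime('%Y', my_table.time) = '2014'
     AND CAST(strftime('%m', my_table.time) AS INTEGER) = months.m
    GROUP BY months.m
    ORDER BY months.m
""").fetchall()
counts = [c for _, c in rows]
print(counts)  # [0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 0]
```

Note the filters live in the `ON` clause, not `WHERE`, so months without data survive the left join as zero counts — the same point both answers make.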
Postgresql group month wise with missing values
[ "", "sql", "postgresql", "group-by", "" ]
I need to sum up linear objects' length and group them by intervals of several years. I have one table storing my objects like this: ``` - gid serial NOT NULL, - year INTEGER, - the_geom geometry(MULTILINESTRING) ; ``` I need a result like this: ``` period | length ----------------+----------- 2005 - 2014 | 18.6 1995 - 2004 | 16.1 1985 - 1994 | 7.6 1975 - 1984 | 19.0 1965 - 1974 | 28.2 1945 - 1964 | 10.2 before 1945 | 0.1 ``` I can't find out on the Web how to perform this, except by producing each line with a different query and using `UNION ALL` to merge them together, which is not so good ...
You need a *discriminant function*, which one can create in a number of ways. For your purposes a case statement is just the ticket, something like: ``` select case when t.year >= 2015 then '2015-present' when t.year >= 2005 then '2005-2014' when t.year >= 1995 then '1995-2004' when t.year >= 1985 then '1985-1994' when t.year >= 1975 then '1975-1984' when t.year >= 1965 then '1965-1974' when t.year >= 1955 then '1955-1964' when t.year >= 1945 then '1945-1954' when t.year < 1945 then 'before 1945' else 'no year given' end as period , sum( compute_length_from_geometry( t.geometry) ) as length from some_table t where . . . group by case when t.year >= 2015 then '2015-present' when t.year >= 2005 then '2005-2014' when t.year >= 1995 then '1995-2004' when t.year >= 1985 then '1985-1994' when t.year >= 1975 then '1975-1984' when t.year >= 1965 then '1965-1974' when t.year >= 1955 then '1955-1964' when t.year >= 1945 then '1945-1954' when t.year < 1945 then 'before 1945' else 'no year given' end order by case when t.year >= 2015 then 1 when t.year >= 2005 then 2 when t.year >= 1995 then 3 when t.year >= 1985 then 4 when t.year >= 1975 then 5 when t.year >= 1965 then 6 when t.year >= 1955 then 7 when t.year >= 1945 then 8 when t.year < 1945 then 9 else 10 end ``` You might also just consider a bracketing table, either permanent or temporary, something like: ``` create table report_period ( period_id int not null , year_from int not null , year_thru int not null , period_description varchar(32) not null , primary key clustered ( period_id ) , unique nonclustered ( year_from , year_thru ) ) insert report_period values ( 1 , 2015 , 9999 , '2015-present' ) insert report_period values ( 2 , 2005 , 2014 , '2005-2014' ) insert report_period values ( 3 , 1995 , 2004 , '1995-2004' ) insert report_period values ( 4 , 1985 , 1994 , '1985-1994' ) insert report_period values ( 5 , 1975 , 1984 , '1975-1984' ) insert report_period values ( 6 , 1965 , 1974 , '1965-1974' ) insert report_period values ( 7 , 1955 , 1964 , '1955-1964' ) insert report_period values ( 8 , 1945 , 1954 , '1945-1954' ) insert report_period values ( 9 , 0000 , 1944 , 'pre-1945' ) ``` Then your query simply becomes something like ``` select p.period_description as period , sum( compute_length_from_geometry( t.geometry ) ) as length from report_period p join some_table t on t.year between p.year_from and p.year_thru group by p.period_id , p.period_description order by p.period_id ``` you can even use *derived tables* to get the same effect ``` select p.period_description as period , sum( compute_length_from_geometry( t.geometry ) ) as length from ( select 1 as period_id , 2015 as year_from , 9999 as year_thru , '2015-present' as period_description UNION ALL select 2 as period_id , 2005 as year_from , 2014 as year_thru , '2005-2014' as period_description UNION ALL select 3 as period_id , 1995 as year_from , 2004 as year_thru , '1995-2004' as period_description ... ) p join some_table t on t.year between p.year_from and p.year_thru group by p.period_id , p.period_description order by p.period_id ``` Alternatively, you can also simply do integer division, something like ``` period_id = ( t.year - 2014 ) / 10 ``` This will give you a period identifier with the domain * > 0: 2015 or later * 0: 2005-2014 * -1: 1995-2004 * -2: 1985-1994 * -3: 1975-1984 * -4: 1965-1974 * -5: 1955-1964 * -6: 1945-1954 * < -7: before 1945 Then just add/subtract an appropriate offset to move the zero-point (or change the computation offset in years). This, however, will usually negate the use of any indices on the column `year` since it's now an *expression*.
Assuming that the following gets the length: ``` select year, length(the_geo) as len from table ``` Then your problem is to find consecutive years with the same length. I like the following trick for finding such sequences, assuming that you have one value for every year: ``` select min(year) || '-' || max(year), len from (select year, length(the_geo) as len, (row_number() over (order by year) - row_number() over (partition by length(the_geo) order by year) ) as grp from table ) t group by grp, len order by 1; ```
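The bracketing-table idea from the accepted answer can be tried end to end with Python's built-in sqlite3. The geometry length is replaced by a plain integer column, and the table/period labels are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE objects (year INTEGER, length INTEGER);
    INSERT INTO objects VALUES (2010, 12), (2008, 7), (1999, 16), (1990, 8);
    CREATE TABLE report_period (period_id INTEGER, year_from INTEGER,
                                year_thru INTEGER, label TEXT);
    INSERT INTO report_period VALUES
        (1, 2005, 2014, '2005-2014'),
        (2, 1995, 2004, '1995-2004'),
        (3, 1985, 1994, '1985-1994');
""")

# Join each object into its bucket via BETWEEN, then sum per bucket.
rows = con.execute("""
    SELECT p.label, SUM(o.length)
    FROM report_period p
    JOIN objects o ON o.year BETWEEN p.year_from AND p.year_thru
    GROUP BY p.period_id, p.label
    ORDER BY p.period_id
""").fetchall()
print(rows)  # [('2005-2014', 19), ('1995-2004', 16), ('1985-1994', 8)]
```

In the real PostGIS setting, `SUM(o.length)` would become `SUM(ST_Length(the_geom))`, but the grouping mechanics are unchanged.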
GROUP BY intervals of integer values
[ "", "sql", "postgresql", "group-by", "intervals", "" ]
I am still messing around with this query, which is working but is not returning the data I need. ``` SELECT prod.Code, prod.Description, prod.Groupp, prod.Quantity, prod.Unit, prod.Standard, prod.Type, prod.Model, prod.GroupBKM, prod.Note, comp.Unit, comp.Cost FROM dbo.Product1 prod INNER JOIN dbo.Components comp ON comp.Code = prod.Code ``` The above query is returning the data only if a comp.code=prod.code exists while I would like to get the data prod.\* in any case and obviously the data relevant comp.cost, if does not exist, will be null. I cannot get it right! Any help will be appreciated.
Replace `INNER JOIN` with `LEFT JOIN` ``` SELECT prod.Code, prod.Description, prod.Groupp, prod.Quantity, prod.Unit, prod.Standard, prod.Type, prod.Model, prod.GroupBKM, prod.Note, comp.Unit, comp.Cost FROM dbo.Product1 prod LEFT JOIN dbo.Components comp ON comp.Code = prod.Code ```
By definition you **cannot** do this with an `INNER JOIN` because an `INNER JOIN` is defined as only returning items for which a match was found. If you want to return rows in the base `SELECT` even if the `JOIN` predicate fails, then you want a `LEFT OUTER JOIN` ... which is defined as precisely that.
SQl query inner join to return the available data even if the join is missing
[ "", "sql", "" ]
I have a table called `contacts` and i want to select all the distinct contact names (int) in one line separated by comma. Currently i am using: ``` SELECT DISTINCT name FROM contacts ``` To get this result: ``` 11111 22222 33333 ``` But i need to have this result: ``` 11111, 22222, 33333 ``` I saw some examples with XML PATH but i couldn't adapt it to my code. Can anybody help me with the query please?
`STUFF` + `XML PATH` will do the job: ``` SELECT STUFF( (SELECT Distinct ',' + cast (name as varchar(20)) FROM contacts FOR XML PATH ('')) , 1, 1, '') ``` demo:<http://sqlfiddle.com/#!3/748e4/1>
You can use STUFF: ``` SELECT STUFF(( SELECT distinct ',' + CAST(c.name AS varchar(20)) FROM dbo.contacts c FOR XML PATH('')), 1, 1, '') as names; ```
Comma Separated values in SQL Query
[ "", "sql", "sql-server", "" ]
I'm hoping this makes sense as what I'm trying to do is SUM rows based on other columns of existing rows. I have tried a couple different ways and what I hope is now close is what I have here. This is not my full SQL but hopefully this small example will get me on track ``` SELECT Price,SUM(Item) from table where Price >= Price group by Price Sample Data | PRICE | ITEM | |-------|-------| | 1.00 | 5 | | 2.00 | 9 | | 3.00 | 2 | Hopeful Result | PRICE | ITEM | |-------|-------| | 1.00 | 5 | | 2.00 | 14 | | 3.00 | 16 | ``` The actual result is more or less the sample data which I would expect as I am grouping by Price so it makes sense that it returns the rows like this. I just can't seem to think of away to include Price in my select without having to group or use an aggregate on it. I'm thinking I could maybe do this type of calculation with an inner select but I'm hoping there is a different way as my actual query has a lot of joins which could get messy if I go this route. Thanks for any help.
If you're using SQL server 2012... ``` Select price, item, sum(item) OVER(order by price rows unbounded preceding) as runningtotal from sample ``` <http://sqlfiddle.com/#!6/36e9f/1/0>
You can accomplish this with a sub-query, but a more efficient way might be to use a `CROSS/OUTER APPLY`. It depends on your specific data. I provide both methods of doing that below... See which one runs faster based on your specific data. **Sub-query method** ``` SELECT DISTINCT op.Price, (SELECT SUM(ip.Item) FROM table ip WHERE ip.Price <= op.Price) as ITEM FROM table op ORDER BY op.Price ASC ``` **Outer-apply method** ``` SELECT DISTINCT op.Price, a.Items FROM table op OUTER APPLY (SELECT SUM(ip.Item) as Items FROM TABLE ip WHERE ip.Price <= op.Price) a ORDER BY op.Price ASC ```
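The window-function form from the first answer is portable and can be checked with Python's built-in sqlite3 (requires SQLite ≥ 3.25, which recent Python builds bundle; the sample data is the question's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sample (price REAL, item INTEGER)")
con.executemany("INSERT INTO sample VALUES (?, ?)",
                [(1.00, 5), (2.00, 9), (3.00, 2)])

# Running total of item, ordered by price.
rows = con.execute("""
    SELECT price,
           SUM(item) OVER (ORDER BY price
                           ROWS UNBOUNDED PRECEDING) AS running_total
    FROM sample
    ORDER BY price
""").fetchall()
print(rows)  # [(1.0, 5), (2.0, 14), (3.0, 16)]
```

This matches the hoped-for result in the question: each row carries the sum of all items at or below its price.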
SQL Server sum with a where or having condition
[ "", "sql", "sql-server", "sum", "" ]
How can I have 2 fields in 1 where condition? ``` WHERE id = '2' AND id = '3' ``` Is this possible? My query keeps returning zero results, even though the two ids combined have more than 10 rows.
How can one column have two distinct values at the same time? It can't. But there can be different values in different rows. To find these rows use `OR`: ``` WHERE id = '2' OR id = '3' ```
You could be looking for an OR instead of an AND. Try this: WHERE id = '2' OR id = '3';
Having 2 fields in 1 where condition
[ "", "mysql", "sql", "where-clause", "" ]
I need to display the first 12 results for each user. I tried TOP(12), but it only selects the first 12 rows overall. After some searching I found out I need to use aggregate functions. This is what I have: ``` SELECT id, fname, lnam, Invoice, amnt, [bill period] FROM TABLE1 GROUP BY id, fname, lnam, Invoice, amnt, [bill period] HAVING --Not really sure! ``` This is Table1 ``` id fname lname Invoice amnt bill period 1 John Doe 480991 38.42 201406 1 John Doe 481102 38.42 201407 1 John Doe 481047 38.42 201408 1 John Doe 485053 38.42 201409 1 John Doe 489759 38.42 201410 1 John Doe 489788 38.42 201411 1 John Doe 489817 38.42 201412 1 John Doe 489846 38.42 201501 1 John Doe 489875 38.42 201502 1 John Doe 489905 38.42 201503 1 John Doe 489933 38.42 201504 1 John Doe 489963 38.42 201505 1 John Doe 490044 38.42 201506 1 John Doe 490138 38.42 201507 2 Rich Doe 480992 41.41 201406 2 Rich Doe 481103 41.41 201407 2 Rich Doe 481048 41.41 201408 2 Rich Doe 485057 41.41 201409 2 Rich Doe 489765 41.41 201410 2 Rich Doe 489794 41.41 201411 2 Rich Doe 489823 41.41 201412 2 Rich Doe 489852 41.41 201501 2 Rich Doe 489881 41.41 201502 2 Rich Doe 489911 41.41 201503 2 Rich Doe 489936 41.41 201504 2 Rich Doe 489979 41.41 201505 2 Rich Doe 490066 41.41 201506 2 Rich Doe 490160 41.41 201507 2 Rich Doe 490161 41.41 201508 ``` Thank you.
Use the analytic function rank() so that it returns the rank of a value in a group of values. A WITH clause will help you create a temporary set of data. ``` WITH TEMP AS ( SELECT id, fname, lnam, Invoice, amnt, bill_period, rank() OVER (PARTITION BY ID ORDER BY bill_period) AS RK FROM TABLE1 ) SELECT id, fname, lnam, Invoice, amnt, bill_period FROM TEMP WHERE RK<13; ```
Instead of using a `group by` clause, you can use the analytic [`row_number()`](http://msdn.microsoft.com/en-us/library/ms186734.aspx) function. Assuming you identify a user according to the `id` column: ``` SELECT id, fname, lnam, Invoice, amnt, [bill period] FROM (SELECT id, fname, lnam, Invoice, amnt, [bill period], ROW_NUMBER() OVER (PARTITION BY id ORDER BY [bill period] ASC) AS rn FROM TABLE1) t WHERE rn <= 12 ```
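The ROW_NUMBER partition-and-filter pattern from both answers can be tried with Python's built-in sqlite3 (requires SQLite ≥ 3.25). For brevity the sketch keeps the first 2 rows per user instead of 12 — only the constant changes — and the table and data are a small stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, invoice INTEGER, period INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 100, 201406), (1, 101, 201407), (1, 102, 201408),
                 (2, 200, 201406), (2, 201, 201407)])

# Number rows within each id by period, then keep the first N per id.
rows = con.execute("""
    SELECT id, invoice
    FROM (SELECT id, invoice,
                 ROW_NUMBER() OVER (PARTITION BY id
                                    ORDER BY period ASC) AS rn
          FROM t) x
    WHERE rn <= 2
    ORDER BY id, invoice
""").fetchall()
print(rows)  # [(1, 100), (1, 101), (2, 200), (2, 201)]
```

User 1's third invoice (102) is dropped, exactly the per-user cutoff the question asks for.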
SQL Server - Display the first 12 records for each user
[ "", "sql", "sql-server", "" ]
I have the following schema: ``` CREATE TABLE author ( id integer , name varchar(255) ); CREATE TABLE book ( id integer , author_id integer , title varchar(255) , rating integer ); ``` And I want each author with its last book: ``` SELECT book.id, author.id, author.name, book.title as last_book FROM author JOIN book book ON book.author_id = author.id GROUP BY author.id ORDER BY book.id ASC ``` Apparently you can do that in mysql: [Join two tables in MySQL, returning just one row from the second table](https://stackoverflow.com/questions/6468314/join-two-tables-returning-just-one-row-from-the-second-table-mysql). But postgres gives this error: > ERROR: column "book.id" must appear in the GROUP BY clause or be used > in an aggregate function: SELECT book.id, author.id, author.name, > book.title as last\_book FROM author JOIN book book ON book.author\_id = > author.id GROUP BY author.id ORDER BY book.id ASC [It's because](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-GROUPBY): > When GROUP BY is present, it is not valid for the SELECT list > expressions to refer to ungrouped columns except within aggregate > functions, since there would be more than one possible value to return > for an ungrouped column. How can I specify to postgres: "Give me only the last row, when ordered by `joined_table.id`, in the joined table ?" --- Edit: With this data: ``` INSERT INTO author (id, name) VALUES (1, 'Bob') , (2, 'David') , (3, 'John'); INSERT INTO book (id, author_id, title, rating) VALUES (1, 1, '1st book from bob', 5) , (2, 1, '2nd book from bob', 6) , (3, 1, '3rd book from bob', 7) , (4, 2, '1st book from David', 6) , (5, 2, '2nd book from David', 6); ``` I should see: ``` book_id author_id name last_book 3 1 "Bob" "3rd book from bob" 5 2 "David" "2nd book from David" ```
``` select distinct on (author.id) book.id, author.id, author.name, book.title as last_book from author inner join book on book.author_id = author.id order by author.id, book.id desc ``` Check [`distinct on`](http://www.postgresql.org/docs/current/static/sql-select.html) > SELECT DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for ORDER BY (see above). Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first. With distinct on it is necessary to include the "distinct" columns in the `order by`. If that is not the order you want then you need to wrap the query and reorder ``` select * from ( select distinct on (author.id) book.id, author.id, author.name, book.title as last_book from author inner join book on book.author_id = author.id order by author.id, book.id desc ) authors_with_first_book order by authors_with_first_book.name ``` Another solution is to use a window function as in Lennart's answer. And another very generic one is this ``` select book.id, author.id, author.name, book.title as last_book from book inner join ( select author.id as author_id, max(book.id) as book_id from author inner join book on author.id = book.author_id group by author.id ) s on s.book_id = book.id inner join author on book.author_id = author.id ```
I've done something similar for a chat system, where room holds the metadata and list contains the messages. I ended up using the Postgresql LATERAL JOIN which worked like a charm. ``` SELECT MR.id AS room_id, MR.created_at AS room_created, lastmess.content as lastmessage_content, lastmess.datetime as lastmessage_when FROM message.room MR LEFT JOIN LATERAL ( SELECT content, datetime FROM message.list WHERE room_id = MR.id ORDER BY datetime DESC LIMIT 1) lastmess ON true ORDER BY lastmessage_when DESC NULLS LAST, MR.created_at DESC ``` For more info see <https://www.heap.io/blog/postgresqls-powerful-new-join-type-lateral>
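Since SQLite has neither `DISTINCT ON` nor `LATERAL`, the "very generic" MAX-join variant from the accepted answer is the one that ports everywhere. A runnable check with Python's built-in sqlite3, using the question's schema and data (ratings omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author (id INTEGER, name TEXT);
    CREATE TABLE book (id INTEGER, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Bob'), (2, 'David');
    INSERT INTO book VALUES
        (1, 1, '1st book from bob'),
        (3, 1, '3rd book from bob'),
        (5, 2, '2nd book from David');
""")

# Find each author's max book id, then join back for the row itself.
rows = con.execute("""
    SELECT author.name, book.title
    FROM book
    JOIN (SELECT author_id, MAX(id) AS book_id
          FROM book GROUP BY author_id) s
      ON s.book_id = book.id
    JOIN author ON book.author_id = author.id
    ORDER BY author.id
""").fetchall()
print(rows)  # [('Bob', '3rd book from bob'), ('David', '2nd book from David')]
```

On PostgreSQL itself, `DISTINCT ON` is usually the shortest and fastest of the three options.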
How to join only one row in joined table with postgres?
[ "", "sql", "postgresql", "join", "" ]
Hi, I have an event that selects this result from a larger table. The query is: ``` UPDATE character_data tu, character_data ts SET tu.killsZ = ts.killsZ WHERE tu.playerUID = ts.playerUID and tu.numberid < ts.numberid; numberid | playerUID | alive | killsZ 1 555555 0 55 2 555555 1 55 ``` The event is simply ``` 1. UPDATE character_data tu, character_data ts SET tu.killsZ = ts.killsZ WHERE tu.playerUID = ts.playerUID and tu.numberid < ts.numberid; 2. DELETE FROM character_data WHERE alive = 0 GROUP BY playerUID HAVING COUNT(playerUID) > 1; ``` This line seems to not be working correctly: ``` DELETE FROM character_data WHERE alive = 0 GROUP BY playerUID HAVING COUNT(playerUID) > 1; ``` The weird thing is that it will delete all the rows where `alive = 1` but not `0`. If I do this: ``` DELETE FROM character_data WHERE alive = 1 GROUP BY playerUID HAVING COUNT(playerUID) > 1; ``` It will delete all rows where alive = 0, but every now and then it will delete the right row. Why is this doing the reverse, and how can I get a reliable result?
This works: ``` delete from character_data where playeruid in (select t1.playeruid from character_data1 t1 where playeruid=t1.playeruid and killsz=t1.killsz and alive=0 group by playeruid having count(*)>1) limit 1 ``` **Note:** It works only when there are 2 duplicate records. If you have three duplicate playerUIDs and want to delete 2 of them, you should change the limit to 2 manually. The table character_data1 is a temporary table created by me; you **can't reference** the **same table in a subquery of a DELETE statement** in MySQL, so you should create a duplicate table to do this task.
What you want to do, if I read this correctly, is delete the rows where alive = 0 but only for those with playerUID values appearing more than once. So what you are looking for is a list of playerUID values that have at least one duplicate: ``` select playerUID from character_data group by playerUID having count(*) > 1 ``` Then use that list to delete where alive = 0: ``` delete from character_data where alive = 0 and playerUID in( query-shown-above ); ```
MYSQL : Deleting a row after UPDATE in scheduled event
[ "", "mysql", "sql", "" ]
I have the following query that gives a rounded result. How can I modify this to * Stop rounding * Display to 2 decimal places. E.g 3.456 -> 3.45 `SELECT SUM(Invoice_Details.Amount) AS TotalNet_C FROM Invoice_Details WHERE Invoice_Details.Invoice_Number = ||InvNo||`
To format it to 2 digits you can e.g. convert it to currency. ``` SELECT CAST(SUM(Invoice_Details.Amount) AS money) AS TotalNet_C FROM Invoice_Details WHERE Invoice_Details.Invoice_Number = ||InvNo|| ``` To be more specific, you can use `CONVERT`. Check this for details: <http://msdn.microsoft.com/de-de/library/ms187928.aspx> About rounding: you want to crop the digits, not round them: ``` SELECT CAST(CAST(3.456 * 100 AS int) as float)/100 ``` This will do it. Your case then becomes: ``` SELECT CAST(CAST(CAST(SUM(Invoice_Details.Amount) * 100 AS int) as float)/100 AS money) AS TotalNet_C FROM Invoice_Details WHERE Invoice_Details.Invoice_Number = ||InvNo|| ```
You can simply do this to round your value: ``` Str(ColumnName, 10, 2) -- 10 is the total length of the value -- 2 is the number of decimals you want ```
SQL SUM without rounding
[ "", "sql", "sql-server", "rounding", "" ]
Is there a way to round down instead of up where values have .005? E.g. 1.234 -> 1.23 1.235 -> 1.23 1.236 -> 1.24 It's for invoicing purposes. I currently have an invoice which is showing totals of: £46.88 + £9.38 = £56.25 (the Grand total is what was agreed, but clearly the net & vat are misleading and don't add up exactly) I want to round down the net to £46.87 EDIT: What about a possible alternative to stop rounding altogether and just display to 2 decimal places? My Grand Total is calculating correctly anyway as it pulls from source rather than adding rounded subtotals. If I prevent the subtotals from rounding at all then they should display e.g. 1.23 and cut off .xx5xxx?
Assuming you're talking SQL Server: ``` DECLARE @val AS money SET @val = 5.555 --SET @val = 5.556 SELECT CASE WHEN (CAST(FLOOR(@val*1000) as int) % 10) <= 5 THEN FLOOR(@val*100) / 100 ELSE CEILING(@val*100) / 100 END ```
For MySQL use `select TRUNCATE(44.875, 2);` For SQL Server use `select ROUND(44.875, 2, 1)` A good trick is to multiply by 100, take the integer part and divide that by 100 (in SQL Server `select FLOOR(44.875 * 100)/100`) **Update:** On re-reading the question I saw that x.xx5 should round down and x.xx6 should round up, so I add this to the answer (example in SQL Server but it should not be much different in other DBs): ``` select CASE WHEN FLOOR(44.875 * 1000)-FLOOR(44.875 * 100)*10 <= 5 THEN FLOOR(44.875 * 100)/100 ELSE ROUND(44.875, 2) END ```
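The same third-digit test as the T-SQL above, sketched in Python with `decimal` to avoid floating-point surprises (the function name is made up for the demo, and it assumes positive values):

```python
import math
from decimal import Decimal

def round_down_on_5(value):
    """Round to 2 dp, but truncate when the third decimal digit is 0-5."""
    d = Decimal(str(value))
    third_digit = int(d * 1000) % 10
    if third_digit <= 5:
        return math.floor(d * 100) / 100   # e.g. 1.235 -> 1.23
    return math.ceil(d * 100) / 100        # e.g. 1.236 -> 1.24

results = [round_down_on_5(x) for x in (1.234, 1.235, 1.236)]
print(results)  # [1.23, 1.23, 1.24]
```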
SQL Rounding down if x.xx5
[ "", "sql", "rounding", "" ]
I was wondering if it would be possible to write this without a join, using something like the IN keyword; if so, would it be better than what I wrote below? ``` SELECT DISTINCT vendor_name FROM vendors JOIN invoices ON vendors.vendor_id = invoices.vendor_id ORDER BY vendor_name; ```
The following query might be a more efficient way of running the query: ``` SELECT vendor_name FROM vendors v WHERE EXISTS (SELECT 1 FROM invoices i WHERE v.vendor_id = i.vendor_id ); ``` For best performance, you would want an index on `invoices(vendor_id)` and `vendors(vendor_name, vendor_id)`. First, I prefer the `join` version because I think it is more clearly written. But, this has the following advantages: 1. It eliminates the `select distinct` (by assuming that vendor names are already unique). That saves an aggregation for running the distinct. 2. Without the aggregation, the query can process the vendor ids in order using the index on `vendors`. 3. `exists` often has better performance characteristics than `in`. As a note, this won't have the best performance when the `invoices` table is much smaller than `vendors`.
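A small runnable version of the EXISTS form, using sqlite as a stand-in (table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vendors (vendor_id INTEGER, vendor_name TEXT);
    CREATE TABLE invoices (invoice_id INTEGER, vendor_id INTEGER);
    INSERT INTO vendors VALUES (1, 'Acme'), (2, 'Globex'), (3, 'Initech');
    INSERT INTO invoices VALUES (10, 1), (11, 1), (12, 3);
""")

# No DISTINCT needed: each vendor row is emitted at most once
names = [row[0] for row in conn.execute("""
    SELECT vendor_name
    FROM vendors v
    WHERE EXISTS (SELECT 1 FROM invoices i WHERE i.vendor_id = v.vendor_id)
    ORDER BY vendor_name
""")]
print(names)  # ['Acme', 'Initech'] -- Globex has no invoices
```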
Although a join is the preferred approach, converting it to `IN` would look like this: ``` SELECT DISTINCT vendor_name FROM vendors WHERE vendor_id IN (select vendor_id from invoices) ORDER BY vendor_name; ```
Any way to write this without using a join
[ "", "sql", "oracle", "" ]
I have a history table ('property\_histories') that logs events in our property management system. These events can be used to determine whether a given property was available to rent and I am trying to build a (weekly) summary of 'live' properties. The 4 events in question are 'published', 'unpublished', 'hidden\_from\_search' and 'unhidden\_from\_search'. For a property to be live it must have been: * Published. * If it has ever been unpublished, a subsequent published event must be the most recent. * If it has ever been hidden\_from\_search, a subsequent 'unhidden\_from\_search' event must have taken place more recently. Most properties will have a simple history that most likely consists of a single 'Published' event, but some are more complicated; an example is here: ``` property_histories ---------------------------- id | property_id | City | status | date 1 | 325407 | Paris | published | 2014-01-01 2 | 325407 | Paris | hidden_from_search | 2014-01-24 3 | 325407 | Paris | unhidden_from_search | 2014-02-05 4 | 325407 | Paris | unpublished | 2014-02-15 5 | 410008 | London | published | 2014-01-01 6 | 410008 | London | unpublished | 2014-01-10 7 | 410008 | London | published | 2014-01-18 ``` --- My aim is to be able to count 'live' properties by week: ``` weekly_count ---------------------------- Year | Week | City | Live_Count 2014 | 1 | Paris | 0 2014 | 1 | London | 0 2014 | 2 | Paris | 1 2014 | 2 | London | 1 2014 | 3 | Paris | 1 2014 | 3 | London | 0 2014 | 4 | Paris | 1 2014 | 4 | London | 1 2014 | 5 | Paris | 0 2014 | 5 | London | 1 2014 | 6 | Paris | 0 2014 | 6 | London | 1 2014 | 7 | Paris | 1 2014 | 7 | London | 0 2014 | 8 | Paris | 0 2014 | 8 | London | 1 2014 | 9 | Paris | 0 2014 | 9 | London | 1 ---------------------------- ``` Help appreciated!!
Your own test results don't match what you're asking for. You state the live count is by week, which means London should be live in week #1 as it was published in week #1 and then unpublished in week #2. Assuming week starts on a Sunday (sql default) then this will work. Just put in your own date range, and replace my numbers table with yours. If you need Monday to be your start date, use this at the top of your query ``` SET DATEFIRST 1 ``` Emulating your test: ``` -- Create dummy data CREATE TABLE #property_histories ( id int, property_id int, City varchar(50), status varchar(50), date date ) INSERT INTO #property_histories SELECT 1 , 325407 , 'Paris' , 'published' , '2014-01-01' UNION ALL SELECT 2 , 325407 , 'Paris' , 'hidden_from_search' , '2014-01-24' UNION ALL SELECT 3 , 325407 , 'Paris' , 'unhidden_from_search' , '2014-02-05' UNION ALL SELECT 4 , 325407 , 'Paris' , 'unpublished' , '2014-02-15' UNION ALL SELECT 5 , 410008 , 'London' , 'published' , '2014-01-01' UNION ALL SELECT 6 , 410008 , 'London' , 'unpublished' , '2014-01-10' UNION ALL SELECT 7 , 410008 , 'London' , 'published' , '2014-01-18' ``` Now the code: ``` -- TODO: Set your date range DECLARE @SD Datetime = '2014-01-01' DECLARE @ED Datetime = '2014-12-31' DECLARE @Wks INT = Datediff(week,@SD,@ED) -- Don't change this -- Generate dates table SELECT NumberID as 'Week', DATEADD(DAY, 1-DATEPART(WEEKDAY, DateAdd(week,NumberID-1,@SD)), DateAdd(week,NumberID-1,@SD)) as 'WeekStart', DATEADD(DAY, 7-DATEPART(WEEKDAY, DateAdd(week,NumberID-1,@SD)), DateAdd(week,NumberID-1,@SD)) as 'WeekEnd' INTO #Dates FROM Generic.tblNumbers -- TODO: use your own Numbers table here WHERE NumberID BETWEEN 1 AND @Wks -- Now generate report SELECT T.Year, T.Week, T.City, SUM(CASE WHEN PH1.status = 'published' THEN 1 WHEN PH1.status = 'unhidden_from_search' THEN 1 ELSE 0 END) as 'Live_Count' FROM #Dates D1 LEFT JOIN -- Get latest date per week (SELECT YEAR(D.WeekStart) as 'Year', D.Week, PH.City, PH.property_ID, MAX(PH.date) as 
MaxDate FROM #Dates D LEFT JOIN #property_histories PH ON PH.date BETWEEN @SD AND D.WeekEnd GROUP BY D.WeekStart, D.Week, D.WeekEnd, PH.City, PH.property_id ) T ON T.Week = D1.Week LEFT JOIN #property_histories PH1 ON PH1.City = T.City AND PH1.property_id = T.property_id AND PH1.date = T.MaxDate GROUP BY T.Year, T.Week, T.City ``` To break down the logic: Firstly I'm creating a helper table with week number, week start and week end dates. Week start is largely redundant but might come in handy for reporting. I then subquery to get the latest date relevant for each week / city / property. For this "max" date, city and property I get the status, and if it's live, I sum it. So in layman's terms: get the *latest* status per city per property per week and SUM(if live). Unlike the other answers posted, this solution caters for gaps in data. If the latest status recorded for a city and property was actually all the way back to week 1, it still works in any subsequent week.
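Both answers to this question ultimately hinge on "the most recent event decides the state". That core step can be sketched on its own with sqlite as a stand-in (simplified schema, invented rows, a fixed cut-off date):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE property_histories (property_id INTEGER, status TEXT, date TEXT);
    INSERT INTO property_histories VALUES
        (325407, 'published',   '2014-01-01'),
        (325407, 'unpublished', '2014-02-15'),
        (410008, 'unpublished', '2014-01-10'),
        (410008, 'published',   '2014-01-18');
""")

# Latest status per property as of a cut-off date (ISO dates compare as text)
latest = conn.execute("""
    SELECT property_id, status
    FROM property_histories ph
    WHERE date = (SELECT MAX(date) FROM property_histories
                  WHERE property_id = ph.property_id AND date <= '2014-03-01')
    ORDER BY property_id
""").fetchall()
print(latest)  # 325407 ends unpublished, 410008 ends published
```

Repeating this check once per week-end date is what both full solutions above automate.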
I have a feeling I have missed a simpler way to do this. However the following query uses 2 sub queries. The first gets all the published / unpublished ranges for a property (i.e., the smallest unpublished date following a published date), while the 2nd does the same for properties being hidden from search. These are then joined to properties on the property id, where the current date is within the range returned by the sub queries. The WHERE clause then checks that a record is matched for published and not found for the hidden sub queries. Had to use DISTINCT as otherwise the multiple published dates for a single unpublish would trigger duplicate property rows being returned. ``` SELECT DISTINCT properties.* FROM properties INNER JOIN ( SELECT a.property_id, a.created_at AS start_date, IFNULL(MIN(b.created_at), NOW()) AS end_date FROM property_histories a LEFT OUTER JOIN property_histories b ON a.property_id = b.property_id AND a.created_at < b.created_at WHERE a.status = 'published' AND b.status = 'unpublished' GROUP BY a.property_id, a.created_at ) published ON properties.property_id = published.property_id AND NOW() BETWEEN published.start_date AND published.end_date LEFT OUTER JOIN ( SELECT a.property_id, a.created_at AS start_date, MIN(b.created_at) AS end_date FROM property_histories a LEFT OUTER JOIN property_histories b ON a.property_id = b.property_id AND a.created_at < b.created_at WHERE a.status = 'hidden_from_search' AND b.status = 'unhidden_from_search' GROUP BY a.property_id, a.created_at ) hidden ON properties.property_id = hidden.property_id AND NOW() BETWEEN hidden.start_date AND hidden.end_date WHERE published.property_id IS NOT NULL AND hidden.property_id IS NULL ```
MySQL SELECT Query to Turn History into Weekly Summary Over Time
[ "", "mysql", "sql", "" ]
In an SQL query I need to get some data only from the current month. To get a specific month, I would do this. ``` select (case when date_part('month', date) = 3 and date_part('year', date) = 2014 then amount end) ,(case when date_part('month', date) = 3 and date_part('year', date) = 2013 then amount end) from <table> ``` My main purpose is to get the current month for this year, and the same month from the previous year. This is what I would try ``` select current_month(case when date_part('year', date) = 2014 then amount end) as current ,(case when current_month and date_part('year', date) = 2013 then amount end) as last_year ```
You can use `now()` or `current_timestamp` to get the current time, then the current month, year, and last year: ``` select (case when date_part('month', date) = extract(month from now()) and date_part('year', date) = extract(year from now()) then amount end) ,(case when date_part('month', date) = extract(month from now()) and date_part('year', date) = extract(year from now())-1 then amount end) from <table> ```
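The same month/year tests can be checked quickly in sqlite with `strftime` (a fixed reference date stands in for `now()` so the run is reproducible; Postgres users would keep `extract`/`date_part` as above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (d TEXT, amount REAL);
    INSERT INTO t VALUES ('2014-03-05', 10), ('2013-03-09', 7), ('2014-01-02', 99);
""")

ref = '2014-03-15'  # stands in for now()
this_month, same_month_last_year = conn.execute("""
    SELECT SUM(CASE WHEN strftime('%Y-%m', d) = strftime('%Y-%m', :ref)
               THEN amount END),
           SUM(CASE WHEN strftime('%m', d) = strftime('%m', :ref)
                AND CAST(strftime('%Y', d) AS INTEGER) =
                    CAST(strftime('%Y', :ref) AS INTEGER) - 1
               THEN amount END)
    FROM t
""", {"ref": ref}).fetchone()
print(this_month, same_month_last_year)  # 10.0 and 7.0
```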
Using comment feedback: ``` select (case when date_part('month', date) = date_part('month',current_Timestamp) and date_part('year', date) = date_part('Year',current_TimeStamp) then amount end) ,(case when date_part('month', date) = date_part('month',current_Timestamp) and date_part('year', date) = date_part('Year',current_TimeStamp)-1 then amount end) from <table> ``` And an example fiddle: sqlfiddle.com/#!15/e495f/1/0
Getting the current month?
[ "", "sql", "postgresql", "date", "" ]
I'm trying to write a query that gives me the count of records where execoffice\_status=1 (could equal =0 too). I want to output the results by using a different table, employee, which gives me their names. The query I wrote seems to give me some results but gives me all the records in the table, even where execoffice\_status=0 (not sure how I would add that to the query). What I'm trying to get out of the query is the count of records where execoffice\_status=1 and from what year (execoffice\_date); what I would eventually like from the query is the top 10 from each year (ordered by year). With the query below I get all the records, even where execoffice\_status=0: ``` SELECT * FROM ( select ROW_NUMBER() OVER(PARTITION BY e.emp_namelast order by year(c.execoffice_date) desc ) as RowNum, year(c.execoffice_date) as year, e.emp_nameFirst + ' ' + e.emp_namelast as fullname, count(c.execoffice_status) as stars from phonelist.dbo.employee e join intranet.dbo.CSEReduxResponses c on c.employee = e.emp_id group by emp_namelast, emp_namefirst, year(c.execoffice_date) ) a order by year ``` Here is a <http://sqlfiddle.com/#!3/79f253/1> that I made with some dummy data.
For the first bit of your question you can simply add a where clause. ``` where c.execoffice_status=1 ``` To get the top values for each year, `Rank` can accomplish this: ``` SELECT * FROM ( select RANK() OVER(PARTITION BY year(c.execoffice_date) order by e.emp_namelast desc ) as Rank, year(c.execoffice_date) as year, e.emp_nameFirst + ' ' + e.emp_namelast as lastName, sum(c.execoffice_status) as stars from employee e join CSEReduxResponses c on c.employee = e.emp_id where c.execoffice_status=1 group by emp_namelast, emp_namefirst, year(c.execoffice_date) ) a where rank <= 2 order by year ``` ## [fiddle](http://sqlfiddle.com/#!3/6e071/7) This numbers the users by their stars and gives you the top 2 for each year. (for 10 just `<= 10`)
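A compact, runnable version of the rank-then-filter pattern above (sqlite 3.25+ stands in here for SQL Server's window functions; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stars (year INTEGER, name TEXT, star_count INTEGER);
    INSERT INTO stars VALUES
        (2013, 'Ann', 5), (2013, 'Bob', 3), (2013, 'Cid', 1),
        (2014, 'Ann', 2), (2014, 'Bob', 6);
""")

# Rank within each year, then keep only the top 2 per partition
top2 = conn.execute("""
    SELECT year, name, star_count
    FROM (SELECT *, RANK() OVER (PARTITION BY year
                                 ORDER BY star_count DESC) AS rnk
          FROM stars)
    WHERE rnk <= 2
    ORDER BY year, rnk
""").fetchall()
print(top2)
```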
Here is how I understood your requirements: get the count of records for each employee per year where execoffice\_status is 1. If execoffice\_status can only be one or zero, you can use sum and group by; if it can be another number other than one or zero, then we would need to use rownum and count instead of sum and group by. Let me know if this does what you want. ``` select * from( select a.employee,a.execoffice_date, SUM(a.execoffice_status) execoffice_status_count from CSEReduxResponses a group by a.employee,execoffice_date ) a left outer join employee b on b.emp_id = a.employee where EXECOFFICE_STATUS_COUNT > 0 order by execoffice_date desc; ``` Also, if you want to get the top 10 rows, with SQL Server you do `SELECT TOP 10 field1, field2, field3 FROM table`.
How to output results by order of year and count of records?
[ "", "sql", "sql-server-2008", "" ]
I have these `SQL` queries: ``` select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 100 and amount < 100000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 100000 and amount < 250000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 250000 and amount < 500000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 500000 and amount < 1000000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 1000000 and amount < 2500000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 2500000 and amount < 5000000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 5000000 and amount < 10000000 and p_date = '2014-06-12' select count(*) as count1, sum(amount) as amount1 from v_purchase where amount >= 10000000 and p_date = '2014-06-12' ``` Is there a way of combining these queries into one and executing it as a single query? The results can then be separated later in code.
**UPDATED** Update based on discussion for a range of dates. I have created a [SQL Fiddle for you](http://www.sqlfiddle.com/#!6/98989/5) ``` select SUM(Case When amount >= 100 and amount < 100000 Then 1 else 0 End) as band1Count, SUM(Case When amount >= 100000 and amount < 250000 Then 1 else 0 End) as band2Count, SUM(Case When amount >= 250000 and amount < 500000 Then 1 else 0 End) as band3Count, SUM(Case When amount >= 500000 and amount < 1000000 Then 1 else 0 End) as band4Count, SUM(Case When amount >= 1000000 and amount < 2500000 Then 1 else 0 End) as band5Count, ... SUM(Case When amount >= 100 and amount < 100000 Then amount else 0 End) as band1Sum, SUM(Case When amount >= 100000 and amount < 250000 Then amount else 0 End) as band2Sum, SUM(Case When amount >= 250000 and amount < 500000 Then amount else 0 End) as band3Sum, SUM(Case When amount >= 500000 and amount < 1000000 Then amount else 0 End) as band4Sum, SUM(Case When amount >= 1000000 and amount < 2500000 Then amount else 0 End) as band5Sum, ... from v_purchase where p_date between '2014-06-10' and '2014-06-12' ``` ---
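The conditional-aggregation pattern above collapses all eight scans into one pass over the table. A scaled-down runnable check (sqlite as a stand-in, invented rows, two bands instead of eight):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE v_purchase (amount INTEGER, p_date TEXT);
    INSERT INTO v_purchase VALUES
        (500, '2014-06-12'), (900, '2014-06-12'),
        (150000, '2014-06-12'), (150, '2014-06-11');
""")

# One pass: each CASE picks out the rows belonging to its band
row = conn.execute("""
    SELECT SUM(CASE WHEN amount >= 100 AND amount < 100000 THEN 1 ELSE 0 END),
           SUM(CASE WHEN amount >= 100 AND amount < 100000 THEN amount ELSE 0 END),
           SUM(CASE WHEN amount >= 100000 THEN 1 ELSE 0 END),
           SUM(CASE WHEN amount >= 100000 THEN amount ELSE 0 END)
    FROM v_purchase
    WHERE p_date = '2014-06-12'
""").fetchone()
print(row)  # (2, 1400, 1, 150000)
```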
Write the `UNION` keyword between those queries.
Combining SQL queries into one
[ "", "sql", "sql-server", "" ]
Assuming a simple query such as: **select name, role, placeOfWork, startDate, endDate from SampleTable** which displays the name of employees, the role they've occupied at a workplace from a start date to an end date. End dates are null when the job assignment is current. I have a resultset for such query where I get returned such sample: ``` Jack Cook Jimmy's Burger Joint 01-01-2010 21-01-2010 Jack Cook Jimmy's Burger Joint 21-01-2010 31-03-2010 Jack Cook Jimmy's Burger Joint 31-03-2010 24-12-2010 Ronald Marketing McDonald's 01-01-2010 22-01-2010 Ronald Marketing McDonald's 22-01-2010 06-06-2010 Ronald Marketing McDonald's 06-06-2010 NULL Jack Cosmonaut NASA 01-01-2011 NULL ... ``` I would like to aggregate job assignments into "single conceptual ones", e.g.: ``` Jack Cook Jimmy's Burger Joint 01-01-2010 24-12-2010 Ronald Marketing McDonald's 01-01-2010 NULL Jack Cosmonaut NASA 01-01-2011 NULL ... ``` As much as possible I would like to avoid temp tables as I need the query to run from various places. I could not work it out using either an inner join or a group by.
I would approach this with simple logic. An assignment starts when there is no overlap with the previous assignment. In that case, we can assign a value to each assignment which is the number of assignments in the past. This is easiest with `lag()` and cumulative sum. Here is a version without those: ``` with stp as ( select name, role, placeOfWork, startDate, endDate, (case when exists (select 1 from SampleTable st2 where st2.name = st.name and st2.role = st.role and st2.placeOfWork = st.placeOfWork and st2.endDate = st.StartDate ) then 0 else 1 end) as PeriodStart from SampleTable st ), stpg as ( select stp.*, (select sum(PeriodStart) from stp stp2 where stp2.name = stp.name and stp2.role = stp.role and stp2.placeOfWork = stp.placeOfWork and stp2.StartDate <= stp.StartDate ) as grp from stp ) select name, role, placeOfWork, min(StartDate) as StartDate, max(endDate) as endDate from stpg group by grp, name, role, placeOfWork; ```
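A runnable miniature of the same start-flag plus cumulative-sum grouping, using sqlite (3.25+ for the window SUM) as a stand-in; the names and dates are invented, and open-ended NULL rows are omitted so MAX stays well-defined:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs (name TEXT, start_date TEXT, end_date TEXT);
    INSERT INTO jobs VALUES
        ('Jack', '2010-01-01', '2010-01-21'),
        ('Jack', '2010-01-21', '2010-03-31'),
        ('Jack', '2010-03-31', '2010-12-24'),
        ('Ron',  '2011-01-01', '2011-02-01'),
        ('Ron',  '2011-03-01', '2011-04-01');
""")

merged = conn.execute("""
    WITH flagged AS (
        -- a row starts a new island if no row for the same name ends where it begins
        SELECT j.*, CASE WHEN EXISTS (SELECT 1 FROM jobs j2
                                      WHERE j2.name = j.name
                                        AND j2.end_date = j.start_date)
                    THEN 0 ELSE 1 END AS is_start
        FROM jobs j
    ), grouped AS (
        -- running count of starts labels each island
        SELECT *, SUM(is_start) OVER (PARTITION BY name
                                      ORDER BY start_date) AS grp
        FROM flagged
    )
    SELECT name, MIN(start_date), MAX(end_date)
    FROM grouped GROUP BY name, grp ORDER BY name, 2
""").fetchall()
print(merged)  # Jack's three contiguous rows merge; Ron's gap keeps two rows
```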
My approach to this would be to first expand out your ranges into rows (using a numbers or calendar table), so this row for example: ``` StartDate | Enddate ------------+------------ 2010-01-01 | 2010-03-01 ``` Becomes ``` Date ------------ 2010-01-01 2010-01-02 2010-01-03 ``` Since a lot of the date functions are DBMS specific, I am using SQL-Server specific syntax, but this should be readily adaptable to sybase (which I am not familiar with at all), this will expand a simple table of just start and end date to all the dates in the range: ``` SELECT DATEADD(DAY, n.Number, t.StartDate) AS Date FROM T INNER JOIN Numbers n ON DATEADD(DAY, n.Number, t.StartDate) <= t.EndDate ``` Now you have a set than can be solved using [Gaps and Islands Logic](http://www.manning.com/nielsen/SampleChapter5.pdf). After expanding your range you then need to identify the gaps and islands, to do this I am using DENSE\_RANK which is supported in both sybase and SQL Server. This gives the column `GroupingSet` in the below. The final step is then to just aggregate based on your islands: ``` WITH Expanded AS ( SELECT Name, Job, Company, StartDate, DATEADD(DAY, n.Number, t.StartDate) AS Date, CASE WHEN EndDate IS NULL THEN 1 ELSE 0 END AS EndDateIsNull FROM T INNER JOIN Numbers n ON DATEADD(DAY, n.Number, t.StartDate) <= ISNULL(t.EndDate, t.StartDate) ), Grouped AS ( SELECT Name, Job, Company, Date, DATEADD(DAY, -DENSE_RANK() OVER(PARTITION BY Name, Job, Company ORDER BY Date), Date) AS GroupingSet, EndDateIsNull FROM Expanded ) SELECT Name, Job, Company, MIN(Date) AS StartDate, CASE WHEN MAX(EndDateIsNull) = 0 THEN MAX(Date) END AS EndDate FROM Grouped GROUP BY Name, Job, Company, GroupingSet ORDER BY Name, Job, StartDate; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/84c1d/3)**
Aggregating groups of date ranges
[ "", "sql", "" ]
I have a database table ('map') with the following columns: * ptID (uniquely identifies patients) * uniid (uniqueidentifier for every row) * time (have a new row every 1 minute) * mapval (data point of interest) * validate (0 or 1) What I would like is a table with the preceding and following 30 mapval every time validate = 1. These 61 rows should come from the table 'map' and include all rows (i.e. when validate = 0 or 1, not just 1). I have tried queries using lag/lead, but am having no luck. Just FYI, once I have this table I plan to do some basic things to the data. Like an avg, median (with percentile\_cont) and mode (with an ordered count). At this point I can easily do the AVG with the table as is using the following: ``` SELECT [ptID],[time],[mapval],[validate], IIF([validate]=1, AVG(CAST ([mapval] AS decimal)) OVER( PARTITION BY [ptID] ORDER BY [ptID] ASC, [time] ASC ROWS BETWEEN 30 PRECEDING AND 30 FOLLOWING ) ,NULL) AS 'sixytminavg' ``` But unfortunately cannot do the median nor mode. Longwinded, but I hope it gives all the information. Thanks in advance!
Let me focus on this: > What I would like is a table with the preceding and following 30 > mapval every time validate = 1. These 61 rows should come from the > table 'map' and include all rows (i.e. when validate = 0 or 1, not > just 1). You can get the 30 rows after each validate -- assuming there is no other validate -- by breaking the rows into groups. The idea is to assign a validate group to each row. And, you can do this by counting the number of times that validate is 1 before each row. Then take the first 31 rows: ``` select m.* from (select m.*, row_number() over (partition by grp order by time) as seqnum from (select m.*, sum(validate) over (partition by ptId order by time) as grp from map m ) m ) m where seqnum <= 31; ``` You can do this for before and after groups at the same time: ``` select m.* from (select m.*, row_number() over (partition by grp_after order by time) as seqnum_after, row_number() over (partition by grp_before order by time desc) as seqnum_before from (select m.*, sum(validate) over (partition by ptId order by time) as grp_after, sum(validate) over (partition by ptId order by time desc) as grp_before from map m ) m ) m where seqnum_after <= 31 or seqnum_before <= 31; ``` EDIT: If the validates are too close together, you can use a `join` approach instead: ``` select m.* from (select m.*, row_number() over (partition by ptid order by time) as seqnum from map m where validate = 1 ) v join (select m.*, row_number() over (partition by ptid order by time) as seqnum from map m ) m on v.ptid = m.ptid and m.seqnum between v.seqnum - 30 and v.seqnum + 30; ```
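The join variant scales down nicely for a quick check: here ±2 rows instead of ±30, in sqlite (3.25+ for window functions), with one validate row in a 10-row series:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE map (ptID INTEGER, time INTEGER, mapval INTEGER, validate INTEGER)")
conn.executemany("INSERT INTO map VALUES (1, ?, ?, ?)",
                 [(t, t * 10, 1 if t == 5 else 0) for t in range(1, 11)])

window = [row[0] for row in conn.execute("""
    WITH numbered AS (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY ptID ORDER BY time) AS seqnum
        FROM map
    )
    SELECT m.time
    FROM numbered v
    JOIN numbered m
      ON m.ptID = v.ptID
     AND m.seqnum BETWEEN v.seqnum - 2 AND v.seqnum + 2
    WHERE v.validate = 1
    ORDER BY m.time
""")]
print(window)  # [3, 4, 5, 6, 7] -- two rows either side of the validate row
```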
Off the top of my head, maybe something like the following? ``` SELECT m.ptid, m.uniid, m.time, m.mapval, m.validate FROM map m INNER JOIN (SELECT uniid FROM map WHERE validate=1) AS valids ON m.uniid BETWEEN valids.uniid-30 AND valids.uniid+30 ORDER BY m.uniid ```
SQL Create table from surrounding rows
[ "", "sql", "t-sql", "" ]
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script will communicate with the hardware via bacnet IP and log diagnostic info in an SQL database. The nodejs JavaScript server code will respond to webfront requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is.. Is this possible? My second and more abstract questions would be... is this wise? And, is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else?
**tl;dr** You can use any programming language that provides a client for the database server of your choice. To the database server, as long as the client is communicating as per the server's requirements (that is, it is using the server's library, protocol, etc.), it makes no difference what programming language or system is being used. --- The database drivers provide a common abstract layer, providing a guarantee that the database server and the client are speaking the same language. The programming language's interface to the database driver takes care of the language specifics - for example, providing syntax that conforms to the language; and on the opposite side the driver will ensure that all commands are sent in the protocol that the server expects. Since drivers are such a core requirement, there are usually multiple drivers available for databases; and also because good database access is a core requirement for programmers, each language strives to have a "standard" API for all databases. For example, Java has JDBC, Python has the DB-API, and .NET has ODBC (and ADO I believe, but I am not a .NET expert). These are what the database drivers will conform to, so that it doesn't matter which database server you are using, you have one standard way to connect, one standard way to execute queries and one standard way to fetch results - in effect, making your life as a programmer easier. In most cases, there is a reference driver (and API/library) provided by the database vendor. It is usually in C, and it is also what the "native" client to the database uses. For example, the `mysql` *client* for the MySQL database server uses the MySQL C driver to connect, and it is the same driver that is used by the Python MySQLdb driver, which conforms to the Python DB-API.
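The point that the server only sees connections, not languages, can be felt even with sqlite's file databases: two completely independent connections (standing in here for the Python logger and the nodejs web server) share one database through the DB-API. A sketch with invented table names:

```python
import os
import sqlite3
import tempfile

# A file-backed database that both "programs" can open independently
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

writer = sqlite3.connect(path)           # e.g. the hardware-logging service
writer.execute("CREATE TABLE diagnostics (reading REAL)")
writer.execute("INSERT INTO diagnostics VALUES (21.5)")
writer.commit()

reader = sqlite3.connect(path)           # e.g. the web-facing process
value = reader.execute("SELECT reading FROM diagnostics").fetchone()[0]
print(value)  # 21.5

writer.close()
reader.close()
os.remove(path)
```

With a client/server database like MySQL or Postgres the two processes would connect over a socket instead, but the principle is identical: each side just needs a driver.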
Yes its possible. Two applications with different languages using one database is almost exactly the same as one application using several connections to it, so you are probably already doing it. All the possible problems are exactly the same. The database won't even know whether the connections are made from one application or the other.
Can two programs, written in different languages, connect to the same SQL database?
[ "", "sql", "node.js", "python-2.7", "" ]
**Question:** *What should I take into account when make performance comparison of those two queries?* **Queries:** ``` SELECT id FROM table WHERE id = ( SELECT table_id_fk FROM another_table WHERE id = @id ) ``` vs ``` SELECT id FROM table t JOIN another_table at ON t.id = at.table_id_fk WHERE at.id = @id ``` Actually they do the same but in other ways. **P.S.** It's not enough to just launch it on my server and look at response time. I want to understand the difference and understand what happens when, for instance, my db will grow up.
This took a minor bit of investigation. I have sympathy with the point above that readability is important, although I find the join readable and sub-queries less so (in this case the sub query is quite simple, so it is not a major issue either way). Normally I would hope that MySQL would manage to optimise a non correlated sub query away and execute it just as efficiently as if it were a join. This sub query at first glance does appear to be non correlated (i.e., the results of it do not depend on the containing query). However, playing on SQL fiddle this doesn't appear to be the case: <http://www.sqlfiddle.com/#!2/7696c/2> Using the sub query the explain says it is an *UNCACHEABLE SUBQUERY*, which from the manual is: *A subquery for which the result cannot be cached and must be re-evaluated for each row of the outer query* Doing much the same sub query by specifying the value rather than passing it in as a variable gives a different explain and just describes it as a *SUBQUERY*. This I suspect is just as efficient as the join. My feeling is that MySQL is confused by the use of the variable, and has planned the query on the assumption that the value of the variable can change between rows. Hence it needs to re-execute the sub query for every row. It hasn't managed to recognise that there is nothing in the query that modifies the value of the variable.
If you want to try yourself here are the details to set up the test:- ``` CREATE TABLE `table` ( id INT, PRIMARY KEY id(id) ); CREATE TABLE another_table ( id INT, table_id_fk INT, PRIMARY KEY id (id), INDEX table_id_fk (table_id_fk) ); INSERT INTO `table` VALUES (1), (2), (3), (4), (5), (6), (7), (8); INSERT INTO another_table VALUES (11,1), (12,3), (13,5), (14,7), (15,9), (16,11), (17,13), (18,15); ``` SQL to execute:- ``` SET @id:=13; SELECT t.id FROM `table` t WHERE id = ( SELECT table_id_fk FROM another_table WHERE id = @id ); SELECT t.id FROM `table` t JOIN another_table at ON t.id = at.table_id_fk WHERE at.id = @id; SELECT t.id FROM `table` t WHERE id = ( SELECT table_id_fk FROM another_table WHERE id = 13 ); EXPLAIN SELECT t.id FROM `table` t WHERE id = ( SELECT table_id_fk FROM another_table WHERE id = @id ); EXPLAIN SELECT t.id FROM `table` t JOIN another_table at ON t.id = at.table_id_fk WHERE at.id = @id; EXPLAIN SELECT t.id FROM `table` t WHERE id = ( SELECT table_id_fk FROM another_table WHERE id = 13 ); ``` Explain results:- ``` ID SELECT_TYPE TABLE TYPE POSSIBLE_KEYS KEY KEY_LEN REF ROWS EXTRA 1 PRIMARY t index (null) PRIMARY 4 (null) 8 Using where; Using index 2 UNCACHEABLE SUBQUERY another_table const PRIMARY PRIMARY 4 const 1 ID SELECT_TYPE TABLE TYPE POSSIBLE_KEYS KEY KEY_LEN REF ROWS EXTRA 1 SIMPLE at const PRIMARY,table_id_fk PRIMARY 4 const 1 1 SIMPLE t const PRIMARY PRIMARY 4 const 1 Using index ID SELECT_TYPE TABLE TYPE POSSIBLE_KEYS KEY KEY_LEN REF ROWS EXTRA 1 PRIMARY t const PRIMARY PRIMARY 4 const 1 Using index 2 SUBQUERY another_table const PRIMARY PRIMARY 4 1 ```
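Whatever plan the optimizer picks, the two query forms must agree on results. A quick equivalence check with sqlite as a stand-in (its optimizer differs from MySQL's, but its `EXPLAIN QUERY PLAN` can be inspected the same way as the EXPLAINs above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY);
    CREATE TABLE another_table (id INTEGER PRIMARY KEY, table_id_fk INTEGER);
    INSERT INTO t VALUES (1), (3), (5), (7);
    INSERT INTO another_table VALUES (11, 1), (12, 3), (13, 5);
""")

target = 13
sub = conn.execute(
    "SELECT id FROM t WHERE id = "
    "(SELECT table_id_fk FROM another_table WHERE id = ?)",
    (target,)).fetchall()
join = conn.execute(
    "SELECT t.id FROM t JOIN another_table at_ ON t.id = at_.table_id_fk "
    "WHERE at_.id = ?",
    (target,)).fetchall()
print(sub, join)  # both [(5,)]
```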
Performance is not always the main concern when writing SQL queries. I would choose the first query for readability. "Give me the record of table that is referenced by the record of another\_table with @id". This is straight-forward and easy to read. As to performance: you access one record by primary key to access another record (in another table) by primary key. This can hardly get any faster. The second statement joins both tables to get to the id (the selected id lacks the qualifier t, by the way). So it does the same, but is not obvious at first glance. "Join both tables, but limit this to the another\_table record with @id and give me table's id". This means the same, but gives the dbms the freedom to choose how to execute it. It could for instance join all records first and then remove all where @id doesn't match. However a good dbms won't do this; it will create the same execution plan as for statement 1. Good dbms detect situations like these, re-write queries internally, find out that the queries mean the same and come to the same execution plan. This is getting better and better, but it doesn't always work perfectly. So sometimes it does matter how you write the statement. When statements become more complex, then sometimes the second syntax to join everything and filter what you need leads to better execution plans. Unfortunately. So you often have to decide between readability and performance. I usually write my queries as readable as possible and change them only when it comes to performance problems.
MySQL: JOIN vs WHERE performance
[ "", "mysql", "sql", "query-performance", "" ]
I have one question. I was creating the procedure below temporarily. When I execute it in the format below, it works fine: ``` CREATE PROCEDURE Get_TableList_By_Name @Proc_Name VARCHAR(255) AS BEGIN SELECT * FROM sys.tables WHERE name LIKE '%' + @Proc_Name + '%' END GO EXEC Get_TableList_By_Name 'norway' GO DROP PROCEDURE Get_TableList_By_Name GO ``` But when I execute the same SQL in the format below, it gives an error saying: "Incorrect syntax near 'GO'." ``` CREATE PROCEDURE Get_TableList_By_Name @Proc_Name VARCHAR(255) AS BEGIN SELECT * FROM sys.tables WHERE name LIKE '%' + @Proc_Name + '%' END GO EXEC Get_TableList_By_Name 'norway' GO DROP PROCEDURE Get_TableList_By_Name GO CREATE PROCEDURE Get_TableList_By_Name @Proc_Name VARCHAR(255) AS BEGIN SELECT * FROM sys.tables WHERE name LIKE '%' + @Proc_Name + '%' END GO 1 EXEC Get_TableList_By_Name 'norway' GO 1 DROP PROCEDURE Get_TableList_By_Name GO 1 ``` How can I write the same SQL with the GO statement on a single line? Is it possible? If not, why not? Thanks, Vishal
From [GO (Transact-SQL)](http://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k%28GO_TSQL%29;k%28SQL11.SWB.TSQLRESULTS.F1%29;k%28SQL11.SWB.TSQLQUERY.F1%29;k%28MISCELLANEOUSFILESPROJECT%29;k%28DevLang-TSQL%29&rd=true) > A Transact-SQL statement cannot occupy the same line as a GO command. > However, the line can contain comments. So Go needs to be on its own line, except for comments.
'GO' is not a SQL command. It is a batch terminator recognized by tools like Query Analyzer, SSMS, SQLCMD, etc. These tools generally require the GO to be on a separate line and send the preceding SQL statements as a batch to SQL Server when the GO is encountered. A GO statement must be written on a new line, as it is not a T-SQL command; a T-SQL statement cannot occupy the same line as GO. A GO statement line can, however, contain comments.
Single line GO(Batch) statement giving error in SQL Server?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have following table: ![enter image description here](https://i.stack.imgur.com/iwV6W.png) Find each country that belongs to a continent where all populations are less than 25000000. ``` SELECT name, continent, population FROM world WHERE continent IN (SELECT continent FROM world t WHERE 25000000 > ALL (SELECT population FROM world WHERE continent = t.continent)) ``` Can I write this in a better way using a keyword or function that wouldn't require so many levels of nesting?
You are looking for continents where all countries have a population less then 25,000,000. So group by continents and keep those where even the maximum population doesn't exceed this amount. ``` SELECT name, continent, population FROM world WHERE continent IN ( SELECT continent FROM world GROUP BY continent HAVING MAX(population) < 25000000 ); ```
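The HAVING MAX approach can be checked with a few made-up rows — country names invented, SQLite via Python for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE world (name TEXT, continent TEXT, population INTEGER)")
cur.executemany("INSERT INTO world VALUES (?, ?, ?)", [
    ("Bigland",   "Asia",    90000000),   # disqualifies all of Asia
    ("Smalltown", "Oceania",  4000000),
    ("Tinyisle",  "Oceania",  1000000),
])

rows = cur.execute("""
    SELECT name, continent, population FROM world
    WHERE continent IN (
        SELECT continent FROM world
        GROUP BY continent
        HAVING MAX(population) < 25000000)
    ORDER BY name
""").fetchall()
print(rows)  # only the Oceania rows survive
```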
``` SELECT w1.name,w1.continent,w1.population FROM world w1 INNER JOIN ( SELECT continent, SUM(population) as continent_population FROM world GROUP BY continent ) w2 ON w2.continent = w1.continent WHERE w2.continent_population < 25000000 ``` .. or .. ``` SELECT name,continent,population FROM world w1 WHERE EXISTS ( SELECT 1 FROM world w2 WHERE w2.continent = w1.continent GROUP BY continent HAVING SUM(population) < 25000000 ) ```
How can I rewrite this query to be more readable?
[ "", "mysql", "sql", "" ]
Couple of databases produced an error this morning whilst running in Single User Mode. Due to the following error I am unable to do anything :( ``` Msg 1205, Level 13, State 68, Line 1 Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. ``` I receive that error when trying the following (using the Master Database as a Sys Admin): ``` ALTER DATABASE dbname SET MULTI_USER; GO ``` For the sake of it I have tried Restarting the SQL Server, I have tried killing any processes and I have even tried resetting the single user myself: ``` ALTER DATABASE dbname SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO ``` The job which was running was designed to copy the database and put it in single user mode immediately to try and make it faster. Anyway I can remove the locks?
Ok, I will answer my own question. I had to use the following: ``` sp_who ``` which displayed details of the currently connected users and sessions, I then remembered about Activity Monitor which shows the same sort of stuff...Anyway that led me away from my desk to some bugger who had maintained connections to the database against my wishes... Anyway once I had shut the PC down (by unplugging it...deserved it) I could then run the SQL to amend it into `MULTI_USER` mode (using system admin user): ``` USE Master GO ALTER DATABASE dbname SET MULTI_USER; GO ``` FYI for those who care, this can be used to immediately set the DB to `SINGLE_USER`: ``` ALTER DATABASE dbname SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO ``` Further details, if you know the process id you can use `kill pid`: ``` kill 62 ``` Bear in mind SSMS creates a process for your user as well; in my case this was being rejected due to another. EDIT: As per Bobby's recommendations we can use: ``` sp_Who2 ``` This can show us which process is blocked by the other process.
Had the same problem. This worked for me: ``` set deadlock_priority high; -- could also try "10" instead of "high" (5) alter database dbname set multi_user; -- can also add "with rollback immediate" ``` From ideas/explanation: [http://myadventuresincoding.wordpress.com/2014/03/06...](https://myadventuresincoding.wordpress.com/2014/03/06/sql-server-alter-database-in-single-user-mode-to-multi-user-mode/) [http://www.sqlservercentral.com/blogs/pearlknows/2014/04/07/...](http://www.sqlservercentral.com/blogs/pearlknows/2014/04/07/help-i-m-stuck-in-single-user-mode-and-can-t-get-out/)
SQL deadlocking..in single user mode now
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have the following table ``` CREATE TABLE Test (`Id` int, `value` varchar(20), `adate` varchar(20)) ; INSERT INTO Test (`Id`, `value`, `adate`) VALUES (1, 100, '2014-01-01'), (1, 200, '2014-01-02'), (1, 300, '2014-01-03'), (2, 200, '2014-01-01'), (2, 400, '2014-01-02'), (2, 30 , '2014-01-04'), (3, 800, '2014-01-01'), (3, 300, '2014-01-02'), (3, 60 , '2014-01-04') ; ``` I want to achieve the result which selects only Id having max value of date. ie Id ,value ,adate ``` 1, 300,'2014-01-03' 2, 30 ,'2014-01-04' 3, 60 ,'2014-01-04' ``` how can I achieve this using `group by`? I have done as follows but it is not working. ``` Select Id,value,adate from Test group by Id,value,adate having adate = MAX(adate) ``` Can someone help with the query?
If you are using a DBMS that has analytical functions you can use ROW\_NUMBER: ``` SELECT Id, Value, ADate FROM ( SELECT ID, Value, ADate, ROW_NUMBER() OVER(PARTITION BY ID ORDER BY Adate DESC) AS RowNum FROM Test ) AS T WHERE RowNum = 1; ``` Otherwise you will need to use a join to the aggregated max date by Id to filter the results from `Test` to only those where the date matches the maximum date for that Id ``` SELECT Test.Id, Test.Value, Test.ADate FROM Test INNER JOIN ( SELECT ID, MAX(ADate) AS ADate FROM Test GROUP BY ID ) AS MaxT ON MaxT.ID = Test.ID AND MaxT.ADate = Test.ADate; ```
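The join-to-aggregated-max variant works on any engine; here it is run against the question's sample data (SQLite via Python; ISO date strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Test (Id INTEGER, value TEXT, adate TEXT)")
cur.executemany("INSERT INTO Test VALUES (?, ?, ?)", [
    (1, '100', '2014-01-01'), (1, '200', '2014-01-02'), (1, '300', '2014-01-03'),
    (2, '200', '2014-01-01'), (2, '400', '2014-01-02'), (2, '30',  '2014-01-04'),
    (3, '800', '2014-01-01'), (3, '300', '2014-01-02'), (3, '60',  '2014-01-04'),
])

rows = cur.execute("""
    SELECT t.Id, t.value, t.adate
    FROM Test t
    INNER JOIN (SELECT Id, MAX(adate) AS adate FROM Test GROUP BY Id) m
      ON m.Id = t.Id AND m.adate = t.adate
    ORDER BY t.Id
""").fetchall()
print(rows)  # one row per Id, carrying the latest adate
```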
Select the maximum dates for each id. ``` select id, max(adate) max_date from test group by id ``` Join on that to get the rest of the columns. ``` select t1.* from test t1 inner join (select id, max(adate) max_date from test group by id) t2 on t1.id = t2.id and t1.adate = t2.max_date; ```
how to use SQL group to filter rows with maximum date value
[ "", "sql", "group-by", "having-clause", "" ]
Is there a way for me to show a range of dates using a select statement in SQL, from a From date to a To date? Like if I select the date Jan. 15, 2013 as From and Jan. 20, 2013 as To, the query will show the following: ``` DATE 2013/01/15 12:00 2013/01/16 12:00 2013/01/17 12:00 2013/01/18 12:00 2013/01/19 12:00 2013/01/20 12:00 ``` Is this possible?
A better approach would be to write something as: ``` DECLARE @from DATE, @to DATE; SELECT @from = '2013-01-15' , @to = '2013-01-20'; SELECT DATEADD(DAY, n-1, @from) FROM ( SELECT TOP (DATEDIFF(DAY, @from, @to)+1) ROW_NUMBER() OVER (ORDER BY s1.[object_id]) FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2 ) AS x(n); ``` Check [Demo](http://rextester.com/JGLNP57037) here.
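The sys.all_objects cross join is a SQL Server-specific row generator; a recursive CTE does the same job portably. A sketch in SQLite via Python (the `date()` built-in is SQLite's; other engines spell date arithmetic differently):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
dates = conn.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT date('2013-01-15')
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2013-01-20'
    )
    SELECT d FROM days
""").fetchall()
print(dates)  # six rows, 2013-01-15 through 2013-01-20
```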
Use `BETWEEN...AND`: ``` SELECT DateCol FROM TableName WHERE DateCol BETWEEN @FromDate AND @ToDate ORDER BY DateCol ```
How can I select dates in SQL server to show?
[ "", "sql", "sql-server", "" ]
I'm trying to create a new interface for a database but I don't know how to do what I want. I have 3 tables : ``` - table1(id1, time, ...) id11 .. id12 .. id13 .. - table2(id2, price, ...) id21 .. id22 .. id23 .. - table1_table2(#id1, #id2, value) id11, id22, 6 id11, id23, 10 id13, id22, 5 ``` So I want to have something like this : ``` id11, id21, 0 id11, id22, 6 id11, id23, 10 id12, id21, 0 id12, id22, 0 id12, id23, 0 id13, id21, 0 id13, id22, 5 id13, id23, 0 ``` I've tried lots of queries but nothing efficient.. Please, help me ^^ EDIT : I'm using Access ( :'( ) 2007, and apparently, it doesn't support CROSS JOIN... I tried to use this : <http://blog.jooq.org/2014/02/12/no-cross-join-in-ms-access/> but I still have a syntax error on the JOIN or the FROM.. EDIT 2 : Here is my query (I'm French, so don't mind the names please ^^) ``` SELECT Chantier.id_chantier, Indicateur.id_indicateur, Indicateur_chantier.valeur FROM ((Chantier INNER JOIN Indicateur ON (Chantier.id_chantier*0 = Indicateur.id_indicateur*0)) LEFT JOIN Indicateur_chantier ON ( (Chantier.id_chantier = Indicateur_chantier.id_chantier) AND (Indicateur.id_indicateur = Indicateur_chantier.id_indicateur) ) ) ```
You should first cross join table1 and table2 to produce their Cartesian product and then left join to get the values where they exist: ``` SELECT t1.id1, t2.id2, ISNULL(t12.value, 0) FROM table1 t1 CROSS JOIN table2 t2 LEFT JOIN table1_table2 t12 ON t12.id1 = t1.id1 AND t12.id2 = t2.id2 ``` Finally, use ISNULL to replace null values with zeros.
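The same cross-join-plus-left-join shape can be run on the question's sample data; a sketch in SQLite via Python (ISNULL becomes the portable COALESCE there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (id1 TEXT);
CREATE TABLE table2 (id2 TEXT);
CREATE TABLE table1_table2 (id1 TEXT, id2 TEXT, value INTEGER);
INSERT INTO table1 VALUES ('id11'), ('id12'), ('id13');
INSERT INTO table2 VALUES ('id21'), ('id22'), ('id23');
INSERT INTO table1_table2 VALUES ('id11','id22',6), ('id11','id23',10), ('id13','id22',5);
""")

rows = cur.execute("""
    SELECT t1.id1, t2.id2, COALESCE(t12.value, 0)
    FROM table1 t1
    CROSS JOIN table2 t2
    LEFT JOIN table1_table2 t12
      ON t12.id1 = t1.id1 AND t12.id2 = t2.id2
    ORDER BY t1.id1, t2.id2
""").fetchall()
print(rows)  # 9 rows: every (id1, id2) pair, with 0 where no value exists
```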
Answer may vary by database, this works in SQL Server, you need a `CROSS JOIN` to get every combination of `table1` and `table2`, then a `LEFT JOIN` to return pairs with values: ``` SELECT a.id1, b.id2, COALESCE(c.value,0) FROM table1 a CROSS JOIN table2 b LEFT JOIN table3 c ON a.id1 = c.id1 AND b.id2 = c.id2 ``` Pairs without values would return `NULL`, so you can use `COALESCE()` to return 0 instead. Demo: [SQL Fiddle](http://sqlfiddle.com/#!6/9e85fd/1/0)
LEFT JOIN on 3 tables to get a value
[ "", "sql", "ms-access-2007", "" ]
I have a table `Item` with autoinc int primary key `Id` and a foreign key `UserId`. And I have a table `User` with autoinc int primary key `Id`. Default is that the index for `Item.Id` gets clustered. I will mostly query items on `user-id` so my question is: Would it be better to set the `UserId` foreign key index to be clustered instead?
Having the clustered index on the identity field has the advantage that the records will be stored in the order that they are created. New records are added at the end of the table. If you use the foreign key as clustered index, the records will be stored in that order instead. When you create new records the data will be fragmented as records are inserted in the middle, which can reduce performance. If you want an index on the foreign key, then just add a non-clustered index for it.
The answer depends only on the usage scenario. For example, Guffa says that data will be fragmented. That's wrong. If your queries depend mostly on UserId, then data clustered by ItemId is fragmented for you, because items for the same user may be spread over a lot of pages. Of course, compared to a sequential ItemId (if it is sequential in your schema), using UserId as the clustered key can cause page splits while inserting. This is two additional page writes at maximum. But when you're selecting by some user, his items may be fragmented over tens of pages (depends on items per user, item size, insertion strategy, etc), costing a lot of page reads. If you have a lot of such selects per single insert (very common web/olap scenarios), you can face hundreds of IO operations compared to the few spent on page splitting. That is what the clustered index was created for, not only for clustering by surrogate IDs. So there is no clear answer whether the clustered UserId is good or bad in your case, because this highly depends on context. What is the ratio between select/insert operations? How fragmented are the user ids if clustered by ItemId? How many additional indices are on the table — because there is a pitfall (below) in SQL Server. As you might know, a clustered index requires unique values. This is not a big problem, because you can create the index on the pair (UserId, ItemId). The clustered index isn't itself stored on disk, so it doesn't matter how many fields are in it. But non-clustered indices store the clustered index values in their leaves. So if you have a clustered index on UserId+ItemId (let's imagine their type is [int] and size is 8 bytes) and a non-clustered index on ItemId, then this index will have twice the size (8 bytes per b-tree leaf) compared to just ItemId as the clustered index (4 bytes per leaf).
Clustered index on foreign key or primary key?
[ "", "sql", "sql-server", "clustered-index", "" ]
I have a bunch of invoices in a SQL table which have a field called "Status". There a two possible statuses: "P" or "U". I want to select all invoices however, I want to replace all instances of the "P" with "Posted" and all instances of "U" with "Unapproved". The following query does exactly what I want, but for only one record at a time. ``` Replace(Type, 'U', 'Unapproved') ``` How can I get the "Replace" function to work in such a way that all records in the column are replaced with either "Posted" or "Unapproved"? For example, I want to change this table: ``` +-----------+--------+ | Invoice # | Status | +-----------+--------+ | 00001921 | P | | 00001932 | P | | 0001937 | P | | 00890483 | U | | 00902945 | U | | 08MAY14 | P | | 1012 | P | | 10619 | P | | 10620 | P | | 10633 | P | | 10641 | P | | 130 | P | | 1303 | P | | 1307 | P | | 217731 | U | | 2714 | U | | 2720 | P | | 6523 | P | | 712 | P | +-----------+--------+ ``` Into this: ``` +-----------+------------+ | Invoice # | Status | +-----------+------------+ | 1921 | Posted | | 1932 | Posted | | 1937 | Posted | | 890483 | Unapproved | | 902945 | Unapproved | | 8-May-14 | Posted | | 1012 | Posted | | 10619 | Posted | | 10620 | Posted | | 10633 | Posted | | 10641 | Posted | | 130 | Posted | | 1303 | Posted | | 1307 | Posted | | 217731 | Unapproved | | 2714 | Unapproved | | 2720 | Posted | | 6523 | Posted | | 712 | Posted | +-----------+------------+ ```
Use these 2 statements: ``` Update table set status = 'Unapproved' where status = 'U' ``` and ``` Update table set status = 'Posted' where status = 'P' ``` If you only want to change how the values are displayed rather than the stored data, it sounds like you want a select statement with a case in it: ``` SELECT CASE WHEN status = 'P' THEN 'Posted' WHEN status = 'U' THEN 'Unapproved' END FROM [table] ```
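The SELECT-with-CASE variant (which leaves the stored data untouched) can be checked quickly; SQLite via Python, with a few of the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (invoice TEXT, status TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [("00001921", "P"), ("00890483", "U"), ("217731", "U")])

rows = cur.execute("""
    SELECT invoice,
           CASE status WHEN 'P' THEN 'Posted'
                       WHEN 'U' THEN 'Unapproved' END AS status
    FROM orders
    ORDER BY invoice
""").fetchall()
print(rows)  # every P shown as Posted, every U as Unapproved
```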
``` UPDATE SampleTable SET [Status] = CASE [Status] WHEN 'P' THEN 'Posted' WHEN 'U' THEN 'Unapproved' END ``` **SELECT:** ``` SELECT Invoice, [Status] = CASE [Status] WHEN 'P' THEN 'Posted' WHEN 'U' THEN 'Unapproved' END FROM SampleTable ```
Can the SQL Replace Function be used for multiple values?
[ "", "sql", "function", "replace", "" ]
I have the following problem: I want to have a **Composite Primary Key** like: ``` PRIMARY KEY (`base`, `id`); ``` for which, when I insert a `base`, the id is auto-incremented based on the previous `id` for the same `base`. Example: ``` base id A 1 A 2 B 1 C 1 ``` Is there a way, when I say `INSERT INTO table(base) VALUES ('A')`, to insert a new record with `id` 3 because that is the next id for `base` 'A'? The resulting table should be: ``` base id A 1 A 2 B 1 C 1 A 3 ``` Is it possible to do it in the DB directly, since if done programmatically it could cause race conditions? **EDIT** The `base` currently represents a company, and the `id` represents an invoice number. There should be auto-incrementing invoice numbers for each company, but there could be cases where two companies have invoices with the same number. Users logged in with a company should be able to sort, filter and search by those invoice numbers.
Ever since someone posted a similar question, I've been pondering this. The first problem is that DBs don't provide "partitionable" sequences (that would restart/remember based on different keys). The second is that the `SEQUENCE` objects that *are* provided are geared around fast access, and can't be rolled back (ie, you *will* get gaps). This essentially rules out using a built-in utility... meaning we have to roll our own. The first thing we're going to need is a table to store our sequence numbers. This can be fairly simple: ``` CREATE TABLE Invoice_Sequence (base CHAR(1) PRIMARY KEY CLUSTERED, invoiceNumber INTEGER); ``` In reality the `base` column should be a foreign-key reference to whatever table/id defines the business(es)/entities you're issuing invoices for. In this table, you want entries to be unique per issued-entity. Next, you want a stored proc that will take a key (`base`) and spit out the next number in the sequence (`invoiceNumber`). The set of keys necessary will vary (ie, some invoice numbers must contain the year or full date of issue), but the base form for this situation is as follows: ``` CREATE PROCEDURE Next_Invoice_Number @baseKey CHAR(1), @invoiceNumber INTEGER OUTPUT AS MERGE INTO Invoice_Sequence Stored USING (VALUES (@baseKey)) Incoming(base) ON Incoming.base = Stored.base WHEN MATCHED THEN UPDATE SET Stored.invoiceNumber = Stored.invoiceNumber + 1 WHEN NOT MATCHED BY TARGET THEN INSERT (base, invoiceNumber) VALUES (@baseKey, 1) OUTPUT INSERTED.invoiceNumber ; ``` Note that: 1. You **must** run this in a serialized transaction 2. The transaction **must** be the same one that's inserting into the destination (invoice) table. That's right, you'll still get blocking per-business when issuing invoice numbers. You **can't** avoid this if invoice numbers must be sequential, with no gaps - until the row is actually committed, it might be rolled back, meaning that the invoice number wouldn't have been issued.
Now, since you don't want to have to remember to call the procedure for the entry, wrap it up in a trigger: ``` CREATE TRIGGER Populate_Invoice_Number ON Invoice INSTEAD OF INSERT AS DECLARE @invoiceNumber INTEGER BEGIN EXEC Next_Invoice_Number Inserted.base, @invoiceNumber OUTPUT INSERT INTO Invoice (base, invoiceNumber) VALUES (Inserted.base, @invoiceNumber) END ``` (obviously, you have more columns, including others that should be auto-populated - you'll need to fill them in) ...which you can then use by simply saying: ``` INSERT INTO Invoice (base) VALUES('A'); ``` So what have we done? Mostly, all this work was about shrinking the number of rows locked by a transaction. Until this `INSERT` is committed, there are only two rows locked: * The row in `Invoice_Sequence` maintaining the sequence number * The row in `Invoice` for the new invoice. All other rows for a particular `base` are free - they can be updated or queried at will (deleting information out of this kind of system tends to make accountants nervous). You probably need to decide what should happen when queries would normally include the pending invoice...
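MERGE and T-SQL triggers are SQL Server-specific, but the one-counter-row-per-key idea itself is portable. A sketch of the same pattern using SQLite's UPSERT (available from SQLite 3.24) via Python — table and function names invented for illustration, and without the serialized-transaction machinery the answer insists on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE invoice_sequence (base TEXT PRIMARY KEY, n INTEGER)")

def next_invoice_number(base):
    # First number for a new base, otherwise bump the stored counter.
    cur.execute("""
        INSERT INTO invoice_sequence (base, n) VALUES (?, 1)
        ON CONFLICT(base) DO UPDATE SET n = n + 1
    """, (base,))
    return cur.execute(
        "SELECT n FROM invoice_sequence WHERE base = ?", (base,)).fetchone()[0]

seq = [next_invoice_number(b) for b in ("A", "A", "B", "A")]
print(seq)  # each base gets its own independent sequence
```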
You can use a BEFORE INSERT trigger and assign the next value by taking MAX(id) with a filter on "base", which is "A" in this case. That will give you the MAX(id) value as 2; then increment it to MAX(id)+1 and push the new value into the "id" field before the insert. I think this may help you. MSSQL Triggers: <http://msdn.microsoft.com/en-in/library/ms189799.aspx>
SQL Server Unique Composite Key of Two Field With Second Field Auto-Increment
[ "", "sql", "sql-server", "unique", "auto-increment", "" ]
My data looks like: ``` ABCAbbz XXZxxz ZAAAZa Xaaaab ``` I need them into two columns: ``` ABCA bbz XXZ xxz AAAZ a X aaaab ``` I tried something with `COLLATE Latin1_General_BIN LIKE '%[a-z]'` but I don't know how to split. (`AAxAx` will never happen.)
You can do this with the `PATINDEX()` function and forcing it to be case sensitive: ``` SELECT SUBSTRING( t.MyColumn ,0 ,PATINDEX('%[a-z]%', t.MyColumn COLLATE Latin1_General_BIN)) AS uppercase ,SUBSTRING( t.MyColumn ,PATINDEX('%[a-z]%', t.MyColumn COLLATE Latin1_General_BIN) ,LEN(t.MyColumn)) AS lowercase FROM MyTable t ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!3/684c9/2)
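The core of both answers is "split at the first lower-case character"; the same logic is easy to mirror outside the database, e.g. in Python, which can double as a reference when testing the SQL:

```python
import re

def split_case(s):
    """Split at the first lower-case letter, like PATINDEX('%[a-z]%', ...)."""
    m = re.search(r"[a-z]", s)
    if m is None:
        return s, ""          # no lower-case part at all
    return s[:m.start()], s[m.start():]

pairs = [split_case(s) for s in ("ABCAbbz", "XXZxxz", "AAAZa", "Xaaaab")]
print(pairs)  # matches the question's expected two-column output
```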
Try this: <http://sqlfiddle.com/#!3/4b857a/7> ``` SELECT LEFT(ColumnA, PATINDEX('%[abcdefghijklmnopqrstuvwxyz]%' COLLATE Latin1_General_CS_AS,ColumnA)-1), SUBSTRING(ColumnA, PATINDEX('%[abcdefghijklmnopqrstuvwxyz]%' COLLATE Latin1_General_CS_AS,ColumnA),LEN(ColumnA)) FROM Table1 ```
How to split a varchar like 'ABCxxz' into an upper case 'ABC' and a lower case 'xxz'?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I don't know why this query takes 45 min to give me the result. table1 = 831,293 rows; table2 = 2,000,000 rows. ``` SELECT ID, name FROM table1 WHERE name not IN (SELECT name FROM table2 ) LIMIT 831293 ``` My my.cnf file looks like this : ``` max_allowed_packet = 64M innodb_buffer_pool_size = 8G innodb_log_file_size = 256M query_cache_type = 1 query_cache_limit = 1M ```
Do you have an index on name? If you have to loop through table2 once per row of table1, you effectively examine 831293\*2000000 row combinations, which is a lot. When you have an index on name, you can get a major performance increase, as it is much cheaper to look up a name in the index (typically O(log n) with a B-tree index). You can do this as follows ``` ALTER TABLE `table2` ADD INDEX (`name`) ```
This select is very expensive because, in the worst case, for every record in table1 you must go through 2,000,000 records in table2. I am not a MySQL specialist, but on the Oracle platform I would consider indexing and tablespaces for table2. BTW, the LIMIT part is useless.
MYSQL (NOT IN) query is very slow
[ "", "mysql", "sql", "performance", "" ]
I have a query that I built using an answer found [here](https://stackoverflow.com/questions/20556819/sql-how-do-i-query-for-re-admissions-in-tsql/20557509#20557509) and it was very helpful. I have added some things to it to suit my needs. One of the things that I added was a `ROW_NUMBER()` in order to count how many times someone has been readmitted within 30 days over any time length. I have inserted the `cte` results into a temp table as suggested in the first answer and by a question [that was posted here](https://stackoverflow.com/questions/23026240/insert-into-table-variable-cte). This does not solve thought, the sequence length and sequence count issue. This is the query: ``` -- CREATE TABLE TO STORE CTE RESULTS DECLARE @PPR TABLE( VISIT1 VARCHAR(20) -- THIS IS A UNIQUE VALUE , READMIT VARCHAR(20) -- THIS IS A UNIQUE VALUE , MRN VARCHAR(10) -- THIS IS UNIQUE TO A PERSON , INIT_DISC DATETIME , RA_ADM DATETIME , R1 INT , R2 INT , INTERIM1 VARCHAR(20) , RA_COUNT INT , FLAG VARCHAR(2) ); -- THE CTE THAT WILL GET USED TO POPULATE THE ABOVE TABLE WITH cte AS ( SELECT PTNO_NUM , Med_Rec_No , Dsch_Date , Adm_Date , ROW_NUMBER() OVER ( PARTITION BY MED_REC_NO ORDER BY PtNo_Num ) AS r FROM smsdss.BMH_PLM_PtAcct_V WHERE Plm_Pt_Acct_Type = 'I' AND PtNo_Num < '20000000' ) -- INSERT CTE RESULTS INTO PPR TABLE INSERT INTO @PPR SELECT c1.PtNo_Num AS [INDEX] , c2.PtNo_Num AS [READMIT] , c1.Med_Rec_No AS [MRN] , c1.Dsch_Date AS [INITIAL DISCHARGE] , c2.Adm_Date AS [READMIT DATE] , C1.r , C2.r , DATEDIFF(DAY, c1.Dsch_Date, c2.Adm_Date) AS INTERIM1 , ROW_NUMBER() OVER ( PARTITION BY C1.MED_REC_NO ORDER BY C1.PTNO_NUM ASC ) AS [RA COUNT] , CASE WHEN DATEDIFF(DAY, c1.Dsch_Date, c2.Adm_Date) <= 30 THEN 1 ELSE 0 END [FLAG] FROM cte C1 INNER JOIN cte C2 ON C1.Med_Rec_No = C2.Med_Rec_No WHERE C1.Adm_Date <> C2.Adm_Date AND C1.r + 1 = C2.r ORDER BY C1.Med_Rec_No, C1.Dsch_Date -- MANIPULATE PPR TABLE SELECT PPR.VISIT1 , PPR.READMIT , PPR.MRN , PPR.INIT_DISC , PPR.RA_ADM --, PPR.R1 --, 
PPR.R2 , PPR.INTERIM1 --, PPR.RA_COUNT , PPR.FLAG -- THE BELOW DOES NOT WORK AT ALL , CASE WHILE (SELECT PPR.INTERIM1 FROM @PPR PPR) <= 30 BEGIN ROW_NUMBER() OVER (PARTITION BY PPR.MRN, PPR.VISIT1 ORDER BY PPR.VISIT1 ) IF (SELECT PPR.INTERIM1 FROM @PPR PPR) > 30 BREAK END END FROM @PPR PPR WHERE PPR.MRN = 'A NUMBER' ``` Example of current output: ``` INDEX | READMIT | MRN | INIT DISCHARGE | RA DATE | INTERIM | RACOUNT | FLAG | FLAG_2 12345 | 12349 | 123 | 2005-07-05 | 2005-07-09| 4 | 1 | 1 | 0 12349 | 12351 | 123 | 2005-07-11 | 2005-07-15| 4 | 2 | 1 | 0 ``` So the third line is obviously not a readmit in 30 days but just a point in time where the patient came back to the hospital so the RA\_Count goes back to 1 and the flag goes to 0 because it is not a 30day readmit. Should I create a table instead of using a `cte`? What I would like to add is a Chain Length and a Chain Count. Here are some definitions: Chain Length: How many times in a row has someone been readmitted within 30 days of subsequent visits. For example ``` INDEX | READMIT | MRN | INITIAL DISCHARGE | READMIT DATE | CHAIN LEN | Count 123 | 133 | 1236 | 2009-05-13 | 2009-06-12 | 1 | 1 133 | 145 | 1236 | 2009-06-16 | 2009-07-04 | 2 | 1 145 | 157 | 1236 | 2009-07-06 | 2009-07-15 | 3 | 1 165 | 189 | 1236 | 2011-01-01 | 2011-01-12 | 1 | 2 189 | 195 | 1236 | 2011-02-06 | 2011-03-01 | 2 | 2 ``` Chain count would then be how many chains are there: so in the above table there would be 2. I am trying to use the `case` statement to make the chain length Here is an SQL Fiddle with some sample data as it will appear before the `CTE` is executed [SQL Fiddle](http://sqlfiddle.com/#!3/9f807/1) Thank you,
**Update #1: Two events are linked if maximum difference between them is 30 days. [COUNT] values are generated per person.** You could adapt following example which use a [recursive common table expression](http://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx): ``` CREATE TABLE dbo.Events ( EventID INT IDENTITY(1,1) PRIMARY KEY, EventDate DATE NOT NULL, PersonID INT NOT NULL ); GO INSERT dbo.Events (EventDate, PersonID) VALUES ('2014-01-01', 1), ('2014-01-05', 1), ('2014-02-02', 1), ('2014-03-30', 1), ('2014-04-04', 1), ('2014-01-11', 2), ('2014-02-02', 2), ('2014-01-03', 3), ('2014-03-03', 3); GO DECLARE @EventsWithNum TABLE ( EventID INT NOT NULL, EventDate DATE NOT NULL, PersonID INT NOT NULL, EventNum INT NOT NULL, PRIMARY KEY (EventNum, PersonID) ); INSERT @EventsWithNum SELECT crt.EventID, crt.EventDate, crt.PersonID, ROW_NUMBER() OVER(PARTITION BY crt.PersonID ORDER BY crt.EventDate, crt.EventID) AS EventNum FROM dbo.Events crt; WITH CountingSequentiaEvents AS ( SELECT crt.EventID, crt.EventDate, crt.PersonID, crt.EventNum, 1 AS GroupNum, 1 AS GroupEventNum FROM @EventsWithNum crt WHERE crt.EventNum = 1 UNION ALL SELECT crt.EventID, crt.EventDate, crt.PersonID, crt.EventNum, CASE WHEN DATEDIFF(DAY, prev.EventDate, crt.EventDate) <= 30 THEN prev.GroupNum ELSE prev.GroupNum + 1 END AS GroupNum, CASE WHEN DATEDIFF(DAY, prev.EventDate, crt.EventDate) <= 30 THEN prev.GroupEventNum + 1 ELSE 1 END AS GroupEventNum FROM @EventsWithNum crt JOIN CountingSequentiaEvents prev ON crt.PersonID = prev.PersonID AND crt.EventNum = prev.EventNum + 1 ) SELECT x.EventID, x.EventDate, x.PersonID, x.GroupEventNum AS [CHAIN LEN], x.GroupNum AS [Count] FROM CountingSequentiaEvents x ORDER BY x.PersonID, x.EventDate -- 1000 means 1000 + 1 = maximum 1001 events / person OPTION (MAXRECURSION 1000); -- Please read http://msdn.microsoft.com/en-us/library/ms175972.aspx (section Guidelines for Defining and Using Recursive Common Table Expressions) ``` Output: ``` EventID 
EventDate PersonID CHAIN LEN Count ------- ---------- -------- --------- ----- 1 2014-01-01 1 1 1 2 2014-01-05 1 2 1 3 2014-02-02 1 3 1 ------- ---------- -------- --------- ----- 4 2014-03-30 1 1 2 5 2014-04-04 1 2 2 ------- ---------- -------- --------- ----- 6 2014-01-11 2 1 1 7 2014-02-02 2 2 1 ------- ---------- -------- --------- ----- 8 2014-01-03 3 1 1 ------- ---------- -------- --------- ----- 9 2014-03-03 3 1 2 ------- ---------- -------- --------- ----- ``` As you can see ![enter image description here](https://i.stack.imgur.com/tI9HJ.png) the execution plan contains, for the last statement, two `Index Seek` operators because this constraint `PRIMARY KEY (EventNum, PersonID)` defined on `@EventsWithNum` forces SQL Server to create (in this case) a clustered index with a compound key `EventNum, PersonID`. Also, we can see that the estimate cost for `INSERT @EventsWithNum ...` is greater than the estimated cost for `WITH CountingSequentiaEvents (...) SELECT ... FROM CountingSequentiaEvents ...`.
The solution is to start with a view that summarizes the data for each VisitID. I use a view for this instead of CTE because it seems like something you are going to us in more than 1 time. ``` create view vReadmits as select t.VisitID, t.UID, min(r.AdmitDT) ReadmittedDT, min(r.VisitID) NextVisitID, sum(case when r.AdmitDT < dateadd(d, 30, isnull(t.DischargeDT, t.AdmitDT)) then 1 else 0 end) ReadmitNext30 from t left join t as r on t.UID = r.UID and t.VisitID < r.VisitID group by t.VisitID, t.UID ``` This takes each VisitID and finds the next VisitID for that UID. At the same time it sums up the future visits that are less then 30 days. It uses ISNULL() to account for the missing DischargeDTs. You can then add the logic for Chains in a CTE. Then you can join to the view and CTE to include the columns in the view. ``` with Chains as ( select v.UID, sum(case when r.ReadmittedDT < dateadd(d, 30, v.ReadmittedDT) then 0 else 1 end) as ChainCount from vReadmits v left join vReadmits r on r.NextVisitID = v.VisitID group by v.UID ) select t.UID, t.VisitId, t.AdmitDT, t.DischargeDT, v.NextVisitID, v.ReadmitNext30, v.ReadmittedDT, c.ChainCount from t join vReadmits v on t.VisitID = v.VisitID inner join Chains c on v.UID = c.UID order by t.UID, t.VisitID ``` Here is the [SQLFiddle](http://sqlfiddle.com/#!3/4f72f/9/0) The assumption I made was that if the VisitID was greater than another, then its AdmitDT would be greater too. This should be the case (especially for the same UID), but if not you would change the view to use AdmitDTs instead of VisitID.
Counting sequential events and counts of sequences SQL
[ "", "sql", "sql-server-2008-r2", "parent-child", "common-table-expression", "row-number", "" ]
If I understand this correctly then specifying an empty key on a table will ensure that it will only contain zero or exactly one row. Is it possible to specify an empty key in SQL?
You could make your primary key a computed column with a constant value: ``` CREATE TABLE T ( ID AS 1, SomeField VARCHAR(100), CONSTRAINT PK_T_ID PRIMARY KEY (ID) ); INSERT T (SomeField) VALUES ('Test'); INSERT T (SomeField) VALUES ('Test2'); ``` The first insert will work, but the second will throw the error: > Violation of PRIMARY KEY constraint 'PK\_T\_ID'. Cannot insert duplicate key in object 'dbo.T'. The duplicate key value is (1). This will ensure your table contains 0 or 1 rows.
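SQLite has no computed primary keys, but the same zero-or-one-row guarantee can be sketched there with a constant-checked key — an illustrative adaptation, not the answer's exact mechanism:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE singleton (
        id INTEGER PRIMARY KEY CHECK (id = 1),  -- the key can only ever be 1
        some_field TEXT
    )
""")

cur.execute("INSERT INTO singleton (id, some_field) VALUES (1, 'Test')")
try:
    # A second row must reuse id = 1 and so violates the primary key.
    cur.execute("INSERT INTO singleton (id, some_field) VALUES (1, 'Test2')")
    second_insert_failed = False
except sqlite3.IntegrityError:
    second_insert_failed = True

row_count = cur.execute("SELECT COUNT(*) FROM singleton").fetchone()[0]
print(second_insert_failed, row_count)
```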
To answer your actual question, "Is it possible to specify an empty key in SQL" : no. Just like it is impossible to define tables with no columns (not the same issue, but tangential). Tricks and hacks like in GarethD's answer will be necessary.
Is there a way to specify an empty key in SQL?
[ "", "sql", "sql-server", "sql-server-2012", "relational-database", "rdbms", "" ]
I need to select multiple rows in the db (using MySQL). I create the following query by iterating over an array of ids: ``` $ids = array(1,2); $idq = array(); $q = "SELECT * FROM table_name WHERE "; foreach($ids as $id) { $idq[] = "`ID` = '" . $id . "'"; } $q .= implode(" AND ", $idq); SELECT * FROM table_name WHERE `ID` = '1' AND `ID` = '2' ``` For some reason it doesn't seem to work. Is there a better way? When I do: ``` SELECT * FROM table_name WHERE `ID` = '1' ``` it works fine, but when I add the ``AND `ID` = '2'`` it won't work at all.
Use `WHERE IN`: ``` SELECT * FROM table_name WHERE `ID` IN ('1', '2') ```
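On the application side (the question builds its SQL in PHP), the robust version of this is to generate one placeholder per id and pass the values as parameters rather than concatenating them into the string. The same idea sketched in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table_name (ID INTEGER, name TEXT)")
cur.executemany("INSERT INTO table_name VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

ids = [1, 2]
placeholders = ", ".join("?" for _ in ids)   # one "?" per id
rows = cur.execute(
    f"SELECT * FROM table_name WHERE ID IN ({placeholders}) ORDER BY ID",
    ids).fetchall()
print(rows)  # only the rows whose ID is in the list
```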
``` SELECT * FROM table_name WHERE `ID` = '1' AND `ID` = '2' ``` The ID cannot be 1 and 2 at the same time. You should use OR. ``` SELECT * FROM table_name WHERE `ID` = '1' OR `ID` = '2' ``` or ``` SELECT * FROM table_name WHERE `ID` IN (1,2,...) ```
How can i select multiple rows by a certain ids in sql
[ "", "mysql", "sql", "" ]
So right now I have two types of data in my DB table: "Forecast" data and "Actual" data. All the fields for these rows are the same except for the 'Type' field, which indicates whether the data is a "Forecast" or "Actual", and the Quantity field. What I need to do is determine the accuracy of the forecast by dividing the matching rows' Quantity fields. So I would have this **Table Orders** ``` Order No. Delivery Date Quantity(cases) Type 1234 6/20/2014 100 Forecast 1234 6/20/2014 70 Actual ``` Then do a query that returns the Accuracy, so something like... ``` SELECT Order No., Deliverydate FROM orders WHERE Order No. = "1234" then (SubQuery) As Accuracy ``` **Query Result** ``` Order No. Delivery Date Accuracy 1234 6/20/2014 70% ``` So the subquery/queries need to match the forecasts and then return the divided quantity as a new column.
This may be way after the fact... Using the field names in the original question: ``` Select f.orderNo , f.deliveryDate , format(a.quantity/f.quantity, 'Percent') as accuracy From (Select * from tbl_Orders where type='forecast') f Inner Join (Select * from tbl_Orders where type='actual') a on f.orderNo = a.orderNo ``` This was done in MS Access 2010
I made a query for SQL server. Let me know if it works for MS Access - ``` select orderno, deliverydate, Forecast, [Order], convert(decimal(5,2), ([Order]*100.0) / Forecast) as Accuracy from ( select orderno, deliverydate, quantity, [type] from Orders ) as src pivot( max(quantity) for [type] in ([Forecast],[Order]) )as pvt ``` **For MS Access (what I have so far) -** **I have never used MS Access**. But, I tried to make the MS Access query using the example given here - [TRANSFORM and PIVOT in Access 2013 SQL](https://stackoverflow.com/questions/16691853/transform-and-pivot-in-access-2013-sql). Let me know if that works for you. So your actual column names are different from what you gave. The queries based on those columns names are given below. ![enter image description here](https://i.stack.imgur.com/Llccf.jpg) Create a query called queryPivot - ``` TRANSFORM Max(Quantity) SELECT UPC, RDDto FROM order_signals GROUP BY UPC, RDDto PIVOT [Type]; ``` Create another query called querySelectFromPivot - ``` SELECT queryPivot.UPC, queryPivot.RDDto, (queryPivot.Actual * 100.0/ queryPivot.Forecast) as 'Accuracy' FROM queryPivot; ``` This is where I am stuck. It gives me blank values due to the missing values in your sample data. I tried to select just Actual \* 100.0 instead, but that does not give me any decimal values. So, try to find out how to convert a column to decimal form. After that this query should work. (Man...access is so unintuitive for sql folks.)
Match columns with a sub query and return query as column?
[ "", "sql", "ms-access", "" ]
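The accepted self-join approach translates directly outside Access. A hedged sketch with Python's stdlib `sqlite3`, where Access's `format(..., 'Percent')` is replaced by plain percentage arithmetic and the sample data comes from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (orderNo INTEGER, deliveryDate TEXT, quantity REAL, type TEXT);
INSERT INTO orders VALUES
  (1234, '2014-06-20', 100, 'Forecast'),
  (1234, '2014-06-20',  70, 'Actual');
""")

# Split the table into forecast and actual halves, join on the order number,
# and divide the matching quantities.
row = con.execute("""
    SELECT f.orderNo, f.deliveryDate,
           100.0 * a.quantity / f.quantity AS accuracy_pct
    FROM (SELECT * FROM orders WHERE type = 'Forecast') f
    JOIN (SELECT * FROM orders WHERE type = 'Actual') a
      ON f.orderNo = a.orderNo
""").fetchone()

print(row)  # (1234, '2014-06-20', 70.0)
```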
I have a table that has been left-joined from user Ids and another value: ``` + - - - - - - - - - + ¦ Id ¦ Code ¦ + - - - - + - - - - + ¦ 1 ¦ 0 ¦ + - - - - + - - - - + ¦ 1 ¦ 1 ¦ + - - - - + - - - - + ¦ 2 ¦ 2 ¦ + - - - - + - - - - + ¦ 2 ¦ 2 ¦ + - - - - + - - - - + ¦ 3 ¦ 1 ¦ + - - - - + - - - - + ¦ 3 ¦ 1 ¦ + - - - - + - - - - + ``` I'm trying to write a SQL query that will return the Ids that have all Codes the same value. So the `Id`'s where all `Code=1` should return `3` only, and not `1` because it has a `0` `Code` somewhere. I am using this to get all Id's that have all code values the same, except I need to specify only code value of 1: ``` SELECT Id, COUNT(distinct Code) AS CodeGroups FROM @groupedUsersTable GROUP BY Id HAVING COUNT(distinct Code) = 1 ```
Something like this: ``` SELECT ID, MAX(CODE) CODE, COUNT(1) CNT FROM TABLE GROUP BY ID HAVING MAX(CODE) = MIN(CODE) AND MAX(CODE) = 1 ``` Not sure the part after the AND is part of what you are looking for?
Something like this? ``` DECLARE @id INT SET @id = 1 -- Code to search for SELECT DISTINCT Id FROM CodeTable t1 WHERE Code = @id AND NOT EXISTS (SELECT * FROM CodeTable t2 WHERE t2.Id = t1.Id AND t2.Code <> @id) ```
Getting Ids that have value all the same
[ "", "sql", "join", "count", "" ]
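The `HAVING MAX(Code) = MIN(Code)` trick from the accepted answer can be checked against the question's own sample data. A sketch using Python's stdlib `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE grouped (Id INTEGER, Code INTEGER);
INSERT INTO grouped VALUES (1,0),(1,1),(2,2),(2,2),(3,1),(3,1);
""")

# A group whose MAX equals its MIN has all codes identical;
# the extra condition pins that shared code to 1.
ids = con.execute("""
    SELECT Id
    FROM grouped
    GROUP BY Id
    HAVING MAX(Code) = MIN(Code) AND MAX(Code) = 1
""").fetchall()

print(ids)  # [(3,)]
```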
Assume table: ``` MyTable idPrimary idPerson idSchool ----------------------------------- 1 20 6 2 20 3 3 21 2 4 22 6 5 23 6 6 24 3 7 22 7 ``` I would like to find all the persons, who went to school 6 but did not go to school 2 or 3. That means that from the table above, the answer would be students 22 and 23. Student 20 went to school 6 but sadly went to school 3, thereby negating this student. According to the matrix in question [38549](https://stackoverflow.com/questions/38549/difference-between-inner-and-outer-join), I want the LEFT JOIN WHERE B.Key is NULL. (Just out of curiosity is that what is called LEFT OUTER JOIN?) The main formula is: ``` SELECT <SELECT_LIST> FROM TableA.A LEFT JOIN TableB.B ON A.Key = B.Key WHERE B.Key IS NULL; ``` Table A would be: ``` SELECT * FROM `MyTable` WHERE `idSchool` = '6'; ``` Table B would be: ``` SELECT * FROM `MyTable` WHERE `idSchool` = '2' OR `idSchool` = '3'; ``` The resultant table should be: ``` SELECT `idPerson` FROM SELECT * FROM `MyTable` WHERE `idSchool` = '6' LEFT JOIN SELECT * FROM `MyTable` WHERE `idSchool` = '2' OR `idSchool` = '3' ON `idSchool` = `idSchool` WHERE `idSchool` = NULL; ``` Sadly MySQL Workbench throws me an error: ``` Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ``` Near the LEFT JOIN, so basically the SQL engine does not like my TableA. If I wrap TableA in parenthesis, then I get an alias error. What is the proper way to issue this query?
Do a LEFT JOIN, checking for the schools not wanted in the ON clause, and check for no match in the WHERE clause:- ``` SELECT DISTINCT a.idPrimary, a.idPerson, a.idSchool FROM MyTable a LEFT OUTER JOIN MyTable b ON a.idPerson = b.idPerson AND b.idSchool IN (2,3) WHERE a.idSchool = 6 AND b.idSchool IS NULL ```
Use the `AS` keyword for your table aliases: ``` SELECT <SELECT_LIST> FROM TableA AS A LEFT JOIN TableB AS B ON A.Key = B.Key WHERE B.Key IS NULL; ```
SQL LEFT JOIN ON A SINGLE TABLE
[ "", "mysql", "sql", "join", "left-join", "" ]
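The accepted anti-join — a `LEFT JOIN` restricted in the `ON` clause, then filtered on `NULL` — reproduces the expected students 22 and 23 on the question's data. A sketch with Python's stdlib `sqlite3` standing in for MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable (idPrimary INTEGER, idPerson INTEGER, idSchool INTEGER);
INSERT INTO MyTable VALUES
 (1,20,6),(2,20,3),(3,21,2),(4,22,6),(5,23,6),(6,24,3),(7,22,7);
""")

# Keep school-6 rows; the join only matches unwanted schools (2, 3),
# so b.idSchool IS NULL means the person never attended them.
persons = con.execute("""
    SELECT DISTINCT a.idPerson
    FROM MyTable a
    LEFT JOIN MyTable b
      ON a.idPerson = b.idPerson AND b.idSchool IN (2, 3)
    WHERE a.idSchool = 6 AND b.idSchool IS NULL
    ORDER BY a.idPerson
""").fetchall()

print(persons)  # [(22,), (23,)]
```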
I have a table with one column `dateX` formatted as `datetime` and containing standard dates. How can I select all records from this table where this dateX equals a certain date, e.g. May 9, 2014 ? I tried the following, but this returns nothing even if I have several records with this date. ``` SELECT * FROM dbo.LogRequests WHERE (CONVERT(VARCHAR(10), dateX, 101) = '09/05/14') ``` Edit: In the database the above example looks as follows, using SQL 2012: `2014-05-09 00:00:00.000`
The easiest way is to convert to a date: ``` SELECT * FROM dbo.LogRequests WHERE cast(dateX as date) = '2014-05-09'; ``` Often, such expressions preclude the use of an index. However, according to various sources on the web, the above is sargable (meaning it will use an index), such as [this](http://msmvps.com/blogs/robfarley/archive/2010/01/22/sargable-functions-in-sql-server.aspx) and [this](https://dba.stackexchange.com/questions/34047/cast-to-date-is-sargable-but-is-it-a-good-idea). I would be inclined to use the following, just out of habit: ``` SELECT * FROM dbo.LogRequests WHERE dateX >= '2014-05-09' and dateX < '2014-05-10'; ```
For Perfect `DateTime` Match in SQL Server ``` SELECT ID FROM [Table Name] WHERE (DateLog between '2017-02-16 **00:00:00.000**' and '2017-12-16 **23:59:00.999**') ORDER BY DateLog DESC ```
SQL Server: how to select records with specific date from datetime column
[ "", "sql", "sql-server", "datetime", "" ]
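The half-open range form from the accepted answer works in any dialect that compares ISO-formatted date strings. A sketch with Python's stdlib `sqlite3`; the sample timestamps are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE LogRequests (id INTEGER, dateX TEXT);
INSERT INTO LogRequests VALUES
 (1, '2014-05-09 00:00:00.000'),
 (2, '2014-05-09 14:30:00.000'),
 (3, '2014-05-10 00:00:00.000');
""")

# >= start-of-day AND < start-of-next-day catches every time on May 9
# without any conversion on the column itself.
rows = con.execute("""
    SELECT id FROM LogRequests
    WHERE dateX >= '2014-05-09' AND dateX < '2014-05-10'
    ORDER BY id
""").fetchall()

print(rows)  # [(1,), (2,)]
```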
Should we end stored procedures with GO statement, if so what are the advantages of using GO? ``` CREATE PROCEDURE uspGetAddress @City nvarchar(30) AS SELECT * FROM AdventureWorks.Person.Address WHERE City = @City GO ```
The statement `go`, [per the documentation](http://msdn.microsoft.com/en-us/library/ms188037%28v=sql.110%29.aspx) > Signals the end of a batch of Transact-SQL statements to the SQL Server utilities. > > ... > > `GO` is not a Transact-SQL statement; it is a command recognized by the sqlcmd and osql > utilities and SQL Server Management Studio Code editor. > > SQL Server utilities interpret `GO` as a signal that they should send the current batch > of Transact-SQL statements to an instance of SQL Server. The current batch of statements > is composed of all statements entered since the last `GO`, or since the start of the > ad-hoc session or script if this is the first `GO`. > > A Transact-SQL statement cannot occupy the same line as a `GO` command. However, the line > can contain comments. > > Users must follow the rules for batches. For example, any execution of a stored procedure > after the first statement in a batch must include the `EXECUTE` keyword. The scope of > local (user-defined) variables is limited to a batch, and cannot be referenced after a > `GO` command. A stored procedure definition, [per the documentation for `create procedure`](http://msdn.microsoft.com/en-us/library/ms187926%28v=sql.110%29.aspx), comes with restrictions: it must be the first (and only) statement in the batch: > The `CREATE PROCEDURE` statement cannot be combined with other Transact-SQL statements in > a single batch. That means the body of the stored procedure ends with the batch. Adding `GO` in your source file is good practice, especially since it's common to do things prior to and following the creation of a stored procedure. You'll often see source files that look something like this: ``` if (object_id('dbo.foobar') is not null ) drop procedure dbo.foobar GO -- dbo.foobar -------------------------------------------- -- -- This stored procedure does amazing and wonderful things ---------------------------------------------------------- create procedure dbo.foobar as ... {a sequence of amazing and wonderful SQL statements} ... return 0 GO grant execute on dbo.foobar to some_schema GO ``` And the value for `GO` is adjustable in Sql Server Management Studio's options. If you'd like to use something like `jump` instead of `go`, you can (bearing in mind that you're almost certainly going to give yourself grief in doing so). ![options screenshot](https://i.stack.imgur.com/bWNgr.png)
No, you should end your procedure with `RETURN`. ``` CREATE PROCEDURE uspGetAddress @City nvarchar(30) AS SELECT * FROM AdventureWorks.Person.Address WHERE City = @City RETURN ``` The `GO` is really meant to separate commands in a sql script.
Should we end stored procedures with the GO statement?
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I have the following query which displays a table with date: ``` SELECT * FROM [Db].[dbo].[btotals] ORDER BY [Date] DESC ``` Which displays: ``` Date 06/07/2014 05/31/2014 05/24/2014 05/17/2014 05/10/2014 05/03/2014 ``` If I pick SELECT TOP 1 will give me the first row. How can I modify my query so I get the week prior to last week? In this case the `5/31/14` row?
If your dates are always a week apart, and you just want the second row you can use [`ROW_NUMBER()`](http://msdn.microsoft.com/en-GB/library/ms186734.aspx): ``` SELECT Date FROM ( SELECT Date, RowNumber = ROW_NUMBER() OVER(ORDER BY Date DESC) FROM [Db].[dbo].[btotals] ) AS d WHERE d.RowNumber = 2; ``` Otherwise you can use the following to get the saturday of 2 weeks ago: ``` SELECT DATEADD(DAY, -((DATEPART(WEEKDAY, GETDATE()) + @@DATEFIRST) % 7) - 7, CAST(GETDATE() AS DATE)); ``` Then select your first date that is on or after that: ``` SELECT TOP 1 Date FROM [Db].[dbo].[btotals] WHERE Date >= DATEADD(DAY, -((DATEPART(WEEKDAY, GETDATE()) + @@DATEFIRST) % 7) - 7, CAST(GETDATE() AS DATE)) ORDER BY Date; ```
This should also work if you are trying to select the second date, though Gareth's approach of using `ROW_NUMBER` is a better one. ``` SELECT TOP 1 * FROM ( SELECT TOP 2 * FROM [Db].[dbo].[btotals] ORDER BY [Date] DESC ) as X ORDER BY Date ASC ```
How to get the date for two saturdays ago
[ "", "sql", "sql-server", "" ]
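The "second row" idea behind both answers can be sketched without window functions. SQLite (used here only as a portable stand-in, since it has no `DATEADD`/`DATEPART`) lets `LIMIT 1 OFFSET 1` play the role of `RowNumber = 2`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE btotals (Date TEXT);
INSERT INTO btotals VALUES
 ('2014-06-07'),('2014-05-31'),('2014-05-24'),
 ('2014-05-17'),('2014-05-10'),('2014-05-03');
""")

# Second-newest row: sort descending, skip the first.
row = con.execute("""
    SELECT Date FROM btotals
    ORDER BY Date DESC
    LIMIT 1 OFFSET 1
""").fetchone()

print(row)  # ('2014-05-31',)
```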
Can somebody help me with this query I am trying to write. ``` SELECT sku FROM product p where sku not in(SELECT sku FROM price_adjustment p) and sku in (SELECT sku FROM searchable_product s where is_visible = 'T') and sku in (SELECT sku FROM channel_pricing c where offer_price > 10.00 and offer_price % 2 = 0) limit 5; ``` This query works fine but I also want to only pick skus where the search\_prod\_id is distinct. I thought this would work but it doesn't: ``` SELECT sku, search_prod_id FROM product p where sku not in(SELECT sku FROM price_adjustment p) and sku in (SELECT sku FROM searchable_product s where is_visible = 'T') and sku in (SELECT sku FROM channel_pricing c where offer_price > 10.00 and offer_price % 2 = 0) and sku in (SELECT distinct search_prod_id from product p ) limit 5; ``` When I run that I can see the search\_prod\_id is the same for each sku. Can somebody tell me what I'm doing wrong?
You can force the `search_prod_id` to be different by grouping them: ``` SELECT MAX(sku) sku, search_prod_id FROM product p WHERE sku NOT IN(SELECT sku FROM price_adjustment) AND sku IN (SELECT sku FROM searchable_product WHERE is_visible = 'T') AND sku IN (SELECT sku FROM channel_pricing c WHERE offer_price > 10.00 AND offer_price % 2 = 0) GROUP BY search_prod_id LIMIT 5; ``` Also, in your query you're checking sku against search_prod_id, so any match is just luck (bad luck IMO): ``` sku in (SELECT distinct search_prod_id from product p ) ```
Try this Query it may work, ``` 'SELECT sku FROM product p where search_prod_id in( SELECT DISTINCT search_prod_id ,sku FROM product p where sku not in( SELECT sku FROM price_adjustment p) and sku in ( SELECT sku FROM searchable_product s where is_visible = 'T') and sku in (SELECT sku FROM channel_pricing c where offer_price > 10.00 and offer_price % 2 = 0) limit 5;' ```
Getting data from SQL query where another column is unique
[ "", "mysql", "sql", "database", "" ]
Can someone suggest a method for selecting rows that do not have a unique value for a column? If I have five records, with 1 record that has CustNo = 7, 1 record that has CustNo = 9, and three records that have CustNo = 11, I only want to select the three rows that have CustNo = 11
To display all the rows whose value is not unique, you need a subquery like this: ``` select * from CustDetails WHERE CustNo IN (SELECT CustNo from CustDetails group by CustNo having count(CustNo) > 1) ``` Here is the SQL Fiddle for the same: <http://sqlfiddle.com/#!3/1c9a4/4>
This will find records, and count of them, that have duplicates in CustNo ``` Select CustNo, Count(*) From Table1 Group By CustNo Having Count(*) > 1 ``` And another way for full solution (get full record of only those that have duplicates), using group and join ``` Select t1.* From Table1 t1 inner join (Select CustNo cn From Table1 Group By CustNo Having Count(*) > 1) t2 On t1.CustNo = t2.cn ```
Select rows that do NOT have a DISTINCT value
[ "", "sql", "sql-server", "sql-server-2008", "" ]
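The accepted `GROUP BY ... HAVING COUNT(*) > 1` subquery can be checked against the five-row example from the question (CustNo 7 once, 9 once, 11 three times). A sketch with Python's stdlib `sqlite3`; ids are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CustDetails (id INTEGER, CustNo INTEGER);
INSERT INTO CustDetails VALUES (1,7),(2,9),(3,11),(4,11),(5,11);
""")

# Inner query finds CustNo values appearing more than once;
# outer query returns every full row carrying such a value.
rows = con.execute("""
    SELECT * FROM CustDetails
    WHERE CustNo IN (SELECT CustNo FROM CustDetails
                     GROUP BY CustNo HAVING COUNT(*) > 1)
    ORDER BY id
""").fetchall()

print(rows)  # [(3, 11), (4, 11), (5, 11)]
```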
The scenario is this: I have two tables: the first with a list of onlinespeakers and the second with rooms they are speaking in. ``` OnlineSpeaker ------------ - ID - Time - RoomName - SpeakerName Rooms ----- - ID - Time - RoomName - Location ``` The speakers are logged in OnlineSpeaker routinely. The Rooms table is used for manually adding the location of Rooms. Each RoomName can have multiple locations, but it's only the latest (according to Time) that is valid. I want to see each (unique) RoomName from OnlineSpeaker and the location (if it exists) from Rooms. ``` RoomName Location ------------------------ 403 Germany 405 France 506 England ``` ## Edit I have added the query that is not yet perfect. It does give a list of room names and their location. But it gives duplicate room names and locations if there is several entries for a room in the Room table. I prefer to only show one location for each room and that location should be the latest added in the Rooms table. ``` SELECT DISTINCT a.RoomName, b.Location FROM OnlineSpeakers AS a LEFT OUTER JOIN Rooms AS b ON a.RoomName = b.RoomName ORDER BY a.RoomName ```
Get the maximum time per room to find the latest, i.e. current, room entries. Outer join these to the online speakers. ``` select valid_room_locations.roomname, valid_room_locations.location from onlinespeaker left outer join ( select * from rooms join ( select roomname, max(time) as last_time from rooms group by roomname ) as last_room_time on last_room_time.roomname = rooms.roomname and last_room_time.last_time = rooms.time ) as valid_room_locations on valid_room_locations.roomname = onlinespeaker.roomname -- logged into that room --and valid_room_locations.time <= onlinespeaker.time -- provided you want only logins in this valid location where onlinespeaker.id = 1; ``` I don't know if you only want to consider logins into the latest room location. Then you would have to compare login time with the room's "valid since time" (the commented line). Moreover, if it's possible a speaker logs in to a room location, logs out and logs in again, then you will get duplicates. Use `SELECT DISTINCT` in that case, as you did in your original query.
``` SELECT DISTINCT RoomName, Location FROM OnlineSpeakers LEFT OUTER JOIN Rooms on OnlineSpeakers.RoomName = Rooms.RoomName ```
Beginner SQL - use of join
[ "", "sql", "" ]
I have a table (myItems) with an item "`id`" and "`date`". Now i want to read out, how many items there are per month (i also want to distinguish between October 2013 and October 2014). I started with: ``` SELECT Count(okt.id) AS Oktober, Count(nov.id) AS November FROM `myItems` as okt, `myItems` as nov WHERE (okt.date between '2013-10-01' and '2013-10-31') OR (nov.date between '2013-11-01' and '2013-11-30') ``` But it prints out a ridiculously large number. What am i doing wrong?
Try this. This will divide data into months and then do the COUNT : ``` SELECT SUM(CASE WHEN MONTH(date) = 10 THEN 1 ELSE 0 END) AS Oktober , SUM(CASE WHEN MONTH(date) = 11 THEN 1 ELSE 0 END) AS November FROM `myItems` ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!2/43cb2/2) --- With YEAR integrated: ``` SELECT 2013 as Year , SUM(CASE WHEN MONTH(date) = 10 THEN 1 ELSE 0 END) AS Oktober , SUM(CASE WHEN MONTH(date) = 11 THEN 1 ELSE 0 END) AS November FROM `myItems` WHERE YEAR(date) = 2013 UNION ALL SELECT 2014 as Year , SUM(CASE WHEN MONTH(date) = 10 THEN 1 ELSE 0 END) AS Oktober , SUM(CASE WHEN MONTH(date) = 11 THEN 1 ELSE 0 END) AS November FROM `myItems` WHERE YEAR(date) = 2014; ; ``` Demo: [Fiddle](http://sqlfiddle.com/#!2/43cb2/5) --- With inspiration from @user77318 ``` SELECT YEAR(date) as Year, month(date) as month, count(id) as count FROM myItems GROUP BY YEAR(date), MONTH(date); ``` I personally recommend this, more beautiful. Then you can do all the presentation stuffs on Application Layer.
Try this. Group the result by month: > SELECT month(date) as month, count(id) as count FROM myItems WHERE date between '2013-10-01' and '2013-11-30' GROUP BY MONTH(date); Example of output result: > Month | Count > > 10 | 100 > > 11 | 200
How do I count records based on time (month, year) ?
[ "", "mysql", "sql", "database", "" ]
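The grouped variant recommended at the end of the accepted answer maps to SQLite (used here as a stand-in for MySQL) with `strftime` standing in for `YEAR()`/`MONTH()`, which also keeps October 2013 and October 2014 in separate groups. Sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE myItems (id INTEGER, date TEXT);
INSERT INTO myItems VALUES
 (1, '2013-10-05'), (2, '2013-10-20'),
 (3, '2013-11-02'),
 (4, '2014-10-09');
""")

# Group by year-month so counts are per calendar month, per year.
rows = con.execute("""
    SELECT strftime('%Y-%m', date) AS ym, COUNT(id) AS cnt
    FROM myItems
    GROUP BY ym
    ORDER BY ym
""").fetchall()

print(rows)  # [('2013-10', 2), ('2013-11', 1), ('2014-10', 1)]
```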
hey this is my database ``` users table pages table |user_firstname| |page_firstname | ----------------------------- -------------------------------- | john | | matt | | james | | quentin | | harry | | roland | | oliver | | thomas | ``` And this is my query for a search application ``` SELECT user_id, user_firstname, user_lastname, user_profile_picture FROM users WHERE user_firstname LIKE :user_firstname OR user_lastname LIKE :user_lastname ``` It works for users table. However I want to search from pages column too. For example when I press the letter M it founds jaMes, but I want Matt and thoMas as well. The thing is if I type Q it should only find the Quentin from the pages and if I type J it should only from the users. how do I include the pages table in this query? UPDATE I tried this query ``` SELECT user_id, user_firstname, user_lastname, user_profile_picture FROM users WHERE user_firstname LIKE :user_firstname OR user_lastname LIKE :user_lastname UNION ALL SELECT page_id, page_firstname, page_lastname, page_profile_picture FROM pages WHERE page_firstname LIKE :page_firstname OR page_lastname LIKE :page_lastname ``` and my result was like this: ``` array (size=4) 'user_id' => int 5 'user_firstname' => string 'Roland' (length=3) 'user_lastname' => string 'lastname' (length=7) 'user_profile_picture' => string '5_1399841223_536fe1c70ea2b_user_profile.jpg' (length=43) ``` even if the roland was in the PAGES table i am getting the result like it is in users table. i would like to separate them from each other
You can do something like this: ``` SELECT user_id, user_firstname, user_lastname, user_profile_picture, 'users' as 'type' FROM users WHERE user_firstname LIKE :user_firstname OR user_lastname LIKE :user_lastname UNION ALL SELECT page_id, page_firstname, page_lastname, page_profile_picture ,'pages' as 'type' FROM pages WHERE page_firstname LIKE :page_firstname OR page_lastname LIKE :page_lastname ```
Depends what you're trying to accomplish - if you want to basically treat them like the same table then something like this would work ``` SELECT user_id, user_firstname, user_lastname, user_profile_picture FROM users WHERE user_firstname LIKE :user_firstname OR user_lastname LIKE :user_lastname UNION ALL SELECT user_id, user_firstname, user_lastname, user_profile_picture FROM pages WHERE user_firstname LIKE :user_firstname OR user_lastname LIKE :user_lastname ```
Search for same data in two different columns
[ "", "mysql", "sql", "" ]
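The accepted `UNION ALL` with a literal `type` column is straightforward to verify: the extra column tells the application which table each row came from. A sketch with Python's stdlib `sqlite3`, using a reduced version of the question's tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (user_id INTEGER, user_firstname TEXT);
CREATE TABLE pages (page_id INTEGER, page_firstname TEXT);
INSERT INTO users VALUES (1,'james'), (2,'john');
INSERT INTO pages VALUES (1,'matt'), (2,'thomas'), (3,'quentin');
""")

# One search term against both tables; the literal 'users'/'pages'
# column labels each row's origin.
rows = con.execute("""
    SELECT user_id AS id, user_firstname AS name, 'users' AS type
    FROM users WHERE user_firstname LIKE :q
    UNION ALL
    SELECT page_id, page_firstname, 'pages'
    FROM pages WHERE page_firstname LIKE :q
    ORDER BY name
""", {"q": "%m%"}).fetchall()

print(rows)  # [(1, 'james', 'users'), (1, 'matt', 'pages'), (2, 'thomas', 'pages')]
```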
How can one get a running total from a quantity that was calculated by a count? ``` select Hour, count(*) TotalPerHOur, sum(TotalPerHOur) TotalCumulative from table1 group by Year, Month, Day, Hour ``` This is my query - but doesnt work... Error I get is > invalid column TotalPerHOur Example of result table ``` HOUR Total_per_Hour Total Cumulative 6:00-7:00 8 8 7:00-8:00 13 21 8:00-9:00 20 41 9:00-10:00 22 63 10:00-11:00 10 73 11:00-12:00 23 96 ```
Try something like this: ``` SELECT t1.Hour, count(*) TotalPerHOur, ( SELECT count(*) FROM table1 t2 WHERE t2.Year <= t1.Year and t2.Month <= t1.Month and t2.Day <= t1.Day and t2.Hour <= t1.Hour ) TotalCumulative FROM table1 t1 GROUP BY t1.Year, t1.Month, t1.Day, t1.Hour ``` In [SQL Fiddle](http://www.sqlfiddle.com/#!6/3a747/1).
You are getting this error because `TotalPerHOur` is not yet defined. And you are using it to calculate `TotalCumulative`. Following query should work: ``` SELECT Hour, TotalPerHOur, sum(TotalPerHOur) TotalCumulative FROM ( SELECT Hour, count(*) TotalPerHOur FROM table1 GROUP BY Year, Month, Day, Hour ) ```
SQL running total from a count
[ "", "sql", "sql-server", "" ]
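The correlated-subquery running total from the accepted answer can be sketched with a single ordering key; hours are zero-padded here (an adaptation of the question's sample) so text ordering matches time order. Python's stdlib `sqlite3` is used as a stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hourly (Hour TEXT, TotalPerHour INTEGER);
INSERT INTO hourly VALUES
 ('06:00-07:00', 8), ('07:00-08:00', 13), ('08:00-09:00', 20);
""")

# For each row, the correlated subquery re-sums everything
# up to and including that hour -- a running total.
rows = con.execute("""
    SELECT t1.Hour, t1.TotalPerHour,
           (SELECT SUM(t2.TotalPerHour) FROM hourly t2
            WHERE t2.Hour <= t1.Hour) AS TotalCumulative
    FROM hourly t1
    ORDER BY t1.Hour
""").fetchall()

print(rows)  # [('06:00-07:00', 8, 8), ('07:00-08:00', 13, 21), ('08:00-09:00', 20, 41)]
```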
I'm trying to write (what I think is a straight forward) update query, but as I'm new to the world of SQL its a little troublesome. My scenario: **Table1** ``` Parent Child Alias -------------------------- New Member1 AliasABC New Member2 AliasDEF New Member3 AliasGHI ``` **Table2** ``` Parent Child Alias -------------------------- Parent08 Member8 Alias08 Parent09 Member2 Alias09 Parent10 Member9 Alias10 ``` The result of the query should look like: **Table1** ``` Parent Child Alias -------------------------- New Member1 AliasABC Parent09 Member2 AliasDEF New Member3 AliasGHI ``` I only want to update the Parent column if the Child already exists in Table2 and leave everything else untouched. I've tried using update Correlated queries, but have drawn a blank. Update: Partial success with this query: ``` update TABLE1 p1 set (p1.PARENT) = ( select p2.PARENT from TABLE2 p2 where p2.CHILD = p1.CHILD ) ``` And results in: **Table1** ``` Parent Child Alias -------------------------- (null) Member1 AliasABC Parent09 Member2 AliasDEF (null) Member3 AliasGHI ``` Thanks in advance, Mark
I think this will do it for oracle: ``` UPDATE table1 SET table1.Parent = ( SELECT table2.Parent FROM table2 WHERE table1.Child = table2.Child ) WHERE EXISTS (SELECT table2.Parent FROM table2 WHERE table1.Child = table2.Child); ``` [SQLFiddle](http://sqlfiddle.com/#!4/4338b/1)
If you want to do this in Oracle, you'll need a correlated subquery: ``` update table1 set parent = (select parent from table2 where table2.child = table1.child) where exists (select 1 from table2 where table2.child = table1.child); ``` This is standard SQL and should work in all databases, particularly Oracle.
Oracle SQL - Update Query between 2 tables
[ "", "sql", "oracle", "sql-update", "correlated-subquery", "" ]
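The correlated `UPDATE` with the `WHERE EXISTS` guard — the fix for the asker's NULL problem — works the same way in SQLite, used here as a portable stand-in for Oracle. A sketch with Python's stdlib `sqlite3`, using the question's sample rows (table names shortened to `t1`/`t2`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (Parent TEXT, Child TEXT, Alias TEXT);
CREATE TABLE t2 (Parent TEXT, Child TEXT, Alias TEXT);
INSERT INTO t1 VALUES ('New','Member1','AliasABC'),
                      ('New','Member2','AliasDEF'),
                      ('New','Member3','AliasGHI');
INSERT INTO t2 VALUES ('Parent08','Member8','Alias08'),
                      ('Parent09','Member2','Alias09'),
                      ('Parent10','Member9','Alias10');
""")

# The WHERE EXISTS guard keeps non-matching rows untouched
# instead of overwriting them with NULL.
con.execute("""
    UPDATE t1
    SET Parent = (SELECT t2.Parent FROM t2 WHERE t2.Child = t1.Child)
    WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.Child = t1.Child)
""")

rows = con.execute("SELECT Parent, Child FROM t1 ORDER BY Child").fetchall()
print(rows)  # [('New', 'Member1'), ('Parent09', 'Member2'), ('New', 'Member3')]
```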
According to [this answer](https://stackoverflow.com/a/1313293/1101095), the best way to get the latest record in each group is like this: ``` SELECT m1.* FROM messages m1 LEFT JOIN messages m2 ON (m1.name = m2.name AND m1.id < m2.id) WHERE m2.id IS NULL ``` I have tried this and it works great. However, I also need to check whether the latest record is the *only* record in the group. I tried modifying the query to this: ``` SELECT m1.*, COUNT(m3.name) FROM messages m1 LEFT JOIN messages m2 ON (m1.name = m2.name AND m1.id < m2.id) LEFT JOIN messages m3 ON m1.name = m3.name WHERE m2.id IS NULL ``` But it only returns one row. If I remove the `COUNT()` statement, leaving us with this: ``` SELECT m1.* FROM messages m1 LEFT JOIN messages m2 ON (m1.name = m2.name AND m1.id < m2.id) LEFT JOIN messages m3 ON m1.name = m3.name WHERE m2.id IS NULL ``` Duplicate rows are returned, so apparently the additional `LEFT JOIN` messes up the query. Is there a simple way to check whether the latest record returned is the *only* record in the group? A simple bool value would be fine, or the number of records in the group would work, too. EDIT: The reason I'm trying to do this is that I'm writing a commenting system and I want users to be able to edit comments. When a comment is edited, I want to display a link showing it was edited that, when clicked, takes you to a page showing the edits (like on facebook, or how the revision system works for questions on stackoverflow). So I need to get the latest revision of each comment, as well as an indicator letting me know whether there are multiple revisions of the comment (so I know whether or not to show an "edited" link). The solution needs to be efficient, since there might be hundreds of comments in a thread.
Try: ``` SELECT m1.*, m2.total FROM messages m1, (select max(id) id, count(*) total, name from messages group by name) m2 where m1.name = m2.name and m1.id = m2.id ``` You can convert this to join syntax if you want, but the idea is to run a subquery and join once instead of twice and use only equality joins which could give you a performance boost. I would benchmark both my solution and Aquillo's and see which is faster in your case.
Not tested but I guess something like this would do: ``` SELECT DISTINCT m1.* , CASE WHEN m3.id IS NULL THEN 'only record with this name' ELSE 'not only record with this name' END FROM messages m1 LEFT JOIN messages m2 ON (m1.name = m2.name AND m1.id < m2.id) LEFT JOIN messages m3 ON (m1.name = m3.name AND m1.id > m3.id) WHERE m2.id IS NULL ``` First `LEFT JOIN` + `WHERE` says "only give me the record with given name where there's no higher `id`". Second `LEFT JOIN` says "give the records with given name and a smaller `id`". Since there might be more records, I've used this together with the `DISTINCT`. Last the `CASE WHEN THEN END` determines whether there was any smaller `id` at all.
How to get the latest record in each group AND check if it's the only record?
[ "", "mysql", "sql", "" ]
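The accepted answer's single subquery (max id plus count per group) directly yields the "edited" flag the asker wants: `total > 1` means the comment has revisions. A sketch with Python's stdlib `sqlite3` and invented sample messages:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE messages (id INTEGER, name TEXT, body TEXT);
INSERT INTO messages VALUES
 (1, 'c1', 'first draft'),
 (2, 'c1', 'edited'),
 (3, 'c2', 'never edited');
""")

# One grouped subquery computes both the latest id and the revision
# count per name; joining on (name, id) picks the latest full row.
rows = con.execute("""
    SELECT m1.id, m1.name, m1.body, m2.total
    FROM messages m1
    JOIN (SELECT name, MAX(id) AS id, COUNT(*) AS total
          FROM messages GROUP BY name) m2
      ON m1.name = m2.name AND m1.id = m2.id
    ORDER BY m1.name
""").fetchall()

print(rows)  # [(2, 'c1', 'edited', 2), (3, 'c2', 'never edited', 1)]
```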
I need help on T-SQL. I have a stored procedure in my SQL Server with query (sample) as below: ``` select mpid, empname, sal, phone, [e-mail] from emp where empid = @empid or empname = @empname or sal = @sal or phone = @phone ``` In the above sample query, I have four conditions in my where clause. When I run the query, I would get results if any of the conditions are matched with the input variables (ex: @empid,@empname etc). My requirement now is to identify which of the where conditions got matched with the input data. I just need to know the *first* where condition that matched with the input values.
Modified Gordon Linoff's answer: ``` Select empId,EName,Sal,Phone,Match From (select empId,EName,Sal,Phone, ((case when empId=@empId Then (CONVERT(VARCHAR,empId) +';'+ 'empId') when EName=@EName Then (CONVERT(VARCHAR,EName) +';'+ 'EName') when Sal=@Sal Then (CONVERT(VARCHAR,Sal) +';'+ 'Sal') when Phone=@Phone Then (CONVERT(VARCHAR,Phone) +';'+ 'Phone') else NULL end)) as Match from Employee e ) e Where Match IS NOT NULL ``` Removed the per-column CASE statements, since those would evaluate a condition for every column regardless of which one was satisfied. Used a single CASE covering all columns instead, so control returns as soon as any one of the conditions succeeds.
You can do this with a `CASE` statement in your `SELECT`: ``` Select empid, empname, sal, phone, [e-mail], Case When empid = @empid Then 'EmpId' When empname = @empname Then 'EmpName' When sal = @sal Then 'Sal' When phone = @phone Then 'Phone' End As MatchingColumn From emp Where empid = @empid Or empname = @empname Or sal = @sal Or phone = @phone ```
SQL Server - identify successful where clause
[ "", "sql", "sql-server", "" ]
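Because `CASE` evaluates its branches in order, the accepted answer reports the first condition that matched, which is exactly what was asked. A sketch with Python's stdlib `sqlite3` as a stand-in for SQL Server; named `:param` placeholders replace T-SQL's `@variables`, and the data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (empid INTEGER, empname TEXT, sal INTEGER, phone TEXT);
INSERT INTO emp VALUES (1, 'Ann', 500, '111'),
                       (2, 'Bob', 600, '222');
""")

params = {"empid": 99, "empname": "Bob", "sal": 500, "phone": "000"}

# CASE stops at the first satisfied branch, naming the column
# that made the row qualify.
rows = con.execute("""
    SELECT empid, empname,
           CASE WHEN empid   = :empid   THEN 'EmpId'
                WHEN empname = :empname THEN 'EmpName'
                WHEN sal     = :sal     THEN 'Sal'
                WHEN phone   = :phone   THEN 'Phone'
           END AS MatchingColumn
    FROM emp
    WHERE empid = :empid OR empname = :empname
       OR sal = :sal OR phone = :phone
    ORDER BY empid
""", params).fetchall()

print(rows)  # [(1, 'Ann', 'Sal'), (2, 'Bob', 'EmpName')]
```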