Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
This should (hopefully) be a fast one, unfortunately I am a little bit slow today and need your help. I've basically wrote an INSERT statement with the following structure in order to insert multiple rows with a single statement: ``` INSERT INTO TABLE_TO_INSERT_INTO SELECT -- ..., -- ..., -- ..., (SELECT MAX(ID)+1 FROM SOME_TABLE) As ID, -- ..., -- ..., FROM ( -- Subqueries and Joins here -- ... ) ; ``` This generally works perfectly fine, but for one part: the ID. It has got to be unique (a constraint makes sure of that), but with the current way of selecting the ID that is not the case, as it uses the selected value for all rows that are inserted via the above statement. Is there any way of forcing a reevalution of the `(SELECT MAX(ID)+1 FROM SOME_TABLE) As ID` part after every row inserted? I don't really want to use a trigger, but instead implement it on a pure SQL basis. I hope someone has a solution, I am at a loss today and can't really see it. Thanks in advance!
As you are using Oracle, then I would suggest you to use [SEQUENCE](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_6015.htm) ``` CREATE SEQUENCE customers_seq START WITH 1000 INCREMENT BY 1 NOCACHE NOCYCLE; ``` In your query ``` SELECT customers_seq.nextval from yourTable; ``` The above query guarantee to return unique value after each execution.
Unless you are inserting into SOME\_TABLE (which you aren't), (SELECT MAX(ID)+1 FROM SOME\_TABLE) As ID will always return the same value because the MAX(ID) is not affected by additional rows in TABLE\_TO\_INSERT\_INTO. You could in theory use some kind of row number offset plus your MAX(ID) value BUT I would strongly advise against it. You would be better off using a field which has a seed and increment computed value as per best practice as this will work well with distributed transactions.
Incrementing a value in a mutliple row INSERT statement
[ "", "sql", "database", "oracle", "" ]
I have a decently large table (~20 million) with a primary key column and another column with large text strings (~ 250 char). The primary key column is all unique but the text strings have repeats. I want to remove all redundant text strings and “distinctify” the table or create a new table to the same affect. I want to leave only rows with distinct text strings. I don’t care which of the several primary keys with each url are thrown away. Even running a select distinct on only the string column results in memory overflow. What other options are there?
Your memory overflow is likely due to returning such a large resulset back to SSMS. If you `Select min(ID), TextColumn Into TableXYZ Group By TextColumn`, you should get around this issue. Once the 'Distinctified' results are in a separate table, you can then go about deleting/archiving the records in the original table.
At one level this is the same question as that asked in [How can I remove duplicate rows?](https://stackoverflow.com/q/18932/73226). You could use [the `ROW_NUMBER` approach](https://stackoverflow.com/a/3822833/73226) but this will require sorting the 20 million rows. Or [the `GROUP BY` approach](https://stackoverflow.com/a/18949/73226) which may be less memory demanding if it uses a hash aggregate. Another more offbeat approach that might be considered here, as you are anticipating less than 50% of the table will be retained, would be to create a new table as below ``` CREATE TABLE Deduped ( Id INT, CharColumn VARCHAR(255) PRIMARY KEY CLUSTERED WITH (IGNORE_DUP_KEY = ON) ) ``` and insert all rows into it ``` DECLARE @I INT = 2000000000; INSERT INTO Deduped SELECT TOP(@I) Id, CharColumn FROM OriginalTable OPTION (OPTIMIZE FOR (@I = 0)) ``` This may well avoid any memory consuming operators at all as duplicates are discarded using the B tree being created.
Distinct Grouping on Large table
[ "", "sql", "sql-server", "" ]
I could use some advice in how to create my select statement, so it will work with the check-in and check-out dates. I have 2 tables: Rooms and Booking. To help you understand, they look like this: ``` Rooms table: Room number (prim-key) Type Price Booking table: BookingId (prim-key) Check-In date: Check-Out date: Room-number (foreign key) to Room number in Room table. ``` I have an Check available room button which run the select statement in my gridview. Based on input from user it should find the rooms that are not reserved already. The datatype of Check-in and Check-out are "Date" DD/MM/YYY" Lets say if room 101 is booked between the 13-07-2015 and the 15-07-2015. The table will then looks like this. ``` BookingId = a long number , Check-In = 13-07-2015 , Check-Out = 15-07-2015 , RoomNumber = 101. ``` So , how do i make the select statement if a user writes in the dates: 14-07-2015 too 16-07-2015? Then it should not show room 101, because it's reserved. Hope someone can help guide me in the right direction. If you need any code or something , please let me know! **Update:** I'm still trying to make this work.. not sure what causing the issues i got. Right now when i run the code from Tim and Hogan (tried them both) it will retrieve all the room in the hotel, and not sort the specific rooms out which is reserved. As you see at the picture below, room 102 is reserved ***Code of text boxes where users write in the dates:*** ``` <div class="form-group"> <asp:Label ID="CheckinLabel" runat="server" Text="Check-in Date"></asp:Label> <asp:TextBox ID="datetimepicker1" ClientIDMode="Static" runat="server" CSSclass="form-control"></asp:TextBox> </div> <div class="form-group"> <asp:Label ID="CheckoutLabel" runat="server" Text="Check-out Date"></asp:Label> <asp:TextBox ID="datetimepicker2" ClientIDMode="Static" runat="server" CSSclass="form-control"></asp:TextBox> </div> ``` ***Pictures of my two tables, so you can see how they look like:*** ![enter image description here](https://i.stack.imgur.com/6xIWW.png) The CheckIn and CheckOut datatype is: nchar(10) , have tried with "date" datatype but then it given me the following error *"Conversion failed when converting date and/or time from character string."* ![enter image description here](https://i.stack.imgur.com/RziNh.png) The important thing here, is that if you tries to reserve a room and check-In or Check-Out date is a date between 15-07-2015 - 20-07-2015 then room 102 is reserved already and should not be shown.
I think Tim misses one case -- using Between make the logic clearer maybe? *EDIT: Nevermind, Tim's is correct -- here is another way to do it.* the cases -- our checkin or checkout is sometime between when the room is used or our checkin and checkout "surround" when the room is used. ``` SELECT r.* FROM Room r WHERE NOT EXISTS ( SELECT 1 FROM Booking b WHERE b.RoomNumber = r.RoomNumber AND ( @Checkin BETWEEN b.CheckIn AND b.CheckOut OR @Checkout BETWEEN b.CheckIn AND b.CheckOut OR (@Checkin <= b.CheckIn AND @CheckOut >= b.CheckPut ) ) ```
``` SELECT r.* FROM Room r WHERE NOT EXISTS ( SELECT 1 FROM Booking b WHERE b.RoomNumber = r.RoomNumber AND ( (@CheckIn >= b.CheckIn AND @CheckIn <= b.CheckOut) OR (@CheckIn <= b.CheckIn AND @Checkout >= b.CheckIn) ) ) ```
Checking available room resevation between check-in and check-out date with SQL in ASP.net
[ "", "sql", "asp.net", "" ]
I am trying to 'obfuscate' data in a SQL database for the purposes of testing. I have a single field in a single table where I want the values to be replaced with a random string - however the same string has to have the same value. So for example: ``` Cat Dog Cat Fish Monkey ``` Would have to replaced with ``` YuW -- same JiK YuW -- same IPoQ KYiLwY ``` I don't want this to be reversible (so no ROT13, etc..) EDIT: i need for it to remain the same length as well. This database will be used for performance testing purposes, and I want realistic string sizes to be used.
You should use [hashing](https://msdn.microsoft.com/en-us/library/ms174415.aspx): ``` SELECT HashBytes('MD5', yourcolumnname) ``` This will give you a non-reversible 'obfuscation' for which the same input value will return the same value. Edit: if you don't want MD5 HashBytes can handle MD2, MD4, MD5, SHA, SHA1, or SHA2. Edit 2: to keep the same length (at least up to the length of the hash value) do: ``` SELECT (SELECT SUBSTRING(HASHBYTES('MD5',[yourcolumnname]),0,len([yourcolumnname]))) ```
If it's just for testing purpose and a matter of interest, I'd do it like that: 1. Put distinct records into temporary table and add a new column, let's name it `[Randomized]` 2. Generate desired random text and make sure it has same `LEN()` as actual text (Use `LEFT()`, `RIGHT()`, `SUBSTRING()` or any other function to do that) 3. Query your actual table and join them on your predicate. 4. Update your actual table with randomized columns Not sure if it fits your needs or not.
Generating the same random string in SQL for a set of rows
[ "", "sql", "sql-server", "random", "" ]
I gotta problem. When I run the below code I get an error saying: invalid length parameter passed to the right function. What does that mean and how can it go away? Its to parse a name like smith, steve s to steve r smith in columns firstname, middlename and lastname ``` UPDATE table5 SET lastName = LEFT(Name, CHARINDEX(', ', Name) - 1), firstname = SUBSTRING(Name, CHARINDEX(', ', Name) + 2, CASE WHEN CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) = 0 THEN LEN(Name) + 1 ELSE CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) END - CHARINDEX(', ', Name) - 2 ), middlename= RIGHT(Name, LEN(Name) - CASE WHEN CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) = 0 THEN LEN(Name) ELSE CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) END ) ``` the "name" column is varchar (50) and firstname, middlename and lastname columns are also set to varchar (50) . I'm kinda stumped.. thanks in advance
The error is because `CHARINDEX()` is returning 0 and `SUBSTRING()` require at number bigger than 0. Your assumption is correct string without middle name will give you error. Can you check this fiddle and let me know if that is the format you are using for name. Because the Gordon Linoff query seem to be working for select [SQL FIDDLE](http://sqlfiddle.com/#!6/cddd44/2) **NOTE:** I update the SqlFiddle with ``` WHERE CHARINDEX(', ', Name) > 0 ``` My guess is some of your name doesnt have `,` You can validate doing ``` SELECT Name FROM test WHERE CHARINDEX(', ', Name) = 0 ``` You can also check your other CHARINDEX condition with your data to see which one RETURN 0
I would prefer to write a function, Here is the code. ``` IF object_id(N'udf_NameSplitter', N'FN') IS NOT NULL DROP FUNCTION udf_NameSplitter GO CREATE FUNCTION udf_NameSplitter ( @FullName VARCHAR(50), @NameSplitCharacter CHAR(1), @NamePart CHAR(50) /*FN-firstname,MN-middlName,LN-lastName*/ ) RETURNS VARCHAR(50) AS BEGIN DECLARE @StartIndex INT, @EndIndex INT, @NameTblString VARCHAR(50) DECLARE @NameTbl TABLE (ID INT Identity(1,1),Item NVARCHAR(1000)) SET @StartIndex = 1 IF SUBSTRING(@FullName, LEN(@FullName) - 1, LEN(@FullName)) <> @NameSplitCharacter BEGIN SET @FullName = @FullName + @NameSplitCharacter END WHILE CHARINDEX(@NameSplitCharacter, @FullName) > 0 BEGIN SET @EndIndex = CHARINDEX(@NameSplitCharacter, @FullName) INSERT INTO @NameTbl(Item) SELECT SUBSTRING(@FullName, @StartIndex, @EndIndex - 1) SET @FullName = SUBSTRING(@FullName, @EndIndex + 1, LEN(@FullName)) END SELECT @NameTblString = LTRIM(RTRIM(Item)) FROM @NameTbl WHERE ID = CASE WHEN @NamePart = 'LN' THEN 1 WHEN @NamePart = 'FN' THEN 2 ELSE 3 END RETURN (@NameTblString) END GO ``` Test the function with a few scenarios. I think I covered most but its worth having a second look. I would highly recommend doing a select before an update and see if the data is accurate or as expected. ``` DECLARE @Name VARCHAR(50) = 'lastName , firstname ,middleName ' SELECT lastName=dbo.udf_NameSplitter(@Name,',','LN'), firstname=dbo.udf_NameSplitter(@Name,',','FN'), middleName=dbo.udf_NameSplitter(@Name,',','MN') ``` Usage with your table ``` UPDATE table5 SET lastName=LEFT(Name, CHARINDEX(', ', Name) - 1), firstname=SUBSTRING(Name, CHARINDEX(', ', Name) + 2, CASE WHEN CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) = 0 THEN LEN(Name) + 1 ELSE CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) END - CHARINDEX(', ', Name) - 2), middlename=RIGHT(Name, LEN(Name) - CASE WHEN CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) = 0 THEN LEN(Name) ELSE CHARINDEX(' ', Name, CHARINDEX(', ', Name) + 2) END) ```
Error: invalid length parameter passed to the right function in name parsing script
[ "", "sql", "sql-server", "t-sql", "" ]
Suppose I have the following SQL (which can be [run against the Data Explorer](https://data.stackexchange.com/meta.stackexchange/query/336425/fiddle-for-so-question-31417041), if you'd like): ``` SELECT COUNT(Id) AS "Count" INTO #temp FROM Posts PRINT (SELECT * FROM #temp) ``` This produces an error: > "Subqueries are not allowed in this context. Only scalar expressions are allowed." Now, in this case, I know that `#temp` is a table of one row and one column, and hence that `(SELECT * FROM #temp)` will produce only one value. Is there any way to persuade SQL Server to treat it as a scalar? I am aware that I can save it off to a variable and then `PRINT` that instead: ``` DECLARE @count int = (SELECT * FROM #temp) PRINT @count ``` But this seems like an extra step that shouldn't be necessary.
No this isn't possible according to the grammar. The only way of doing it other than assigning to a variable at the same scope would be to wrap the `select` in a UDF as far as I can see. The [documentation](https://msdn.microsoft.com/en-gb/library/ms176047.aspx) States > PRINT msg\_str | @local\_variable | string\_expr > > msg\_str Is a character string or Unicode string constant. For more > information, see Constants (Transact-SQL). > > @ local\_variable Is a variable of any valid character data type. > @local\_variable must be > char, nchar, varchar, or nvarchar, or it must be able to be implicitly > converted to those data types. > > string\_expr Is an expression that > returns a string. Can include concatenated literal values, functions, > and variables. For more information, see Expressions (Transact-SQL). So assuming that "functions" includes user defined functions and not just built in functions this would work. Otherwise you're out of luck. And for your specific use case you are certainly out of luck as, even ignoring the ridiculousness of creating a scalar UDF for this, they can't access temp tables anyway.
As far as I can tell, nope. Even a statement as simple as `PRINT (SELECT 1)` or `PRINT (SELECT TOP (1) 1)` fails. My guess is that PRINT simply won't execute SQL of any kind to prevent possible injections. It's PRINT, after all, not EXEC. It's meant to return a string message to the client.
Can I treat a subquery with one row and one column as a scalar?
[ "", "sql", "sql-server", "" ]
I want to add a new column to a table to record the number of attributes whose value are null for each tuple (row). How can I use SQL to get the number? for example, if a tuple is like this: ``` Name | Age | Sex -----+-----+----- Blice| 100 | null ``` I want to update the tuple as this: ``` Name | Age | Sex | nNULL -----+-----+-----+-------- Blice| 100 | null| 1 ``` Also, because I'm writing a PL/pgSQL function and the table name is obtained from argument, I don't know the schema of a table beforehand. That means I need to update the table with the input table name. Anyone know how to do this?
Possible **without spelling out columns**. Unpivot columns to rows and count. The aggregate function [`count(<expression>)`](https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-AGGREGATE-TABLE) only counts non-null values, while [`count(*)`](https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-AGGREGATE-TABLE) counts *all* rows. The shortest and fastest way to count NULL values for more than a few columns is `count(*) - count(col)` ... Works for *any* table with *any* number of columns of `any` data types. In Postgres 9.3+ with built-in [JSON functions](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-PROCESSING-TABLE): ``` SELECT *, (SELECT count(*) - count(v) FROM json_each_text(row_to_json(t)) x(k,v)) AS ct_nulls FROM tbl t; ``` What is `x(k,v)`? `json_each_text()` returns a set of rows with two columns. Default column names are `key` and `value` as can be seen in the [manual where I linked](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-PROCESSING-TABLE). I provided table and column aliases so we don't have to rely on default names. The second column is named `v`. Or, in any Postgres version since at least 8.3 with the additional module [**`hstore`**](https://www.postgresql.org/docs/current/hstore.html) installed, even shorter and a bit faster: ``` SELECT *, (SELECT count(*) - count(v) FROM svals(hstore(t)) v) AS ct_nulls FROM tbl t; ``` This simpler version only returns a set of single values. I only provide a simple alias `v`, which is automatically taken to be table *and* column alias. * [Best way to install hstore on multiple schemas in a Postgres database?](https://stackoverflow.com/questions/19146433/best-way-to-install-hstore-on-multiple-schemas-in-a-postgres-database/19146824#19146824) Since the additional column is **functionally dependent** I would consider *not* to persist it in the table at all. Rather compute it on the fly like demonstrated above or create a tiny function with a [polymorphic](https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC) input type for the purpose: ``` CREATE OR REPLACE FUNCTION f_ct_nulls(_row anyelement) RETURNS int LANGUAGE sql IMMUTABLE PARALLEL SAFE AS 'SELECT (count(*) - count(v))::int FROM svals(hstore(_row)) v'; ``` (`PARALLEL SAFE` only for Postgres 9.6 or later.) Then: ``` SELECT *, f_ct_nulls(t) AS ct_nulls FROM tbl t; ``` You could wrap this into a `VIEW` ... *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=e8fd000340b7189d2561ffd2830bd1ea)* - demonstrating all Old [sqlfiddle](http://sqlfiddle.com/#!17/86a232/1) This should also answer your second question: > ... the table name is obtained from argument, I don't know the schema of a table beforehand. That means I need to update the table with the input table name.
In Postgres, you can express this as: ``` select t.*, ((name is null)::int + (age is null)::int + (sex is null)::int ) as numnulls from table t; ``` In order to implement this on an unknown table, you will need to use dynamic SQL and obtaining a list of columns (say from `information_schema.columns)`).
Count the number of attributes that are NULL for a row
[ "", "sql", "postgresql", "count", "null", "plpgsql", "" ]
This almost works. I get an error at the last line that looks like it's complaining about the C1 reference. Is there a simple way around this? There is nothing wrong with the query or connection. ``` Dim CmdString As String Dim con As New SqlConnection Try con.ConnectionString = PubConn CmdString = "select * from " & PubDB & ".dbo.Suppliers as S " & _ " join " & PubDB & ".dbo.Address as A" & _ " on S.Supplier_Address_Code = A.Address_IDX" & _ " join " & PubDB & ".dbo.Contacts as C1" & _ " on S.Supplier_Contact1 = C1.Contact_IDX" & " join " & PubDB & ".dbo.Contacts as C2" & _ " on S.Supplier_Contact2 = C2.Contact_IDX" & " WHERE S.Supplier_IDX = " & LookupIDX Dim cmd As New SqlCommand(CmdString) cmd.Connection = con con.Open() Dim DAdapt As New SqlClient.SqlDataAdapter(cmd) Dim Dset As New DataSet DAdapt.Fill(Dset) con.Close() With Dset.Tables(0).Rows(0) txtAddress1.Text = .Item("Address1").ToString txtAddress2.Text = .Item("Address2").ToString txtSupplierName.Text = .Item("Address_Title").ToString txtAttn.Text = .Item("Attn").ToString txtBusinessPhone1.Text = .Item("C1.Contact_Business_Phone").ToString ```
The alias C1 is used by SQL Server and is not persisted to the result set. Have you taken this query into SQL Management Studio to see the results? Since you requested all columns (\*) and joined to the Contacts table twice, you'll end up with duplicate column names in the result. For example, if the Contacts table has a LastName field, you'll end up with TWO LastName columns in your result. I haven't tried to duplicate this in my local environment, but I can't imagine the data adapter is going to like having duplicate column names. I recommend specifically including the columns you want to return instead of using the \*. That's where you'll use the alias of C1, then you can rename the duplicate columns using the AS keyword: ``` SELECT C1.LastName AS [Supplier1_LastName], C2.LastName AS [Supplier2_LastName], ... ``` This should solve your problem. Good Luck!
You would not include the "C1" table alias as part of your column name. It will be returned from your query as Contact\_Business\_Phone. For accessing multiple rows you could use the indexer as you are in the example above "Rows(0)" by placing your With block into a For loop and accessing the "Rows(i)" with your loop variable. However, this would not help much as your are assigning this to individual text boxes, so you'd only see the last value on your page/screen.
How do I access multiple records from the same table using SQLDataAdapter?
[ "", "sql", "sql-server", "vb.net", "" ]
Suppose, `order_id` `4646`, `4647` and `4648` are from same customer. ``` SELECT customer_id FROM orders WHERE order_id IN (4646, 4647, 4648) ``` Result: ``` customer_id 2589 2589 2589 ``` Every customer has a `gcm_registration_token`. ``` SELECT gcm_registration_token FROM customer_details WHERE customer_id IN (2589, 2589, 2589) ``` Result: ``` gcm_registration_token dyB_PhRHddU:APA91bGAbuxAIlHUmH2XYK0pWM3ON37O_mTF7g... ``` I want the second query to return 3 rows with same `gcm_registration_token`. Expected result: ``` gcm_registration_token dyB_PhRHddU:APA91bGAbuxAIlHUmH2XYK0pWM3ON37O_mTF7g... dyB_PhRHddU:APA91bGAbuxAIlHUmH2XYK0pWM3ON37O_mTF7g... dyB_PhRHddU:APA91bGAbuxAIlHUmH2XYK0pWM3ON37O_mTF7g... ``` A way around would be to fire query for each value. But, is it possible with single query.
You can use [**INNER JOIN**](http://www.w3schools.com/sql/sql_join_inner.asp) in following: ``` SELECT gcm_registration_token FROM orders o INNER JOIN customer_details d ON o.customer_id = d.customer_id WHERE order_id IN (4646, 4647, 4648) ```
Use `JOIN` like this: ``` SELECT c.gcm_registration_token FROM orders o JOIN customer_details c ON c.customer_id = o.customer_id WHERE o.order_id IN (4646, 4647, 4648) ```
Need multiple rows with same value IN OPERATOR
[ "", "sql", "" ]
**Background:** Table A has columns `userid`, `friend_id`. It stores all `friend_id`s of user having `userid`. e.g. Table A ``` ----------------------- | userid | friend_id | |---------------------| | 3 | 5 | | 3 | 6 | | 3 | 7 | | 3 | 8 | | 3 | 9 | ----------------------- ``` Table B has columns `blocker_userid`, `blocked_userid` e.g. Table B ``` ------------------------------------- | blocker_userid | blocked_userid | |-----------------------------------| | 6 | 3 | | 9 | 3 | ------------------------------------- ``` **Problem Statement:** I want to get from table A friends list for given `userid`, except those friends who have blocked the `userid`. In case of above example, the query should return all `friend_id`s from table A for `userid` 3 except 6 and 9 (because they have blocked `userid` 3) Can someone please tell how this can be done with ***single SELECT query***? **Current Solution:** Currently I am firing multiple queries. One to get all `friend_id`s for given `userid`. And then query to table B with each `friend_id` to find if they have blocked the `userid`. This approach is not at all efficient. **Update:** wewesthemenace's solution below works perfect here. However, in case if we want to *not* to get *"those who have blocked `userid` as well as those who have been blocked by `userid`"* then here is solution (By Hanno Binder who has answered it [here](https://stackoverflow.com/questions/31429375/select-query-with-exclusions-specified-in-other-table-2)) ``` SELECT f.* FROM friends f LEFT OUTER JOIN blocks b1 ON b1.blocker_userid = f.friend_id AND b1.blocked_userid = f.userid -- userid blocked by friend LEFT OUTER JOIN blocks b2 ON b2.blocker_userid = f.userid AND b2.blocked_userid = f.friend_id -- friend blocked by userid WHERE b1.blocker_userid IS NULL AND b2.blocker_userid IS NULL ```
Using `NOT EXISTS`: ``` SELECT a.* FROM TableA a WHERE a.userid = @userid AND NOT EXISTS( SELECT 1 FROM TableB b WHERE b.blocked_userid = a.userid AND b.blocker_userid = a.friend_id ) ``` [**SQL Fiddle**](http://sqlfiddle.com/#!6/4669d/1/0) --- Using `LEFT JOIN`: ``` SELECT a.* FROM TableA a LEFT JOIN TableB b ON b.blocked_userid = a.userid AND b.blocker_userid = a.friend_id WHERE a.userid = @userid AND b.blocked_userid IS NULL ```
``` select friend_id,friend_name from tableA a Left Join tableB b on a.friend_id=b.blocker_userid and b.blocked_userid = @userid where a.userid = @userid and b.blocker_userid is null ```
SELECT query with exclusions specified in other table - 1
[ "", "mysql", "sql", "database", "" ]
I have a table which has the below schema definition : ``` CREATE TABLE `currency` ( `id` int(11) NOT NULL AUTO_INCREMENT, `code` char(3) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL, `name` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL PRIMARY KEY (`id`), UNIQUE KEY `code_UNIQUE` (`code`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8; ``` What I want is to drop the `id` column and make `code` as the new primary key. And some of the other tables are having foreign keys to this table. I tried the below command but failed: ``` SET FOREIGN_KEY_CHECKS=0; ALTER TABLE `currency` CHANGE COLUMN `id` `id` INT(11) NOT NULL, DROP PRIMARY KEY; ALTER TABLE currency ADD PRIMARY KEY (code); SET FOREIGN_KEY_CHECKS=1; ``` MySQL throws the below exception: > [ERROR in query 2] Error on rename of './db/#sql-849\_1' to './db/currency' (errno: 150 - Foreign key constraint is incorrectly formed) > Execution stopped!
The error > Error on rename of ... errno: 150 - Foreign key constraint is incorrectly formed) happens because you are trying to drop a referenced primary key, even though you are disabling foreign key constraint checking with `SET FOREIGN_KEY_CHECKS=0;` Disabling foreign key checks would allow you to temporarily delete a row in the `currency` table or add an invalid `currencyId` in the foreign key tables, but not to drop the primary key. Changing a PRIMARY KEY which is already referenced by other tables isn't going to be simple, since you risk losing referential integrity between the tables and losing the relationship between data. In order to preserve the data, you'll need a process such as: * Add a new Foreign key column (`code`) to each FK table * Map the `code` foreign key from the previous `currencyId` via an update * Drop the existing foreign key * Drop the old `currencyId` foreign key column * Once all FK's have been dropped, change the primary key on the `currency` table * Reestablish the foreign keys based on the new `code` column The below would do this without needing to disable the `FOREIGN_KEY_CHECKS`, but the foreign key map / drop / recreate step would need to be repeated for all tables referencing `currency`: ``` -- Add new FK column ALTER TABLE FKTable ADD currencyCode char(3) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL; -- Map FK column to the new Primary Key UPDATE FKTable SET currencyCode = (SELECT `code` FROM currency WHERE id = FKTable.currencyId); -- Drop the old foreign key + column ALTER TABLE FKTable DROP FOREIGN KEY FKTable_Currency; ALTER TABLE FKTable DROP COLUMN currencyId; -- Once the above is done for all FK tables, drop the PK on currency ALTER TABLE `currency` CHANGE COLUMN `id` `id` INT(11) NOT NULL, DROP PRIMARY KEY; ALTER TABLE currency ADD PRIMARY KEY (`code`); ALTER TABLE FKTable ADD CONSTRAINT FKTable_Currency2 FOREIGN KEY (currencyCode) REFERENCES currency(`code`); ``` [SqlFiddle here](http://sqlfiddle.com/#!9/2ab0b/1)
Running ``` ALTER TABLE myTable DROP PRIMARY KEY; ``` caused an error like ``` `Error Code: 1025. Error on rename of 'some_name' to 'another_name' (errno: 150 - Foreign key constraint is incorrectly formed)` ``` > Dropping, creating new column and adding it as a primary key **all > as a single command** works like a charm. Even though I don't know the root cause, this is the final solution I came up to: ``` -- Suppose c1 and c2 are a composite primary key and -- I want to add an incremental primary key named id ALTER TABLE myTable DROP PRIMARY KEY, ADD id INT(10) UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT FIRST, ADD INDEX `an_index_name_for_c1_c2` (`c1`, `c2`); ``` Note that for sake of legacy code performance, I add previous compound primary key columns as a new compound index.
Cannot change primary key because of "incorrectly formed foreign key constraint" error
[ "", "mysql", "sql", "" ]
I have a table like this: ``` +-------+--------------+--------------+-------------+-------------+-------------+-------------+ | Study | Point_number | Date_created | condition_A | condition_B | condition_C | condition D | +-------+--------------+--------------+-------------+-------------+-------------+-------------+ | 1 | 1 | 01-01-2001 | 1 | 1 | 0 | 1 | | 1 | 2 | 01-01-2001 | 0 | 1 | 1 | 0 | | 1 | 3 | 01-01-2001 | 0 | 1 | 0 | 0 | +-------+--------------+--------------+-------------+-------------+-------------+-------------+ ``` The condition\_A, B, C and D are used to classify data points into groups. So each unique combination of those columns would be a group. For each group, I want to retrieve the last 200 rows. At the moment I have something like this: ``` select * from my_table where point_number <= 200; ``` In order to do this for each group I could do: ``` select * from my_table where point_number <= 200 where condition_A = 1 and condition_B = 1 and condition_C = 1 and condition D = 1 union all select * from my_table where point_number <= 200 where condition_A = 1 and condition_B = 1 and condition_C = 1 and condition D = 0 union all...; ``` The problem with this approach is that there are many, many combinations, and it would be good to make the query as flexible as possible. How can I avoid doing UNION ALLs and have the query automatically retrieve 200 rows for each group?
This should help you work out what you need to do: ``` with sample_data as (select 1 id, 1 ca, 0 cb from dual union all select 2 id, 1 ca, 1 cb from dual union all select 3 id, 1 ca, 1 cb from dual union all select 4 id, 0 ca, 0 cb from dual union all select 5 id, 0 ca, 1 cb from dual union all select 6 id, 0 ca, 1 cb from dual union all select 7 id, 0 ca, 0 cb from dual union all select 8 id, 1 ca, 0 cb from dual union all select 9 id, 1 ca, 1 cb from dual union all select 10 id, 0 ca, 1 cb from dual union all select 11 id, 0 ca, 0 cb from dual union all select 12 id, 1 ca, 0 cb from dual union all select 13 id, 1 ca, 0 cb from dual union all select 14 id, 0 ca, 1 cb from dual union all select 15 id, 0 ca, 0 cb from dual union all select 16 id, 1 ca, 1 cb from dual union all select 17 id, 0 ca, 0 cb from dual) select id, ca, cb, row_number() over (partition by ca, cb order by id) rn from sample_data; ID CA CB RN ---------- ---------- ---------- ---------- 4 0 0 1 7 0 0 2 11 0 0 3 15 0 0 4 17 0 0 5 5 0 1 1 6 0 1 2 10 0 1 3 14 0 1 4 1 1 0 1 8 1 0 2 12 1 0 3 13 1 0 4 2 1 1 1 3 1 1 2 9 1 1 3 16 1 1 4 ``` Basically, you need to find out the row number of each row per each group - a job for analytic functions, specifically the `row_number()` analytic function. If you've not come across analytic functions before, basically they're similar to aggregate functions (so you can find results across groups, aka "partition by") without collapsing the rows. I would recommend you do some research on this, if you aren't already familiar with them! Anyway, once you've assigned your row numbers, you can then throw an outer query around the sql to filter on the row number, eg: ``` with sample_data as (select 1 id, 1 ca, 0 cb from dual union all select 2 id, 1 ca, 1 cb from dual union all select 3 id, 1 ca, 1 cb from dual union all select 4 id, 0 ca, 0 cb from dual union all select 5 id, 0 ca, 1 cb from dual union all select 6 id, 0 ca, 1 cb from dual union all select 7 id, 0 ca, 0 cb from dual union all select 8 id, 1 ca, 0 cb from dual union all select 9 id, 1 ca, 1 cb from dual union all select 10 id, 0 ca, 1 cb from dual union all select 11 id, 0 ca, 0 cb from dual union all select 12 id, 1 ca, 0 cb from dual union all select 13 id, 1 ca, 0 cb from dual union all select 14 id, 0 ca, 1 cb from dual union all select 15 id, 0 ca, 0 cb from dual union all select 16 id, 1 ca, 1 cb from dual union all select 17 id, 0 ca, 0 cb from dual), results as (select id, ca, cb, row_number() over (partition by ca, cb order by id) rn from sample_data) select * from results where rn <= 3; ID CA CB RN ---------- ---------- ---------- ---------- 4 0 0 1 7 0 0 2 11 0 0 3 5 0 1 1 6 0 1 2 10 0 1 3 1 1 0 1 8 1 0 2 12 1 0 3 2 1 1 1 3 1 1 2 9 1 1 3 ```
Your original query: ``` select * from my_table where point_number <= 200; ``` Should do what you want -- retrieve values of `point_number` less than 200. It should do this for each group. If you want *200* values in each group, then something like this might be what you really want: ``` select t.* from (select t.*, row_number() over (partition by a, b, c, d order by point_number desc) as seqnum from my_table ) t where seqnum <= 200; ``` This assumes that `point_number()` is increasing and larger values are "more recent". You might want to use `date_created` in the `order by` rather than `point_number`.
Any way to avoid a union in this Oracle query?
[ "", "sql", "oracle", "oracle11g", "" ]
have something kinda weird here. I have a database that's called FLDOC. In it has a column called SENTENCE that contains 7 numbers that represent a length of time. ``` example: 0050000 0750000 0000600 0040615 0000110 ``` In those 7 digits is a length of type since the digits represent YYYMMDD So what I'd like is a script that can convert it to something like this: ``` 5Y 00M 00D 75Y 00M 00D 6M (or 000Y 6M 00D is fine as well) 4Y 6M 15D etc etc ``` thanks in advance...
`CONCAT` is new to SQL Server 2012. If you have previous version of SQL Server, you could do something like this instead to achieve your desired output: ``` SELECT sentence ,( CASE WHEN cast(left(sentence, 3) AS INT) > 0 THEN cast(cast(left(sentence, 3) AS INT) AS VARCHAR(3)) + 'Y ' ELSE cast(left(sentence, 3) AS VARCHAR(3)) + 'Y ' END + CASE WHEN cast(substring(sentence, 4, 2) AS INT) > 0 THEN cast(cast(substring(sentence, 4, 2) AS INT) AS VARCHAR(2)) + 'M ' ELSE cast(substring(sentence, 4, 2) AS VARCHAR(2)) + 'M ' END + CASE WHEN cast(right(sentence, 2) AS INT) > 0 THEN cast(cast(right(sentence, 2) AS INT) AS VARCHAR(3)) + 'D' ELSE cast(right(sentence, 2) AS VARCHAR(3)) + 'D' END ) AS new_sentence FROM FLDOC; ``` [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/70aa9/7/0) **UPDATE** To answer your question below in the comments, you could maybe just write a update statement like this: ``` update FLDOC set sentence = ( CASE WHEN cast(left(sentence, 3) AS INT) > 0 THEN cast(cast(left(sentence, 3) AS INT) AS VARCHAR(3)) + 'Y ' ELSE cast(left(sentence, 3) AS VARCHAR(3)) + 'Y ' END + CASE WHEN cast(substring(sentence, 4, 2) AS INT) > 0 THEN cast(cast(substring(sentence, 4, 2) AS INT) AS VARCHAR(2)) + 'M ' ELSE cast(substring(sentence, 4, 2) AS VARCHAR(2)) + 'M ' END + CASE WHEN cast(right(sentence, 2) AS INT) > 0 THEN cast(cast(right(sentence, 2) AS INT) AS VARCHAR(3)) + 'D' ELSE cast(right(sentence, 2) AS VARCHAR(3)) + 'D' END ) ```
You can do this with `Concat` as well: ``` Select Concat ( Left(SENTENCE, 3), 'Y ', SubString(SENTENCE, 4, 2), 'M ', Right(SENTENCE, 2), 'D' ) From Table ``` To condense the expression as in your example, this can be used as well: ``` Select Concat ( Case When (IsNumeric(Left(SENTENCE, 3)) = 1 And Left(SENTENCE, 3) <> '000') Then Convert(Varchar (3), Convert(Int, Left(SENTENCE, 3))) + 'Y ' End, Case When (IsNumeric(SubString(SENTENCE, 4, 2)) = 1 And SubString(SENTENCE, 4, 2) <> '00') Then Convert(Varchar (2), Convert(Int, SubString(SENTENCE, 4, 2))) + 'M ' End, Case When (IsNumeric(Right(SENTENCE, 2)) = 1 And Right(SENTENCE, 2) <> '00') Then Convert(Varchar (2), Convert(Int, Right(SENTENCE, 2))) + 'D' End ) From Table ```
Reformatting data in column
[ "", "sql", "sql-server", "t-sql", "" ]
I have a column called "**SYS\_CREAT\_TS**". I want the query to fetch the REV data where Status code is 2 and from the latest timestamp where status code is 2. ``` SELECT RVSN FROM DATA_STUS WHERE DATA_STUS_CD = 2 AND SYS_CREAT_TS IN MAX(SYS_CREAT_TS); ``` **Some more detail** Without the latest timestamp comparison query. I'm getting Revision Number(RVSN) as 2446, 2442. But I want the latest timestamp between these two timestamps with their respective revision numbers. 1. 15-JUL-15 03.20.25.769000000 PM -> 2442 2. 15-JUL-15 03.23.03.940000000 PM -> 2446 The second one is the latest. Im using Oracle 12C. So, the result of the query should be **2446**.
This will work: ``` select rvsn from ( select rvsn, row_number() over (order by sys_creat_ts desc) as rn from data_stus where data_stus_cd = 2 ) where rn = 1; ```
``` SELECT RVSN FROM DATA_STUS WHERE DATA_STUS_CD = 2 AND SYS_CREAT_TS = (SELECT MAX(SYS_CREAT_TS) FROM RVSN WHERE DATA_STUS_CD = 2) ``` Would something like this work for your issue?
How to get the latest timestamp (Max) in a column
[ "", "sql", "oracle", "" ]
I am working on an sql query where i have to find last 10 years. Suppose this is 2015 then then the query should return 2015,2014,2013... and so on. For this i have used the following query- ``` select top 10 DATEPART(Year,getdate()) order by DATEPART(Year,getdate()) desc ``` But the above query is returning only single query which is the current year. Please help me someone here.
Try this: ``` with yearlist as ( select (DATEPART(Year,getdate())-10) as year union all select yl.year + 1 as year from yearlist yl where yl.year + 1 <= YEAR(GetDate()) ) select year from yearlist order by year desc; ```
You can do It in following: ``` DECLARE @YearsToPass INT SET @YearsToPass = 10 ;WITH cte AS ( SELECT DATEPART(YY, GETDATE())- @YearsToPass + 1 as Years UNION ALL SELECT Years + 1 as Years FROM cte WHERE Years + 1 <= YEAR(GETDATE()) ) SELECT Years FROM cte ORDER BY Years DESC ``` Simple set `@YearsToPass` to number of how many years you want to return.
How to find last 10 years using sql query
[ "", "sql", "sql-server", "" ]
Suppose I have 2 tables `T1` and `T2` as follows `T1`: ``` bag_id bag_type capacity ------|--------|-------- 1 A 500 2 A 300 3 A 100 4 B 200 5 B 100 ``` `T2`: ``` item_type item_amount ---------|----------- A 850 B 300 ``` Each record in table `T1` represents a bag and its capacity, here I have 5 bags. I want to write an SQL that allocate items in table `T2` into each bag with the same type, i.e. the result should be like this ``` bag_id bag_type capacity allocated_amount ------|--------|--------|---------------- 1 A 500 500 2 A 300 300 3 A 100 50 4 B 200 200 5 B 100 100 ``` Therefore, **I am finding some kind of aggregation function, let's call it `allocate()`, that can produce the column `allocated_amount` as above.** I have a guess that, if exists, it might be used like this ``` select t1.bag_id, t1.bag_type, t1.capacity, allocate(t2.item_amount, t1.capacity) over (partition by t1.bag_type order by t1.capacity desc) as allocatd_amount from t1, t2 where t2.item_type = t1.bag_type ``` My current solution is to use a temp table and PL/SQL loop for calculation, but I hope I can do it with one simple SQL.
You are looking for a cumulative sum. Something like this: ``` select t1.*, (case when cumecap <= t2.item_amount then t1.capacity when cumecap - t1.capacity <= t2.item_amount then t2.item_amount - (cumecap - t1.capacity) else 0 end) as allocated_capacity from (select t1.*, sum(t1.capacity) over (partition by bag_type order by bag_id) as cumecap from t1 ) t1 join t2 on t1.bag_type = t2.item_type; ```
This should do the trick: ``` select t1.bag_id , t1.bag_type , t1.capacity , least( t1.capacity -- don't over fill the bag , greatest( 0 -- don't under fill the bag , t2.item_amount -- to be allocated - nvl(sum(t1.capacity) -- less previous allocations over (partition by t1.bag_type order by t1.capacity desc rows between unbounded preceding and 1 preceding) , 0))) Allocated from t1 join t2 on t2.item_type = t1.bag_type; BAG_ID B CAPACITY ALLOCATED ---------- - ---------- ---------- 1 A 500 500 2 A 300 300 3 A 100 50 4 B 200 200 5 B 100 100 ```
Oracle aggregation function to allocate amount
[ "", "sql", "oracle", "oracle11g", "aggregate-functions", "data-warehouse", "" ]
I have table name `tblAttend` in which one column named `WorkHrs` is of datatype `varchar`. The result of simple select query is ![enter image description here](https://i.stack.imgur.com/EqK2H.jpg) I sum this column's value and get result in seconds my query is ``` select sum(DATEDIFF(SECOND, '0:00:00', WorkHrs )) from tblAttend ``` and it shows this output: ![enter image description here](https://i.stack.imgur.com/cKgYU.jpg) Now the issue is, when sum of `WorkHrs` is greater than 24 hours it will throw an error: ![enter image description here](https://i.stack.imgur.com/vVNgq.jpg) What can you suggest to get around this problem? Thanks in advance
Try splitting each time into its component parts by converting the time to a string and then multiplying by the number of seconds relevant to each part. Data conversion to integer is implicit ``` select Sum(Left(WorkHrs,2) * 3600 + substring(WorkHrs, 4,2) * 60 + substring(WorkHrs, 7,2)) from tblAttend ```
Try: ``` DECLARE @DURATION TIME = '01:43:24' SELECT DATEDIFF(SECOND, '1/1/1900', CONVERT(DATETIME, @DURATION)) ```
How to convert hh:mm:ss to seconds in SQL Server with more than 24 hours
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I can do something like this: ``` select case a.x > 1 then a.bananas else a.pajamas end from my_table a; ``` But when I try to tweak it like this: ``` select case a.x > 1 then a.bananas, a.pajamas else a.strawberries, a.peaches end from my_table; ``` it says I have an error in my syntax. How can I select arbitrary different subsets of my columns based on a condition?
You need a separate `CASE` for each column: ``` SELECT case a.x > 1 then a.bananas else a.pajamas end as bananas_or_pajamas, case a.x > 1 then a.strawberries else a.peaches end as strawberries_or_peaches, .... ```
CASE is a function, and can only return a single value. You can only use it to switch the value of one field in a query. Another option would be to wrap the whole query with an IF and have two separate queries to return results. Without seeing the rest of the query, it's hard to say if that would work for you. Using case: ``` SELECT result1 = CASE WHEN a.x > 1 THEN a.bananas ELSE a.peaches END from my_table a, result2 = CASE WHEN a.x > 1 THEN a.pajamas ELSE a.strawberries END from my_table a ```
SQL multiple-select case statement
[ "", "mysql", "sql", "" ]
I use this method to encrypt my string: ``` RETURN RAWTOHEX(DBMS_CRYPTO.HASH(SRC=>to_hash, TYP=>dbms_crypto.HASH_MD5)); ``` Now I have the encrypted MD5 string like: > F267E16E70C2528280A487D5D13617A6 Is there a way to decrypt this code to get the start-string again?
MD5 is a hashing algorithm, not really intended for encryption or decryption. So, no, there is no way to get the start-string again. Actually, given a hash, there would be many potential start-strings it could come from.
The hash function is not [isomorphic](https://en.wikipedia.org/wiki/Isomorphism), i.e. it is not possible in general case to invert the function and get the unique original value. This is is something very different to beeing "safely decoded". If there is additional knowledge e.g. about the lenght of the string, it is very easy (with some CPU power) to get all candidate strings (i.e. the strings that map to the target hash value. So this is probably not the optimal way to decode password etc. Simple example for strings with length three: ``` select (DBMS_CRYPTO.HASH( RAWTOHEX('Scr'), 2 /* dbms_crypto.HASH_MD5*/ )) from DUAL; 93656D76795528C600E7BF7A17B09C8E with chr as ( select chr(ascii('A') -1 + rownum) chr from dual connect by level <= 26 union all select chr(ascii('a') -1 + rownum) chr from dual connect by level <= 26 union all select chr(ascii('0') -1 + rownum) chr from dual connect by level <= 10 ), chr2 as ( select a.chr||b.chr||c.chr str from chr a, chr b, chr c ) select * from chr2 where DBMS_CRYPTO.HASH( RAWTOHEX(str), 2 /* dbms_crypto.HASH_MD5*/ ) = '93656D76795528C600E7BF7A17B09C8E' ; Scr ```
Oracle Hash MD5 decryption
[ "", "sql", "oracle", "plsql", "md5", "" ]
I have a query in which the user can choose which columns they wish to search on (each column has a corresponding filter on the web). I've used the NULL method to try and ignore the parameter if it is passed into the DB as NULL. Having all fields NULL works as expected and returns all records, but when attempting to filter the information the results are the same and the query returns everything. I can't seem to find out why this might be happening, it could be something really small and obvious but I just can't see it. ``` ALTER PROCEDURE [dbo].[GetChatListFilter] @SiteKey int, @invited int = NULL, @starttime varchar(15), @finishtime varchar(15) = NULL, @visitor varchar(50) = NULL, @wait int = NULL, @operators varchar(max) = NULL, @department varchar(max) = NULL, @skills varchar(max) = NULL, @chattime int = NULL, @rating int = NULL, @email varchar(max) = NULL, @message varchar(max) = NULL, @dns varchar(max) = NULL, @visit varchar(max) = NULL, @city varchar(max) = NULL, @country varchar(max) = NULL AS SELECT UserChats.Invited, UserChats.StartTime AS Start, UserChats.FinishTime AS Finish, UserChats.VisitorName As Visitor, UserChats.WaitedForSecs AS Wait, UserChats.TakenByUser AS Operator, UserChats.TakenByDept AS [Dept.], dbo.SkillIDsToName(UserChats.SkillIDList) AS Skill, UserChats.ChattedForSecs AS [Time], UserChats.Rating AS Rate, UserChats.MessageEmail AS Email, UserChats.MessageText AS [Message], UserChats.VisitorIP AS DNS, UserChats.VisitorVisitNumber AS Visit, VisitDetail.City, VisitDetail.Country FROM UserChats INNER JOIN VisitDetail ON UserChats.VisitID = VisitDetail.VisitID WHERE UserChats.SiteKey = @SiteKey AND UserChats.Invited = @invited OR @invited is NULL AND UserChats.StartTime = @starttime OR @starttime = '01/01/1900' AND UserChats.FinishTime = @finishtime OR @finishtime = '01/01/1900' AND UserChats.VisitorName = @visitor OR @visitor is NULL AND UserChats.WaitedForSecs = @wait OR @wait is NULL AND UserChats.TakenByUser = @operators OR @operators is NULL AND UserChats.TakenByDept = @department OR @department is NULL AND dbo.SkillIDsToName(UserChats.SkillIDList) = @skills OR @skills is NULL AND UserChats.ChattedForSecs = @chattime OR @chattime is NULL AND UserChats.Rating = @rating OR @rating is NULL AND UserChats.MessageEmail = @email OR @email is NULL AND UserChats.MessageText = @message OR @message is NULL AND UserChats.VisitorIP = @dns OR @dns is NULL AND UserChats.VisitorVisitNumber = @visit OR @visit is NULL AND VisitDetail.City = @city OR @city is NULL AND VisitDetail.Country = @country or @country is NULL ```
As I told above, I perhaps would go for dynamic code here, since your query contains a lot of `OR` statements. I've rewritten your procedure like that. Please leave a comment or ideas. What I can think of all records not comming back - you hardcoded `@starttime` and `@finishtime`, just deal with them. I tried to do it in this code, but it's quite hard when there's no sample data. ``` ALTER PROCEDURE [dbo].[GetChatListFilter] ( @SiteKey INT , @invited INT = NULL , @starttime VARCHAR(15) , @finishtime VARCHAR(15) = NULL , @visitor VARCHAR(50) = NULL , @wait INT = NULL , @operators VARCHAR(MAX) = NULL , @department VARCHAR(MAX) = NULL , @skills VARCHAR(MAX) = NULL , @chattime INT = NULL , @rating INT = NULL , @email VARCHAR(MAX) = NULL , @message VARCHAR(MAX) = NULL , @dns VARCHAR(MAX) = NULL , @visit VARCHAR(MAX) = NULL , @city VARCHAR(MAX) = NULL , @country VARCHAR(MAX) = NULL ) AS BEGIN SET NOCOUNT ON; BEGIN TRY DECLARE @SQL NVARCHAR(MAX) , @SQLParams NVARCHAR(MAX); SET @SQL = N' SELECT UC.Invited , UC.StartTime AS Start , UC.FinishTime AS Finish , UC.VisitorName AS Visitor , UC.WaitedForSecs AS Wait , UC.TakenByUser AS Operator , UC.TakenByDept AS [Dept.] , dbo.SkillIDsToName(UC.SkillIDList) AS Skill , UC.ChattedForSecs AS [Time] , UC.Rating AS Rate , UC.MessageEmail AS Email , UC.MessageText AS [Message] , UC.VisitorIP AS DNS , UC.VisitorVisitNumber AS Visit , VD.City , VD.Country FROM dbo.UserChats AS UC INNER JOIN dbo.VisitDetail AS VD ON UC.VisitID = VD.VisitID WHERE UC.SiteKey = @p0'; IF NULLIF(@invited, '') IS NOT NULL SET @SQL += N' AND UC.Invited = @p1'; IF NULLIF(@starttime, '01/01/1900') IS NOT NULL SET @SQL += N' AND UC.StartTime = @p2'; IF NULLIF(@finishtime, '01/01/1900') IS NOT NULL SET @SQL += N' AND UC.FinishTime = @p3'; IF NULLIF(@visitor, '') IS NOT NULL SET @SQL += N' AND UC.VisitorName = @p4'; IF NULLIF(@wait, '') IS NOT NULL SET @SQL += N' AND UC.WaitedForSecs = @p5'; IF NULLIF(@operators, '') IS NOT NULL SET @SQL += N' AND UC.TakenByUser = @p6'; IF NULLIF(@department, '') IS NOT NULL SET @SQL += N' AND UC.TakenByDept = @p7'; IF NULLIF(@skills, '') IS NOT NULL SET @SQL += N' AND dbo.SkillIDsToName = @p8'; IF NULLIF(@chattime, '') IS NOT NULL SET @SQL += N' AND UC.ChattedForSecs = @p9'; IF NULLIF(@rating, '') IS NOT NULL SET @SQL += N' AND UC.Rating = @p10'; IF NULLIF(@email, '') IS NOT NULL SET @SQL += N' AND UC.MessageEmail = @p11'; IF NULLIF(@message, '') IS NOT NULL SET @SQL += N' AND UC.MessageText = @p12'; IF NULLIF(@dns, '') IS NOT NULL SET @SQL += N' AND UC.VisitorIP = @p13'; IF NULLIF(@visit, '') IS NOT NULL SET @SQL += N' AND UC.VisitorVisitNumber @p14'; IF NULLIF(@city, '') IS NOT NULL SET @SQL += N' AND VD.City = @p15'; IF NULLIF(@country, '') IS NOT NULL SET @SQL += N' AND VD.Country = @p16'; SET @SQLParams = N' @p0 INT , @p1 INT , @p2 VARCHAR(15) , @p3 VARCHAR(15) , @p4 VARCHAR(50) , @p5 INT , @p6 VARCHAR(MAX) , @p7 VARCHAR(MAX) , @p8 VARCHAR(MAX) , @p9 INT , @p10 INT , @p11 VARCHAR(MAX) , @p12 VARCHAR(MAX) , @p13 VARCHAR(MAX) , @p14 VARCHAR(MAX) , @p15 VARCHAR(MAX) , @p16 VARCHAR(MAX)'; EXECUTE sp_executesql @SQL , @SQLParams , @p0 = @SiteKey , @p1 = @invited , @p2 = @starttime , @p3 = @finishtime , @p4 = @visitor , @p5 = @wait , @p6 = @operators , @p7 = @department , @p8 = @skills , @p9 = @chattime , @p10 = @rating , @p11 = @email , @p12 = @message , @p13 = @dns , @p14 = @visit , @p15 = @city , @p16 = @country; END TRY BEGIN CATCH SELECT ERROR_MESSAGE(); END CATCH END ```
You need parentheses: ``` WHERE UserChats.SiteKey = @SiteKey AND (UserChats.Invited = @invited OR @invited is NULL) AND (UserChats.StartTime = @starttime OR @starttime = '1900-01-01') AND (UserChats.FinishTime = @finishtime OR @finishtime = '1900-01-01') AND . . . ```
Stored Procedure not returning filtered results when using NULL to ignore parameter if empty
[ "", "sql", "sql-server", "stored-procedures", "" ]
In a table, I have a column that contains a few records with accented characters. I want a query to find the records with accented characters. If we have records like as below: ``` 2ème édition Natália sravanth ``` query should pick these records: ``` 2ème édition Natália ```
You can use the REGEXP\_LIKE function along with a list of all the accented characters you're interested in: ``` with t1(data) as ( select '2ème édition' from dual union all select 'Natália' from dual union all select 'sravanth' from dual ) select * from t1 where regexp_like(data,'[àèìòùÀÈÌÒÙáéíóúýÁÉÍÓÚÝâêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿÄËÏÖÜŸçÇߨøÅ寿œ]'); DATA -------------- 2ème édition Natália ```
The [ASCIISTR function](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/ASCIISTR.html) would be another way to find accented characters > ASCIISTR takes as its argument a string, or an expression that > resolves to a string, in any character set and returns an ASCII > version of the string in the database character set. Non-ASCII > characters are converted to the form \xxxx, where xxxx represents a > UTF-16 code unit. So you can do something like ``` SELECT my_field FROM my_table WHERE NOT my_field = ASCIISTR(my_field) ``` Or to re-use the demo from the accepted answer: ``` with t1(data) as ( select '2ème édition' from dual union all select 'Natália' from dual union all select 'sravanth' from dual ) select * from t1 where data != asciistr(data) ``` which would output the 2 rows with accents.
Find the accent data in table records
[ "", "sql", "oracle", "" ]
I'm looking to find the average number of employees for the first half of 2015. Thats the head count of each month, Jan-Jun / 6 (months). This number is the desired result. For example, lets just do 3 months for simplicity's sake. Jan had 100, Feb had 105, and Mar had 103. 308/3 = 102.7 average employees. Unfortunately I've been left with only a few columns and I'd like to generate some clean code to make it simple to complete my task. Not sure how to complete this task though with the information I have. Code: ``` SELECT distinct a.personidno as 'PersonId', a.[LastHireDate], a.[TerminationDate], --COUNT(distinct a.PersonIdNo) CASE WHEN a.EmploymentStatus = 'Regular Full Time' THEN 'RFT' WHEN a.EmploymentStatus = 'PRN' THEN 'PRN' WHEN a.EmploymentStatus = 'Regular Part Time' THEN 'RPT' ELSE a.EmploymentStatus END as 'EmpStatus' --into #tmp_ytd_hc_avg FROM [EmployeeTable] a where a.OrgCodeIdNo = '69' and (a.[TerminationDate] >= '2015-01-01 00:00:00' and a.[TerminationDate] <= '2015-06-30 23:59:59') OR (a.[TerminationDate] is null and a.employeestatus = 'Active') ``` Sample Data: ``` PersonId LastHireDate TerminationDate EmpStatus 19 2012-07-30 00:00:00.000 NULL RFT 20 2010-01-01 00:00:00.000 NULL RFT 21 2010-10-01 00:00:00.000 NULL RFT 24 1994-06-28 00:00:00.000 NULL RFT 25 2002-12-11 00:00:00.000 NULL RFT 26 2011-03-21 00:00:00.000 NULL RFT 27 2010-01-01 00:00:00.000 NULL RFT 30 2010-06-29 00:00:00.000 NULL PRN 34 2008-12-16 00:00:00.000 NULL RFT 35 2010-01-01 00:00:00.000 NULL RFT 36 2014-02-27 00:00:00.000 NULL RFT 37 2009-03-01 00:00:00.000 NULL PRN 39 2012-06-25 00:00:00.000 NULL RFT 40 2012-01-01 00:00:00.000 NULL RFT 42 2011-08-01 00:00:00.000 NULL RFT 44 2014-02-27 00:00:00.000 2014-09-27 00:00:00.000 RFT --hired before 2015-01-01 and leaves before 2015-01-01 54 2014-02-27 00:00:00.000 2015-05-15 00:00:00.000 RFT --hired before 2015-01-01 and leaves before 2015-06-30 676 2015-02-27 00:00:00.000 2015-06-15 00:00:00.000 RFT --hired after 2015-01-01 and leaves before 2015-06-30 3012 2015-03-20 00:00:00.000 2015-07-03 00:00:00.000 RFT --hired after 2015-01-01 and leaves after 2015-06-30 5125 2015-07-11 00:00:00.000 NULL RPT 5127 2015-07-07 00:00:00.000 NULL RFT 5129 2015-07-09 00:00:00.000 NULL PRN 5131 2015-07-07 00:00:00.000 NULL PRN 5133 2015-07-09 00:00:00.000 NULL PRN 5136 2015-07-13 00:00:00.000 NULL RFT ```
Here is [SQL Fiddle](http://sqlfiddle.com/#!6/0796f/3/0) with your updated sample data. There are two queries there: first returns just one average number, second returns daily numbers to help understand how it works. Follow the dates and you can see how the number changes as people come and go. --- For each person you need to know two dates: when he was hired and when he left. I hope this is what `LastHireDate` and `TerminationDate` mean. I assume that `NULL` `TerminationDate` means that the person has not left yet, is still employed. When I calculate similar reports I calculate the number of people employed for each day in the given range (rather than month). Then you can average daily numbers further as needed. I use a `Calendar` table. This table simply has a list of dates for several decades. ``` CREATE TABLE [dbo].[Calendar]( [dt] [date] NOT NULL, CONSTRAINT [PK_Calendar] PRIMARY KEY CLUSTERED ( [dt] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` In my system it has few extra columns, such as `[IsLastDayOfMonth]`, `[IsLastDayOfQuarter]`, which are useful in some reports, but in your case you need just the date column. There are many ways to [populate such table](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1). For example, 100K rows (~270 years) from 1900-01-01: ``` INSERT INTO dbo.Calendar (dt) SELECT TOP (100000) DATEADD(day, ROW_NUMBER() OVER (ORDER BY s1.[object_id])-1, '19000101') AS dt FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2 OPTION (MAXDOP 1); ``` Once you have `Calendar` table, here is how to use it: ``` WITH CTE_EmployedPeople -- this is how many people were employed on each day in the given period AS ( SELECT dbo.Calendar.dt ,CAST(COUNT(*) as float) AS People -- without this cast the final average is int FROM dbo.Calendar CROSS JOIN EmployeeTable WHERE (dbo.Calendar.dt >= '2015-01-01') AND (dbo.Calendar.dt <= '2015-06-30') AND (dbo.Calendar.dt >= EmployeeTable.LastHireDate) AND (dbo.Calendar.dt <= EmployeeTable.TerminationDate OR EmployeeTable.TerminationDate IS NULL) GROUP BY dbo.Calendar.dt ) ,CTE_Daily -- if it is possible that nobody was employed on a certain day -- left join previous results to the Calendar table again to get 0 for such days AS ( SELECT dbo.Calendar.dt ,ISNULL(CTE_EmployedPeople.People, 0) AS People FROM dbo.Calendar LEFT JOIN CTE_EmployedPeople ON dbo.Calendar.dt = CTE_EmployedPeople.dt WHERE (dbo.Calendar.dt >= '2015-01-01') AND (dbo.Calendar.dt <= '2015-06-30') ) -- simple average of daily numbers SELECT AVG(People) AS AvgPeople FROM CTE_Daily; ```
Go with @VladimirBaranov's answer for a generic solution. But in your special case you might not need to calculate the employees per month and then average it. Simply summing the number of months employed within the requested range and then dividing it by 6 returns the same result. ``` SELECT -- approximate monthly average SUM(datediff(month, start_dt, end_dt)+1) / 6.00, -- exact daily average SUM(datediff(day, start_dt, end_dt)+1) / cast(datediff(day, '2015-01-01', '2015-07-01') as float) FROM ( SELECT LastHireDate ,TerminationDate -- fixing start date to match the requested range ,CASE WHEN LastHireDate < '2015-01-01 00:00:00' THEN '2015-01-01 00:00:00' ELSE LastHireDate END AS start_dt -- fixing end date to match the requested range ,CASE WHEN TerminationDate <= '2015-06-30 23:59:59' THEN TerminationDate ELSE '2015-06-30 23:59:59' END AS end_dt FROM EmployeeTable AS a WHERE a.OrgCodeIdNo = '69' -- As @Turophile mentioned, your logic seems to be wrong, -- your sample result shows employees hired after june 2015 AND (TerminationDate >= '2015-01-01 00:00:00' OR (TerminationDate IS NULL AND a.employeestatus = 'Active'))) AND LastHireDate <= '2015-06-30 23:59:59' ) AS dt ``` This solution will count an employee if it was hired only a single day within a month, of course this might not be a correct average based on how you define "number of employees per month". Edit: Added a calculation like VladimirBaranov's to get a daily average: [fiddle](http://sqlfiddle.com/#!6/0796f/11)
Find a YTD Employee Average
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a table `Contacts`, parent to table `Activity`. I would like to select the latest activity for each contact, but getting more than one row. This is my query: ``` select top 30 * from Contacts o, Activity d where o.ID = d.contact and d.ID > 401061 and Last_Action is null order by d.activity_date desc ``` I think I need Top? but not sure how to implement here. Any help would be appreciated.
You can use `row_number()` to number each contact's activities. In an outer query, you can filter down to only the latest activity per contact: ``` select top 30 * from ( select row_number() over ( partition by o.ID order by d.activity_date desc) as rn , * from Contacts o join Activity d on o.ID = d.contact where d.ID > 401061 and Last_Action is null ) as SubQueryAlias where rn = 1 -- Only last activity per contact order by activity_date desc ```
Here's a way using `not exists` that will work on most dbs. You're basically selecting each activity per contact where a newer activity does not exist (therefore it's the latest activity). ``` select top 30 * from activity a join contact c on c.id = a.contact where not exists ( select 1 from activity b where b.contact = a.contact and b.activity_date > a.activity_date ) and last_action is null and a.id > 401061 order by a.activity_date desc ```
Using Top in SQL Server 2012
[ "", "sql", "sql-server", "" ]
I am trying to create an app where users can create and take part in challenges. The idea is that one user can create many challenges but each challenge can only be created by one user. Many users can enter one challenge and each entry can only be part of one challenge. Users can enter a challenge more than once My confusion lies in trying to structure the table relationships. Issue 1) User:Challenge should be one to many but do I really need a separate Entry table (which would be 1:many for Challenge:Entry. If so, then do I need to associated Entry and User? Would I in fact be able to get away with a more simple many:many User:Challenge so that a user can create and be part of a challenge , and any particular challenge can have many users. If so then does this need a join table between User and Challenge? Issue 2: If I do need a separate entry table as I suspect because a user can enter a challenge more than once, then can I keep this as a 1 challenge:many entries. In which case I will have User Many:Many Challenge 1:Many Entry but how do I then keep track of which entry belongs to which user?
I would see the schema looking something like this: ``` User ---- userId PK Challenge ---- ChallengeId PK creatingUserId FK to user.UserId Entry ---- EntryId PK ChallengeId FK to challenge.challengeId UserId FK to user.UserId ``` this would allow only a single author of a challenge, but multiple participants (`Entry`) in a challenge. `User` would be related to both `Challenge` and `Entry`. I think this is pretty close to (one of the options of) what you're describing.
I agree to @Kritner answer and mine is an addition. To track which entry belongs to which user, you can write something like: ``` select u.userName, e.EntryId from T_Users u inner join T_Entries e on e.userId = u.userId ```
How to create join where a table is associated with more than one other table
[ "", "sql", "database-design", "" ]
I have a query that groups easily. I need to get the groups that have exactly the same records to another table (relationship). I'm using ANSI-SQL under SQL Server, but I accept an answer of any implementation. For example: Table1: ``` Id | Value ---+------ 1 | 1 1 | 2 1 | 3 2 | 4 3 | 2 4 | 3 ``` Table2: ``` Value | ... ------+------ 1 | ... 2 | ... 3 | ... ``` In my example, the result is: ``` Id | ---+ 1 | ``` How imagined that it could be the code: ``` SELECT Table1.Id FROM Table1 GROUP BY Table1.Id HAVING ...? -- The group that has exactly the same elements of Table2 ``` Thanks in advance!
You can try the following: ``` select t1.Id from Table2 t2 join Table1 t1 on t1.value = t2.value group by t1.Id having count(distinct t1.value) = (select count(*) from Table2) ``` [**SQLFiddle**](http://sqlfiddle.com/#!9/03045/5)
To get the same sets use an inner join: ``` SELECT Table1.Id FROM Table1 INNER JOIN table2 ON table1.id=table2.id GROUP BY Table1.Id HAVING ...? -- ```
Get groups that are exactly equal to a table
[ "", "sql", "sql-server", "ansi-sql", "" ]
While practicing in SQL here <http://www.w3schools.com/sql/trysql.asp?filename=trysql_select_join> i made up a task for myself - need to compose a table using INNER JOIN that will contain CustomerID, EmployeeID, ContactName of a customer, and employee's last and first names WHERE Employee's first name is contained in Customer's contact name. The following doesn't give an output: ``` SELECT DISTINCT Customers.CustomerID, Customers.ContactName AS CustomerContactName, Employees.EmployeeID, Employees.FirstName AS EmployeeFirstName, Employees.LastName AS EmployeeLastName FROM Customers JOIN Employees ON Customers.ContactName LIKE "%"+Employees.FirstName+"%" ``` Though this one: ``` SELECT DISTINCT Customers.CustomerID, Customers.ContactName AS CustomerContactName, Employees.EmployeeID, Employees.FirstName AS EmployeeFirstName, Employees.LastName AS EmployeeLastName FROM Customers JOIN Employees ON Customers.ContactName LIKE "%Janet%" AND Employees.FirstName LIKE "%Janet%" ``` gives correct output for just one case. Do i miss something or it is w3schools' issue?
This works ``` SELECT DISTINCT Customers.CustomerID, Customers.ContactName AS CustomerContactName, Employees.EmployeeID, Employees.FirstName AS EmployeeFirstName, Employees.LastName AS EmployeeLastName FROM Customers INNER JOIN Employees ON Customers.ContactName LIKE "%"+Employees.FirstName+"%" ``` Results in: ``` Number of Records: 3 CustomerID | CustomerContactName | EmployeeID | EmployeeFirstName | EmployeeLastName 41 | Annette Roulet | 9 | Anne | Dodsworth 67 | Janete Limeira | 3 | Janet | Leverling 68 | Michael Holz | 6 | Michael | Suyama ``` Looks like you've got to explicitly tell it that it is an inner join. though this should not be necessary. thats w3schools for you!
The first thing to note is that the two queries are not comparable, If first name was `Janette`, and contact name was `Janet Jackson`, then the both values statisfy `LIKE '%Janet%'`, but `Janet Jackson` does not contain `Janette`, so the join condition is not met. Secondly, I think you should be using single quotes for literals, not double, e.g. '%' + Employees.FirstName + '%'. I am not sure what engine w3schools is running, but with the above change, and changing `JOIN` to `INNER JOIN` which are equivalent I got results: ``` SELECT DISTINCT Customers.CustomerID, Customers.ContactName AS CustomerContactName, Employees.EmployeeID, Employees.FirstName AS EmployeeFirstName, Employees.LastName AS EmployeeLastName FROM Customers INNER JOIN Employees ON Customers.ContactName LIKE '%'+Employees.FirstName+'%'; ``` ![enter image description here](https://i.stack.imgur.com/WQXaB.png)
LIKE in INNER JOIN doesn't work
[ "", "sql", "" ]
I want to write a TSQL query that independently checks a set of columns in a table to see which ones contain at least one non-null value. Each column's check should return T/F (1/0) accordingly. The first thing that came to mind was to use the `COUNT` aggregate function. Since `COUNT(expression)` excludes nulls from the resulting total, if the `COUNT` is > 0, there's non-null data. This seems a bit heavy-handed in that it has to count all data. I really just need to know if there's at least one non-null value in each column: ``` SELECT CAST(CASE WHEN COUNT(t.Column1) > 0 THEN 1 ELSE 0 END AS BIT) AS HasColumn1Data, CAST(CASE WHEN COUNT(t.Column2) > 0 THEN 1 ELSE 0 END AS BIT) AS HasColumn2Data, CAST(CASE WHEN COUNT(t.Column3) > 0 THEN 1 ELSE 0 END AS BIT) AS HasColumn3Data, CAST(CASE WHEN COUNT(t.Column4) > 0 THEN 1 ELSE 0 END AS BIT) AS HasColumn4Data FROM dbo.Table AS t WHERE t.TimeStamp BETWEEN @StartTimeStamp AND @EndTimeStamp ``` Any ideas that might be more optimal?
You could try something like this: ``` ;WITH cte AS ( SELECT * FROM dbo.Table WHERE TimeStamp BETWEEN @StartTimeStamp AND @EndTimeStamp ) SELECT COUNT(s1.Col1) as Col1, COUNT(s2.Col2) as Col2, COUNT(s3.Col3) as Col3, COUNT(s4.Col4) as Col4 FROM (SELECT TOP 1 Col1 FROM cte WHERE Col1 IS NOT NULL) s1 CROSS JOIN (SELECT TOP 1 Col2 FROM cte WHERE Col2 IS NOT NULL) s2 CROSS JOIN (SELECT TOP 1 Col3 FROM cte WHERE Col3 IS NOT NULL) s3 CROSS JOIN (SELECT TOP 1 Col4 FROM cte WHERE Col4 IS NOT NULL) s4 ``` This has a potential advantage IF all columns are not null. In such case, the table is only scanned till the first non null row (But doing so 4 times...). If any (or worse, all) column is null for all rows, you'll get a full scan per column. To summarize, this might be useful if your expected data does have values.
If you have indexes on the columns, the following may be faster: ``` select (case when exists (select 1 from table t where t.TimeStamp BETWEEN @StartTimeStamp and @EndTimeStamp and column1 is not null ) then 1 else 0 end) as HasColumn1Data, (case when exists (select 1 from table t where t.TimeStamp BETWEEN @StartTimeStamp and @EndTimeStamp and column2 is not null ) then 1 else 0 end) as HasColumn2Data, (case when exists (select 1 from table t where t.TimeStamp BETWEEN @StartTimeStamp and @EndTimeStamp and column3 is not null ) then 1 else 0 end) as HasColumn3Data, (case when exists (select 1 from table t where t.TimeStamp BETWEEN @StartTimeStamp and @EndTimeStamp and column4 is not null ) then 1 else 0 end) as HasColumn4Data; ``` Without indexes, this would be about 4 full-table scans (admittedly, truncated at the first non-NULL value), so it would probably be slower than a `group by`
TSQL: Check columns for at least one non-null value
[ "", "sql", "sql-server", "t-sql", "" ]
I have following code - ``` SELECT BR_CSNO AS [PARTY_KEY], 'W' AS [PHONE_TYPE_CD], ISNULL( LTRIM( RTRIM( FAC_TELNO ) ), '0' ) AS [PHONE_NUM], NULL AS [UPDATE_DT], GETDATE() [BATCH_DT] FROM BASE B ``` I get following result - ![enter image description here](https://i.stack.imgur.com/vdDqT.png) It is not replacing the `NULL`. Why? Is it because of wrong placement or the usage of `LTRIM(RTRIM())` ? EDIT Sorry for my limited knowledge of SQL. I want to trim `FAC_TELNO` and replace it with `NULL` if it is a `0`
I think this is what I wanted. Sorry for causing confusion! ``` SELECT BR_CSNO AS [PARTY_KEY], 'W' AS [PHONE_TYPE_CD], NULLIF( LTRIM( RTRIM( FAC_TELNO ) ), '0' ) AS [PHONE_NUM], NULL AS [UPDATE_DT], GETDATE() [BATCH_DT] FROM BASE B ```
SQL has two functions related to nulls. 1. IsNull, which takes two parameters, a potentially null value and, and a replacement value if the original value is null. IsNull(null, 0) = 0 2. NullIf, which takes two parameters and COMPARES THEM. If the comparison returns true, then null is returned, if the comparision is false OR null, then th first value is returned. NullIf(null, 0) is null, NullIf(0,0) is null, NullIf(1, null) = 1. It looks like you want to use `NullIf(ltrim(rtrim(fac_telno)), 0) Phone_Num`.
ISNULL not replacing with NULL
[ "", "sql", "sql-server", "sql-server-2008", "isnull", "" ]
I am working on queries on a large table in Postgres 9.3.9. It is a spatial dataset and it is spatially indexed. Say, I have need to find 3 types of objects: A, B and C. The criteria is that B and C are both within certain distance of A, say 500 meters. My query is like this: ``` select school.osm_id as school_osm_id, school.name as school_name, school.way as school_way, restaurant.osm_id as restaurant_osm_id, restaurant.name as restaurant_name, restaurant.way as restaurant_way, bar.osm_id as bar_osm_id, bar.name as bar_name, bar.way as bar_way from ( select osm_id, name, amenity, way, way_geo from planet_osm_point where amenity = 'school') as school, (select osm_id, name, amenity, way, way_geo from planet_osm_point where amenity = 'restaurant') as restaurant, (select osm_id, name, amenity, way, way_geo from planet_osm_point where amenity = 'bar') as bar where ST_DWithin(school.way_geo, restaurant.way_geo, 500, false) and ST_DWithin(school.way_geo, bar.way_geo, 500, false); ``` This query gives me what I want, but it takes really long time, like 13 seconds to execute. I'm wondering if there is another way to write the query and make it more efficient. **Query plan:** ``` Nested Loop (cost=74.43..28618.65 rows=1 width=177) (actual time=33.513..11235.212 rows=10591 loops=1) Buffers: shared hit=530967 read=8733 -> Nested Loop (cost=46.52..28586.46 rows=1 width=174) (actual time=31.998..9595.212 rows=4235 loops=1) Buffers: shared hit=389863 read=8707 -> Bitmap Heap Scan on planet_osm_point (cost=18.61..2897.83 rows=798 width=115) (actual time=7.862..150.607 rows=8811 loops=1) Recheck Cond: (amenity = 'school'::text) Buffers: shared hit=859 read=5204 -> Bitmap Index Scan on idx_planet_osm_point_amenity (cost=0.00..18.41 rows=798 width=0) (actual time=5.416..5.416 rows=8811 loops=1) Index Cond: (amenity = 'school'::text) Buffers: shared hit=3 read=24 -> Bitmap Heap Scan on planet_osm_point planet_osm_point_1 (cost=27.91..32.18 rows=1 width=115) (actual time=1.064..1.069 rows=0 loops=8811) Recheck Cond: ((way_geo && _st_expand(planet_osm_point.way_geo, 500::double precision)) AND (amenity = 'restaurant'::text)) Filter: ((planet_osm_point.way_geo && _st_expand(way_geo, 500::double precision)) AND _st_dwithin(planet_osm_point.way_geo, way_geo, 500::double precision, false)) Rows Removed by Filter: 0 Buffers: shared hit=389004 read=3503 -> BitmapAnd (cost=27.91..27.91 rows=1 width=0) (actual time=1.058..1.058 rows=0 loops=8811) Buffers: shared hit=384528 read=2841 -> Bitmap Index Scan on idx_planet_osm_point_waygeo (cost=0.00..9.05 rows=137 width=0) (actual time=0.193..0.193 rows=64 loops=8811) Index Cond: (way_geo && _st_expand(planet_osm_point.way_geo, 500::double precision)) Buffers: shared hit=146631 read=2841 -> Bitmap Index Scan on idx_planet_osm_point_amenity (cost=0.00..18.41 rows=798 width=0) (actual time=0.843..0.843 rows=6291 loops=8811) Index Cond: (amenity = 'restaurant'::text) Buffers: shared hit=237897 -> Bitmap Heap Scan on planet_osm_point planet_osm_point_2 (cost=27.91..32.18 rows=1 width=115) (actual time=0.375..0.383 rows=3 loops=4235) Recheck Cond: ((way_geo && _st_expand(planet_osm_point.way_geo, 500::double precision)) AND (amenity = 'bar'::text)) Filter: ((planet_osm_point.way_geo && _st_expand(way_geo, 500::double precision)) AND _st_dwithin(planet_osm_point.way_geo, way_geo, 500::double precision, false)) Rows Removed by Filter: 1 Buffers: shared hit=141104 read=26 -> BitmapAnd (cost=27.91..27.91 rows=1 width=0) (actual time=0.368..0.368 rows=0 loops=4235) Buffers: shared hit=127019 -> Bitmap Index Scan on idx_planet_osm_point_waygeo (cost=0.00..9.05 rows=137 width=0) (actual time=0.252..0.252 rows=363 loops=4235) Index Cond: (way_geo && _st_expand(planet_osm_point.way_geo, 500::double precision)) Buffers: shared hit=101609 -> Bitmap Index Scan on idx_planet_osm_point_amenity (cost=0.00..18.41 rows=798 width=0) (actual time=0.104..0.104 rows=779 loops=4235) Index Cond: (amenity = 'bar'::text) Buffers: shared hit=25410 Total runtime: 11238.605 ms ``` I'm only using one table at the moment with **1,372,711 rows**. It has **73 columns**: ``` Column | Type | Modifiers --------------------+----------------------+--------------------------- osm_id | bigint | access | text | addr:housename | text | addr:housenumber | text | addr:interpolation | text | admin_level | text | aerialway | text | aeroway | text | amenity | text | area | text | barrier | text | bicycle | text | brand | text | bridge | text | boundary | text | building | text | capital | text | construction | text | covered | text | culvert | text | cutting | text | denomination | text | disused | text | ele | text | embankment | text | foot | text | generator:source | text | harbour | text | highway | text | historic | text | horse | text | intermittent | text | junction | text | landuse | text | layer | text | leisure | text | lock | text | man_made | text | military | text | motorcar | text | name | text | natural | text | office | text | oneway | text | operator | text | place | text | poi | text | population | text | power | text | power_source | text | public_transport | text | railway | text | ref | text | religion | text | route | text | service | text | shop | text | sport | text | surface | text | toll | text | tourism | text | tower:type | text | tunnel | text | water | text | waterway | text | wetland | text | width | text | wood | text | z_order | integer | tags | hstore | way | geometry(Point,4326) | way_geo | geography | gid | integer | not null default nextval('... Indexes: "planet_osm_point_pkey1" PRIMARY KEY, btree (gid) "idx_planet_osm_point_amenity" btree (amenity) "idx_planet_osm_point_waygeo" gist (way_geo) "planet_osm_point_index" gist (way) "planet_osm_point_pkey" btree (osm_id) ``` There are 8811, 6291, 779 rows in amenity school, restaurant and bar respectively.
This query should go a long way (be *much* faster): ``` WITH school AS ( SELECT s.osm_id AS school_id, text 'school' AS type, s.osm_id, s.name, s.way_geo FROM planet_osm_point s , LATERAL ( SELECT 1 FROM planet_osm_point WHERE ST_DWithin(way_geo, s.way_geo, 500, false) AND amenity = 'bar' LIMIT 1 -- bar exists -- most selective first if possible ) b , LATERAL ( SELECT 1 FROM planet_osm_point WHERE ST_DWithin(way_geo, s.way_geo, 500, false) AND amenity = 'restaurant' LIMIT 1 -- restaurant exists ) r WHERE s.amenity = 'school' ) SELECT * FROM ( TABLE school -- schools UNION ALL -- bars SELECT s.school_id, 'bar', x.* FROM school s , LATERAL ( SELECT osm_id, name, way_geo FROM planet_osm_point WHERE ST_DWithin(way_geo, s.way_geo, 500, false) AND amenity = 'bar' ) x UNION ALL -- restaurants SELECT s.school_id, 'rest.', x.* FROM school s , LATERAL ( SELECT osm_id, name, way_geo FROM planet_osm_point WHERE ST_DWithin(way_geo, s.way_geo, 500, false) AND amenity = 'restaurant' ) x ) sub ORDER BY school_id, (type <> 'school'), type, osm_id; ``` This is ***not*** the same as your original query, but rather what you actually want, [as per discussion in comments](https://stackoverflow.com/questions/31466837/large-table-self-join-multiple-times-in-postgres-sql/31509453#comment50980645_31466837): > I want a list of schools that have restaurants and bars within 500 > meters and I need the coordinates of each school and its corresponding > restaurants and bars. So this query returns a list of those schools, followed by bars and restaurants nearby. Each set of rows is held together by the `osm_id` of the school in the column `school_id`. Now using `LATERAL` joins, to make use of the spatial GiST index. `TABLE school` is just shorthand for `SELECT * FROM school`: * [Is there a shortcut for SELECT \* FROM in psql?](https://stackoverflow.com/questions/30275979/is-there-a-shortcut-for-select-from-in-psql/30276023#30276023) The expression `(type <> 'school')` orders the school in each set first, because: * [SQL select query order by day and month](https://stackoverflow.com/questions/14650705/sql-select-query-order-by-day-and-month/14651597#14651597) The subquery `sub` in the final `SELECT` is only needed to order by this expression. A `UNION` query limits an attached `ORDER BY` list to only columns, no expressions. I focus on the query you presented for the purpose of this answer - *ignoring* the extended requirement to filter on any of the other 70 text columns. That's really a design flaw. The search criteria should be concentrated in *few* columns. Or you'll have to index all 70 columns, and multicolumn indexes like I am going to propose are hardly an option. Still *possible* though ... ### Index In addition to the existing: ``` "idx_planet_osm_point_waygeo" gist (way_geo) ``` If always filtering on the same column, you could create a **[multicolumn index](http://www.postgresql.org/docs/current/interactive/indexes-multicolumn.html)** covering the few columns you are interested in, so **[index-only scans](http://www.postgresql.org/docs/9.2/static/index-scanning.html)** become possible: ``` CREATE INDEX planet_osm_point_bar_idx ON planet_osm_point (amenity, name, osm_id) ``` ### Postgres 9.5 The upcoming Postgres **9.5** introduces **major improvements** that happen to address your case exactly: > * Allow queries to perform accurate distance filtering of bounding-box-indexed objects (polygons, circles) using GiST indexes > (Alexander Korotkov, Heikki Linnakangas) > > Previously, a common table expression was required to return a large > number of rows ordered by bounding-box distance, and then filtered > further with a more accurate non-bounding-box distance calculation. > * Allow GiST indexes to perform index-only scans (Anastasia Lubennikova, Heikki Linnakangas, Andreas Karlsson) That's of particular interest for you. Now you can have a *single* multicolumn (covering) GiST index: ``` CREATE INDEX reservations_range_idx ON reservations USING gist(amenity, way_geo, name, osm_id) ``` And: > * Improve bitmap index scan performance (Teodor Sigaev, Tom Lane) And: > * Add GROUP BY analysis functions `GROUPING SETS`, `CUBE` and `ROLLUP` (Andrew Gierth, Atri Sharma) Why? Because [`ROLLUP`](http://www.postgresql.org/docs/devel/static/queries-table-expressions.html#QUERIES-GROUPING-SETS) would simplify the query I suggested. Related answer: * [Grouping() equivalent in PostgreSQL?](https://dba.stackexchange.com/a/94822/3684) The first alpha version has been released on July 2, 2015. [The expected timeline for the release:](http://www.postgresql.org/about/news/1595/) > This is the alpha release of version 9.5, indicating that some changes > to features are still possible before release. The PostgreSQL Project > will release 9.5 beta 1 in August, and then periodically release > additional betas as required for testing until the final release in > late 2015. ### Basics Of course, be sure not to overlook the basics: * [Slow Query Questions page on the PostgreSQL Wiki](https://wiki.postgresql.org/wiki/Slow_Query_Questions)
The 3 sub-selects that you use are very inefficient. Write them as `LEFT JOIN` clauses and the query should be much more efficient: ``` SELECT school.osm_id AS school_osm_id, school.name AS school_name, school.way AS school_way, restaurant.osm_id AS restaurant_osm_id, restaurant.name AS restaurant_name, restaurant.way AS restaurant_way, bar.osm_id AS bar_osm_id, bar.name AS bar_name, bar.way AS bar_way FROM planet_osm_point school LEFT JOIN planet_osm_point restaurant ON restaurant.amenity = 'restaurant' AND ST_DWithin(school.way_geo, restaurant.way_geo, 500, false) LEFT JOIN planet_osm_point bar ON bar.amenity = 'bar' AND ST_DWithin(school.way_geo, bar.way_geo, 500, false) WHERE school.amenity = 'school' AND (restaurant.osm_id IS NOT NULL OR bar.osm_id IS NOT NULL); ``` But this will give too many results if you have multiple restaurants and bars per school. You can simplify the query like this: ``` SELECT school.osm_id AS school_osm_id, school.name AS school_name, school.way AS school_way, a.osm_id AS amenity_osm_id, a.amenity AS amenity_type, a.name AS amenity_name, a.way AS amenity_way, FROM planet_osm_point school JOIN planet_osm_point a ON ST_DWithin(school.way_geo, a.way_geo, 500, false) WHERE school.amenity = 'school' AND a.amenity IN ('bar', 'restaurant'); ``` This will give every bar and restaurant for each school. Schools without either restaurant or bar within 500m are not listed.
Spatial query on large table with multiple self joins performing slow
[ "", "sql", "postgresql", "postgis", "spatial", "postgresql-performance", "" ]
I'm writing a procedure and I need to compare dates against a specific date in the future. I want to default to the last day of February. So if I'm running the report in January or February, the date will be Feb 28 (or 29 if a leap year) of the same year. If I'm running the report in March or later, the date will be Feb 28 (or 29) of the following year. Is there an easier way to do that besides parsing the month and year, then creating a date by setting the month and day to March 1 minus 1 day, and the year to year+1 if the month is >= 3?
You're essentially using March 1st as the start of the year, so you could use the `add_months()` function to adjust the date forward 10 months, find the start of that (actual calendar) year, and then find the first day of March in that adjusted year, and find the day before that: ``` add_months(trunc(add_months(sysdate, 10), 'YYYY'), 2) - 1 ``` [SQL Fiddle demo](http://sqlfiddle.com/#!4/9eecb7d/1487) with a CTE to generate a large range of dates so you can see those and which date is calculated for each.
You can use `last_day` to get the last day of the month and `add_months` to add a number of months to a date. There is probably a cute mathemetical way to avoid the `case` statement as well that I'm not thinking of ``` -- Showing the results that you would get if you ran on various dates. -- In reality, you'd get rid of the CTE and just use sysdate in the query with x as ( select date '2015-01-01' dt from dual union all select date '2015-04-01' from dual union all select date '2015-02-28' from dual ) select dt, last_day( add_months(dt, (case when extract(month from dt) <= 2 then 2-extract(month from dt) else 14-extract(month from dt) end) ) ) from x ```
In Oracle, find the next specific date
[ "", "sql", "oracle", "date", "oracle11g", "" ]
As I add a View to SQL Server I get this error: ``` Incorrect syntax near the keyword 'DECLARE' ``` because I have to Declare first a tempTable `Currency` then insert some values in it from `Currecy` then retrieved it and `Join` it to `Items` table where the `Select` Statement start My Question is, does Creating a view disallows the creating of TempTable or Inserting ? Thanks a lot.
SQL Server defines CREATE VIEW as following: ``` CREATE VIEW [ schema_name . ] view_name [ (column [ ,...n ] ) ] [ WITH <view_attribute> [ ,...n ] ] AS select_statement ... ``` That means after AS you have to use a single select statement. If you like, you can use various techniques to avoid filling your temporary table. CTE is a relatively simple technique that can ACT like a temp table. So, instead of this: ``` select * into #tmp_currency from Currency ...(alter table #tmp_currency)... select * from othertable ot join #tmp_currency tc on tc.your_pk_col= ot.your_fk_col ``` ...you can use this... ``` ;with tmp_currency as ( select *, new_col = (whatever value you want to calculate) from Currency) select * from othertable ot join tmp_currency tc on tc.your_pk_col= ot.your_fk_col ``` View creation is then a trivial task: ``` create view yourviewname as with tmp_currency as ( select *, new_col = (whatever value you want to calculate) from Currency) select * from othertable ot join tmp_currency tc on tc.your_pk_col= ot.your_fk_col ```
Views do not allow creation of temporary tables. You need to use a Stored procedure
Adding a SQL View Error
[ "", "sql", "sql-server", "view", "" ]
Is there a way I can view the progress of a query? For example, SELECT queries that have to fetch large amount of data. If a table contains 100 rows, can SQL report which row is processing at the moment? That would be 1 to 100 progress. I'm not wanting to view the progress in time left or something related with time because I find it impossible or I am wrong?.
It is possible! By the way, note, that e.g. mytop - monitoring tool from CLI for MySQL and MariaDB has already implemented that function. **(1) In MariaDB** ``` SHOW FULL PROCESSLIST; +-----+------+-----------+------+---------+------+-------+-----------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +-----+------+-----------+------+---------+------+-------+-----------------------+----------+ | 126 | root | localhost | NULL | Query | 0 | NULL | SHOW FULL PROCESSLIST | 0.000 | +-----+------+-----------+------+---------+------+-------+-----------------------+----------+ ``` Progress: The total progress of the process (0-100%) (How to enable that? see [Progress Reporting](https://mariadb.com/kb/en/library/progress-reporting/) - page of MariaDB manual). This function is introduced from **MariaDB 5.3**. **(2) in MySQL** Default list of query's attributes is e.g.: ``` Id: 3123 User: stefan Host: localhost db: apollon Command: Query Time: 0 State: NULL Info: SHOW FULL PROCESSLIST ``` If you would like to monitor e.g. `ALTER TABLE` command progress in **InnoDB**, you can use: ``` SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED FROM events_stages_current; ``` Which will produce something like that: ``` +------------------------------------------------------+----------------+----------------+ | EVENT_NAME | WORK_COMPLETED | WORK_ESTIMATED | +------------------------------------------------------+----------------+----------------+ | stage/innodb/alter table (read PK and internal sort) | 280 | 1245 | +------------------------------------------------------+----------------+----------------+ ``` BUT first you have to enable this mechanism: ``` UPDATE setup_instruments SET ENABLED = 'YES' WHERE NAME LIKE 'stage/innodb/alter%'; UPDATE setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE '%stages%'; ``` and of course you will have to have already enabled **performance\_schema**. Whole procedure is described [here](https://dev.mysql.com/doc/refman/8.0/en/monitor-alter-table-performance-schema.html). Ouch! I forgot this important link: <https://dev.mysql.com/doc/refman/5.7/en/sys-schema-progress-reporting.html>, as of MySQL 5.7.9 several addons are available. How to enable in cfg file: ``` [mysqld] performance-schema-instrument='stage/%=ON' performance-schema-consumer-events-stages-current=ON performance-schema-consumer-events-stages-history=ON performance-schema-consumer-events-stages-history-long=ON ``` How to check current state of process monitoring tools: ``` SELECT * FROM setup_instruments WHERE NAME RLIKE 'stage/sql/[a-c]'; SELECT * FROM setup_instruments WHERE ENABLED='YES' AND NAME LIKE "stage/%"; SELECT * FROM setup_consumers WHERE NAME LIKE '%stages%'; ``` Notes: If you get the error ``` ERROR 1227 (42000): Access denied; you need the PROCESS privilege for this operation ``` then you are probably connected as the anonymous user. Try running "select current\_user" to see.
You can try `show processlist;` in mysql, it will give you the time query has taken as well as the current step query is in like fetching,sending etc.
SQL view progress of query
[ "", "mysql", "sql", "progress", "" ]
I use a table in PostGres for providing authorization in a SpringBoot web app that connects to a PostGres database. I want to initialize this table with a default admin entry if the table is empty. I am doing this from within schema.sql file Here is the code I have attempted so far: ``` CREATE TABLE IF NOT EXISTS public.jdbcauth ( username VARCHAR(32), password VARCHAR(32) default NULL, role VARCHAR(32) default NULL, enabled INT default 0, PRIMARY KEY(username) ); IF EXISTS (SELECT 1 FROM public.jdbcauth) THEN INSERT public.jdbcauth (username, password, role, enabled) values ('admin', 'admin', 'ADMIN', '1'); ``` This always fails with the following error: ``` Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSourceInitializer': Invocation of init method failed; nested exception is org.springframework.jdbc.datasource.init.ScriptStatementFailedException: Failed to execute SQL script statement #2 of resource URL [file:/home/balajeerc/Projects/IGVDS_WebApp/src/main/resources/schema.sql]: IF EXISTS (SELECT 1 FROM public.jdbcauth) THEN INSERT public.jdbcauth (username, password, role, enabled) values ('admin', 'admin', 'ADMIN', '1'); nested exception is org.postgresql.util.PSQLException: ERROR: syntax error at or near "IF" ``` What am I doing wrong?
This is how I finally managed to do what I want: ``` CREATE TABLE IF NOT EXISTS public.jdbcauth ( username VARCHAR(32), password VARCHAR(32) default NULL, role VARCHAR(32) default NULL, enabled INT default 0, PRIMARY KEY(username) ); INSERT INTO public.jdbcauth (username, password, role, enabled) SELECT 'admin', 'admin', 'ADMIN', 1 WHERE NOT EXISTS ( SELECT username FROM public.jdbcauth WHERE username = 'admin' ); ```
Don't you need a "THEN"? ``` IF EXISTS (SELECT 1 FROM public.jdbcauth) THEN INSERT public.jdbcauth (username, password, role, enabled) VALUES ('admin', 'admin', 'ADMIN', '1'); ```
Initializing an Empty Table using schema.sql in Spring Boot
[ "", "sql", "postgresql", "spring-boot", "" ]
I have a few tables 1. clubs -> contains information on different clubs 2. clubs-leagues -> contains information about different leagues hosted by the clubs 3. codes -> contains description of short hand codes ive used to make data entry a little easier clubs-leagues has a field that contains the id of the clubs to form an association with eachother. i have the following statement ``` SELECT club.*, sport.name AS sport, code.description AS statusText, code1.description AS ageGroup, code2.description AS gender, leagues.name AS leagueName, leagues.ageGroup AS age, leagues.division, code3.description AS leagueGender, leagues.season FROM clubs as club JOIN sports AS sport ON sport.id = club.sport JOIN codes AS code ON club.status = code.code JOIN codes AS code1 ON code1.code = club.ageGroup JOIN codes AS code2 ON code2.code = club.gender JOIN `clubs-leagues` AS leagues ON leagues.clubId = club.id JOIN codes AS code3 ON leagues.gender = code3.code WHERE club.id=(:n) ``` it works if the club has a league but if there is no leagues (club-leagues table) then it returns no results(i want it to show the results from the club table even if there is none in the clubs-leagues table) currently i have tried moving"JOIN `clubs-leagues` AS leagues ON leagues.clubId = club.id " up right after i declare clubs AS club and using a left join but i got no resuls thanks
As indicated in comments, you need to use `LEFT JOIN` from clubs to leagues to get all clubs regardless of league membership. Your SQL could look like this: ``` SELECT club.*, sport.name AS sport, code.description AS statusText, code1.description AS ageGroup, code2.description AS gender, leagues.name AS leagueName, leagues.ageGroup AS age, leagues.division, code3.description AS leagueGender, leagues.season FROM clubs as club LEFT JOIN `clubs-leagues` AS leagues ON leagues.clubId = club.id JOIN sports AS sport ON sport.id = club.sport JOIN codes AS code ON club.status = code.code JOIN codes AS code1 ON code1.code = club.ageGroup JOIN codes AS code2 ON code2.code = club.gender JOIN codes AS code3 ON leagues.gender = code3.code WHERE club.id=(:n) ``` Note that I haven't checked the rest of the SQL, it's up to you to ensure it's correct.
Use `LEFT JOIN` or `RIGHT JOIN` instead of `JOIN` for including all values from left|right tables, even if right|left tables have not matching rows.
Join table if table has results
[ "", "mysql", "sql", "join", "" ]
I found SQL function which get second as input parameter and return seconds in dd:hh:mm:ss format e-g for `93600` seconds it returns `1:02:00:00` it means 1 day 2 hours 0 minutes and 0 seconds. Function that i used is : ``` FUNCTION [dbo].[udfTimeSpanFromSeconds] ( @Seconds int ) RETURNS varchar(15) AS BEGIN DECLARE --Variable to hold our result @DHMS varchar(15) --Integers for doing the math , @Days int --Integer days , @Hours int --Integer hours , @Minutes int --Integer minutes --Strings for providing the display , @sDays varchar(5) --String days , @sHours varchar(2) --String hours , @sMinutes varchar(2) --String minutes , @sSeconds varchar(2) --String seconds --Get the values using modulos where appropriate SET @Hours = @Seconds/3600 SET @Minutes = (@Seconds % 3600) /60 SET @Seconds = (@Seconds % 3600) % 60 --If we have 24 or more hours, split the @Hours value into days and hours IF @Hours > 23 BEGIN SET @Days = @Hours/24 SET @Hours = (@Hours % 24) END ELSE BEGIN SET @Days = 0 END --Now render the whole thing as string values for display SET @sDays = convert(varchar, @Days) SET @sHours = RIGHT('0' + convert(varchar, @Hours), 2) SET @sMinutes = RIGHT('0' + convert(varchar, @Minutes), 2) SET @sSeconds = RIGHT('0' + convert(varchar, @Seconds), 2) --Concatenate, concatenate, concatenate SET @DHMS = @sDays + ':' + @sHours + ':' + @sMinutes + ':' + @sSeconds RETURN @DHMS END ``` and select command that will retrieve output is ``` select dbo.udfTimeSpanFromSeconds('93600' ) ``` it shows me result as: ![enter image description here](https://i.stack.imgur.com/gLAJL.jpg) Now i need this output in `hh:mm:ss` format e-g for current example `26:00:00` it means 26 hours 0 minutes and 0 seconds. I am using SQL server 2008. Thanks in advance.
You can do this with math ``` DECLARE @sec INT = 93600 SELECT CONVERT(VARCHAR(10), (@sec / 3600)) + ':' + RIGHT('0' + CONVERT(VARCHAR(2), ((@sec % 3600) / 60)), 2) + ':' + RIGHT('0' + CONVERT(VARCHAR(2), (@sec % 60)), 2) ``` --- Written as a function: ``` CREATE FUNCTION udfTimeSpanFromSeconds( @sec INT ) RETURNS VARCHAR(15) AS BEGIN RETURN CONVERT(VARCHAR(10), (@sec / 3600)) + ':' + RIGHT('0' + CONVERT(VARCHAR(2), ((@sec % 3600) / 60)), 2) + ':' + RIGHT('0' + CONVERT(VARCHAR(2), (@sec % 60)), 2) END ``` Sample call: ``` SELECT dbo.udfTimeSpanFromSeconds(360000) ``` RESULT: ``` 100:00:00 ```
If you want you function to return hh:mm:ss then it needs to be as below. This does however limit the total time to be less than 100 hours. You can fix this by increasing the length of the Hours String by changing the right clause and increasing the length of the string returned, as I have now done to illustrate. (Normally once you have summed your time you usually divide the total by 3600.00 to produce a decimal value for use in further calculations, for example if you are paying by the hour) ``` FUNCTION [dbo].[udfTimeSpanFromSeconds] ( @Seconds int ) RETURNS varchar(10) AS BEGIN DECLARE --Variable to hold our result @HMS varchar(15) --Integers for doing the math , @Hours int --Integer hours , @Minutes int --Integer minutes --Strings for providing the display , @sHours varchar(2) --String hours , @sMinutes varchar(2) --String minutes , @sSeconds varchar(2) --String seconds --Get the values using modulos where appropriate SET @Hours = @Seconds/3600 SET @Minutes = (@Seconds % 3600) /60 SET @Seconds = (@Seconds % 3600) % 60 --Now render the whole thing as string values for display SET @sHours = RIGHT('0' + convert(varchar(5), @Hours), 3) SET @sMinutes = RIGHT('0' + convert(varchar(3), @Minutes), 2) SET @sSeconds = RIGHT('0' + convert(varchar(3), @Seconds), 2) --Concatenate, concatenate, concatenate SET @HMS = @sHours + ':' + @sMinutes + ':' + @sSeconds RETURN @HMS END ```
How to format time from dd:hh:mm:ss to only hh:mm:ss in SQL server?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
This seems like it would be a question that has been answered a million times before, but searching for a bit only brought up replacing single quotes with double quotes. The problem is that I have a file with a huge number of single quotes. Is there any way to insert a string as is, without the single quotes throwing an error. Any wrapper? This is for pure SQL, I don't need to save this string from the backend or anything. I need a SQL command. Edit: Just to clarify - the string I am trying to post is a document, in html format, with a large number of both double and single quotes, escaping each one individually would take enough time to warrant saving data in another format, which I would rather not do.
If you have a file accessible to the SQL Server, you can read the contents in to a variable with `OPENROWSET`. For example, to read a Unicode text file at `C:\drop\demo.html`: ``` DECLARE @DocumentText nvarchar(MAX); SELECT @DocumentText = BulkColumn FROM OPENROWSET(BULK 'C:\drop\demo.html', SINGLE_NCLOB) file; INSERT INTO Files (Column) VALUES (@DocumentText); ```
Can you replace the single quotes with two single quotes? That is how SQL Server escapes the single quote in a string. ``` declare @test_string nvarchar(max) = 'string with a single '' quote in it'; print @test_string string with a single ' quote in it ```
How to Post a SQL String With Quotes?
[ "", "sql", "t-sql", "" ]
I have query like this: ``` select empid,transDateTime from Trans_Details where empId='16510' and transDateTime >='2015-05-07 00:00:00.000' and transDateTime< ='2015-05-07 23:59:59.000' ``` I'm getting output like this: ``` empid transDateTime --------------- ----------------------- 16510 2015-05-07 08:51:56.000 ``` I have same query without time, but that one doesn't return any result: ``` select empid, transDateTime from Trans_Details Td where td.empId='16510' and Td.transDateTime='2015-05-07' ``` What's the issue? This time I want to get same result.
This is because `transDateTime` is of `DATETIME` type. On your `WHERE` clause, `2015-05-07` is converted into `DATETIME` and thus results in `2015-05-07 00:00:00.000`. You can use this instead: ``` WHERE CAST(transDateTime AS DATE) = '20150507' ``` Note that you should use `YYYYMMDD` format for date literals. --- Another method way so that you avoid using a function in the left-side of your `WHERE` clause is: ``` WHERE transDateTime >= CAST('20150507' AS DATETIME) AND transDateTime < DATEADD(DAY, 1, CAST('20150507' AS DATETIME)) ```
Try this ``` select empid,transDateTime from Trans_Details Td where td.empId='16510' and convert(date,Td.transDateTime)='2015-05-07' ```
While filtering date time to date am not getting any result
[ "", "sql", "sql-server", "" ]
I am trying to write a query that would only generate one row per `employerid` instead of generating multiple rows per employer and only filling one of the columns at a time. I am getting the below with my query: ![enter image description here](https://i.stack.imgur.com/Zh6Bm.png) The Query that I am using is this: ``` SELECT Employer.employerid, CASE WHEN Service.xxserviceid = '1' THEN 'Number 1' ELSE 'NULL' END AS Service1, CASE WHEN Service.xxserviceid = '2' THEN 'Number 2' ELSE 'NULL' END AS Service2, CASE WHEN Service.xxserviceid = '3' THEN 'Number 2' ELSE 'NULL' END AS Service3 FROM Employer INNER JOIN Service ON Service.employerid = Employer.employerid; ``` So i want the Columns `Service1`, `Service2`, `Service3` to be filled in one line per employer rather than multiple lines per employer.
You can use `GROUP BY` + `MAX` to reduce the results to one line per employee: ``` SELECT Employer.employerid, max(CASE WHEN Service.xxserviceid = '1' THEN 'Number 1' ELSE 'NULL' END) AS Service1, max(CASE WHEN Service.xxserviceid = '2' THEN 'Number 2' ELSE 'NULL' END) AS Service2, max(CASE WHEN Service.xxserviceid = '3' THEN 'Number 3' ELSE 'NULL' END) AS Service3 FROM Employer INNER JOIN Service ON Service.employerid = Employer.employerid GROUP BY Employer.employerid; ``` If you think about it, though, you don't even need the join with the `Employer` table, because all you use from it is the `employerid` column, which is already present in `Service`. So, the query can be reduced to: ``` SELECT employerid, max(CASE WHEN xxserviceid = '1' THEN 'Number 1' ELSE 'NULL' END) AS Service1, max(CASE WHEN xxserviceid = '2' THEN 'Number 2' ELSE 'NULL' END) AS Service2, max(CASE WHEN xxserviceid = '3' THEN 'Number 3' ELSE 'NULL' END) AS Service3 FROM Service GROUP BY employerid; ``` SQLFiddle: <http://www.sqlfiddle.com/#!6/004a9/2>
``` SELECT Employer.employerid, MAX(CASE WHEN Service.xxserviceid = '1' THEN 'Number 1' ELSE 'NULL' END) AS Service1, MAX(CASE WHEN Service.xxserviceid = '2' THEN 'Number 2' ELSE 'NULL' END) AS Service2, MAX(CASE WHEN Service.xxserviceid = '3' THEN 'Number 2' ELSE 'NULL' END) AS Service3 FROM Employer INNER JOIN Service ON Service.employerid = Employer.employerid; GROUP BY Employer.employerID ```
SQL Server Case Statements
[ "", "sql", "sql-server-2014", "" ]
How do i update table structured like this: ``` id[pkey] | parent_id | position 1 1 2 1 3 1 4 1 5 1 6 2 7 2 8 2 9 2 10 3 11 3 12 3 ...and so on ``` to achieve this result: ``` id[pkey] | parent_id | position 1 1 1 2 1 2 3 1 3 4 1 4 5 1 5 6 2 1 7 2 2 8 2 3 9 2 4 10 3 1 11 3 2 12 3 3 ...and so on ``` I was thinking about somehow mixing ``` SELECT DISTINCT parent_id FROM cats AS t; ``` with ``` CREATE SEQUENCE dpos; UPDATE cats t1 SET position = nextval('dpos') WHERE t.parent_id = t1.parent_id; DROP SEQUENCE dpos; ``` although im not really experienced with postgres, and not sure how to use some kind of FOREACH. I appreciate any help
You can get the incremental number using `row_number()`. The question is how to assign it to a particular row. Here is one method using a `join`: ``` update cats set position = c2.newpos from (select c2.*, c2.ctid as c_ctid, row_number() over (partition by c2.parent_id order by NULL) as seqnum from cats c2 ) c2 where cats.parent_id = c2.parent_id and cats.ctid = c2.c_ctid; ```
Try this: ``` UPDATE table_name set table_name.dataID = v_table_name.rn FROM ( SELECT row_number() over (partition by your_primaryKey order by your_primaryKey) AS rn, id FROM table_name ) AS v_table_name WHERE v_table_name.your_primaryKey = v_table_name.your_primaryKey; ```
Updating column based on another column's value
[ "", "sql", "postgresql", "" ]
I have a database table populated with 10k rows of data. A great deal of those rows have null values in specific columns. How do I replace this null value with for instance the empty string for a string column? I'm looking for the *easiest* way to do this since this operation will only need to be performed *once*. **What I have tried:** ``` UPDATE tablename set mycolumn='' where mycolumn is null; ``` This gives me the following error: > Error Code: 1175. You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column To disable safe mode, toggle the option in Preferences -> SQL Editor and reconnect. I am not the administrator of the database, so I cannot disable safe mode. ``` UPDATE tablename set mycolumn='' where Id in (SELECT Id FROM tablename where mycolumn is null); ``` This gives me the following error: > Error Code: 1093. You can't specify target table 'tablename' for update in FROM clause. --- **Note:** In the examples above I have substituted the real tablename and column-name with placeholders.
You could try ``` UPDATE tablename set mycolumn = '' where Id IN (select Id from (select Id from tablename where mycolumn IS NULL) as x) ``` But why do you want to replace `NULL` values with empty strings at all? If you could disable safe mode [this](https://stackoverflow.com/questions/11448068/mysql-error-code-1175-during-update-in-mysql-workbench) would be a duplicate.
You can alter the table and set NOT NULL for the columns. And you can also simply do the following: > To disable safe mode, toggle the option in Preferences -> SQL Editor and reconnect.
How do I remove null from table columns?
[ "", "mysql", "sql", "null", "" ]
When I design a temporary table as follows, the manager growls at the name *#Temp* and marks it with red highlight. ``` drop table #Temp select * into #Temp from Donkeys ``` When I hover over the highlight, the reason is - as expected - that the name isn't recognized. > Cannot drop the table '#Temp', because it does not exist or you do not have permissions. Now, I'm not a SQL developer - I come from C# and I'm spoiled by intellisense, Resharper and what not, so I dislike when something is highlighted (even although it works). I installed Management Studio 11.x just to get the intellisense working and I want to get my money worth, if possible. The question is - can I do something about the highlight (purely visually, because the functionality is - as pointed out earlier - as it's supposed to)? Please note that the question is **not** about why it happens or if it's a problem. I do understand perfectly well why and I'm declaring it to be a problem (yeah, I admit it's not the biggest issue but it's big enough for me to actually invest time asking). Also, I'm human (i.e. lazy-ish) so a simple solution will do. :)
It can be achieved by the combination `ctrl``shift``R`. You are receiving this issue because #temp does not exist yet. You either need to check for it's existence like so: ``` if OBJECT_ID('tempdb..#temp') is not null begin drop table #temp end select * into #Temp from Donkeys ``` or you can just drop the table after you've used it: ``` select * into #Temp from Donkeys Drop table #Temp ```
use like this way :- ``` If Object_Id('tempdb.dbo.#Temp') Is Not Null Drop Table #Temp; select * into #Temp from Donkeys ```
How to calm the IDE about not declared temporary table?
[ "", "sql", "sql-server", "ide", "" ]
I have an EmpName Table. `Select * from EmpName` ``` NameID FirstName MiddleName LastName 1 Sam NULL NULL 2 NULL Todd Tarzan 3 NULL NULL Sare 4 Ben Parker NULL 5 James Nick Nancy ``` Now I write the following query to get the fullname as ``` Select FirstName + ' ' + MiddleName + ' ' + LastName As FullName from EmpName ``` But I am getting the following result - ``` FullName NULL NULL NULL NULL James Nick Nancy ``` But I want the following result - ``` FullName Sam Todd Tarzan Sare Ben Parker James Nick Nancy ``` Is it - `String` concat with `Null` returns the `Null` ? How can I get the `FullName` whose `MiddleName` or `LastName` has the value Null
When you concatenate strings, `NULL` takes precedence. So, use `COALESCE()`. This is a pain with separators, but the following should do what you want: ``` Select ltrim(coalesce(' ' + FirstName, '') + coalesce(' ' + MiddleName, '') + coalesce(' ' + LastName) ) as FullName From EmpName; ```
when you try to concate string with **NULL** it always give null ``` Select isnull(FirstName,'') + ' ' + isnull(MiddleName,'') + ' ' + isnull(LastName,'') As FullName from EmpName ``` if you use sql server 2012 or later version ``` Select concat(FirstName,MiddleName,LastName) as FullName As FullName from EmpName ```
Sql Query to get the Full name from the table Employee Eliminating Null Value
[ "", "sql", "sql-server", "" ]
I am using SQL Server 2012. I have a table which contains a column `MyCol` which can only be the four values below: ``` MN ABB BO BN ``` When I query the data from my table I want to change these values to a numeric value instead, so say MN = 1, ABB = 2, BO = 3 & BN = 4. How do I do this using a select statement?
Simple `CASE` statement will do: ``` SELECT CASE MyCol WHEN 'MN' THEN 1 WHEN 'ABB' THEN 2 WHEN 'BO' THEN 3 WHEN 'BN' THEN 4 END FROM dbo.Yourtable ```
``` SELECT Case When MyCol = 'MN' Then 1 WHEN MyCol = 'ABB' Then 2 When MyCol = 'BO' Then 3 When MyCol = 'BN' Then 4 End as MyCol From MyTable ```
Query a column and change value to numeric value
[ "", "sql", "sql-server", "" ]
I have the below structure(`' '` refers to empty spaces): ``` name description --------------------- a yes b ' ' c ' ' d null ``` I am searching for a query that give me the rows contain empty spaces, asked for the below result . ``` name description --------------------- b ' ' c ' ' ``` this query `select * from tab1 where description =' '`; will give me only c, in my query I have many values have long spaces.
You can user [REGEXP\_LIKE](http://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions018.htm): ``` with src as (select 'a' as name,'yes' as description from dual union all select 'b',' ' from dual union all select 'c',' ' from dual union all select 'd',null from dual) select * from src where regexp_like(description,'^[ ]+$')) ``` > Edited: added regexp\_like(description,'^[ ]+$') to take into account only descriptions with spaces. If there is a description in the format ' s ', ' s' or 's ' it will not be selected.
Use TRIM function to trim the spaces. ``` select * from tab1 where TRIM(description) IS NULL; ``` I have not tested it but it should work.
how to get columns have long spaces ( multiple spaces)
[ "", "sql", "oracle", "" ]
Assuming t is a large table, and the following two queries ``` SELECT t1.value, t2.value FROM t as t1 JOIN t as t2 ON t1.id = t2.id WHERE t1.key = '123' ``` and ``` SELECT t1.value, t2.value FROM t as t1 JOIN t as t2 JOIN t as t3 ON t1.id = t2.id WHERE t1.key = '123' ``` the second one having a JOIN with a table that is not used in the SELECT. The second query executes much slower. I expected that MySQL would figure out that the third JOIN is not used and will just ignore it. But it does not?
Your second query doesn't have an `ON` clause for the second join: ``` SELECT t1.value, t2.value FROM t as t1 JOIN t as t2 JOIN t as t3 ON t1.id = t2.id WHERE t1.key = '123'; ``` This means that every matching record in t1 will be joined onto every record in t2. This is, perhaps, what you meant: ``` SELECT t1.value, t2.value FROM t as t1 JOIN t as t2 ON t1.id = t2.id JOIN t as t3 ON t1.id = t3.id WHERE t1.key = '123'; ``` This will perform much more reasonably because it isn't creating a huge number of results. If you intended to do a full join onto t3: ``` SELECT t1.value, t2.value FROM t as t1 JOIN t as t2 ON t1.id = t2.id JOIN t as t3 WHERE t1.key = '123'; ``` Then this will be slower because, even though you are not SELECTing a field from t3 it does change the output because it produces extra rows. See here for examples <http://sqlfiddle.com/#!9/e86c9/3>
It is not that the MySQL optimizer isn't smart enough to remove the unused query, it is just that you are using the wrong syntax here. As the documentation states, your query will be performed as: ``` JOIN t as t2 JOIN t as t3 --> t2 CROSS JOIN t3 ``` The syntax you are using isn't standard SQL and cannot be used in any SQL standard compliant database. Take a look at the specific MySQL JOIN documentation [here](https://dev.mysql.com/doc/refman/5.0/en/join.html) .
MySQL: join with an unused table increases execution time?
[ "", "mysql", "sql", "" ]
I have the sql using Oracle SQL Developer below that works, but I am concerned about performance over a larger live database. Please excuse all the join tables, this is showing the connections to the \_aud table and revision\_table that keeps the audit history. ``` select cust.forename, cust.surname from customer cust join section_details sd on cust.section = sd.section where -- this value is substituted programatically sd.type = 5 and ( (select max(rt.timestamp) from customer cust_tmp1 join section_details sd on cust_tmp1.section = sd.section join measure msr on sd.measure = msr.id join measure_aud msr_a on msr.id = msr_a.id join revision_table rt on msr_a.rev = rt.id where cust_tmp1.id = cust.id and msr.measure = 'Last Contact Date') > (select max(rt.timestamp) from customer cust_tmp2 join section_details sd on cust_tmp2.section = sd.section join measure msr on sd.measure = msr.id join measure_aud msr_a on msr.id = msr_a.id join revision_table rt on msr_a.rev = rt.id where cust_tmp2.id = cust.id and msr.measure = 'Last Order Date') ); ``` In short I only want to retrieve the customer details if the 'Last Contact Date' is more recent than the 'Last Order Date'. I end up checking the max(timestamp) from one select is greater than the max(timestamp) from the other select. It all works fine and I get the results I am expecting. Each side of the timestamp comparison is duplicated apart from the msr.measure = 'Last Contact Date' or 'Last Order Date'. I have tried a few alternatives that never actually worked as they resulted in multi level nested sub queries and I couldn't pass in the outer cust record. Any further ideas would be greatly appreciated.
I tried the solutions posted here and they all seem to work, many thanks for the responses - I hadn't investigated the having clause before. I added any required indexes and ran explain plans on all the options, my original query had the lowest cost. So I think I will use that option moving forward.
Are you sure you need all those joins? The same query with different WHERE-conditions usually translates to a CASE: ``` select cust.forename, cust.surname from customer cust where -- this value is substituted programatically sd.type = 5 and exists ( select * from customer cust_tmp1 join section_details sd on cust_tmp1.section = sd.section join measure msr on sd.measure = msr.id join measure_aud msr_a on msr.id = msr_a.id join revision_table rt on msr_a.rev = rt.id where cust_tmp1.id = cust.id having max(case when msr.measure = 'Last Contact Date' then rt.timestamp end) > max(case when msr.measure = 'Last Order Date' then rt.timestamp end) ) ``` Or simplified removing the subqueries: ``` select cust.forename, cust.surname from customer cust join section_details sd on cust_tmp1.section = sd.section join measure msr on sd.measure = msr.id join measure_aud msr_a on msr.id = msr_a.id join revision_table rt on msr_a.rev = rt.id where -- this value is substituted programatically sd.type = 5 group by cust.forename, cust.surname having max(case when msr.measure = 'Last Contact Date' then rt.timestamp end) > max(case when msr.measure = 'Last Order Date' then rt.timestamp end) ```
Optimising SQL sub query in where clause
[ "", "sql", "oracle", "oracle11g", "query-optimization", "" ]
I have values being returned with 255 comma separated values. Is there an easy way to split those into columns without having 255 substr? ``` ROW | VAL ----------- 1 | 1.25, 3.87, 2, ... 2 | 5, 4, 3.3, .... ``` to ``` ROW | VAL | VAL | VAL ... --------------------- 1 |1.25 |3.87 | 2 ... 2 | 5 | 4 | 3.3 ... ```
You can use `regexp_substr()`: ``` select regexp_substr(val, '[^,]+', 1, 1) as val1, regexp_substr(val, '[^,]+', 1, 2) as val2, regexp_substr(val, '[^,]+', 1, 3) as val3, . . . ``` I would suggest that you generate a column of 255 numbers in Excel (or another spreadsheet), and use the spreadsheet to generate the SQL code.
Beware! The regexp\_substr expression of the format `'[^,]+'` will not return the expected value if there is a null element in the list and you want that item or one after it. Consider this example where the 4th element is NULL and I want the 5th element and thus expect the '5' to be returned: ``` SQL> select regexp_substr('1,2,3,,5,6', '[^,]+', 1, 5) from dual; R - 6 ``` Surprise! It returns the 5th NON-NULL element, not the actual 5th element! Incorrect data returned and you may not even catch it. Try this instead: ``` SQL> select regexp_substr('1,2,3,,5,6', '(.*?)(,|$)', 1, 5, NULL, 1) from dual; R - 5 ``` So, the above corrected REGEXP\_SUBSTR says to look for the 5th occurrence of 0 or more comma-delimited characters followed by a comma or the end of the line (allows for the next separator, be it a comma or the end of the line) and when found return the 1st subgroup (the data NOT including the comma or end of the line). The search match pattern `'(.*?)(,|$)'` explained: ``` ( = Start a group . = match any character * = 0 or more matches of the preceding character ? = Match 0 or 1 occurrences of the preceding pattern ) = End the 1st group ( = Start a new group (also used for logical OR) , = comma | = OR $ = End of the line ) = End the 2nd group ``` EDIT: More info added and simplified the regex. See this post for more info and a suggestion to encapsulate this in a function for easy reuse: [REGEX to select nth value from a list, allowing for nulls](https://stackoverflow.com/questions/25648653/regex-to-select-nth-value-from-a-list-allowing-for-nulls/25652018#25652018) It's the post where I discovered the format `'[^,]+'` has the problem. Unfortunately it's the regex format you will most commonly see as the answer for questions regarding how to parse a list. I shudder to think of all the incorrect data being returned by `'[^,]+'`!
Split comma separated values to columns in Oracle
[ "", "sql", "oracle", "split", "" ]
I need to calculate how many orderlines there are based on the orderlineNo being distinct. Each OrderNo is different BUT the OrderLineNo is the same for each order. i.e. 9 lines on a order then order lines number will go from 1 - 9. The same if on another order there are 3 orderlines they will go from 1 - 3 But in orderlineno there could be orderline numbers that are the same - for this I only want to count it once Example: ``` OrderNo OrderLineNo 987654 1 987654 2 987654 2 987654 3 987654 4 987654 5 987654 6 987654 7 ``` The total order lines here is 7. There are two order lines with 2 and I want them to only be counted once. Is this possible using SQL Server 2014.
You can add DISTINCT to a COUNT: ``` select OrderNo, count(distinct OrderLineNo) from tab group by OrderNo; ``` Or if OrderLineNo always starts with 1 and increases without gaps: ``` select OrderNo, max(OrderLineNo) from tab group by OrderNo; ``` Edit: Based on the comment it's not a count per OrderNo, but a global count. You need to use a Derived Table: ``` select count(*) from (select distinct OrderNo, OrderLineNo from tab ) as dt; ``` or ``` select sum(n) from (select OrderNo, max(OrderLineNo) as n from tab group by OrderNo ) as dt; ``` or ``` select sum(Dist_count) from ( select OrderNo,count(distinct OrderLineNo) as Dist_count from Table1 group by OrderNo ) as dt ```
I guess you want this: ``` SELECT OrderNo, COUNT(distinct OrderLineNo) as CntDistOrderLineNoPerOrderNo FROM Table1 GROUP BY OrderNo ``` `demo` So for every `OrderNo` the count of dictinct `OrderLineNo` which is 7 for 987654. If you instead want the sum of all distinct OrderLineNo as commented. ``` WITH CTE AS ( SELECT OrderNo, MAX(OrderLineNo) as MaxOrderLineNoPerOrderNo FROM Table1 GROUP BY OrderNo ) SELECT SUM(MaxOrderLineNoPerOrderNo) AS SumOrderLineNoPerOrderNo FROM CTE ``` `Demo`
Counting Values based on distinct values from another Column
[ "", "sql", "sql-server", "sql-server-2014", "" ]
Please take a look at the below sql fiddle. <http://sqlfiddle.com/#!6/26b91/1> [Latest SQL Fiddle is here](http://sqlfiddle.com/#!6/b1e79/2) I will try to describe the output I require first. I am expecting two rows since there are two conditions in the `PullPointDate` table. The multiple rows that are in the `PullPoint` table are there due to audit data, the audit values start from 1 to 3 for condition 1 and from 1 to 2 for condition 2. As you can see it is possible for there to be many audits of the data. Other columns in the full data set are obviously changing however I have not included them here as they are not relevant. Needless to say it is possible for their to be (n) conditions and associated pullpoint audit change data. Columns ``` CondNumber, StudyCode, PullPeriod, PullUnit 1 , SS3105 , 52 , Weeks 2 , SS3105 , 24 , Weeks ``` The other rows in the `PullPoint` table should not feature in the results as they have older i.e less `AuditNumber` than the latest `AuditNumber`. I have struggled with this for quite some time. My mind struggles to think in a set based fashion. Nested Row\_Over, Partition By etc has me breaking out in cold sweats. How do I achieve this the closest I could get was one row looking perfect yet the other value was null owing to me doing a where clause on the audit number and owing to the two conditions being different only one rows details were returned. Thank you Any more information I will try to provide.
In addition to Vladimir Baranov's answer you can also use the CTE with Row\_Number: [SQLFiddler](http://sqlfiddle.com/#!6/b1e79/19/0) ``` WITH CTE AS ( SELECT ROW_NUMBER () OVER (PARTITION BY P.BatchCode ORDER BY P.AuditNumber Desc) RN , P.StudyCode , P.BatchCode , PP.condNumber ,P.pullunit , P.PullPeriod FROM pullpoint P JOIN PullPointDate PP ON P.BatchCode = PP.BatchCode ) SELECT StudyCode , BatchCode , condNumber , pullunit , PullPeriod FROM CTE WHERE RN = 1 ```
This appears to give me what I desire... SELECT o.\* FROM pullpoint o LEFT JOIN pullpoint b ON o.studyCode = b.studyCode and o.batchCode = b.batchcode AND o.auditNumber < b.AuditNumber WHERE b.AuditNumber is NULL I am sure I will get to work tomorrow and the thing will not work though :(
Select latest row value for multiple rows
[ "", "sql", "sql-server", "sql-server-2014", "" ]
I have a problem in Oracle SQL that I'm trying to get my head around. I'll illustrate with an example. I have three tables that I am querying: ``` Employees __________________________________________ | EmployeeID | Name | | 1 | John Smith | | 2 | Douglas Hoppalot | | 3 | Harry Holiday | ... InternalCosts ________________________________ | IntID | Amount | EmployeeID | | 1 | 10 | 1 | | 2 | 20 | 2 | | 3 | 30 | 1 | ... ExternalCosts ________________________________ | ExtID | Amount | EmployeeID | | 1 | 40 | 1 | | 2 | 50 | 2 | | 3 | 60 | 1 | ... ``` What I want to achieve is a result of one row per employee, with sums of each of their internal and external costs, i.e. ``` ____________________________________________________________ | Name | InternalCostTotal | ExternalCostTotal | | John Smith | 40 | 100 | | Douglas Hoppalot | 20 | 50 | ... ``` The problem I have is that when I query both the InternalCosts and ExternalCosts tables, I get each permutation of costs, not just one per employee. When I group by employee Name and sum the amount fields the values are too high. What I have tried: ``` SELECT emp.Name, sum(int.Amount), sum(ext.Amount) FROM Employees emp, InternalCosts int, ExternalCosts ext WHERE emp.EmployeeId = int.EmployeeID and emp.EmployeeID = ext.EmployeeID GROUP BY emp.Name ``` The examples above would return: ``` ____________________________________________________________ | Name | InternalCostTotal | ExternalCostTotal | | John Smith | 80 | 200 | <- too high! | Douglas Hoppalot | 20 | 50 | ... ``` Grateful for any help/advice/thoughts!
You should use subqueries on int and ext to do the summing, and join to the subqueries. I also suggest using explicit JOINs rather than table, table, table e.g. ``` SELECT emp.Name, int.Amount AS InternalCostTotal, ext.Amount AS ExternalCostTotal FROM Employees emp JOIN ( SELECT EmployeeID, SUM(Amount) AS Amount FROM InternalCosts GROUP BY EmployeeID ) int ON emp.EmployeeId = int.EmployeeID JOIN ( SELECT EmployeeID, SUM(Amount) AS Amount FROM ExternalCosts GROUP BY EmployeeID ) ext ON emp.EmployeeId = ext.EmployeeID ```
[SQL Fiddle](http://sqlfiddle.com/#!4/827e9/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Employees ( EmployeeID, Name ) AS SELECT 1, 'John Smith' FROM DUAL UNION ALL SELECT 2, 'Douglas Hoppalot' FROM DUAL UNION ALL SELECT 3, 'Harry Holiday' FROM DUAL; CREATE TABLE InternalCosts ( IntID, Amount, EmployeeID ) AS SELECT 1, 10, 1 FROM DUAL UNION ALL SELECT 2, 20, 2 FROM DUAL UNION ALL SELECT 3, 30, 1 FROM DUAL; CREATE TABLE ExternalCosts ( ExtID, Amount, EmployeeID ) AS SELECT 1, 40, 1 FROM DUAL UNION ALL SELECT 2, 50, 2 FROM DUAL UNION ALL SELECT 3, 60, 1 FROM DUAL; ``` **Query 1**: ``` SELECT e.*, ( SELECT SUM( Amount ) FROM InternalCosts i WHERE e.EmployeeID = i.EmployeeID ) AS InternalCostTotal, ( SELECT SUM( Amount ) FROM ExternalCosts x WHERE e.EmployeeID = x.EmployeeID ) AS ExternalCostTotal FROM Employees e ``` **[Results](http://sqlfiddle.com/#!4/827e9/1/0)**: ``` | EMPLOYEEID | NAME | INTERNALCOSTTOTAL | EXTERNALCOSTTOTAL | |------------|------------------|-------------------|-------------------| | 1 | John Smith | 40 | 100 | | 2 | Douglas Hoppalot | 20 | 50 | | 3 | Harry Holiday | (null) | (null) | ``` Or (using joins): ``` WITH InternalTotals AS ( SELECT EmployeeID, SUM( Amount ) AS InternalCostTotal FROM InternalCosts GROUP BY EmployeeID ), ExternalTotals AS ( SELECT EmployeeID, SUM( Amount ) AS ExternalCostTotal FROM ExternalCosts GROUP BY EmployeeID ) SELECT e.EmployeeID, i.InternalCostTotal, x.ExternalCostTotal FROM Employees e LEFT OUTER JOIN InternalTotals i ON ( e.EmployeeID = i.EmployeeID ) LEFT OUTER JOIN ExternalTotals x ON ( e.EmployeeID = x.EmployeeID ); ```
SQL SELECT Sum values without including duplicates
[ "", "sql", "oracle", "select", "group-by", "sum", "" ]
I have two variables `@date` of type `datetime` and `@time` of type `time`. I want to add both to get another `datetime` variable. And I want to perform further calculations on it. Ex: ``` Declare @date datetime Declare @time time ``` I want something like this ``` @date = @date + @time (but not concatenation) SELECT @Startdate = DATEADD(DAY, -1, @date ) ``` Is there any way?
You can tranform your time to seconds and add them to your datetime value: ``` DECLARE @datetime DATETIME = GETDATE(), @time TIME = '01:16:24', @timeinseconds INT PRINT 'we add ' + CAST(@time AS VARCHAR(8)) + ' to ' + CONVERT(VARCHAR,@datetime,120)+ ':' SELECT @timeinseconds = DATEPART(SECOND, @time) + DATEPART(MINUTE, @time) * 60 + DATEPART(HOUR, @time) * 3600 SET @datetime = DATEADD(SECOND,@timeinseconds,@datetime) PRINT 'The result is: ' + CONVERT(VARCHAR,@datetime,120) ``` Output: ``` we add 01:16:24 to 2015-07-17 09:58:45: The result is: 2015-07-17 11:15:09 ```
The only thing you are missing is that @time needs to be cast back to a datetime before adding to @date. ``` declare @date datetime = '2022-05-26' declare @time time = '09:52:14' declare @Startdate datetime set @date = @date + convert(datetime,@time) SELECT @Startdate = DATEADD(DAY, -1, @date) ``` Produces: [![enter image description here](https://i.stack.imgur.com/gp3eh.png)](https://i.stack.imgur.com/gp3eh.png)
How to add date and time in SQL Server
[ "", "sql", "sql-server", "" ]
Hard to make a good title for this (feel free to edit), but hope it will make more sense .. Say I have the following tables: t1: ``` i1 v1 _________ 1 bob 2 NULL 3 sam 4 NULL 5 kenny 5 NULL ``` t2: ``` i2 v2 item ______________ 1 bob prod_1 2 nick prod_2 3 sam prod_3 4 jj prod_4 5 kenny prod_5 5 cartman prod_6 ``` I need to JOIN the tables on `t2.i2 = t1.i1` but only where `t2.v2` does not exist in `t1.v1`. So I'm trying to get the following results: Goal: ``` i2 v2 item __________________ 2 nick prod_2 4 jj prod_4 5 cartman prod_6 ``` This query below was my first attempt, and it's not working, so I'm trying to find a working and more efficient solution with JOINs. ``` SELECT * FROM t2 WHERE v2 NOT IN ( SELECT v1 FROM t1 WHERE t2.i2 = t1.i1 ) ```
Your query is fine, although I would use `NOT EXISTS`: ``` SELECT dm2.* FROM web.delete_me2 dm2 WHERE NOT EXISTS (SELECT 1 FROM web.delete_me1 dm1 WHERE dm2.some_int2 = dm1.some_int1 and dm2.some_var2 = dm1.some_var1 ); ``` Although you can write this as a `join`, this version should be at least as good performance wise. You want an index on `web.delete_me1(some_int1, some_var1)`.
``` SELECT DISTINCT T1.*, T2.* FROM T1 JOIN T2 ON T1.ID = T2.ID LEFT JOIN T2 AS T2NO ON T2NO.NAME = T1.NAME WHERE T2NO.NAME IS NULL ``` or try this ``` SELECT DISTINCT T2.* FROM T2 JOIN T1 ON T1.ID = T2.ID LEFT JOIN T1 AS T1NO ON T1NO.NAME = T2.NAME WHERE T1NO.NAME IS NULL ```
Query a table using conditions with another table
[ "", "sql", "sql-server", "t-sql", "join", "" ]
I try to get a query without parameters to obtain the list of dates from the current month. Something like this: > SYSDATE = 16/07/15 I want the next list: > 01/07/15 > 02/07/15 > ... > 30/07/15 > 31/07/15
Here's what I got to work: ``` SELECT TRUNC(SYSDATE, 'MM') + LEVEL - 1 AS day FROM dual CONNECT BY TRUNC(TRUNC(SYSDATE, 'MM') + LEVEL - 1, 'MM') = TRUNC(SYSDATE, 'MM') ; ``` The key in this query is `TRUNC(SYSDATE, 'MONTH')` which is the first day of the current month. We use hierarchical queries to keep adding one day to the first day of the month until the value is no longer in the current month. We use `LEVEL - 1` because `LEVEL` starts from 1 and we need it to start from zero. Here's a pseudo-query for what the above query does: ``` SELECT (start_of_month + days) AS day FROM dual WHILE MONTH_OF(start_of_month + days) = current_month ``` --- This query be a bit easier to understand: ``` SELECT * FROM ( SELECT TRUNC(SYSDATE, 'MM') + LEVEL - 1 AS day FROM dual CONNECT BY LEVEL <= 32 ) WHERE EXTRACT(MONTH FROM day) = EXTRACT(MONTH FROM SYSDATE) ```
Selects all the days for current month ``` SELECT TO_CHAR (TRUNC (SYSDATE, 'MM'), 'YYYYMMDD')+(LEVEL - 1) each_date FROM DUAL a CONNECT BY LEVEL < (TO_NUMBER (TO_CHAR (TRUNC (SYSDATE, 'MM') - 1, 'DD'))+1) ```
How do I get all dates from sysdate's month with Oracle SQL?
[ "", "sql", "oracle", "" ]
I am dealing with tables which (for the purposes of displaying here) look like the following: A ``` A_ID | Clob_Col 1 | value 2 | value 3 | null 4 | value 5 | null 6 | value 7 | value 8 | null 9 | value 10 | value ``` B ``` B_ID |A_ID | C_ID 10 | 1 | 20 11 | 2 | 20 12 | 6 | 21 13 | 7 | 22 14 | 8 | 22 15 | 9 | 23 ``` C ``` C_ID 20 21 22 23 24 25 ``` (All tables have more columns) I wish to write a query which will return values from all three tables but exclude records where certain values match those in Table C (NOT IN). My problem is to return the values from Table A which *do not* link to Table B when using a NOT IN clause on Table C. Example: ``` SELECT a.A_ID, a.Clob_Col, b.B_ID, c.C_ID from A a LEFT JOIN B b on a.A_ID=b.A_ID LEFT JOIN C c on b.C_ID=c.C_ID WHERE a.AID >= 2 AND a.AID <= 7 AND c.C_ID NOT IN (22, 23, 24) ``` The last line - `c.C_ID NOT IN (22, 23, 24)` - will leave us with the following records in the B table: b\_BID = 10,11 or 12 In turn, these link to the following records in Table A: a.ID = 1,2 and 6. The first clause - `a.AID >= 2 AND a.AID <= 7` - further restricts our final result to a.ID = 2 and 6. The final result will look like: ``` A_ID |Clob_Col |B_ID |C_ID 2 |value |11 |20 6 |value |12 |21 ``` **But** I also want to return the A table records which don't link to Table B - a.ID 3, 4 and 5 So I expect my results to be ``` A_ID |Clob_Col |B_ID |C_ID 2 |value |11 |20 3 |null |null |null 4 |value |null |null 5 |null |null |null 6 |value |12 |21 ``` Note: The reason I included the Clob value is that someone suggested selecting all records and then performing a MINUS operation on the records where `c.C_ID IN (22, 23, 24)`. This seemed reasonable however Oracle does not allow MINUS operation where Clob columns are involved.
I think this does what you're after: ``` with a as (select 1 a_id, 'val1' clob_col from dual union all select 2 a_id, 'val2' clob_col from dual union all select 3 a_id, null clob_col from dual union all select 4 a_id, 'val4' clob_col from dual union all select 5 a_id, null clob_col from dual union all select 6 a_id, 'val6' clob_col from dual union all select 7 a_id, 'val7' clob_col from dual union all select 8 a_id, null clob_col from dual union all select 9 a_id, 'val9' clob_col from dual union all select 10 a_id, 'val10' clob_col from dual), b as (select 10 b_id, 1 a_id, 20 c_id from dual union all select 11 b_id, 2 a_id, 20 c_id from dual union all select 12 b_id, 6 a_id, 21 c_id from dual union all select 13 b_id, 7 a_id, 22 c_id from dual union all select 14 b_id, 8 a_id, 22 c_id from dual union all select 15 b_id, 9 a_id, 23 c_id from dual), c as (select 20 c_id from dual union all select 21 c_id from dual union all select 22 c_id from dual union all select 23 c_id from dual union all select 24 c_id from dual union all select 25 c_id from dual) select a.a_id, a.clob_col, b.b_id, c.c_id from a left outer join b on (a.a_id = b.a_id) left outer join c on (b.c_id = c.c_id) where a.a_id between 2 and 7 and (c.c_id not in (22, 23, 24) or c.c_id is null) order by a.a_id; A_ID CLOB_COL B_ID C_ID ---------- -------- ---------- ---------- 2 val2 11 20 3 4 val4 5 6 val6 12 21 and if c_id is 27 for a_id = 6 in the b table: A_ID CLOB_COL B_ID C_ID ---------- -------- ---------- ---------- 2 val2 11 20 3 4 val4 5 6 val6 12 ``` You have to take account of the fact that c\_id could be null, as well as not being in the set of values being excluded. ETA: Thanks to Ponder Stibbons' suggestion in the comments, if you didn't want the row to be displayed where a.a\_id = b.a\_id matches but there isn't a match on b.c\_id = c.c\_id, then changing the `or c.c_id is null` to `or b.c_id is null` removes that row: ``` with a as (select 1 a_id, 'val1' clob_col from dual union all select 2 a_id, 'val2' clob_col from dual union all select 3 a_id, null clob_col from dual union all select 4 a_id, 'val4' clob_col from dual union all select 5 a_id, null clob_col from dual union all select 6 a_id, 'val6' clob_col from dual union all select 7 a_id, 'val7' clob_col from dual union all select 8 a_id, null clob_col from dual union all select 9 a_id, 'val9' clob_col from dual union all select 10 a_id, 'val10' clob_col from dual), b as (select 10 b_id, 1 a_id, 20 c_id from dual union all select 11 b_id, 2 a_id, 20 c_id from dual union all select 12 b_id, 6 a_id, 27 c_id from dual union all select 13 b_id, 7 a_id, 22 c_id from dual union all select 14 b_id, 8 a_id, 22 c_id from dual union all select 15 b_id, 9 a_id, 23 c_id from dual), c as (select 20 c_id from dual union all select 21 c_id from dual union all select 22 c_id from dual union all select 23 c_id from dual union all select 24 c_id from dual union all select 25 c_id from dual) select a.a_id, a.clob_col, b.b_id, c.c_id from a left outer join b on (a.a_id = b.a_id) left outer join c on (b.c_id = c.c_id) where a.a_id between 2 and 7 and (c.c_id not in (22, 23, 24) or b.c_id is null) order by a.a_id; ```
I think you forgot to use "on" clause for join. You can try this : ``` SELECT a.A_ID, a.Clob_Col, b.B_ID, c.C_ID from A a LEFT JOIN B b on a.A_ID=b.A_ID LEFT JOIN C c on b.C_ID=c.C_ID WHERE a.A_ID between 2 and 7 AND c.C_ID NOT IN (22, 23, 24) ``` Hope it will work.
Oracle sql query
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
A database table can only have one primary key not two or more .. why is that so?
The major reason is because that is the definition of the primary key. A table can have multiple unique keys that identify each row, but only one primary key. In databases such as MySQL, the primary key is also a clustered index. That provides a more direct reason. The data is sorted on the pages according to the clustered index. A table can only have one sort order.
A (relational) table's "superkeys" are the sets of columns for which each row has a subrow unique in the table. (Note that every superset of a superkey is a superkey too.) (What unadorned SQL `KEY` declares, and supersets of those.) A superkey that contains no smaller superkey is a "candidate key". Normalization and other relational theory cares about candidate keys and does not care about primary keys. As far as the meanings of queries, updates and constraints go, there is no need or basis for choosing one candidate key and calling it "primary" (and the others "alternate"). It's just a tradition carried over from pre-relational systems from the early days of the relational model when it wasn't understood to be unnecessary. It isn't necessary for purposes of indexing either (which has to with performance, another important observable of expressions). Then, because there was a tradition of having primary keys, other things (like automatic indexing) got attached to them. But those things didn't need to be attached to primary keys, and primary keys are not necessary for those other things. SQL only lets you declare one `PRIMARY KEY`, because there's only "supposed" to be one primary key, but that doesn't mean there's a good reason to declare any outside of the attached functionality. Anyway, SQL `PRIMARY KEY` actually means `UNIQUE NOT NULL`, ie superkey, not candidate key, so only if no `UNIQUE NOT NULL` is declared on a proper subset of a `PRIMARY KEY`'s columns is it declaring a primary key. So the fact that SQL `PRIMARY KEY`s aren't necessarily primary keys shows how empty that claimed need for primary keys is. (And SQL `FOREIGN KEY`s aren't foreign keys, because they don't reference any but only candidate keys (as they should), or even any but only primary keys, or even any but only `PRIMARY KEY`s, they reference any but only superkeys. So again, such claims for the necessity of primary keys are empty.) Most SQL DBMSs automatically and specially index `PRIMARY KEY`s. But that's just a certain way of exposing to the user certain ways of implementing. It is sometimes claimed that having a single way of referring to core business entities justifies having base table primary keys. However, any superkey of any table expression, ie any superset of any candidate key of one, identifies everything that any contained superkey does (including a primary key). So even if an entity's primary key columns are absent, a query can still have columns identifying it. Moreover, any superkey of any table expression identifies some entity whether or not it is identified in some base table (let alone by a primary key). Moreover, even if columns are projected/`SELECT`ed away by a query, the meaning of its rows is still in terms of the meaning of the tables that held those columns. So again a query, update or constraint can involve a core business entity without its designated primary key columns being present. And it can involve derived entities without associated base primary key columns. So claims that primary keys are needed or fundamental for or to uniquely identifying are unfounded.
Why can a database table have only one primary key?
[ "", "mysql", "sql", "database", "oracle", "" ]
I got a value in a field called it (EmployeeDetailKey - varchar(10)) with sequential values such as > 00001, 00002, 00003.... It is in a table Employeedetail. When ever a new Employee detail has to be inserted I have to get the max(EmployeeDetailKey) and increment by 1 and store it back. If I have 10 employeedetail records that need to be inserted then the same procedure has to follow. > If the max(EmployeeDetailKey) = 00003 then after inserting 10 records > it has to be 00013. Later on after inserting let us say 100 records it has to > be 00113. How can I do it in the form of MS-SQL statement. **Please note the column cannot be identity type.**
If you prefer a solution without changing the table structure, then: 1. Cast your zero-padded string value to int. This is easy in SQl server, as it will easily convert such strings to numbers: `SELECT CAST('00003' AS int)` This will return integer value of 3. 2. Find MAX() Just perform `MAX()` on column you've just converted to string, like... `SELECT MAX(CAST(mycolumn AS int)) FROM mytable` Actually, you don't have to do a conversion, as SQL server will sort the values correctly in original string representation. 3. Increment This is easy, since you now have the integer value, so... `SELECT MAX(CAST(mycolumn AS int)) + 1 FROM mytable` 4. Convert it back to zero-padded string SQL Server 2008 is a bit tricky to tame here, since left-padding is not his speciality. However, starting from in SQL Server 2012, there is a FORMAT function available, so, you can use... `SELECT FORMAT(MAX(CAST(mycolumn AS int)) + 1, '00000') FROM mytable` If you have only SQL Server 2005 or 2008 available, you can use `REPLICATE()` combined with `LEN()` to get what you need (disclaimer: UGLY CODE): `SELECT REPLICATE('0', 5 - LEN(MAX(CAST(mycolumn AS int)) + 1)) + CAST((MAX(CAST(mycolumn AS int)) + 1) AS nvarchar(5)) FROM mytable` **EDIT** As Luaan hinted, you can use another padding option (shorter and more readable code): `SELECT RIGHT('00000' + CAST(MAX(CAST(mycolumn AS int) + 1) as nvarchar(5)), 5) FROM mytable`
Just add an identity column to your table. I would suggest something like: ``` IntEmployeeDetailKey int not null identity(1, 1) primary key, . . . ``` Then add a computed column: ``` EmployeeDetailKey as (right(('00000' + cast(IntEmployeeDetailKey as varchar(10)), 5) ``` Then SQL Server will do the incrementing automatically. And you can get the value out as a zero-padded string.
Reg: Auto incrementing a value in SQL Table
[ "", "sql", "sql-server", "auto-increment", "" ]
I have searched a lot, but most of solutions are for concatenation option and not what I really want. I have a table called `X` (in a Postgres database): ``` anm_id anm_category anm_sales 1 a_dog 100 2 b_dog 50 3 c_dog 60 4 a_cat 70 5 b_cat 80 6 c_cat 40 ``` I want to get total sales by grouping 'a\_dog', 'b\_dog', 'c\_dog' as dogs and 'a\_cat', 'b\_cat', 'c\_cat' as cats. I cannot change the data in the table as it is an external data base from which I am supposed to get information only. How to do this using an SQL query? It does not need to be specific to Postgres.
Use `case` statement to group the animals of same categories together ``` SELECT CASE WHEN anm_category LIKE '%dog' THEN 'Dogs' WHEN anm_category LIKE '%cat' THEN 'cats' ELSE 'Others' END AS Animals_category, Sum(anm_sales) AS total_sales FROM yourtables GROUP BY CASE WHEN anm_category LIKE '%dog' THEN 'Dogs' WHEN anm_category LIKE '%cat' THEN 'cats' ELSE 'Others' END ``` Also this query should work with most of the databases.
By using **PostgreSQL's** [**split\_part()**](http://www.postgresql.org/docs/current/interactive/functions-string.html) ``` select animal||'s' animal_cat,count(*) total_sales,sum(anm_sales) sales_sum from( select split_part(anm_cat,'_',2) animal,anm_sales from x )t group by animal ``` [**sqlfiddle**](http://sqlfiddle.com/#!15/fa516/8/0) By creating [**split\_str()**](http://blog.fedecarg.com/2009/02/22/mysql-split-string-function/) in **MySQL** ``` select animal||'s' animal_cat,count(*) total_sales,sum(anm_sales) sales_sum from( select split_str(anm_cat,'_',2) animal,anm_sales from x )t group by animal ``` [**sqlfiddle**](http://sqlfiddle.com/#!2/be33ab/6/0)
Group rows with similar strings
[ "", "sql", "postgresql", "aggregate", "" ]
I'm having a difficult time writing a query for a personal project. I have some data for a housing community that lists all the historical statuses of each unit. Each status, uniquely defined by the column "HMY", represents a period of time that a resident stayed in the unit. You can see in dtStart when a resident began living in a unit and dtEnd when a resident left the unit. If dtEnd is NULL it means that the resident is currently living there. Since these are historical statuses, there are multiple rows for each unit. I'm hoping to write a query that lists all of the most recent units. Here is a snapshot of what my current table looks like: ![Table: [Unit_Status]](https://i.stack.imgur.com/xGgwd.png) In pseudo-code I'd like to group by units that have the most recent record (so the highest value of hMy or even when dtEnd is NULL). Is there anyone with the wisdom to help me out with this? Thank you!!
``` WITH t1 (property, hunit) AS ( SELECT DISTINCT property, hunit FROM table ) SELECT t1.property, t1.hunit, highest.hmy, highest.dtstart, highest.dtend FROM t1 CROSS APPLY ( SELECT TOP(1) hmy, dtstart, dtend FROM table t WHERE t.property = t1.property AND t.hunit = t1.hunit ORDER BY hmy DESC ) highest ``` This sql first gets a distinct list of all properties and units (the WITH clause). Then it uses CROSS Apply to join the record with the largest hmy back onto the property and hunit.
You could use ROW\_NUMBER() to partition by hUnit and order by hMy in descending order. Then just select the first row, which contains the highest hMy, from each partition. ``` SELECT hMy,Property,hUnit,dtStart,dtEnd FROM ( SELECT ROW_NUMBER() OVER(PARTITION BY hUnit ORDER BY hMy DESC) RowID,hMy,Property,hUnit,dtStart,dtEnd FROM MyTable ) a WHERE RowID = 1 ```
Group by unit and show the unit that has the most recent date
[ "", "sql", "sql-server", "group-by", "" ]
The code below is correct as it show me ``` SELECT TOP 3 P.CategoryID,SUM(OD.Quantity) From "Order Details" AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM(OD.Quantity) DESC ``` The result is ``` CategoryID sum 1 9532 4 9149 3 7906 ``` but I encountered this error when I try to subquery it. The code below is the current problem that cause the error. The error is "Only one expression can be specified in the select list when the subquery is not introduced with EXISTS." ``` SELECT CategoryName FROM Categories WHERE CategoryID = ( SELECT TOP 3 P.CategoryID,SUM(OD.Quantity) From "Order Details" AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM(OD.Quantity) DESC ) ``` I am not sure how to solve it or change the code in any way and I did try to not to group the categoryID which was a failure as it did not even show any result. The desired result should be ``` CategoryName Beverages Dairy Products Confections ```
MySQL uses LIMIT, not TOP. This Issue is tagged with MySQL, not SQL Server, so I'm answering accordingly. ``` SELECT P.CategoryID, SUM( OD.Quantity ) FROM OrderDetails AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM( OD.Quantity ) DESC LIMIT 3; ``` In the spirit of what @Kjell suggests, MySQL doesn't allow the IN Predicate this way. Use INNER JOIN instead: ``` SELECT CategoryName FROM Categories INNER JOIN ( SELECT P.CategoryID, SUM( OD.Quantity ) FROM OrderDetails AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM( OD.Quantity ) DESC LIMIT 3 ) AS Quants ON Quants.CategoryID = Categories.CategoryID; ``` If you are using SQL Server, please update your question and either use an IN predicate or use the INNER JOIN example with TOP instead of LIMIT: ``` SELECT CategoryName FROM Categories WHERE CategoryID IN ( SELECT TOP 3 P.CategoryID FROM OrderDetails AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM( OD.Quantity ) ) ``` -- OR ``` SELECT CategoryName FROM Categories INNER JOIN ( SELECT TOP 3 P.CategoryID, SUM( OD.Quantity ) AS q FROM OrderDetails AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM( OD.Quantity ) ) AS Quants ON Quants.CategoryID = Categories.CategoryID ```
You should not select SUM(OD.Quantity) in the inner query. ``` SELECT CategoryName FROM Categories WHERE CategoryID IN ( SELECT TOP 3 P.CategoryID From "Order Details" AS OD INNER JOIN Products AS P ON OD.ProductID=P.ProductID GROUP BY P.CategoryID ORDER BY SUM(OD.Quantity) DESC ) ```
Subquery Executing Problems
[ "", "sql", "" ]
I have a dynamic SQL query ``` DECLARE @ItemAreaCode NVARCHAR(MAX) = 'A062', @SQLStringDropTable NVARCHAR(MAX); SET @SQLStringDropTable= 'DROP TABLE' + ' ' + '[@ItemAreaCode].[ChangedRMAllocation]' PRINT @ItemAreaCode EXEC sp_executesql @SQLStringDropTable , N'@ItemAreaCode NVARCHAR(MAX)', @ItemAreaCode; ``` But when I execute this, I get this error: > Msg 3701, Level 11, State 5, Line 1 > Cannot drop the table '@ItemAreaCode.ChangedRMAllocation', because it does not exist or you do not have permission. And the printed query is.. ``` DROP TABLE [@ItemAreaCode].[ChangedRMAllocation]; ``` What I need is: ``` DROP TABLE [A062].[ChangedRMAllocation]; ```
You probably meant to expand the variable: ``` DECLARE @ItemAreaCode sysname = 'A062', @SQLStringDropTable NVARCHAR(MAX); SET @SQLStringDropTable = 'DROP TABLE ' + quotename(@ItemAreaCode) + '.[ChangedRMAllocation]'; EXEC (@SQLStringDropTable); ```
You are setting you parameter as a string within the dynamic SQL. `@ItemAreaCode` should not be included. This should work : ``` DECLARE @ItemAreaCode NVARCHAR(MAX) = 'A062', @SQLStringDropTable NVARCHAR(MAX); SET @SQLStringDropTable= 'DROP TABLE' + ' [' + @ItemAreaCode + '].[ChangedRMAllocation]' PRINT @ItemAreaCode EXEC (@SQLStringDropTable); ``` Hope this helps
SQL Server Dynamic SQL Execution
[ "", "sql", "sql-server", "dynamic-sql", "" ]
I am trying to get the data associated with the most recent `curve_date` corresponding to each `tenor_years` value and am using the query below to do this. However, I am not getting the data as I would like it to be. ``` SELECT tenor_years, yield_pct, MAX(curve_date) AS "MostRecentDate" FROM yc_node_hist where fk_yc_update = 12 GROUP BY tenor_years, yield_pct order by tenor_years ``` `SELECT * FROM yc_node_hist where fk_yc_update = 12` gives the data below: ``` id fk_yc_update curve_date tenor_years yield_pct 353443 12 2013-07-26 1 0.1436 353444 12 2013-07-29 1 0.1389 353445 12 2013-07-30 1 0.133 ``` The data comes out as follows: ``` tenor_years yield_pct curve_date 1 0.0828 2014-05-14 1 0.0832 2014-05-19 ``` I want to get something like: ``` tenor_years yield_pct curve_date 1 0.0828 2014-05-14 2 0.3232 2015-06-17 .. 30 ``` Thank You
SQL Server offers `PARTITION`/`OVER` functionality for situations like that. ``` SELECT tenor_years, yield_pct, MostRecentDate FROM ( SELECT tenor_years, yield_pct, curve_date AS "MostRecentDate", RANK() OVER (PARTITION BY tenor_years ORDER BY curve_date DESC) N FROM yc_node_hist where fk_yc_update = 12 )M WHERE N = 1 ORDER BY tenor_years ``` This produces a fast query with a projection, avoiding the need to join back to the original. [Demo.](http://sqlfiddle.com/#!3/6d129/5)
You have to remove the yield\_pct from the group by: ``` SELECT tenor_years, MAX(curve_date) AS "MostRecentDate" FROM yc_node_hist where fk_yc_update = 12 GROUP BY tenor_years; ``` And then join back on itself: ``` SELECT a.tenor_years, a.curve_date,a.yield_pct FROM yc_node_hist a INNER JOIN ( SELECT tenor_years, MAX(curve_date) AS "MostRecentDate" FROM yc_node_hist where fk_yc_update = 12) b ON a.tenor_years=b.tenor_years AND a.curve_date=b.MostRecentDate ORDER BY tenor_years ASC; GROUP BY tenor_years; ```
Getting the most recent data for each value in another column
[ "", "sql", "sql-server", "" ]
Is there any possible way to execute something like this in T-SQL? ``` CASE @@VERSION WHEN 'SQL Server 2005' THEN Command_A ELSE Command_B END ``` This case block should pick Command\_A if the server version is 2005. If not Command\_B should get executed.
Actually the case is a "subcommand" of SELECT you could achieve what you want with something like this: ``` declare @s varchar(255) select @s = case @@VERSION when 'SQL Server 2005' THEN 'command 1' ELSE 'command 2' END exec (@s) ```
You can use this `CASE` to get the sql-server version: ``` SELECT CASE SUBSTRING(CAST(SERVERPROPERTY('productversion')AS nvarchar(128)), 1, CHARINDEX('.', CAST(SERVERPROPERTY('productversion')AS nvarchar(128))) - 1) WHEN 7 THEN 'SQL Server 7' WHEN 8 THEN 'SQL Server 2000' WHEN 9 THEN 'SQL Server 2005' WHEN 10 THEN 'SQL Server 2008/2008 R2' WHEN 11 THEN 'SQL Server 2012' WHEN 12 THEN 'SQL Server 2014' ELSE 'Unsupported' END AS DB_Version ``` Then you just need to execute dynamic sql according to the result. `Demo`
T-SQL: Execute command in case block
[ "", "sql", "sql-server", "t-sql", "" ]
In my MySQL database, I have a table with different questions in different categories. I would like to write a SQL statement that returns 3 RANDOM questions of EACH category. **Here is an example of database records:** ``` id question category 1 Question A 1 2 Question B 1 3 Question C 1 4 Question D 1 5 Question D 1 6 Question F 2 7 Question G 2 8 Question H 2 9 Question I 2 10 Question J 2 11 Question K 3 12 Question L 3 13 Question M 3 14 Question N 3 15 Question O 3 16 Question P 3 ``` **Here is output/results of 3 Random selected and shuffled from all questions of each category from the above list:** ``` 2 Question B 1 4 Question D 1 3 Question C 1 10 Question J 2 7 Question G 2 9 Question I 2 11 Question K 3 15 Question P 3 13 Question M 3 ``` I have so far played with the following statement for testing: ``` SELECT * FROM `random` ORDER BY RAND() LIMIT 0,3; ``` This return only 3 RANDOM questions from all categories. And I have afterwards looked for example at this link: [MYSQL select random of each of the categories](https://stackoverflow.com/questions/16626622/mysql-select-random-of-each-of-the-categories) And tried this: ``` (SELECT * FROM `random` WHERE category = 1 ORDER BY RAND() LIMIT 3) UNION ALL (SELECT * FROM `random` WHERE category = 2 ORDER BY RAND() LIMIT 3) UNION ALL (SELECT * FROM `random` WHERE category = 3 ORDER BY RAND() LIMIT 3) ``` But here I need to add each category manually. **My Question:** I was a wonder if it is at all possible to fetch 3 RANDOM records/rows from each category of all categories (automatically)? --- **EDIT** This is not part of the question but help. **Dummy data creator** The query code will table called `random` and created a stored procedure called `create_random` and when you run the stored procedure, it will create random dummy data inside a random table: ``` DELIMITER $$ DROP TABLE IF EXISTS `random`; DROP PROCEDURE IF EXISTS `create_random` $$ CREATE TABLE `random` ( `id` INT(11) NOT NULL AUTO_INCREMENT, `question` VARCHAR(50) NULL DEFAULT NULL, `category` VARCHAR(50) NULL DEFAULT NULL, PRIMARY KEY (`id`) ) COLLATE='latin1_swedish_ci' ENGINE=InnoDB AUTO_INCREMENT=401 ; CREATE DEFINER=`root`@`localhost` PROCEDURE `create_random`() LANGUAGE SQL NOT DETERMINISTIC CONTAINS SQL SQL SECURITY DEFINER COMMENT '' BEGIN DECLARE v_max int unsigned DEFAULT 100; DECLARE v_counter int unsigned DEFAULT 0; DECLARE cat_counter int unsigned DEFAULT 0; TRUNCATE TABLE `random`; START TRANSACTION; WHILE v_counter < v_max DO IF v_counter %10=0 THEN SET cat_counter=cat_counter+1; END IF; INSERT INTO `random` (question, category) VALUES ( CONCAT('Question', FLOOR(0 + (RAND() * 65535))), cat_counter ); SET v_counter=v_counter+1; END WHILE; COMMIT; END ``` **Note:** I tried all answers and all works fine. Gordon Linoff and pjanaway answer select RANDOM only from top 3 or bottom 3 questions, I have checked Gordon answer because he answered first, but that does not mean other answers is not good, all of them are good and it is up to users to pick the right answer or combination of answers. I love all the answers and vote them up. Drew Pierce answer newly to this question, it is more interesting right now and almost near the goal. Thanks to all.
Yes, you can do this by enumerating the rows and then fetching the top three: ``` select r.id, r.question, r.category from (select r.*, (@rn := if(@c = category, @rn + 1, if(@c := category, 1, 1) ) ) as seqnum from `random` r cross join (select @rn := 0, @c := -1) params order by category, rand() ) r where seqnum <= 3; ```
In addition to the other answer, this is also another way to do it. ``` SELECT r.* FROM random r WHERE ( SELECT COUNT(*) FROM random r1 WHERE r.category = r1.category AND r.id < r1.id ) <= 2 ORDER BY r.category ASC, RAND() ```
How to get RANDOM records from each category in MySQL?
[ "", "mysql", "sql", "" ]
I have this table structure: ``` TABLE: PERSON TABLE: CAR PersonID PersonID | CarID ------ ---------|--------- 1 1 | 51 1 | 52 TABLE: PET TABLE: AGE PersonID | PetID Person | AgeID ---------|---- -------|---- 1 | 81 1 | 20 1 | 82 1 | 81 ``` One person can have many cars and pets, but only one age. I want to count the number of cars someone has, count the number of pets someone has, and list their age. This is what I have so far: ``` select car.personid as person, count(car.carid) as cars, null as pets from car where car.personid = 1 group by car.personid union all select pet.personid as person, null as cars, count(pet.petid) as pets from pet where pet.personid = 1 group by pet.personid ``` This produces: ``` Person | Cars | Pets -------|------|----- 1 | 2 | null 1 | null | 3 ``` But I'd like the results to look like this: ``` Person | Cars | Pets | Age -------|------|------|---- 1 | 2 | 3 | 20 ``` There's a fiddle here: <http://sqlfiddle.com/#!3/f584a/1/0> I'm completely stuck on how to bring the records into one row and add the age column.
[SQL Fiddle](http://sqlfiddle.com/#!3/f584a/45) **Query 1**: ``` SELECT p.PersonID, ( SELECT COUNT(1) FROM CAR c WHERE c.PersonID = p.PersonID ) AS Cars, ( SELECT COUNT(1) FROM PET t WHERE t.PersonID = p.PersonID ) AS Pets, a.AgeID AS Age FROM PERSON p LEFT OUTER JOIN AGE a ON ( p.PersonID = a.PersonID ) ``` **[Results](http://sqlfiddle.com/#!3/f584a/45/0)**: ``` | PersonID | Cars | Pets | Age | |----------|------|------|-----| | 1 | 2 | 3 | 20 | ``` **Query 2**: ``` WITH numberOfPets AS ( SELECT PersonID, COUNT(1) AS numberOfPets FROM PET GROUP BY PersonID ), numberOfCars AS ( SELECT PersonID, COUNT(1) AS numberOfCars FROM CAR GROUP BY PersonID ) SELECT p.PersonID, COALESCE( numberOfCars, 0 ) AS Cars, COALESCE( numberOfPets, 0 ) AS Pets, AgeID AS Age FROM PERSON p LEFT OUTER JOIN AGE a ON ( p.PersonID = a.PersonID ) LEFT OUTER JOIN numberOfPets t ON ( p.PersonID = t.PersonID ) LEFT OUTER JOIN numberOfCars c ON ( p.PersonID = c.PersonID ) ``` **[Results](http://sqlfiddle.com/#!3/f584a/45/1)**: ``` | PersonID | Cars | Pets | Age | |----------|------|------|-----| | 1 | 2 | 3 | 20 | ```
Should work with duplicate `Petid` or duplicate `carid` [**SqlFiddle Demo**](http://sqlfiddle.com/#!3/f584a/37) ``` WITH person_cte AS (SELECT * FROM person), car_count AS (SELECT Count(1) AS car, p.personid FROM person_cte p LEFT OUTER JOIN car c ON p.personid = c.personid GROUP BY p.personid), pet_count AS (SELECT Count(1) AS Pet, p.personid FROM person_cte p LEFT OUTER JOIN pet c ON p.personid = c.personid GROUP BY p.personid) SELECT c.personid, c.car, p.pet, a.ageid FROM car_count c INNER JOIN age a ON c.personid = a.personid INNER JOIN pet_count p ON p.personid = c.personid; ``` If there wont be any duplicates in `Carid` or `Petid` then use this [**SqlFiddle Demo**](http://sqlfiddle.com/#!3/f584a/10) ``` SELECT p.personid, a.ageid, Count(DISTINCT carid) as carid, Count(DISTINCT petid) as petid FROM person p INNER JOIN age a ON p.personid = a.personid LEFT OUTER JOIN car c ON p.personid = c.personid LEFT OUTER JOIN pet pe ON p.personid = pe.personid GROUP BY p.personid, a.ageid ```
How to use UNION with COUNT
[ "", "sql", "oracle", "" ]
I've read about this problem on a few different sites, but I still don't understand the solution. From what I understand, SQL will optimize the query in the function and sometimes the Order By clause will be ignored. How can you sort results? How can I sort results in a simple table valued function like this? ``` Create function [dbo].fTest --Input Parameters (@competitionID int) --Returns a table RETURNS @table TABLE (CompetitionID int ) as BEGIN Insert Into @table (CompetitionID) select CompetitionID from Competition order by CompetitionID desc RETURN END ``` **UPDATE** I found inserting a primary key identity field seems to help (as mentioned in the answer posted Martin Smith). Is this a good solution? ``` --Returns a table RETURNS @table TABLE ( SortID int IDENTITY(1,1) PRIMARY KEY, CompetitionID int ) ``` In reference to Martin's answer below, sorting outside of the select statement isn't that easy in my situation. My posted example is a stripped down version, but my real-life issue involves a more complicated order by case clause for custom sorting. In addition to that, I'm calling this function in an MVC controller with a LINQ query, which means that custom sorting would have to be added to the LINQ query. That's beyond my ability at this point. If adding the identity field is a safe solution, I'm happy to go with that. It's simple and easy.
The `order by` needs to be in the statement that selects from the function. ``` SELECT CompetitionId FROM [dbo].fTest() ORDER BY CompetitionId ``` This is the only way to get reliable results that are assured to [not suddenly break in the future](https://stackoverflow.com/a/11231935/73226).
This is just a bit of an ingenious workaround and only useful in some scenarios but it worked beautiful for what I needed: you can create a stored procedure that does a `select *` from the function with the desired order by clause and use that instead...
Order By In a SQL Table Valued Function
[ "", "sql", "sql-server", "" ]
I want to find a word in a string with SQL. I currently use: ``` SELECT * FROM dreams WHERE title LIKE '%lo%' ``` But I want also to find other spellings like "Lo" or "LO" and so on.. Any Ideas ?
Convert it to upper case before comparing ``` SELECT * FROM dreams WHERE upper(title) LIKE '%LO%' ```
``` SELECT * FROM dreams WHERE title LIKE '%lo%' union SELECT * FROM dreams WHERE title LIKE '%LO%' union SELECT * FROM dreams WHERE title LIKE '%Lo%'; ```
SQL find Word in String (any Spelling)
[ "", "sql", "" ]
I have a log with fingerprint timestamps as follows: ``` Usr TimeStamp ------------------------- 1 2015-07-01 08:01:00 2 2015-07-01 08:05:00 3 2015-07-01 08:07:00 1 2015-07-01 10:05:00 3 2015-07-01 11:00:00 1 2015-07-01 12:01:00 2 2015-07-01 13:03:00 2 2015-07-01 14:02:00 1 2015-07-01 16:03:00 2 2015-07-01 18:04:00 ``` And I wish an output of workers per hour (rounding to nearest hour) The theoretical output should be: ``` 7:00 0 8:00 3 9:00 3 10:00 2 11:00 1 12:00 2 13:00 1 14:00 2 15:00 2 16:00 1 17:00 1 18:00 0 19:00 0 ``` Can anyone think on how to approach this as SQL or if no other way, through TSQL? Edit: The timestamps are logins and logouts of the different users. So at 8am 3 users logged in and the same 3 are still working at 9am. One of them leaves at 10am. etc
Here is my final working code: ``` create table tsts(id int, dates datetime) insert tsts values (1 , '2015-07-01 08:01:00'), (2 , '2015-07-01 08:05:00'), (3 , '2015-07-01 08:07:00'), (1 , '2015-07-01 10:05:00'), (3 , '2015-07-01 11:00:00'), (1 , '2015-07-01 12:01:00'), (2 , '2015-07-01 13:03:00'), (2 , '2015-07-01 14:02:00'), (1 , '2015-07-01 16:03:00'), (2 , '2015-07-01 18:04:00') select horas.hora, isnull(sum(math) over(order by horas.hora rows unbounded preceding),0) as Employees from ( select 0 as hora union all select 1 as hora union all select 2 as hora union all select 3 as hora union all select 4 as hora union all select 5 as hora union all select 6 as hora union all select 7 as hora union all select 8 as hora union all select 9 as hora union all select 10 as hora union all select 11 as hora union all select 12 as hora union all select 13 as hora union all select 14 as hora union all select 15 as hora union all select 16 as hora union all select 17 as hora union all select 18 as hora union all select 19 as hora union all select 20 as hora union all select 21 as hora union all select 22 as hora union all select 23 ) as horas left outer join ( select hora, sum(math) as math from ( select id, hora, iif(rowid%2 = 1,1,-1) math from ( select row_number() over (partition by id order by id, dates) as rowid, id, datepart(hh,dateadd(mi, 30, dates)) as hora from tsts ) as Q1 ) as Q2 group by hora ) as Q3 on horas.hora = Q3.hora ``` [SQL Fiddle](http://sqlfiddle.com/#!6/a9d6f/1/0)
To start with you can use datepart to get hours for the days as following and then use group by user ``` SELECT DATEPART(HOUR, GETDATE()); ``` [**SQL Fiddle**](http://sqlfiddle.com/#!3/04e00/1) ``` SELECT Convert(varchar(5),DATEPART(HOUR, timestamp)) + ':00' as time, count(usr) as users from tbl group by DATEPART(HOUR, timestamp) ```
SQL TSQL for Workers per Hour
[ "", "sql", "sql-server", "t-sql", "sql-server-2014-express", "" ]
I have a table called `quiz` and it has 3 columns: ``` enterYear enterMonth quizMark 2013 7 9.5 2013 8 8.5 2013 9 9.75 2013 10 10 2013 11 7.75 2013 12 8.5 2014 1 5 2014 2 8.75 2014 3 10 ``` Now, I want to select the entries that were enter after Sept. of 2013 (including Sept). I tried this which was wrong: ``` select * from quiz q where q.year>=2013 and q.month>=9 ``` This omitted all the entries in 2014 since their months are less than 9, but I do want them in the result since they are after Sept. of 2013. So then, I tried this ``` select * from quiz q where Convert(date, CAST(q.year as varchar(4))+'/'+CAST(q.month as varchar(2))+'/01')>='2013/9/01' ``` which showed up the result for like a half second and then quickly disappeared by giving an error message: > Msg 241, Level 16, State 1, Line 1 > Conversion failed when converting date and/or time from character string." Can someone help with the correct code to achieve the result, very appreciated!
``` select * from quiz q where q.year>2013 or(q.year=2013 and q.month>=9) ```
``` select * from quiz q where (q.year>=2013 and q.month>=9) OR q.year>=2014 ``` This will capture all your dates from September 2013 upwards.
SQL - select data that have dates greater than a month in a year
[ "", "sql", "sql-server", "" ]
I am using Oracle SQL. Why is this insert statement throwing an expression missing error? ``` INSERT INTO T_DATA(id, object_type, object_id, measurement_id, derived_ind, no_hist_ind, display_order, stored_precision, display_precision, required_ind, mod_user, mod_dtime) VALUES(SELECT MAX(id) + 1 FROM t_data_point, 'Object', 'ObjectName', 'ID', 3, 0, 1, 2, 2, 0, 'TEST', SYSDATE); ```
Your usage can be just `INSERT.. INTO.. SELECT..` ``` INSERT INTO T_DATA(id, object_type, object_id, measurement_id, derived_ind, no_hist_ind, display_order, stored_precision, display_precision, required_ind, mod_user, mod_dtime) (SELECT MAX(id) + 1, 'Object', 'ObjectName', 'ID', 3, 0, 1, 2, 2, 0, 'TEST', SYSDATE FROM t_data_point); ``` To use the `SELECT` inside `VALUES`... embed them in brackets. ``` VALUES( (SELECT MAX(id) + 1 FROM t_data_point), ... ) ``` **EDIT:** Make sure, the expression in `INSERT` columns and `VALUES` match.
Just as an alternative to MahMaheswaran's answer. You can still achieve the query whilst using VALUES. You just needed to wrap the SELECT statement in parenthesis. ``` INSERT INTO T_DATA (Id, object_type, object_id, measurement_id, derived_ind, no_hist_ind, display_order, stored_precision, display_precision, required_ind, mod_user, mod_dtime) VALUES ( ( SELECT MAX(Id) + 1 FROM t_data_point ) , 'Object', 'ObjectName', 'ID', 3, 0, 1, 2, 2, 0, 'TEST', SYSDATE); ``` However, I would still use the first answer.
Why is this insert statement throwing an expression missing error?
[ "", "sql", "oracle", "sql-insert", "" ]
I have a database like the following: ``` id | col_1 | col_2 ------------------ 1 | a | x 2 | a | x 3 | b | x 4 | b | z 5 | c | x ``` I'm trying to get all rows that match col\_2 = x plus the frequency of col\_1, ordered by the frequency. For example, the output would be: ``` id | col_1 | col_2 | freq ------------------------- 1 | a | x | 2 2 | a | x | 2 3 | b | x | 1 5 | c | x | 1 ``` I've tried various queries, but because I'm using a GROUP BY to get the frequency, I'm unable to get the individual rows (since I want each id). For example: ``` SELECT *, COUNT(col_1) AS freq FROM mytable WHERE col_2 = x GROUP BY col_1 ORDER BY freq DESC ``` Unfortunately, this does not give me all the rows. It leaves out id = 2. Any help would be greatly appreciated! Thank you!
Your `freq` column looks like an independent, table-wide count of rows where `col_2 = 'x'`, grouped by `id`. You can get that using this query: Here is SQL FIDDLE [DEMO](http://sqlfiddle.com/#!9/7358e/4) ``` SELECT col_1, COUNT(*) AS freq FROM myTable WHERE col_2 = 'x' GROUP BY col_1 ``` Join it to a query for individual `id` values and you should get the results you're after: ``` SELECT id, col_1, col_2, col2Summary.freq FROM myTable INNER JOIN ( SELECT col_1, COUNT(*) AS freq FROM myTable WHERE col_2 = 'x' GROUP BY col_1 ) col2Summary ON myTable.col_1 = col2Summary.col_1 WHERE col_2 = 'x' ORDER BY freq DESC ```
This is @EdGibbs solution rewritten using a Scalar Subquery. MySQL creates a different plan, you should test which is more efficient ([fiddle](http://sqlfiddle.com/#!9/2dbf1/1)): ``` SELECT id, col_1, col_2, (SELECT COUNT(*) FROM myTable AS t2 WHERE t.col_1 = t2.col_1 AND col_2 = 'x') AS freq FROM myTable AS t WHERE col_2 = 'x' ORDER BY freq DESC; ``` Btw, almost every other DBMS supports Windowed Aggregate Functions and then it would be a simple: ``` COUNT(*) OVER (PARTITION BY col_1) AS freq ```
How to get all rows when using GROUP BY?
[ "", "mysql", "sql", "" ]
I have a users, questions and answers table, What I wish to do is select from questions and users table based on their which is their username and then count the number of rows in the answers table based on the relationship between the questions and answers table. Bear in mind the current state of the tables are : > Questions table has four columns (question\_id, topic\_id, username, question) > Answers table has two columns (question\_id, answer) > Users table has two columns (username, user\_mail) > The query i tried ``` SELECT questions.question_id, questions.username, questions.question, userlog.user_mail, COUNT(answers.answer) as answerCount FROM questions LEFT JOIN answers ON answers.question_id = questions.question_id, userlog WHERE questions.topic_id = '0d3fb89c012b5af12e1e0' AND userlog.username = questions.username ``` The problem with the above is that, it return only one row instead of three rows which are in the database.
There are a few issues with your query. First, the presence of `COUNT()` makes the query into an aggregate query. Without `GROUP BY`, aggregate queries can't generate more than a single row. Another issue: you've got some JOIN confusion with USERLOG. Hopefully there's only one row in USERLOG for each user, or you may end up double-counting answers. Try this query ``` SELECT questions.question_id, questions.username, questions.question, userlog.user_mail, COUNT(answers.answer) as answerCount FROM questions LEFT JOIN userlog ON questions.username = userlog.username LEFT JOIN answers ON answers.question_id = questions.question_id WHERE questions.topic_id = '0d3fb89c012b5af12e1e0' GROUP BY questions.question_id, questions.username, questions.question, userlog.user_mail ORDER BY questions.username, questions.question_id ``` That should yield the multirow result set you need.
Try this: ``` SELECT questions.question_id, questions.username, questions.question, userlog.user_mail, (Select COUNT(answers.answer) where answers.question_id = questions.question_id) as answerCount FROM questions INNER JOIN userlog ON userlog.username = questions.username WHERE questions.topic_id = '0d3fb89c012b5af12e1e0' ```
Selecting from two tables and counting a third
[ "", "mysql", "sql", "database", "mysql-workbench", "" ]
Hello I'm currently in a Intro to SQL in college. We are using Murach SQL Sever 2012 for Developers. I'm currently in Chapter 4 and I'm not understanding what a Join Condition is. I understand that is indicates how two tables should be compared, but what I can't understand is the syntax. ``` SELECT InvoiceNumber, Vendor name FROM Vendors JOIN Invoices ON Vendors.VendorID = Invoices.VendorID; ``` Why is it named `.VendorID`? Sorry if this is vague.
Implement the join condition in sql by a linq query: ``` var result =(from e in employee join v in vendor where e.EmployeeId equals v.EmployeeId select new { EmployeeName = e.employeeName, EmployeeSalary =e.employeeSalary, VendorName = v.vendorName, VendorDate =v.VendorDate, }).ToList(); return (result); ```
Join clause combines records from two or more tables in a relational database. **Example:** If you have two table called `Vendors` and `Invoices`. Now, you are looking for common data between both table on the basis of id i.e. `VendorId`. But, first of all, you need to access column of a table. So, you need to specify **which table** and **which column**. Then, it goes like `mytable.thiscolumn`. Similarly, in your case you were trying to access `VendorId` column, which exist in both tables. So, you are explicitly telling, I need `VendorId` from the `Vendors` and `Invoices`.
Join Conditions in SQL
[ "", "sql", "join", "conditional-statements", "" ]
I have a table: ``` year val ---- ---- 2013 4 2014 6 2014 2 2014 6 2015 1 2015 3 2016 7 ``` is there a way to get the sum of the previous years for every year in the table? the result should be like this: ``` 2013 0 2014 4 2015 18 2016 22 2017 29 ``` I tried something like this: ``` select year, sum(val) from (select year, val from table ?????) group by year ``` There should be an inner join somewhere maybe?
If you want old years only then use this query ``` SELECT DISTINCT year , ( SELECT SUM(val) FROM table as temp2 WHERE temp2.year < temp1.year ) as v FROM table as temp1 ``` If you want to include year too then change temp2.year < temp1.year to <= , if you want to filter by years , then use comparison = so ``` SELECT DISTINCT year , ( SELECT SUM(val) FROM table as temp2 WHERE temp2.year <= temp1.year ) as v FROM table as temp1 ``` and ``` SELECT DISTINCT year , ( SELECT SUM(val) FROM table as temp2 WHERE temp2.year = temp1.year ) as v FROM table as temp1 ``` but the latest could be easily done without subquery , just selecting year and sum(val) then group by year
Your question is a bit challenging because you want `0` for the first year: ``` select year, ( (@p := @p + sumval) - sumval) as cumeprev from (select year, sum(val) as sumval from table t group by year ) t cross join (select @p := 0) params order by year; ```
SQL sum of previous entries
[ "", "mysql", "sql", "sum", "" ]
I need to get the Supplier ID from the SQL Select statement inside Case statement. Once I put the `A.SUPPLIER_ID` to Select Statement I get an Error. How to do this? ``` Select CASE WHEN TYPE = 1 THEN ( SELECT A.name from BIZZXE_V2_SCH.SUPPLIERS A where A.SUPPLIER_ID = 30 ) ELSE ( SELECT A.name from BIZZXE_V2_SCH.STOCK_SUPPLIER A where A.SUPPLIER_ID = 31 ) END name from DUAL; ```
You can't put complete queries into a `case` statement. But this should work ``` SELECT name from BIZZXE_V2_SCH.SUPPLIERS where SUPPLIER_ID = 30 and TYPE = 1 union all SELECT name from BIZZXE_V2_SCH.STOCK_SUPPLIER where SUPPLIER_ID = 31 and TYPE <> 1 ```
You should be able to handle this in your WHERE statement, like this. ``` SELECT A.name FROM BIZZXE_V2_SCH.STOCK_SUPPLIER A WHERE (A.SUPPLIER_ID = 30 AND TYPE = 1) OR A.SUPPLIER_ID = 31 ```
Sql Case Statement Error Cannot select More Than one Column
[ "", "sql", "oracle", "case", "oracle-apex", "" ]
Untested waters,I'm trying to find a way to ease things up. Instead of having a nested select in a cursor in PL-SQL. I'm trying to make a case when in the cursor. Since the cursor can have different inparameters and I need to sort things accordingly. Is this possible? My code looks like this. It's easy to read ao\_in is the Inparam. ``` PROCEDURE theProcedure (ao_in IN VARCHAR2) CURSOR order_cur IS Case when ao_in = 'NIEC' then SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND ao_in = DECODE (ao_in, 'NIEC', ao_in, get_ehorder_ao (t1.nr)) AND t1.nr = DECODE (order_in, NULL, t1.nr, order_in) ORDER BY t1.skapad_dat ASC; when ao_in = 'DSSP' then SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND ao_in = DECODE (ao_in, 'DSSP', ao_in, get_ehorder_ao (t1.nr)) AND t1.nr = DECODE (order_in, NULL, t1.nr, order_in) ORDER BY t1.skapad_dat ASC; when ao_in = 'ALLA' SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND ao_in = DECODE (ao_in, 'ALLA', ao_in, get_ehorder_ao (t1.nr)) AND t1.nr = DECODE (order_in, NULL, t1.nr, order_in) ORDER BY t1.skapad_dat ASC; end ```
What you want could be done with the [OPEN-FOR](http://docs.oracle.com/cd/B12037_01/appdev.101/b10807/13_elems033.htm) statement, like this: ``` PROCEDURE theProcedure (ao_in IN VARCHAR2) TYPE t_cur IS REF CURSOR; order_cur t_cur; begin Case when ao_in = 'NIEC' then open order_cur for SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND ao_in = 'NIEC' AND t1.nr = DECODE (order_in, NULL, t1.nr, order_in) ORDER BY t1.skapad_dat ASC; when ao_in = 'DSSP' then open order_cur for SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND ao_in = 'DSSP' AND t1.nr = DECODE (order_in, NULL, t1.nr, order_in) ORDER BY t1.skapad_dat ASC; when ao_in = 'ALLA' open order_cur for SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND ao_in ='ALLA' AND t1.nr = DECODE (order_in, NULL, t1.nr, order_in) ORDER BY t1.skapad_dat ASC; end; end; ``` Notice that the decode evaluation for the ao\_in parameter is not necessary, since is already been evaluated in the case statement. I assume that order\_in is declared somewhere in the original code and not posted here for simplification, otherwise it should be declared.
Assuming that you have a column called "ao\_in" in your eh\_order\_t@ngext\_dblink table, I think what you're after is something like this: ``` PROCEDURE theProcedure (p_ao_in IN VARCHAR2) IS CURSOR order_cur IS SELECT t1.nr, t1.status$nr FROM eh_order_t@NGEXT_DBLINK t1 WHERE t1.status$nr IN (3, 6) AND (p_ao_in in ('NIEC', 'DSSP', 'ALLA') OR (p_ao_in not in ('NIEC', 'DSSP', 'ALLA') and ao_in = get_ehorder_ao (t1.nr))) AND t1.nr = COALESCE(order_in, t1.nr) ORDER BY t1.skapad_dat ASC; BEGIN FOR order_rec in order_cur LOOP -- do the things END LOOP; END; / ``` Alternatively, you could have two cursors, one that selects everything, and one that filters on the ao\_in column, and then call the relevant one depending on which parameter is passed in. I believe that Oracle should be able to optimise the above combined cursor based on the parameter passed in, but if you found it couldn't or wanted to make sure, splitting the cursor into two separate ones would help the optimzer. It is important that you shouldn't use the same parameter or variable name in your cursors within PL/SQL, as the optimizer could get very confused, and is more than likely going to take your `and column_name = variable_of_same_name_as_column_name` to mean `and column_name = column_name`, which is probably not what you want! ETA: If the work being done in the loop is DML, it's possible that you might be able to do away with the cursor entirely and just have a single DML statement. That would be the best scenario to use, as row-by-row processing is not the fastest thing to do in PL/SQL. Set-based processing ftw! \*{:-)
Case when in Cursor
[ "", "sql", "oracle", "plsql", "" ]
I have table called PhoneNumbers with columns Phone and Range as below ![enter image description here](https://i.stack.imgur.com/MOSbM.png) here in the phone column i have a phone numbers and in range column i have a range of values i need the phone numbers to be included.For the first phone number 9125678463 I need to include the phone numbers till the range 9125678465 ie (9125678463,9125678464,9125678465).Similarly for other phone numbers too.here is the sample destination table should look like ![enter image description here](https://i.stack.imgur.com/xNwRy.png) How can i write the sql to get this? Thanks in advance
I have a solution which goes a classic way BUT: it does not need recursions and it does not need any loops! And it works even if your range has length of 3 or 5, or whatever... first i create a table with numbers (from 1 to 1 million in this example - you can adopt this in TOP () clause): ``` SELECT TOP (1000000) n = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id])) INTO dbo.Numbers FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2 OPTION (MAXDOP 1); CREATE UNIQUE CLUSTERED INDEX idx_numbers ON dbo.Numbers(n) ; ``` if you have that table it's pretty simple: ``` ;WITH phonenumbers AS ( SELECT phone, [range], CAST(RIGHT(phone,LEN([range])) AS INT) AS number_to_increase, CAST(LEFT(phone,LEN(phone)-LEN([range])) + REPLICATE('0',LEN([range])) AS BIGINT) AS base_number FROM PhoneNumbers ) SELECT p.base_number + num.n FROM phonenumbers p INNER JOIN dbo.Numbers num ON num.n BETWEEN p.number_to_increase AND p.[range] ``` You don't have to use a CTE like here - it's just to see a bit clearer what the idea behind this approach is. Maybe this suits for you
You can use CTE like this: ``` ;WITH CTE (PhoneNumbers, [Range], i) AS ( SELECT CAST(Phone AS bigint), [Range], CAST(1 AS bigint) FROM yourTable UNION ALL SELECT CAST(PhoneNumbers + 1 AS bigint), [Range], i + 1 FROM CTE WHERE (PhoneNumbers + 1) % 10000 <= [Range] ) SELECT PhoneNumbers FROM CTE ORDER BY PhoneNumbers ```
SQL to get sequence of phone numbers
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a Data as follows: ``` Order_id Created_on Comment 1 20-07-2015 18:35 Order Placed by User 1 20-07-2015 18:45 Order Reviewed by the Agent 1 20-07-2015 18:50 Order Dispatched 2 20-07-2015 18:36 Order Placed by User ``` And I am trying to find the difference between the 1. first and second Date 2. Second and third Date for each Order. How Do i Obtain this through a SQL query?
SQL is about horizontal relations - vertical relations do not exist. To a relational database they're just 2 rows, stored somewhere on a disk, and until you apply ordering to a result set the 'first and second' are just 2 randomly picked rows. In specific cases it's possible to calculate the time difference within SQL, but rarely a good idea for performance reason, as it requires costly self-joins or subqueries. Just selecting the right data in the right order and then calculating the differences during postprocessing in C#/PHP/whatever is far more practical and faster.
I think you can use a query like this: ``` SELECT t1.Order_id, t1.Created_on, TIMEDIFF(mi, t1.Created_on, COALESCE(MIN(t2.Created_on), t1.Created_on)) AS toNextTime FROM yourTable t1 LEFT JOIN yourTable t2 ON t1.Order_id = t2.Order_id AND t1.Created_on < t2.Created_on GROUP BY t1.Order_id, t1.Created_on ```
Calculate difference in Dates for each cell with next cell
[ "", "mysql", "sql", "date", "difference", "" ]
Say I have the following rows ``` DATE 2016-04 2015-11 2009-08 ``` And I want them to appear like this, with the date always set to the first of the month ``` DATE 01-APR-2016 01-NOV-2015 01-AUG-2009 ```
You need to first convert your "month date" into a real date using `to_date()`: ``` to_date(the_column||'-01', 'yyyy-mm-dd') ``` Note that you need to add a day (`||'-01'`) in order to have a valid input for the `to_date()` function. and then you can format the resulting date as you like using `to_char()` ``` select to_char(to_date(the_column||'-01', 'yyyy-mm-dd'), 'dd-mon-yyyy') from the_table; ```
to answer your immediate question: ``` drop table junk; create table junk ( notadate varchar2(7) ); insert into junk values ( '2016-04' ); insert into junk values ( '2015-11' ); insert into junk values ( '2009-08' ); commit; alter session set nls_date_format='dd-mon-yyyy'; select to_date(notadate,'yyyy-mm') from junk; TO_DATE(NOT ----------- 01-apr-2016 01-nov-2015 01-aug-2009 ``` However, I'd probably recommend doing the following: ``` alter table junk add ( mydate date ); update junk set mydate = to_date(notadate,'yyyy-mm') / alter table junk drop ( notadate ); ``` and use that new, DATE column .. properly ... ``` select * from junk; MYDATE ----------- 01-apr-2016 01-nov-2015 01-aug-2009 select to_char(mydate, 'yyyy-mm') from junk; (if needed) TO_CHAR ------- 2016-04 2015-11 2009-08 ```
Oracle SQL | How would I change a text field to a date field with a specific format?
[ "", "sql", "oracle", "date", "" ]
When executing any one of the following commands: ``` osql -E -S ComputerName\InstanceName osql -E -S ComputerName\InstanceName -i MyScript.sql -o MyOutput.rpt osql -E -q "SELECT * FROM Northwind.dbo.Shippers" osql -E -Q "SELECT * FROM Northwind.dbo.Shippers" -o MyOutput.rpt ``` I am getting the following error: ``` [SQL Server Native Client 10.0]SQL Server Network Interfaces: Connection string is not valid [87]. [SQL Server Native Client 10.0]Login timeout expired [SQL Server Native Client 10.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. ``` However, I am able, without issue to login and run `SELECT` queries from SSMS. **How do I run queries against SQL Server 2008 using osql?**
Do you have your logged in account set up as a user in SQL Server? I usually work with specific accounts and SQL Server logins instead of Trusted Logins, and then just specify the database coordinates on the command line with the -S, -D, -U, and -P options: ``` osql -S %SERVERNAME% -U %USERNAME% -P %PASSWORD% -d %DBNAME% ``` For instance, if your server name is MyServer\SQL2008 and your user name is Foo and your password is Bar and your database is MyDB, then you'd use this: ``` osql -S MyServer\SQL2008 -U Foo -P Bar -d MyDB ``` And then continue on with the rest of your options after that. If you really want to use your Trusted connection, you need to go to SQL Server Management Studio, and ensure your current Widows Login is added as a user and given appropriate permissions to your database, etc. In SSMS, connect to your server manually (the 'sa' user and password perhaps), and then expand the "Security" node and look at the logins. If your currently logged in Windows User isn't listed, you'll want to right-click, add new Login, and add your current user. Then you should be able to run with a Trusted Connection.
You have to run all command in a single line like this ``` osql -E -S ComputerName\InstanceName -i MyScript.sql -o MyOutput.rpt ``` or `osql -E -S ComputerName\InstanceName -Q "SELECT * FROM Northwind.dbo.Shippers" -o MyOutput.rpt` Now you have to see if you can log in SQL Server or if the service is up or even if the TCP/IP protocol is enable
Running queries using osql
[ "", "sql", "sql-server", "database", "sql-server-2008", "osql", "" ]
please help, this is my table ``` date column1 column2 trx 2015-07-01 **side1 Internet** 777903315 2015-07-01 **side1 Internet** 41426210 2015-07-01 side1 Unlimited 2263500 2015-07-01 side1 Business 427000 2015-07-01 side1 Extreme 3540900 2015-07-01 side1 Lifestyle 59360000 2015-07-01 side1 Socialita 240850500 2015-07-01 **side2 Unlimited** 6160 2015-07-01 **side2 Unlimited** 113502000 ``` and i want to select my table with result like this : ``` date column1 column2 trx type 2015-07-01 **side1 Internet** 777903315 pre 2015-07-01 **side1 Internet** 41426210 post 2015-07-01 side1 Unlimited 2263500 pre 2015-07-01 side1 Business 427000 pre 2015-07-01 side1 Extreme 3540900 pre 2015-07-01 side1 Lifestyle 59360000 pre 2015-07-01 side1 Socialita 240850500 pre 2015-07-01 **side2 Unlimited** 6160 post 2015-07-01 **side2 Unlimited** 113502000 pre ``` row that have same value in column1 and column2 with row after the minimum trx add field as type 'post' and the maximun 'pre'
``` SELECT t.date, t.c1, t.c2, t.trx , CASE WHEN t.trx = m.trx THEN 'pre' ELSE 'post' END AS `type` FROM so_q31531850 t LEFT OUTER JOIN ( SELECT DATE, c1, c2, MAX(trx) AS trx FROM so_q31531850 GROUP BY 1,2,3) m ON m.trx = t.trx AND m.c1 = t.c1 AND m.c2 = t.c2 AND m.date = t.date ``` Please note that, if the transaction number is same, my suggested query treats it as `post` type of entry.
You can use the following query: ``` SELECT m.*, CASE trx WHEN mintrx THEN 'post' WHEN maxtrx THEN 'pre' ELSE 'pre' END AS type FROM mytable AS m LEFT JOIN ( SELECT column1, column2, MIN(trx) AS mintrx, MAX(trx) AS maxtrx FROM mytable GROUP BY column1, column2 HAVING MIN(trx) <> MAX(trx) ) AS t ON m.column1 = t.column1 AND m.column2 = t.column2 ``` This query performs a `LEFT JOIN` with a derived table containing only duplicate `column1`, `column2` rows. Minimum / maximum `trx` matches produce `post` / `pre` values respectively, whereas `pre` is the default value for non-matched table rows. [**Demo here**](http://sqlfiddle.com/#!9/abf72/3)
How to compare 2 row in same table
[ "", "mysql", "sql", "" ]
MySQL Table Diagram: ![enter image description here](https://i.stack.imgur.com/V0e0E.jpg) My query this far: ``` SELECT tblcourses.CourseStandard, tblcourses.CourseID, tblcourses.CourseRef, tblcourses.CourseStandard, tblcourses.CourseName, tblcourses.CourseDuration, tblcourses.NQFLevel, tblcourses.CoursePrice, tblcoursestartdates.StartDate FROM etcgroup.tblcoursestartdates tblcoursestartdates INNER JOIN etcgroup.tblcourses tblcourses ON (tblcoursestartdates.CourseID = tblcourses.CourseID) WHERE tblcoursestartdates.StartDate >= Now() ``` If you look at the diagram you will see I have a 3rd table. The query above works fine. It display all the data as it should. I want to show all the courses and their respective dates excluding those that the student is already booked for. Keep in mind that there can be 20 start dates for 1 course. This is why I am only choosing dates >= Now(). I want to make sure that a student does not get double booked. Yes I can do it afterwards. Beep student already booked BUT if I can have it now show the course dates that the student already booked then great. Any suggestions?
This is pretty straightforward. Presumably you know the `StudentID` you'd like to see. Do a left join to the bookings table and select the mismatches. ``` SELECT tblcourses.CourseStandard, tblcourses.CourseID, tblcourses.CourseRef, tblcourses.CourseStandard, tblcourses.CourseName, tblcourses.CourseDuration, tblcourses.NQFLevel, tblcourses.CoursePrice, tblcoursestartdates.StartDate FROM etcgroup.tblcoursestartdates tblcoursestartdates INNER JOIN etcgroup.tblcourses tblcourses ON tblcoursestartdates.CourseID = tblcourses.CourseID AND tblcoursestartdates.StartDate >= Now() LEFT JOIN tblbookings ON tblbookings.CourseId = tblcourses.CourseId AND tblbookings.StudentId = <<<the student ID in question >>> WHERE tblbookings.BookingID IS NULL ``` The trick here is the LEFT JOIN ... IS NULL pattern. It *eliminates* the rows where the ON condition of the LEFT JOIN hit, leaving only the ones where it missed.
Do a left join to tblBookings on `courseID` where the `bookingID` is `null` (there are no matches). You'll have to provide the `studentID` as a parameter to the query. ``` SELECT DISTINCT c.CourseStandard, c.CourseID, c.CourseRef, c.CourseStandard, c.CourseName, c.CourseDuration, c.NQFLevel, c.CoursePrice, d.StartDate FROM etcgroup.tblcoursestartdates d INNER JOIN etcgroup.tblcourses c ON d.CourseID = c.CourseID LEFT JOIN etcgroup.tblBookings b on c.CourseID = b.CourseID and b.StudentID = @StudentID WHERE d.StartDate >= Now() and b.bookingID is null ```
I want to go down further into query, but not sure how
[ "", "mysql", "sql", "" ]
I'm trying to read all the rows from a table on a SQL server and store them in string slices to use for later. The issue I'm running into is that the previously scanned rows are getting overwritten every time I scan a new row, even though I've converted all the mutable byte slices to immutable strings and saved the result slices to another slice. Here is the code I'm using: ``` rawResult := make([]interface{}, len(cols)) // holds anything that could be in a row result := make([]string, len(cols)) // will hold all row elements as strings var results [][]string // will hold all the result string slices dest := make([]interface{}, len(cols)) // temporary, to pass into scan for i, _ := range rawResult { dest[i] = &rawResult[i] // fill dest with pointers to rawResult to pass into scan } for rows.Next() { // for each row err = rows.Scan(dest...) // scan the row if err != nil { log.Fatal("Failed to scan row", err) } for i, raw := range rawResult { // for each scanned byte slice in a row switch rawtype := raw.(type){ // determine type, convert to string case int64: result[i] = strconv.FormatInt(raw.(int64), 10) case float64: result[i] = strconv.FormatFloat(raw.(float64), 'f', -1, 64) case bool: result[i] = strconv.FormatBool(raw.(bool)) case []byte: result[i] = string(raw.([]byte)) case string: result[i] = raw.(string) case time.Time: result[i] = raw.(time.Time).String() case nil: result[i] = "" default: // shouldn't actually be reachable since all types have been covered log.Fatal("Unexpected type %T", rawtype) } } results = append(results, result) // append the result to our slice of results } ``` I'm sure this has something to do with the way Go handles variables and memory, but I can't seem to fix it. Can somebody explain what I'm not understanding?
You should create new slice for each data row. Notice, that a slice has a pointer to underlying array, so every slice you added into `results` have same pointer on actual data array. That's why you have faced with that behaviour.
When you create a slice using func make() it return a type (Not a pointer to type). But it does not allocate new memory each time a element is reassigned. Hence result := make([]string, 5) will have fix memory to contain 5 strings. when a element is reassigned, it occupies same memory as before hence overriding the old value. Hopefully following example make things clear. <http://play.golang.org/p/3w2NtEHRuu> Hence in your program you are changing the content of the same memory and appending it again and again. To solve this problem you should create your result slice inside the loop.
Go SQL scanned rows getting overwritten
[ "", "sql", "go", "" ]
Hello I'm currently working on SQL problem that I can't quite figure out. Here is the Schema I'm working with: ![enter image description here](https://i.stack.imgur.com/VPkPJ.jpg) Here is the question I am stuck on: -- 3 Find the first name, last name and total combined film length of Sci-Fi films for every actor. That is the result should list the names of all of the actors (even if an actor has not been in any Sci-Fi films) and the total length of Sci-Fi films they have been in. So far I have ``` SELECT actor.first_name, actor.last_name, (SELECT SUM(film.length) from film INNER JOIN film_category ON film.film_id = film_category.film_id INNER JOIN category ON film_category.category_id = category.category_id INNER JOIN film_actor ON film_actor.film_id = film.film_id INNER JOIN actor ON film_actor.actor_id = actor.actor_id WHERE category.name = 'Sci-fi' ) from actor ``` I know I need to group it by actor\_id but i'm unable to do this in a select subquery. Anyone have some tips?
There is no need to use a subquery. Aggregate functions work on the entire data set. The 'group by' specifies how to group the data you're aggregating. ``` select a.actor_id, a.first_name, a.last_name, sum(f.length) from actor a left outer join film_actor fa on fa.actor_id = a.actor_id left outer join film f on f.film_id = fa.film_id left outer join film_categories fc on fc.film_id = f.film_id left outer join categories c on c.category_id = fc.category_id where c.name = 'sci-fi' group by a.actor_id ; ``` The outer joins ensure actors with no sci-fi film experience are included in the results by
This should get you exactly what you want, including the part about having actors that aren't in Sci-Fi movies. You can LEFT JOIN on film to include all films the film\_actor is in. The additional AND statement works with the LEFT JOIN to include actors not in Sci-Fi movies for your aggregate sum function. ``` SELECT a.actor_id, a.first_name, a.last_name, sum(f.length) AS length FROM actor a INNER JOIN film_actor fa ON fa.actor_id = a.actor_id INNER JOIN film_category fc ON fc.film_id = fa.film_id INNER JOIN category c ON c.category_id = fc.category_id LEFT JOIN film f ON f.film_id = fa.film_id AND c.name = 'Sci-Fi' GROUP BY a.actor_id; ```
SQL query using sum()
[ "", "mysql", "sql", "" ]
I have a table called "customers" which looks like this: ``` ID | NAME | ADDRESS | ZIP John | Peterson | Street 2 | 45738 Jan | 74383 | Street 4 | 74383 Eric | Stone | 74383 | 74383 Julia| Carry | Street 9 | 74383 ``` I want to replace any occurrence of "74383" with "". I have about 1.5 Million rows. Is this doable? Okay sorry that was my mistake, the number can be in any COLUMN! I updated it.
It is strange for me the requirement like this but, anyway, you can try this: ``` START TRANSACTION; UPDATE customers SET NAME = '' WHERE NAME = '74383'; UPDATE customers SET ADDRESS = '' WHERE ADDRESS = '74383'; UPDATE customers SET ZIP = '' WHERE ZIP = '74383'; COMMIT; ```
Assuming you mean in the `zip` column: ``` update customers set zip = '' where zip = '74383'; ``` Note: you might prefer to set the value to `NULL` rather than the empty string (`''`), but your question explicitly suggests an empty string. EDIT: I would actually suggest that you use separate `update` statements for each column if you have indexes on them. The `updates` will be much faster. Second, if you replace the value with `NULL` rather than `''`, you can use the logic: ``` update customers set zip = NULLIF(zip, '74383'), address = NULLIF(address, '74383'), name = NULLIF(name, '74383') where '74383' in (name, address, zip); ``` (You can actually replace this with an empty string using `COALESCE(NULLIF(zip, '74383'), '')`, but I prefer a `CASE` once the logic gets at all complicated.)
Update whole table in SQL?
[ "", "mysql", "sql", "" ]
I have a table `Ty` contains: ``` a integer NOT NULL, b text, c text, d text, e text ``` I'm trying to do insrert statment as follows: ``` insert into Ty (b,c,d,e) values ('hello','world',select current_date,select name from users where userid=4) ``` But it doesn't work. It says: > ERROR: syntax error at or near "select" All guides I read says that I can do SQL statement in Insert as long as they return just one value. So why it doesn't work?
``` insert into Ty (b,c,d,e) SELECT 'hello','world',current_date, name from users where userid=4 ```
``` insert into Ty (b,c,d,e) select 'hello','world',current_date,name from users where userid=4 ```
insert into table from query in Postgresql
[ "", "sql", "postgresql", "" ]
I have one table, in which I want to update the value for a particular column. The column value should be row number. Based on a column distinct value I need to update the newly created column value. Example: Contents of `INFO_T` table: ``` Name | id ------------ Arijit | 120 Suman | 240 Priyam | 555 Arijit | 456 Suvayu | 110 Priyam | 300 ``` I need to add one more column within the above table `INFO_T`, below is the example. I have to use `ROW_NUMBER() OVER` function, through which I have to update that `SEQ_NO` column. ``` Name | id | Seq_no ------------------ Arijit | 120 | 1 Suman | 240 | 2 Priyam | 555 | 3 Arijit | 120 | 1 Suvayu | 110 | 4 Priyam | 555 | 3 ``` How to get that above result?
You can use `dense_rank()` for this purpose: ``` select name, id, dense_rank() over (order by minid) as seqno from (select t.*, min(id) over (partition by name) as minid from table t ) t; ``` If you wanted to do this just with `row_number()`: ``` select t.name, t.id, tt.seqnum from table t join (select t.name, row_number() over (order by min(id)) as seqno from table t group by t.name ) tt on t.name = tt.name; ``` However, I don't know why you would want to do that.
[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/a1f07/9) ``` SELECT Table1.name, Table1.ID, SEQ.Seq_no FROM (SELECT name, ROW_NUMBER() OVER(order by name) as Seq_no FROM (SELECT DISTINCT name FROM Table1) as unique_name ) as SEQ INNER JOIN Table1 on SEQ.name = Table1.Name ```
How to update a column value with ROW_NUMBER() OVER value based on distinct values of a column
[ "", "sql", "oracle", "window-functions", "row-number", "" ]
I have to edit a stored procedure who has to return the sums of three columns having nullable values. If there is a `null` value, I need to cast it to `0` Here is a screenshot of data : ![enter image description here](https://i.stack.imgur.com/A6Uhj.png) And here is the originial request using the first column only : ``` SELECT SUM(reglProj.Montant) /* SUM of 'Montant', 'FraisMagasing', 'FraisVendeur' instead */ AS SommeReglement FROM Projet.LigneEcheancierProjet ligne INNER JOIN Projet.ReglementProjetEcheance reglProj ON reglProj.LigneEcheancierProjetId = ligne.LigneEcheancierProjetId .... ``` Do you have some best practices using the `sum` and `case` conditions in T-SQL ?
``` --ANSI standard SELECT SUM(COALESCE(col1,0)) + SUM(COALESCE(col2,0)) + SUM(COALESCE(col3,0)) --SQL Server Style SELECT SUM(ISNULL(col1,0)) + SUM(ISNULL(col2,0)) + SUM(ISNULL(col3,0)) --The one wthout functions. It will work the same as previous OR FASTER. SELECT SUM(CASE WHEN col1 IS NULL THEN 0 ELSE col1 END) + SUM(CASE WHEN col2 IS NULL THEN 0 ELSE col2 END) + SUM(CASE WHEN col3 IS NULL THEN 0 ELSE col3 END) ``` Choose one for yourself. OR you might need following (if you want to add sums by row): ``` --ANSI standard SELECT SUM(COALESCE(col1,0) +COALESCE(col2,0) + COALESCE(col3,0)) --SQL Server Style SELECT SUM(ISNULL(col1,0)+ ISNULL(col2,0) + ISNULL(col3,0)) --The one wthout functions. It will work the same as previous OR FASTER. SELECT SUM(CASE WHEN col1 IS NULL THEN 0 ELSE col1 END + CASE WHEN col2 IS NULL THEN 0 ELSE col2 END + CASE WHEN col3 IS NULL THEN 0 ELSE col3 END) ```
In Sql Server, (and probably in most if not all relational databases) the [`SUM`](https://msdn.microsoft.com/en-us/library/ms187810.aspx?f=255&MSPPError=-2147217396) Aggregation function [ignores null values](http://sqlfiddle.com/#!3/3171d/1) by default, so there really is no need to use [`coalesce`](https://msdn.microsoft.com/en-us/library/ms190349.aspx) or [`isnull`](https://msdn.microsoft.com/en-us/library/ms184325.aspx) inside it. If you want the sum of all 3 columns for every single row, then you need to use isnull: ``` SELECT ISNULL(reglProj.Montant,0) + ISNULL(reglProj.FraisMagasing ,0) + ISNULL(reglProj.FraisVendeur,0) FROM Projet.LigneEcheancierProjet ligne INNER JOIN Projet.ReglementProjetEcheance reglProj ON reglProj.LigneEcheancierProjetId = ligne.LigneEcheancierProjetId ``` If you need the aggregated sum of all 3 columns you can simply do it like this: ``` SELECT ISNULL(SUM(reglProj.Montant), 0) + ISNULL(SUM(reglProj.FraisMagasing), 0) + ISNULL(SUM(reglProj.FraisVendeur), 0) FROM Projet.LigneEcheancierProjet ligne INNER JOIN Projet.ReglementProjetEcheance reglProj ON reglProj.LigneEcheancierProjetId = ligne.LigneEcheancierProjetId ```
Sum on multiple columns with nullable values
[ "", "sql", "sql-server", "" ]
I have a table which logs the HTTP status code of a website whenever the status changes, so the table looks like this... ``` id status date ----------------------------------- 1 404 2015-10-01 13:30:00 2 200 2015-10-02 13:30:00 3 404 2015-10-03 13:30:00 ``` I want to use this data to display a table on my website showing how many times each status has been logged and the percentage duration of the status to the present time. I have successfully managed to get the total count for each status using following query.... ``` SELECT `status`, COUNT(*) AS `status_count` FROM `table_name` GROUP BY `status` ORDER BY `status` ``` ...when executed gives me something like this... ``` status status_count ---------------------- 200 1 404 2 ``` I would like to modify my sql add duration to my results calculated by the date column, my goal is to end up with this... ``` status status_count duration (%) ----------------------------------- 200 1 20 404 2 80 ```
Here is [SQL FIDDLE DEMO](http://sqlfiddle.com/#!9/b2242/1) ``` SELECT t1.status ,COUNT(t1.id) as status_count ,SUM(IF(t2.date IS NULL, NOW(), t2.date)-t1.date) / (NOW()-t3.start_date) as duration FROM table_name t1 LEFT JOIN table_name t2 ON t1.id = (t2.id - 1) ,(SELECT MIN(date) as start_date FROM table_name) t3 GROUP BY t1.status ```
Mine is more complicated than nick but give a different result. And i try it on excel to verify values are correct. I start date with `2015-07-01 13:30:00` so `NOW()` function can work That mean seconds are ``` 404 | 86400 1 day | 0.05101 200 | 86400 1 day | 0.05101 404 | 1521138 17 days | 0.89799 total 1693938 ``` Final Result ``` 404 | 2 | 0.94899 200 | 1 | 0.05101 ``` [SQL FIDDLE DEMO](http://sqlfiddle.com/#!9/5ec8f/8) ``` SELECT status, Count(status), SUM(secdiff) / MAX(sectotal) as porcentage FROM ( SELECT h1.status, h2.dateupdate d1, h1.dateupdate d2, TIMESTAMPDIFF(SECOND,h1.dateupdate, h2.dateupdate) secdiff, TIMESTAMPDIFF(SECOND, (SELECT MIN(dateupdate) from logHttp), NOW()) sectotal FROM logHttp as h1 INNER JOIN ( (Select * from logHttp) union (select MAX(id) +1, 0, NOW() from logHttp) ) as h2 On h1.id + 1 = h2.id ) as t1 group by status; ```
Calculating duration percentage
[ "", "mysql", "sql", "" ]
Simple question but I am confused about the query, I have three tables First table: projects ``` proj_id project_cost project_description ------- ------------ ------------------- 1 123.45 construction plan A 2 222.22 construction plan B 3 333.33 construction plan c 4 444.44 construction plan D 5 555.55 construction plan E ``` Second table: project\_estimates ``` est_id proj_id fy Q1 Q2 Q3 Q4 ------- ------- --------- ----- ------ ----- ------- 1 2 2015-16 12 11 15 19 2 3 2015-16 11 22 45 23 3 4 2015-16 31 32 36 56 4 1 2015-16 51 34 45 78 5 5 2015-16 33 44 59 98 ``` Third table: project\_expenditure: ``` exp_id proj_id fy Q1 Q2 Q3 Q4 ------- ------- --------- ----- ------ ----- ------- 1 3 2015-16 10 12 41 15 2 5 2015-16 31 24 39 70 ``` Now I would like to join these tables to get an output like: Project details for 2015-16 ``` proj_id Proposed expenditure --------- ----------------------- --------------------- Q1 Q2 Q3 Q4 Q1 Q2 Q3 Q4 ---------------------------------------------------------------- 1 51 34 45 78 NULL NULL NULL NULL 2 12 11 15 19 NULL NULL NULL NULL 3 11 22 45 23 10 12 41 15 4 31 32 36 56 NULL NULL NULL NULL 5 33 44 59 98 31 24 39 70 ``` further, there are other entries in table two and table three for financial 2015-16, I need only project\_id which has project estimates in 2015-16 but I want the columns values from project\_expenditure even if entries are not made. I tried with `a.proj_id=b.proj_id` and `a.proj_id=c.proj_id(+)` but it is not giving the expected output. Anything else that can be put in scope to get the result.
Right, I couldn't test this atm but I would do something like this: ``` SELECT proj_id, project_estimates.Q1, project_estimates.Q2, project_estimates.Q3, project_estimates.Q4, project_expenditure.Q1, project_expenditure.Q2, project_expenditure.Q3, project_expenditure.Q4 FROM projects LEFT JOIN project_estimates ON (project_estimates.proj_id = projects.proj_id) LEFT JOIN project_expenditure ON (project_expenditure.proj_id = projects.proj_id) WHERE projects.proj_id IN ( (SELECT proj_id FROM project_estimates WHERE (project_estimates.fy = '2015-16')) UNION (SELECT proj_id FROM project_expenditure WHERE (project_expenditure.fy = '2015-16')) GROUP BY proj_id ) ORDER BY proj_id ```
The answer should be first an INNER join as you want only projects with estimates in 2015-16 and then a LEFT join as you want it to get populated by NULL if the entry is not there : ``` SELECT prj.proj_id, prj_est.Q1, prj_est.Q2, prj_est.Q3, prj_est.Q4, prj_exp.Q1, prj_exp.Q2, prj_exp.Q3, prj_exp.Q4 FROM projects AS prj INNER JOIN project_estimates AS prj_est ON prj.proj_id = prj_est.proj_id AND prj_est.fy = '2015-16' LEFT JOIN project_expenditure AS prj_exp ON prj.proj_id = prj_exp.proj_id; ```
joining multiple tables
[ "", "sql", "oracle", "" ]
I am inserting records using left joining in Hive.When I set limit 1 query works but for all records query get stuck at 99% reduce job. Below query works ``` Insert overwrite table tablename select a.id , b.name from a left join b on a.id = b.id limit 1; ``` But this does not ``` Insert overwrite table tablename select table1.id , table2.name from table1 left join table2 on table1.id = table2.id; ``` I have increased number of reducers but still it doesn't work.
If your query is getting stuck at 99% check out following options - * Data skewness, if you have skewed data it might possible 1 reducer is doing all the work * Duplicates keys on both side - If you have many duplicate join keys on both side your output might explode and query might get stuck * One of your table is small try to use map join or if possible SMB join which is a huge performance gain over reduce side join * Go to resource manager log and see amount of data job is accessing and writing.
Here are a few Hive optimizations that might help the query optimizer and reduce overhead of data sent across the wire. ``` set hive.exec.parallel=true; set mapred.compress.map.output=true; set mapred.output.compress=true; set hive.exec.compress.output=true; set hive.exec.parallel=true; set hive.cbo.enable=true; set hive.compute.query.using.stats=true; set hive.stats.fetch.column.stats=true; set hive.stats.fetch.partition.stats=true; ``` However, I think there's a greater chance that the underlying problem is key in the join. For a full description of skew and possible work arounds see this <https://cwiki.apache.org/confluence/display/Hive/Skewed+Join+Optimization> You also mentioned that table1 is much smaller than table2. You might try a map-side join depending on your hardware constraints. (<https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins>)
Hive query stuck at 99%
[ "", "sql", "hadoop", "hive", "mapreduce", "hiveql", "" ]
I'm using *SQL-Server 2008*. I need to combine rows with the same `Name` and increase counter when: 1. 1 or more `Id's` for the same `Name` is `blank` 2. NOT merge rows if `Id` is `NULL`! 3. NOT merge rows if have the same `Name`, but different `Ids` **Output for now:** ``` Name Id Cnt John 1 1 Peter 2 2 -- This Peter with the same Id have 2 entries so Cnt = 2 Peter 3 1 -- This is other Peter with 1 entry so Cnt = 1 Lisa 4 1 Lisa NULL 1 David 5 1 David 1 -- here Id is blank '' Ralph 2 -- Ralph have both rows with blank Id so Cnt = 2 ``` **Desired output:** ``` Name Id Cnt John 1 1 Peter 2 2 Peter 3 1 Lisa 4 1 Lisa NULL 1 -- null still here David 5 2 -- merged with blank '' so Cnt = 2 Ralph 2 -- merged both blanks '' so Cnt = 2 ``` **SQL-Query:** This is sample query what I'm using for now: ``` SELECT Name, Id, COUNT(Id) AS Cnt FROM Employees WHERE Condition = 1 GROUP BY Name, Id ``` **What I have tried:** Added aggregate `MAX` to `Id` in `SELECT` clause and grouped by `Name` only, but in this case merged rows with `NULL` values and with the same names with different `Id's` what is wrong for me. ``` SELECT Name, MAX(Id), -- added aggregate COUNT(Id) AS Cnt FROM Employees WHERE Condition = 1 GROUP BY Name -- grouped by Name only ``` Have you any ideas? If anything is not clear about problem - ask me, I will provide more details. **UPDATE:** **DDL** ``` CREATE TABLE Employees ( Name NVARCHAR(40), Id NVARCHAR(40) ); ``` **DML** ``` INSERT INTO Employees VALUES ('John' , '1') ,('Peter', '2') ,('Peter', '2') ,('Peter', '3') ,('Lisa' , '4') ,('Lisa' , NULL) ,('David', '5') ,('David', '') ,('Ralph', '') ,('Ralph', '') ``` **DEMO:** `SQL FIDDLE`
**Edit** ``` DECLARE @Data table (Name varchar(10), Id varchar(10)) -- Id must be varchar for blank value INSERT @Data VALUES ('John', '1'), ('Peter', '2'),('Peter', '2'), ('Peter', '3'),--('Peter', ''), --For test ('Lisa', '4'), ('Lisa', NULL), ('David', '5'), ('David', ''), ('Ralph', ''), ('Ralph', '') ``` --- ``` SELECT Name, Id, COUNT(*) + ISNULL( (SELECT COUNT(*) FROM @data WHERE Name = d.Name AND Id = '' AND d.Id <> '') , 0) AS Cnt FROM @data d WHERE Id IS NULL OR Id <> '' OR NOT EXISTS(SELECT * FROM @data WHERE Name = d.Name AND Id <> '') GROUP BY Name, Id ```
You can use `CASE` statement inside your `SELECT`. It allows you to set Id = [some value] for employees where it is blank. Query can be something like this: ``` SELECT E.Name, CASE WHEN E.Id = '' THEN (Select Employees.Id from Employees where Employees.Id <> '' and E.Name = Employees.Name) ELSE E.Id END as Idx, COUNT(Id) AS Cnt FROM Employees as E WHERE Condition = 1 GROUP BY Name, Idx ```
Combine rows if value is blank
[ "", "sql", "sql-server", "t-sql", "merge", "group-by", "" ]
For serveral times I've been returning to understanding of database relational theory, and still I don't have success. I'll try once more. Let's say I have two tables: animals: ``` CREATE TABLE animals (id INTEGER PRIMARY KEY, name TEXT); ``` and food: ``` CREATE TABLE food (id INTEGER PRIMARY KEY, food TEXT); ``` What I need is to make this two tables connected. For example, I want to select 'pig' from **animals** table and recive all the things the pig can eat from **food** table. I just don't get how to relate them. I belive I can add a foreign key to **food** table, which would link to the primary key of **animal** table, but there is an issue I can't figure out: What if I make entries to the database from, for example, a web form, where I enter animal name and product which it eats. The animal name goes to the first table and automatically recieves an id. It just autoincrements. So, in order to make it a relation for the second table I must to select the new ID from the first table! So we got THREE sql requests: 1) `INSERT INTO animals (name) VALUES ('pig);` 2) `SELECT id FROM animals WHERE name='pig';` (we store it in a variable, does not really matters for now) 3) `INSERT INTO food (product, animal_id) VALUES ('something', 'id of a pig');` I just feel that it is wrong. Or my mind is just not capable of understanding such complex abstractions. Please advice.
That's how you implement such a many-to-many relationship: * [How to implement a many-to-many relationship in PostgreSQL?](https://stackoverflow.com/questions/9789736/how-to-implement-a-many-to-many-relationship-in-postgresql/9790225#9790225) And you can accomplish the task you describe with a ***single*** query using a [data-modifying CTE](https://stackoverflow.com/questions/10471757/insert-rows-into-multiple-tables-in-a-single-query-selecting-from-an-involved-t/10472136#10472136): ``` WITH ins AS ( INSERT INTO animals (name) VALUES ('pig') RETURNING animal_id -- return generated ID immediately ) INSERT INTO animal_food (food_id, animal_id) -- m:m link table SELECT food_id, animal_id -- food_id passed as 2nd param FROM ins; ``` Assuming we operate with a known-existing food (like it was select from a drop-down menu. Else you need one more step to look up the food or possibly `INSERT` a row there, too: * [Is SELECT or INSERT in a function prone to race conditions?](https://stackoverflow.com/questions/15939902/is-select-or-insert-in-a-function-prone-to-race-conditions/15950324#15950324) ... still a single query. The linked answer provides some insight in the more tricky matter of race conditions with concurrent transactions.
You need a junction table, that related `Animals` and `Food`. This would look like: ``` CREATE TABLE AnimalFoods ( id INTEGER PRIMARY KEY, AnimalId int references Animal(id), FoodId int references Food(id) ); ``` You can then answer your questions using various joins among these tables.
SQL and relations between tables
[ "", "sql", "postgresql", "" ]
I have a table like: ``` create table tbl ( id int, data image ) ``` It's found that the column `data` have very small size, which can be stored in `varbinary(200)` So the new table would be, ``` create table tbl ( id int, data varbinary(200) ) ``` How can I migrate this table to new design ***without loosing the data in it.***
Just do two separate [`ALTER TABLE`](https://msdn.microsoft.com/en-GB/library/ms190273.aspx)s, since you can only convert `image` to `varbinary(max)`, but you can, afterwards, change its length: ``` create table tbl ( id int, data image ) go insert into tbl(id,data) values (1,0x0101010101), (2,0x0204081632) go alter table tbl alter column data varbinary(max) go alter table tbl alter column data varbinary(200) go select * from tbl ``` Result: ``` id data ----------- --------------- 1 0x0101010101 2 0x0204081632 ```
You can use this ALTER statement to convert existing column `IMAGE` to `VARBINARY(MAX)`. [Refer Here](http://www.sqlservercentral.com/Forums/Topic914594-149-1.aspx) ``` ALTER Table tbl ALTER COLUMN DATA VARBINARY(MAX) ``` After this conversion, you are surely, get your data backout. NOTE:- Don't forgot to take backup before execution. The IMAGE datatype has been deprecated in future version SQL SERVER, and needs to be converted to VARBINARY(MAX) wherever possible.
Convert table column data type from image to varbinary
[ "", "sql", "sql-server", "t-sql", "sql-server-2014", "alter-table", "" ]
If given the Average for 24 hours for each date in a year. I want to spread this hourly average to average at each minute. e.g. given ``` Date Time Average 01-Jan-15 23:00 20 02-Jan-15 00:00 50 02-Jan-15 01:00 30 ``` I want the output to be calculated something as below .... ``` DateTime AVG_VALUE 01/01/2015 23:00:00 20 01/01/2015 23:01:00 20.5 01/01/2015 23:02:00 21 01/01/2015 23:03:00 21.5 01/01/2015 23:04:00 22 01/01/2015 23:05:00 22.5 01/01/2015 23:06:00 23 01/01/2015 23:07:00 23.5 01/01/2015 23:08:00 24 01/01/2015 23:09:00 24.5 01/01/2015 23:10:00 25 01/01/2015 23:11:00 25.5 01/01/2015 23:12:00 26 01/01/2015 23:13:00 26.5 01/01/2015 23:14:00 27 01/01/2015 23:15:00 27.5 01/01/2015 23:16:00 28 01/01/2015 23:17:00 28.5 01/01/2015 23:18:00 29 01/01/2015 23:19:00 29.5 01/01/2015 23:20:00 30 01/01/2015 23:21:00 30.5 01/01/2015 23:22:00 31 01/01/2015 23:23:00 31.5 01/01/2015 23:24:00 32 01/01/2015 23:25:00 32.5 01/01/2015 23:26:00 33 01/01/2015 23:27:00 33.5 01/01/2015 23:28:00 34 01/01/2015 23:29:00 34.5 01/01/2015 23:30:00 35 01/01/2015 23:31:00 35.5 01/01/2015 23:32:00 36 01/01/2015 23:33:00 36.5 01/01/2015 23:34:00 37 01/01/2015 23:35:00 37.5 01/01/2015 23:36:00 38 01/01/2015 23:37:00 38.5 01/01/2015 23:38:00 39 01/01/2015 23:39:00 39.5 01/01/2015 23:40:00 40 01/01/2015 23:41:00 40.5 01/01/2015 23:42:00 41 01/01/2015 23:43:00 41.5 01/01/2015 23:44:00 42 01/01/2015 23:45:00 42.5 01/01/2015 23:46:00 43 01/01/2015 23:47:00 43.5 01/01/2015 23:48:00 44 01/01/2015 23:49:00 44.5 01/01/2015 23:50:00 45 01/01/2015 23:51:00 45.5 01/01/2015 23:52:00 46 01/01/2015 23:53:00 46.5 01/01/2015 23:54:00 47 01/01/2015 23:55:00 47.5 01/01/2015 23:56:00 48 01/01/2015 23:57:00 48.5 01/01/2015 23:58:00 49 01/01/2015 23:59:00 49.5 02/01/2015 50 02/01/2015 00:01:00 49.66666667 02/01/2015 00:02:00 49.33333333 02/01/2015 00:03:00 49 02/01/2015 00:04:00 48.66666667 02/01/2015 00:05:00 48.33333333 ``` The idea is to get the smooth incline or decline graph between two interval. In the output you can see the avg is gradually increasing with increasing minutes as we move from 20->50 Can this be achived using Oracle Query or some PL/SQL code?
EDIT: Added union to include the final missing row Some thing like this may work. Assuming the input data is in table a, ``` with b as (select level-1 lev from dual connect by level <= 60 ), v as ( select start_date, value current_value, lead(value) over (order by start_date) next_value from a ) select start_date+ (lev)/(24*60), (current_value*((60-(b.lev))/60) + next_value*(b.lev)/60) avg_value from v, b where v.next_value is not null union select start_date, current_value from v where v.next_value is null order by 1 ```
You can use [recursive subquery factoring](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10002.htm#BCEJGIBG) to do the interval halving, and find the weighted average (or whatever this calculation is supposed to be finding) for each step: ``` with r (period_start, period_average, step, step_start, step_end, step_average) as ( select period_start, period_average, 1, period_start + ((lead(period_start) over (order by period_start) - period_start)/2), lead(period_start) over (order by period_start) - 1/86400, (period_average + lead(period_average) over (order by period_start))/2 from averages union all select period_start, period_average, r.step + 1, case when r.step_start = period_start + 60/86400 then period_start else trunc(period_start + ((r.step_start - period_start)/2) + 30/86400, 'MI') end, r.step_start - 1/86400, case when r.step_start = period_start + 60/86400 then period_average else (period_average + r.step_average)/2 end from r where r.step_start > r.period_start ) --cycle step_start set is_cycle to 1 default 0 select * from r where step_start is not null order by step_start; ``` The anchor member gets the initial half-hour slot and the next period's average value, via `lead()`, and uses those to calculate the initial (20+50)/2 etc.: ``` PERIOD_START PERIOD_AVERAGE STEP STEP_START STEP_END STEP_AVERAGE ---------------- -------------- ---- ---------------- ---------------- ------------ 2015-01-01 06:00 20 1 2015-01-01 06:30 2015-01-01 06:59 35.00000 2015-01-01 07:00 50 1 2015-01-01 07:30 2015-01-01 07:59 45.00000 2015-01-01 08:00 40 1 2015-01-01 08:30 2015-01-01 08:59 35.00000 ... ``` The recursive member then repeats that process but with the previous step's period length and calculated average. I've now made it stop when it reaches the last minute in the period. So that gives you the intermediate result set: ``` PERIOD_START PERIOD_AVERAGE STEP STEP_START STEP_END STEP_AVERAGE ---------------- -------------- ---- ---------------- ---------------- ------------ 2015-01-01 06:00 20 7 2015-01-01 06:00 2015-01-01 06:00 20.00000 2015-01-01 06:00 20 6 2015-01-01 06:01 2015-01-01 06:01 20.46875 2015-01-01 06:00 20 5 2015-01-01 06:02 2015-01-01 06:03 20.93750 2015-01-01 06:00 20 4 2015-01-01 06:04 2015-01-01 06:07 21.87500 2015-01-01 06:00 20 3 2015-01-01 06:08 2015-01-01 06:14 23.75000 2015-01-01 06:00 20 2 2015-01-01 06:15 2015-01-01 06:29 27.50000 2015-01-01 06:00 20 1 2015-01-01 06:30 2015-01-01 06:59 35.00000 2015-01-01 07:00 50 7 2015-01-01 07:00 2015-01-01 07:00 50.00000 2015-01-01 07:00 50 6 2015-01-01 07:01 2015-01-01 07:01 49.84375 2015-01-01 07:00 50 5 2015-01-01 07:02 2015-01-01 07:03 49.68750 2015-01-01 07:00 50 4 2015-01-01 07:04 2015-01-01 07:07 49.37500 2015-01-01 07:00 50 3 2015-01-01 07:08 2015-01-01 07:14 48.75000 2015-01-01 07:00 50 2 2015-01-01 07:15 2015-01-01 07:29 47.50000 2015-01-01 07:00 50 1 2015-01-01 07:30 2015-01-01 07:59 45.00000 2015-01-01 08:00 40 7 2015-01-01 08:00 2015-01-01 08:00 40.00000 2015-01-01 08:00 40 6 2015-01-01 08:01 2015-01-01 08:01 39.84375 2015-01-01 08:00 40 5 2015-01-01 08:02 2015-01-01 08:03 39.68750 2015-01-01 08:00 40 4 2015-01-01 08:04 2015-01-01 08:07 39.37500 2015-01-01 08:00 40 3 2015-01-01 08:08 2015-01-01 08:14 38.75000 2015-01-01 08:00 40 2 2015-01-01 08:15 2015-01-01 08:29 37.50000 2015-01-01 08:00 40 1 2015-01-01 08:30 2015-01-01 08:59 35.00000 ``` You can then use another recursive CTE, or I think more simply a `connect by` clause, to expand each of those steps into the appropriate number of minutes, each with the same 'average' value: ``` with r (period_start, period_average, step, step_start, step_end, step_average) as ( ... ) select step_start + (level - 1)/24/60 as min_start, step_average from r where step_start is not null connect by level <= (step_end - step_start) * 60 * 24 + 1 and prior step_start = step_start and prior dbms_random.value is not null order by min_start; ``` Which gives you: ``` MIN_START STEP_AVERAGE ---------------- --------------------------------------- 2015-01-01 06:00 20 2015-01-01 06:01 20.46875 2015-01-01 06:02 20.9375 2015-01-01 06:03 20.9375 2015-01-01 06:04 21.875 2015-01-01 06:05 21.875 2015-01-01 06:06 21.875 2015-01-01 06:07 21.875 2015-01-01 06:08 23.75 2015-01-01 06:09 23.75 ... 2015-01-01 06:14 23.75 2015-01-01 06:15 27.5 2015-01-01 06:16 27.5 ... 2015-01-01 06:29 27.5 2015-01-01 06:30 35 2015-01-01 06:31 35 ... 2015-01-01 06:59 35 2015-01-01 07:00 50 2015-01-01 07:01 49.6875 2015-01-01 07:02 49.6875 2015-01-01 07:03 49.375 ... ```
How to spread the average between two intervals in oracle
[ "", "sql", "oracle", "oracle11g", "intervals", "" ]
I have a table like this: ``` T A B C ID 2015-07-19 a b c 1 2015-07-16 a y z 2 2015-07-21 a b c 1 2015-07-17 a y c 2 2015-07-18 a y c 1 2015-07-20 a b c 1 2015-07-17 a y c 1 2015-07-19 a b c 2 2015-07-16 a y z 1 2015-07-20 a b c 2 2015-07-15 a y z 1 2015-07-22 x b c 1 2015-07-21 a b c 2 2015-07-18 a y c 2 2015-07-15 a y z 2 2015-07-22 a y c 2 2015-07-14 x b c 1 ``` I need to get an ordered result by datetime column T, but I need that the query detects and avoid repeated rows in columns A, B and C. And all this ordered and separated by ID. It could be a stored procedure. It's important to be fast, because is a huge log table. With millions of rows. The result should be like this: ``` T A B C ID 2015-07-22 x b c 1 2015-07-19 a b c 1 2015-07-17 a y c 1 2015-07-15 a y z 1 2015-07-14 x b c 1 2015-07-22 a y c 2 2015-07-19 a b c 2 2015-07-17 a y c 2 2015-07-15 a y z 2 ``` Any ideas?
This query gives the expected result (tested): ``` SELECT t1.* FROM mytable t1 LEFT JOIN mytable t2 ON t1.t = t2.t + INTERVAL 1 DAY AND t1.A = t2.A AND t1.B = t2.B AND t1.C = t2.C AND t1.ID = t2.ID WHERE t2.T IS NULL ORDER BY t1.ID, t1.T DESC ```
Try this query: ``` SELECT max(t),a,b,c,id FROM table GROUP BY A,B,C,id ORDER BY ID, max(T) ```
MySQL - Select distinct detecting changes on different ordered rows
[ "", "mysql", "sql", "distinct", "" ]
I have this query: ``` select top 5 * from tbl_post ORDER BY Id DESC ``` I want to select the first 5 rows after the 20th row. How I can do this?
Use OFFSET and FETCH [MSDN OFFSET FETCH Clause](https://technet.microsoft.com/en-us/library/gg699618%28v=sql.110%29.aspx): ``` SELECT * FROM tbl_post ORDER BY whatever OFFSET 20 ROWS FETCH NEXT 5 ROWS ONLY; ``` Note that you have to order by something for this to work, and you cannot use `top` at the same time
``` with x as (select row_number() over(order by id desc) as rn, * from tbl_post) select t.* from x join tbl_post t on x.id = t.id where x.rn between 20 and 25 ``` This is the easiest way to assign row numbers and selecting the rows you need later on.
How to select top 5 after 20 rows
[ "", "sql", "sql-server", "" ]
I have two separate queries that work fine by themselves but I need them to work in one query. I can combine the results easily enough in excel but this is to be part of a larger query. The two separate queries are: ``` SELECT SiteProductVariation.ProductVariationID, COUNT(SiteProduct.SiteProductID) AS Expr1 FROM SiteProductVariation INNER JOIN SiteProduct ON SiteProduct.SiteProductID = SiteProductVariation.SiteProductID WHERE (SiteProductVariation.ProductVariationID = 159868) AND (SiteProduct.ProductDisplay = 0) GROUP BY SiteProductVariation.ProductVariationID ``` and ``` SELECT SiteProductVariation.ProductVariationID, COUNT(SiteProduct.SiteProductID) AS Expr1 FROM SiteProductVariation INNER JOIN SiteProduct ON SiteProduct.SiteProductID = SiteProductVariation.SiteProductID WHERE (SiteProductVariation.ProductVariationID = 159868) AND (SiteProduct.ProductDisplay = 1) GROUP BY SiteProductVariation.ProductVariationID ``` For this `ProductVariationID` there are 11 Siteproductids, 7 with Productdisplay=1 and 4 with ProductDisplay=0, and these queries do return that information fine. But I tried to combine them into: ``` SELECT SiteProductVariation.ProductVariationID, COUNT(SiteProduct.SiteProductID) AS Expr1, COUNT(SiteProduct_1.SiteProductID) AS Expr2 FROM SiteProductVariation INNER JOIN SiteProduct ON SiteProduct.SiteProductID = SiteProductVariation.SiteProductID INNER JOIN SiteProduct AS SiteProduct_1 ON SiteProduct_1.SiteProductID = SiteProductVariation.SiteProductID WHERE (SiteProductVariation.ProductVariationID = 159868) AND (SiteProduct.ProductDisplay = 0) AND (SiteProduct_1.ProductDisplay = 1) GROUP BY SiteProductVariation.ProductVariationID ``` and get no results. I think it is because of using group by when I looked this problem up. Any help is appreciate even if to say I can't do this. There is a chance a siteproductid may have 0 results with ProductDisplay equalling 1 or 0 so that may need to be taken into account. Thanks in advance.
``` SELECT SiteProductVariation.ProductVariationID , COUNT(CASE WHEN SiteProduct.ProductDisplay = 0 THEN SiteProduct.SiteProductID END) AS Expr1 , COUNT(CASE WHEN SiteProduct.ProductDisplay = 1 THEN SiteProduct.SiteProductID END) AS Expr2 FROM SiteProductVariation INNER JOIN SiteProduct ON SiteProduct.SiteProductID = SiteProductVariation.SiteProductID WHERE (SiteProductVariation.ProductVariationID = 159868) AND (SiteProduct.ProductDisplay = 0 OR SiteProduct.ProductDisplay = 1) GROUP BY SiteProductVariation.ProductVariationID ```
You can do it like this: ``` SELECT SiteProductVariation.ProductVariationID, SUM(CASE WHEN SiteProduct.ProductDisplay = 0 THEN 1 ELSE 0 END) AS Expr1, SUM(CASE WHEN SiteProduct.ProductDisplay = 1 THEN 1 ELSE 0 END) AS Expr2 FROM SiteProductVariation INNER JOIN SiteProduct ON SiteProduct.SiteProductID = SiteProductVariation.SiteProductID WHERE (SiteProductVariation.ProductVariationID = 159868) AND (SiteProduct.ProductDisplay IN (0,1)) GROUP BY SiteProductVariation.ProductVariationID ```
Two counts in one query with opposing where criteria
[ "", "sql", "sql-server", "count", "group-by", "" ]
This is a bit specific so please bear with me... I want my where statement to give me all the results that meet the following criteria: ``` WHERE TestCode = A1 AND TestResult > 50 AND TestCode = A2 AND TestResult > 200 ``` In real terms I want a list of all the patients that have both an `A1>50` whilst also having an `A2 > 200` I can see that this will not work "as is" because I'm giving the clause two sets of greater thans, but however I add brackets to indicate what I want, it returns no data. Original (not working) query ``` AND (SQLUser.EP_VisitTestSetData.VISTD_TestCode_DR = 'A0165' AND SQLUser.EP_VisitTestSetData.VISTD_TestData > '35') AND (SQLUser.EP_VisitTestSetData.VISTD_TestCode_DR = 'A0155' AND SQLUser.EP_VisitTestSetData.VISTD_TestData > '25') ) ``` Working Code ``` AND ((SQLUser.EP_VisitTestSetData.VISTD_TestCode_DR = 'A0165' AND SQLUser.EP_VisitTestSetData.VISTD_TestData > '35') OR (SQLUser.EP_VisitTestSetData.VISTD_TestCode_DR = 'A0155' AND SQLUser.EP_VisitTestSetData.VISTD_TestData > '25')) AND ((SQLUser.EP_VisitTestSetData.VISTD_TestCode_DR = 'A0155' AND SQLUser.EP_VisitTestSetData.VISTD_TestData > '25') OR (SQLUser.EP_VisitTestSetData.VISTD_TestCode_DR = 'A0165' AND SQLUser.EP_VisitTestSetData.VISTD_TestData > '35')) ``` Sorry about the formatting...
Were you looking to do something like this? ``` WHERE ((TestCode = A1) AND (TestResult > 50)) OR ((TestCode = A2) AND (TestResult > 200)) ``` It looks like your testing for `WHEN TestCode = 'A1'` `AND` `WHEN TestCode = 'A2'` Which would be impossible for them to equal both at the same time, returning empty.
I think the syntax you're after is: ``` SELECT * FROM your_table AS A WHERE EXISTS ( SELECT TOP 1 1 FROM your_table AS A1_50 WHERE A1_50.ID = A.ID AND A1_50.TestCode = 'A1' AND A1_50.TestResult > 50) AND EXISTS ( SELECT TOP 1 1 FROM your_table AS A2_200 WHERE A2_200.ID = A.ID AND A2_200.TestCode = 'A2' AND A2_200.TestResult > 200) ``` Just replace your\_table with the table you're querying and ID with the patient\_id column you're checking to have the scores > 50 / 200 EDIT: You can also use INTERSECT: ``` SELECT PatientID FROM your_table WHERE TestCode = 'A1' AND TestResult > 50 INTERSECT SELECT PatientID FROM your_table WHERE TestCode = 'A2' AND TestResult > 200 ```
SQL Where condition a is met and condition b is met No Results
[ "", "sql", "" ]
How can we get last day of the month through month name in Postgresql?
Try this way : ``` select date_trunc('month', to_date('January ', 'Month'))+'1month'::interval-'1day'::interval ``` To parse a date object from a month name you can try this way : ``` to_date('January', 'Month') ```
``` SELECT DATE_PART('days', DATE_TRUNC('month', NOW()) + '1 MONTH'::INTERVAL - DATE_TRUNC('month', NOW()) ) ``` You can use any date instead now
Last Date of the Month in Postgres Sql
[ "", "sql", "postgresql", "" ]
I have a table that has four columns: `id`, `item_number`, `feature`, `value`. The table looks like this and has about 5 million entries. ``` ╔════╦═════════════╦═════════╦═══════╗ ║ id ║ item_number ║ feature ║ value ║ ╠════╬═════════════╬═════════╬═══════╣ ║ 1 ║ 234 ║ 5 ║ 15 ║ ║ 2 ║ 234 ║ 3 ║ 256 ║ ║ 3 ║ 453 ║ 5 ║ 14 ║ ║ 4 ║ 453 ║ 4 ║ 12 ║ ║ 5 ║ 453 ║ 7 ║ 332 ║ ║ 6 ║ 17 ║ 5 ║ 88 ║ ║ 7 ║ 17 ║ 9 ║ 13.86 ║ ╚════╩═════════════╩═════════╩═══════╝ ``` How can I sort the table so that I can get the `item_numbers` in descending order based on the feature value? I am also selecting other feature numbers with their values but I only want to sort by feature number 5.
In your query, add `order by item_number desc` If you are trying to query based on a specific feature, so only receive one set of data for a feature at at time, add `where feature = 'feature'` where "feature" is the feature value you want to search for. If you are looking to provide all features but sort them, you can add `order by feature, item_number desc` and you will be give all features in ascending order and together (grouped) then the items\_number(s) in descending order EDIT:: Sounds like from your latest comment, this may be your solution: ``` SELECT item_number FROM table WHERE feature = '5' ORDER BY value DESC ```
Using `order by` with `desc` and `where` clauses: ``` select `item_numbers` from `tbl` where `feature` = 5 order by `value` desc ```
MySQL data sort
[ "", "mysql", "sql", "sorting", "" ]