Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
I need to find out the average time in which an event starts. The start time is recorded in DB in startDate column. ``` | StartDate | |=====================| |2015/04/10 3:46:07 AM| |2015/04/09 3:47:37 AM| |2015/04/08 3:48:07 AM |2015/04/07 3:43:44 AM| |2015/04/06 3:39:08 AM| |2015/04/03 3:47:50 AM| ``` So what I need is calculate the average time (hh:MM:ss) the event starts daily. I am not quite sure how to approach this. Below query won't work coz it just sums up and divide the total value by total number: ``` SELECT AVG(DATE_FORMAT(StartDate,'%r')) FROM MyTable ```
Demo: <http://sqlfiddle.com/#!9/33c09/3> Statement: ``` select TIME_FORMAT(avg(cast(startDate as time)),'%h:%i:%s %p') as avg_start_date from demo ``` Setup: ``` create table demo (startDate datetime); insert demo (startDate) values ('2015-04-10 3:46:07'); insert demo (startDate) values ('2015-04-09 3:47:37'); insert demo (startDate) values ('2015-04-08 3:48:07'); insert demo (startDate) values ('2015-04-07 3:43:44'); insert demo (startDate) values ('2015-04-06 3:39:08'); insert demo (startDate) values ('2015-04-03 3:47:50'); ``` Explanation: * Casting to Time ensures the Date component is ignored (if you're averaging datetimes and hoping for an average time it's like averaging double figure numbers and hoping for the unit to match what you'd have seen if you'd only averaged the units of those numbers). * AVG is the average function you're already familiar with. * TIME\_FORMAT is to present your data in a user friendly way so you can check your results.
Firstly you have to extract your datetime format to contain only time so that you can use an aggregate function on it. This is done by casting it to type `TIME`. Then you can use built-in aggregate function `AVG()` to achieve your goal and grouping to get average start date for every day you have in your table. This is done by ``` SELECT AVG(StartDate::TIME) FROM MyTable ``` To get average starting time per day simply ``` SELECT StartDate::DATE, AVG(StartDate::TIME) FROM MyTable GROUP BY StartDate::DATE ``` And if you also wish to have days that have no events starting in that time check [here](https://stackoverflow.com/questions/6870499/generate-series-equivalent-in-mysql) for equivalent to generate\_series which is not an option in MySQL.
How to find average on a timestamp column
[ "", "mysql", "sql", "date", "" ]
![Example](https://i.stack.imgur.com/2iU0P.jpg) when a row of ID is void, it will not show the cc\_type, but I need get the cc\_type for those void rows. ``` CASE WHEN CC_TYPE = 'Credit' OR Voided = 'Yes' THEN 'Credit' WHEN CC_TYPE = 'Debit OR Voided = 'Yes' THEN 'Debit' END ``` Obviously, this approach wouldn't work. Row 3 will consider as Credit to since it satisfy the condition for voided = 'Yes'. My logic is if ID and Name are same then append the CC\_Type value to Voided row, but I don't know how to get it work. Thanks
You can use a window function: ``` SELECT id, Name, MAX(CC_Type) OVER (PARTITION BY id, name) AS CC_Type, Voided FROM yourtable ```
this worked for me: ``` create table result as select a.id,a.name,b.cc_type,a.voided from (select id, name,voided from table1) as a inner join (select distinct id, cc_type from table1 where cc_type ne "") as b on a.id = b.id; quit; ```
Get data based on other row
[ "", "sql", "sql-server-2012", "" ]
I want to get the number of unique mobile phone entries per day that have been logged to a database and have never appeared in the log. I thought it was a trivial query but shock when the query took 10 minutes on a table with about 900K entries. A sample Select is getting the number of unique mobile phones that were logged on the 9th of April 2015 and had never been logged before. Its like getting who are the truly new visitors to you site on a specific day. [SQL Fiddle Link](http://sqlfiddle.com/#!15/9ee8e/4) ``` SELECT COUNT(DISTINCT mobile_number) FROM log_entries WHERE created_at BETWEEN '2015-04-09 00:00:00' AND '2015-04-09 23:59:59' AND mobile_number NOT IN ( SELECT mobile_number FROM log_entries WHERE created_at < '2015-04-09 00:00:00' ) ``` I have individual indexes on `created_at` and on `mobile_number`. Is there a way to make it faster? I see a very similar question [here on SO](https://stackoverflow.com/q/16901755/1082673) but that was working with two tables.
A `NOT IN` can be rewritten as a `NOT EXISTS` query which is very often faster (unfortunately the Postgres optimizer isn't smart enough to detect this). ``` SELECT COUNT(DISTINCT l1.mobile_number) FROM log_entries as l1 WHERE l1.created_at >= '2015-04-09 00:00:00' AND l1.created_at <= '2015-04-09 23:59:59' AND NOT EXISTS (SELECT * FROM log_entries l2 WHERE l2.created_at < '2015-04-09 00:00:00' AND l2.mobile_number = l1.mobile_number); ``` An index on `(mobile_number, created_at)` should further improve the performance. --- A side note: `created_at <= '2015-04-09 23:59:59'` will not include rows with fractional seconds, e.g. `2015-04-09 23:59:59.789`. When dealing with timestamps it's better to use a "lower than" with the "next day" instead of a "lower or equal" with the day in question. So better use: `created_at < '2015-04-10 00:00:00'` instead to also "catch" rows on that day with fractional seconds.
I tend to suggest transforming `NOT IN` into a left anti-join (i.e. a left join that only keeps the left rows that do *not* match the right side). It's complicated somewhat in this case by the fact that it's a self join against two distinct ranges of the same table, so you're really joining two subqueries: ``` SELECT COUNT(n.mobile_number) FROM ( SELECT DISTINCT mobile_number FROM log_entries WHERE created_at BETWEEN '2015-04-09 00:00:00' AND '2015-04-09 23:59:59' ) n LEFT OUTER JOIN ( SELECT DISTINCT mobile_number FROM log_entries WHERE created_at < '2015-04-09 00:00:00' ) o ON (n.mobile_number = o.mobile_number) WHERE o.mobile_number IS NULL; ``` I'd be interested in the performance of this as compared with the typical `NOT EXISTS` formulation provided by @a\_horse\_with\_no\_name. Note that I've also pushed the `DISTINCT` check down into the subquery. Your query seems to be "how many newly seen mobile numbers are there in <time range>". Right?
Is there a way to make an SQL NOT IN query faster?
[ "", "sql", "postgresql", "" ]
For a small project, I am creating an entity relationship diagram for a simple stock-tracking app. **User Story** Products are sold by product suppliers. Products are ordered by an office and delivered to them. One or more deliveries may be required to full-fill an order. These products ordered by this office are in turn delivered to various branches. Is there a general way that I can represent how stock will be distributed by the office to the branches it is responsible for? **ER Diagram** Here is a very simplified diagram describing the above. ![ER Diagram](https://i.stack.imgur.com/6PDpr.png) Deliveries will be made to an office and in turn the branches. Each department which is child of HeadQuarters(not shown in diagram) has different quantities of stock that do not necessarily have one to one correspondence with the OrdersDetail. The problem is how to show the inventories of the various departments given the current schema or to modify it in such a way that this is easier to show. **Update:** Started bounty and created new ERD diagram.
This is a bit of an odd structure. Normally the way I would handle this wouldn't be with a daisy-chain structure that you have here, but would in turn use some sort of transaction-based system. The way I'd handle it is to have everything off of `order`, and `have one-to-many` relationships off of that. For instance, I do see that you have that with `OrderDetail` off of `Order`, however this will always be a subset of `Order`. All orders will always have detail; I'd link `OrderDelivery` off of the main `Order` table, and have the detail accessible at any point as just a reference table off of it *instead* of off of `OrderDetailDelivery`. I'd have `Office` as a field on `OrderDelivery` and also use `Branch` in that way as well. If you want to have separate tables for them, that is fine, but I'd use `OrderDelivery` as a central place for these. A `null` could indicate whether it had been delivered or not, and then you could use your application layer to handle the order of the process. In other words, `OfficeID` and `BranchID` could exist as fields to indicate a foreign key to their respective tables off of `OrderDelivery` ## Edit Since the design has changed a bit (and it does look better), one thing I'd like to point out is that you have `supplier` with the same metadata as `Delivery`. `Supplier` to me sounds like an entity, whereas `Delivery` is a process. I'd think that `Supplier` might live well on it's own as a reference table. In other words, you don't need to include all of the same metadata on this table; instead you might want to create a table (much like you have right now for `supplier`) but instead called `SupplierDelivery`. The point I see is that you would like to be able to track all of the pieces of an order of various products through all of its checkpoints. With this in mind you might not necessarily want to have a separate entity for this, but instead track something like `SupplierDate` as one of the fields on `Delivery`. Either way I wouldn't get too hung up on the structure; your application layer will be handling a good deal of this. **One thing I'd be *very* careful about:** if multiple tables have fields with the same name, but are not keys referencing each other, you may wish to create distinct names. For example, if `deliveryDate` on supplier is different from the same key on `Delivery`, you might want to think about calling it something like `shipDate` or if you mean the date it arrived at the supplier, `supplierDeliveryDate` otherwise you can confuse yourself a lot in the future with your queries, and will make your code extremely difficult to parse without a lot of comments. ## Edit to include diagram [edited again for a better diagram]: Below is how I'd handle it. Your redone diagram is pretty close but here are a few changes ![enter image description here](https://i.stack.imgur.com/av3sz.png) My explanation: It's easiest to set this up with the distinct entities first, and then set up their relationships afterward, and determine whether or not there needs to be a link table. The distinct entities are as described: * **Order** * **Product** * **Supplier** * **Branch** **Headquarters**, while I included it, is actually not a necessary component of the diagram; presumably orders and requests are made here, but my understanding is that at no point does the order flow through the headquarters; it is more of a central processing area. I gather products do not run through Headquarters, but instead go directly to branches. If they do (which might slow down delivery processes, but that's up to you), you can replace *Branch* with it, and have branch as a link of it as you had before. Otherwise, I think you'd be safe to remove it from the diagram entirely. **Link Tables** These are set up for the many-to-many relationships which arise. * **OrderProductDetail** - orders can have many products, and many orders can have the same product. Each primary key combo can be associated with a number of products per order [edit: see below, *this now ties together orders, products and suppliers, through SupplierProduct*]. Because this is a link table, you can have any number of products per order. * **SupplierProduct** - this operates on the assumption that there is more than one supplier for the same product, and that one supplier might have multiple products, so this link table is created to handle the inventory available per product. **Edit:** *this is now the direct link to OrderProductDetail as it makes sense that individual suppliers have a link to the order, instead of two tables removed* This can serve as a central link to combine suppliers and products, but then tied to OrderProducDtail. Because this is a link table, you can have any number of suppliers delivering any number or amount of product. * **Delivery** - Branches can receive many deliveries, and as you mentioned, an order may be split up into various pieces, based on availability. For this reason, this links to **OrderProductDetail** which is what holds the specific amounts with each product. Since **OrderProductDetail** is already a link table with dual primary keys, *orderId* has a foreign key of the dual primary key off of **OrderProductDetail** using the paired keys of *productId* and *orderId* to make sure there is a distinct association with the specific product within a larger order. To sum this up, **supplierProduct** contains the combination of suppliers and product which then passes to **OrderProductDetail** which combines these with the details of the orders. This table essentially does the bulk of the work to put everything together before it passes through a delivery to the branches. **Note:** Last edit: Added supplier id to OrderProductDetail and switched to the dual primary key of supplierId and productId from supplierProduct to make sure you have a clearer way of making sure you can be granular enough in the way the products go from suppliers to OrderProductDetail. I hope this helps.
![enter image description here](https://i.stack.imgur.com/t076F.png)![I have tried to create an ERD for you. Its very easy to understand. You only need to have one table between Delivery and Branch. Once a delivery is arrived against an order, the office will assign it to any branch by creating a record. That delivery will not then be assigned to any other branch. You can make changes by removing primary keys if you want to assign more than one branches.](https://i.stack.imgur.com/fGP27.jpg) I hope this will solve your problem. If there are issues, let me know. Thanks
ER Diagram - Showing Deliveries to Office and to its Branches
[ "", "sql", "entity-relationship", "erd", "" ]
I've a problem with this part using Postgresql 9.4 where I need that the program only shows `sum(v.price) > 1000` but if I put `total > 1000` in where conditions says me that total doesn't exists and doesn't let me put `sum(v.price)` because it's not possible to do this kind of operations in this section. The creation tables are: ``` CREATE TABLE PATIENT ( Pat_Number INTEGER, Name VARCHAR(50) NOT NULL, Address VARCHAR(50) NOT NULL, City VARCHAR(30) NOT NULL, CONSTRAINT pk_PATIENT PRIMARY KEY (Pat_Number) ); CREATE TABLE VISIT ( Doc_Number INTEGER, Pat_Number INTEGER, Visit_Date DATE, Price DECIMAL(7,2), Turn INTEGER NOT NULL, CONSTRAINT Visit_pk PRIMARY KEY (Doc_Number, Pat_Number, Visit_Date), CONSTRAINT Visit_Doctor_fk FOREIGN KEY (Doc_Number) REFERENCES DOCTOR(Doc_Number), CONSTRAINT Visit_PATIENT_fk FOREIGN KEY (Pat_Number) REFERENCES PATIENT(Pat_Number) ); ``` This is the statement where I have problems: ``` SELECT p.name, p.address, p.city, sum(v.price) as total FROM VISIT v JOIN PATIENT p ON p.Pat_Number = v.Pat_Number WHERE Date(Visit_Date) < '01/01/2012' GROUP BY p.name, p.address, p.city, p.Pat_Number, v.Pat_Number ORDER BY total DESC; ``` How can I do it?
Add `Having sum(v.price) > 1000` after `group by`: ``` SELECT p.name, p.address, p.city, sum(v.price) as total FROM VISIT v JOIN PATIENT p ON p.Pat_Number = v.Pat_Number WHERE Date(Visit_Date) < '01/01/2012' GROUP BY p.name, p.address, p.city, p.Pat_Number, v.Pat_Number HAVING SUM(v.price) > 1000 ORDER BY total DESC; ```
The conditions in the `WHERE` part of the query apply at each individual row. You can't use aggregate functions there. There is a similar functionality for groups called `HAVING`. `HAVING` is like `WHERE`, but the conditions are applied per group. So adding `HAVING sum(v.price) > 1000` to the query will filter only those groups, where the sum of price is above 1000.
How to us an aggregated value in the where clause
[ "", "sql", "postgresql", "" ]
For example, I have simple SQL: ``` select product.name from products; ``` In my table I have 2 products: wheel, steering wheel. So my result is: ``` wheel steering wheel ``` How to get quoted result if the result has more than 1 word ?: ``` wheel "steering wheel" ``` I tried to use `quote_literal` function but the result is unfortunately always quoted. `quote_literal` works like I need only for constant strings like: `quoted_literal('steering wheel')` / `quoted_literal('wheel')`
What about: ``` SELECT CASE WHEN (product.name LIKE '% %') THEN quoted_ident(product.name) ELSE product.name END FROM products; ```
use this: ``` SELECT name FROM products WHERE position(' ' in name)= 0 union SELECT '"'|| name ||'"' FROM products WHERE position(' ' in name)<>0 ``` Here is a [sqlfiddle](http://sqlfiddle.com/#!9/1a3dd/18)
Return quoted data from SQL query result only if needed
[ "", "sql", "postgresql", "" ]
I am stuck at a certain point with my select queries. I need to find from the table 'patient' a cellphone number that has a matching pair with the other patients cellphone number (if that makes any sense). at the moment I got this insert values from the table patient: ``` insert into patient values(1022010201, 88.2, 77, 0676762516); insert into patient values(1022010202, 66.7, 55, 0676762518); insert into patient values(1022010203, 59.6, 65, 0676762517); insert into patient values(1022010204, 99.1, 76, 0676762515); insert into patient values(1022010205, 88.2, 89, 0676762514); insert into patient values(1022010207, 91.4, 76, 0676762513); insert into patient values(182704726, 54.4, 44, 0676762516); ``` the first and the last telephone numbers are the same. With my 'self-join' query I am not getting that one value, but I am also getting the other values. In a way it makes sense, because my query connects the table patient with it self. So in this way the select query does find numbers who are a matching pair. So at this point I am stuck how to only get the real matching phone numbers instead of all the others. ``` this is my select query at the moment: select p1.telefoonnr, p2.telefoonnr from patient p1, patient p2 where p1.telefoonnr = p2.telefoonnr; ``` this is the result: ``` TELEFOONNR TELEFOONNR -------------------- -------------------- 676762516 676762516 676762518 676762518 676762517 676762517 676762515 676762515 676762514 676762514 676762513 676762513 06-12345678 06-12345678 7 rows selected ``` Any help would be great. Thanks in advance
I have found the awnser to my own question. The result that I was looking for is this: ``` TELEFOONNR BSN -------------------- ---------- 676762516 1022010201 676762516 182704726 ``` This was possible due the following query: ``` select telefoonnr, bsn from patient where telefoonnr = (select telefoonnr from patient group by telefoonnr having count(*) >1); ```
Try this: ``` select telefoonnr,count(telefoonnr) from patient group by telefoonnr having count (telefoonnr) > 1; ``` This is like finding duplicates in your table. So the above query will give you the `telefoonnr` along with `count` and other details from your table.
SQL SELF-JOIN with Oracle 11G - Finding a patient who has the same telephone number
[ "", "sql", "join", "oracle11g", "self-join", "" ]
I am using *Oracle SQL* and am having trouble updating a large amount of specific records from my `CTRL_NUMBER` *table*. Currently, when I want to only update one record, the following expression works: ``` UPDATE STOCK SET PCC_AUTO_KEY=36 WHERE CTRL_NUMBER=54252 ``` But, since I have *over 1,000 records to update*, I do not want to type this in for each record (`CTRL_NUMBER`). So I attempted the following with only two records, and the database did not update with the new `PCC_AUTO_KEY` in the `SET` condition. ``` UPDATE STOCK SET PCC_AUTO_KEY=36 WHERE CTRL_NUMBER=54252 AND CTRL_NUMBER=58334 ``` When I execute the above expression, I do not receive any error codes and it will let me commit the expression, but the database information does not change after I verify the `CTRL_NUMBER`. How else could I approach this update effort or how should I change my expression to successfully update the `PCC_AUTO_KEY` for multiple `CTRL_NUMBER`? Thanks for your time!
In your second `Update` command, you have: ``` WHERE CTRL_NUMBER=54252 AND CTRL_NUMBER=58334 ``` *My question*: is it possible for a field to have tow values at a same time in a specific record? of course no. If you have a `range` of values for `CTRL_NUMBER` and you want to update your table on base of them you can do your update with following where clauses: ``` WHERE CTRL_NUMBER BETWEEN range1 AND range2 ``` or ``` WHERE CTRL_NUMBER >= range1 AND CTRL_NUMBER <= range2 ``` *But:* if you have not an specific range and you have different values for `CTRL_NUMBER` then you can use `IN` operator with your where clause: ``` WHERE CTRL_NUMBER IN (value1,value2,value3,etc) ``` You can also have your values from another `select` statement: ``` WHERE CTRL_NUMBER IN (SELECT value FROM anotherTable) ```
Use the IN clause - ``` UPDATE STOCK SET PCC_AUTO_KEY=36 WHERE CTRL_NUMBER IN (54252, 58334) ``` Your statement is trying to update where CTRL\_NUMBER is 54252 AND 58334, but it can only be one of those at a time. If you changed your statement to ``` UPDATE STOCK SET PCC_AUTO_KEY=36 WHERE CTRL_NUMBER=54252 OR CTRL_NUMBER=58334 ``` it would work.
How do I update multiple records under one "Where" clause? (Oracle SQL)
[ "", "sql", "oracle", "where-clause", "" ]
I have created two tables one employee table and the other is department table .Employee table has fields `EmpId , Empname , DeptID , sal , Editedby and editedon` where `EmpId` is the primary key and `Dept` table has `DeptID` and `deptname` where `DeptID` is the secondary key. I want the SQL query to show names of employees belonging to software departmant The entries in dept table are as below : ``` DeptID Deptname 1 Software 2 Accounts 3 Administration 4 Marine ```
Use `INNER JOIN`: ``` SELECT E.empname FROM Employee E INNER JOIN department D ON E.DeptID=D.DeptID WHERE D.DeptID = '1' ```
Is this what you need? ``` SELECT EmpName FROM Employee WHERE DeptID = 1 ```
SQL query to find names of employees belonging to particular department
[ "", "mysql", "sql", "sql-server", "" ]
I have the following query which selects the top 3 records per category. At the moment it limits the records to 9. However, this is not right because if I limit the number to 8 the last subcategory loses one record from display. I want to limit the records per SubCategories queried number i.e. 3 subcategories only with their top 3 products. What I have is the following: ``` SELECT TOP 9 * FROM tProduct p WHERE p.ProductID IN ( SELECT TOP 3 ProductID FROM tProduct PP WHERE pp.SubCategoryID = p.SubCategoryID ) ORDER BY SubCategoryID ``` Any ideas how to modify the above? Edit: The closest I got so far based on the `CROSS APPLY` as suggested is this: ``` SELECT * FROM tSubCategory c CROSS APPLY ( SELECT TOP(3) * FROM tProduct p WHERE c.SubCategoryID = p.SubCategoryID ORDER BY p.ProductID DESC ) x WHERE c.SubCategoryID BETWEEN 1 AND 2; ``` However, the query should specify only one number i.e. 4 categories and not between 1 and 2 which applies to the subcategory id.
Do you mean something like: ``` SELECT * FROM (SELECT TOP(3) * FROM tSubCategory) c CROSS APPLY ( SELECT TOP(3) * FROM tProduct p WHERE c.SubCategoryID = p.SubCategoryID ORDER BY p.ProductID DESC ) x; ``` ...where I've based just wrapped your tSubCategory into a subquery to limit it to just three rows.
This looks like an example for CROSS APPLY as mention by Rob Farley here: <http://blogs.lobsterpot.com.au/2011/04/13/the-power-of-t-sqls-apply-operator/> For this particular query, you would want to selct tthe top 3 subcategory rows and then use OUTER APPLY to append that to each product row. You will not be able to use top 9 on the select however, as that will take 9 rows only (as opposed to the top 9 product ids). This will need to be done with a WHERE clause. The query below (with some column name modifications) should do the trick: ``` SELECT * FROM Product AS p OUTER APPLY ( SELECT TOP (3) s.description FROM SubCategory AS s WHERE s.ID = p.subcategoryId GROUP BY s.id, s.description ORDER BY s.id DESC) as s WHERE p.Id IN (select distinct top 9 Id from Product); ```
Selecting top x categories with their top x products
[ "", "sql", "sql-server", "" ]
On a SQL Server 2008 I have a view `revenue` with the following schema: ``` +----------------------------+  | id | year | month | amount |  +----------------------------+  |  1 | 2014 |    11 |    100 |  |  2 | 2014 |    12 |   3500 |  |  3 | 2014 |    12 |     90 |  |  4 | 2015 |     1 |   1000 |  |  5 | 2015 |     2 |   6000 |  |  6 | 2015 |     2 |    600 |  |  7 | 2015 |     3 |    70 |  |  8 | 2015 |     3 |    340 |  +----------------------------+  ``` The schema and data above is simplified and the view is very big with millions of rows. I have no control over the schema, so cannot change the fields to DATE or similar. The year and month fields are INT. I'm looking for a SELECT statement that returns me x months worth of data starting from an arbitrary month. For example rolling 3 months, rolling 5 months, etc. What I came up with is this: ``` SELECT rolling_date, amount FROM (SELECT CAST('01/' + RIGHT('00' + CONVERT(VARCHAR(2), month), 2) + '/' + CAST(year AS VARCHAR(4)) AS DATE) AS rolling_date, amount FROM [revenue] ) date_revenue WHERE rolling_date BETWEEN CAST('01/12/2014' AS DATE) AND CAST('31/02/2015' AS DATE) ``` However, ... 1. This doesn't work and throws `Error line 1: Conversion failed when converting date and/or time from character string..` which seems to be referring to the BETWEEN clause 2. This seems a terribly awkward way of doing it and a waste of resources. What is an efficient way to write this query?
First off, the conversion error happens because there is no February 31st. I changed it to February 28th in the sample below. Since your table contains millions of rows, you're best off avoiding any conversions or calculations on the data in the table. Instead, convert the input to a format which matches your table. That way you can take advantage of indexes. The following example will be very efficient, especially if you can create a nonclustered index on `Year, Month`. ``` declare @start datetime = '2014-12-01' declare @end datetime = '2015-02-28' declare @startyear int = datepart(year, @start) declare @startmonth int = datepart(month, @start) declare @endyear int = datepart(year, @end) declare @endmonth int = datepart(month, @end) select * from revenue where (Year > @startyear OR (Year = @startyear AND Month >= @startmonth)) AND (Year < @endyear OR (Year = @endyear AND Month <= @endmonth)) ``` Edit: The following example is identical from a processing standpoint, and does not declare any new variables: ``` select * from revenue where (Year > datepart(year, @start) OR (Year = datepart(year, @start) AND Month >= datepart(month, @start))) AND (Year < datepart(year, @end) OR (Year = datepart(year, @end) AND Month <= datepart(month, @end))) ``` Edit 2: If you're able to pass in the Year & Month individually, you can run this: ``` select * from revenue r where (Year > 2014 OR (Year = 2014 AND Month >= 12)) AND (Year < 2015 OR (Year = 2015 AND Month <= 2)) ```
You can do an integer comparison for your year month: ``` SELECT id, yr, month, amount FROM Magazines WHERE yr*100 + month >= 201412 AND yr*100 + month <= 201503 ``` This will not return your year month as a date however. Is this a requirement?
filter date records
[ "", "sql", "sql-server", "date", "between", "" ]
I hope you all can help me, it's been 2 days WITH a MuleSoft(MS) Guru by my side and we cannot figure this issue out. Simply put, we cannot connect to a SQL Server database. I am using a `Generic_Database_Connectory` with the following information: * URL: `jdbc:jtds:sqlserver://ba-crmdb01.ove.local:60520;Instance=CRM;user=rcapilli...` * Driver Class Name: `com.microsoft.sqlserver.jdbc.SQLServerDriver` It's a `sqljdbc4.jar` file, the latest. The "test connection" works fine. No issues there. But when I run the app, I get this error (below) Anyone been able to get a SQL Server DB connection to work??? ``` ERROR 2015-04-09 14:05:31,106 [pool-17-thread-1] org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: null java.lang.NullPointerException at org.mule.module.db.internal.domain.connection.DefaultDbConnection.isClosed(DefaultDbConnection.java:100) ~[mule-module-db-3.6.1.jar:3.6.1] at org.mule.module.db.internal.domain.connection.TransactionalDbConnectionFactory.releaseConnection(TransactionalDbConnectionFactory.java:136) ~[mule-module-db-3.6.1.jar:3.6.1] at org.mule.module.db.internal.processor.AbstractDbMessageProcessor.process(AbstractDbMessageProcessor.java:99) ~[mule-module-db-3.6.1.jar:3.6.1] at org.mule.transport.polling.MessageProcessorPollingMessageReceiver$1.process(MessageProcessorPollingMessageReceiver.java:164) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.transport.polling.MessageProcessorPollingMessageReceiver$1.process(MessageProcessorPollingMessageReceiver.java:148) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.ExecuteCallbackInterceptor.execute(ExecuteCallbackInterceptor.java:16) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.HandleExceptionInterceptor.execute(HandleExceptionInterceptor.java:30) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.HandleExceptionInterceptor.execute(HandleExceptionInterceptor.java:14) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.BeginAndResolveTransactionInterceptor.execute(BeginAndResolveTransactionInterceptor.java:54) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.ResolvePreviousTransactionInterceptor.execute(ResolvePreviousTransactionInterceptor.java:44) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.SuspendXaTransactionInterceptor.execute(SuspendXaTransactionInterceptor.java:50) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.ValidateTransactionalStateInterceptor.execute(ValidateTransactionalStateInterceptor.java:40) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.IsolateCurrentTransactionInterceptor.execute(IsolateCurrentTransactionInterceptor.java:41) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.ExternalTransactionInterceptor.execute(ExternalTransactionInterceptor.java:48) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.RethrowExceptionInterceptor.execute(RethrowExceptionInterceptor.java:28) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.RethrowExceptionInterceptor.execute(RethrowExceptionInterceptor.java:13) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.TransactionalErrorHandlingExecutionTemplate.execute(TransactionalErrorHandlingExecutionTemplate.java:109) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.execution.TransactionalErrorHandlingExecutionTemplate.execute(TransactionalErrorHandlingExecutionTemplate.java:30) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.transport.polling.MessageProcessorPollingMessageReceiver.pollWith(MessageProcessorPollingMessageReceiver.java:147) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.transport.polling.MessageProcessorPollingMessageReceiver.poll(MessageProcessorPollingMessageReceiver.java:138) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.transport.AbstractPollingMessageReceiver.performPoll(AbstractPollingMessageReceiver.java:216) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.transport.PollingReceiverWorker.poll(PollingReceiverWorker.java:80) ~[mule-core-3.6.1.jar:3.6.1] at org.mule.transport.PollingReceiverWorker.run(PollingReceiverWorker.java:49) ~[mule-core-3.6.1.jar:3.6.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[?:1.7.0_75] at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) ~[?:1.7.0_75] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) ~[?:1.7.0_75] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.7.0_75] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_75] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_75] at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75] ```
This is how I used to configure and it works for me.. you can try this example :- ``` <db:generic-config name="Generic_Database_Configuration" url="jdbc:sqlserver://ANIRBAN-PC\\SQLEXPRESS:1433;databaseName=MyDBName;user=sa;password=mypassword" driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver" doc:name="Generic Database Configuration" /> ``` and in Mule flow :- ``` <db:select config-ref="Generic_Database_Configuration" doc:name="Database"> <db:parameterized-query><![CDATA[select * from table1]]></db:parameterized-query> </db:select> ```
If your test connection works fine, there should be no issues with your DB connection. Please post you config XML and that will help identifying the issue.
MuleSoft and SQL Server DB connection failure
[ "", "sql", "sql-server", "mule", "" ]
I have three tables in Mariadb ``` sensor +---------------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------------+-------------+------+-----+---------+-------+ | name | varchar(64) | NO | PRI | NULL | | | reload_time | int(11) | NO | | NULL | | | discriminator | varchar(20) | NO | | NULL | | +---------------+-------------+------+-----+---------+-------+ sensor_common_service +---------------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------------+-------------+------+-----+---------+-------+ | service_name | varchar(64) | NO | PRI | NULL | | | sensor_name | varchar(64) | NO | PRI | NULL | | +---------------+-------------+------+-----+---------+-------+ common_service +---------------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------------+-------------+------+-----+---------+-------+ | service_name | varchar(64) | NO | PRI | NULL | | | version | int(11) | NO | | NULL | | | reload_time | int(11) | NO | | NULL | | +---------------+-------------+------+-----+---------+-------+ ``` And I want to get all common\_services that have exactly a set of sensors, for example, all common\_services that have as sensors temperature and humidity. So, if I have ``` common_service1 : sensors [temperature] common_service2 : sensors [temperature,humidity] common_service3 : sensors [temperature,humidity, luminosity] ``` The query should return only common\_service2. My first attemp was trying to adapt the queries on [Join between mapping (junction) table with specific cardinality](https://stackoverflow.com/questions/9170101/join-between-mapping-junction-table-with-specific-cardinality) And this is the result ``` SELECT * FROM custom_service JOIN ( SELECT scm.service_name FROM sensor_custom_service scm WHERE scm.sensor_name IN ( SELECT s.name FROM sensor s WHERE s.name='luminosity' OR s.name='temperature' ) GROUP BY scm.service_name HAVING COUNT(DISTINCT scm.sensor_name)=2 ) AS jt ON custom_service.service_name=jt.service_name; ``` another one, ``` SELECT scs.* FROM sensor_custom_service scs where scs.sensor_name IN ( 'luminosity', 'temperature' ) GROUP BY scs.service_name; HAVING COUNT(scs.sensor_name) = 2 ``` But with this queries I also get the common\_services that have other sensors since the having count is only counting the sensor\_custom\_service that meets the where clause. Using the above example this queries both return ``` common_service2 : sensors [temperature,humidity] common_service3 : sensors [temperature,humidity] ``` I think that this query would be easy using INTERSECT operator with sth like this ``` SELECT scs.* FROM sensor_custom_service scs where scs.sensor_name IN ( 'luminosity', 'temperature' ) INTERSECT SELECT scs.* FROM sensor_custom_service scs HAVING COUNT(scs.sensor_name) = 2 ``` but Mariabb returns ``` ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'INTERSECT ``` because it isn't supported ( I think, because both queries works individually) --- Solution using 9000's query. ``` select * from custom_service cs where exists (select 1 from sensor_custom_service scs where scs.service_name = cs.service_name and scs.sensor_name = 'luminosity') AND exists (select 1 from sensor_custom_service scs where scs.service_name = cs.service_name and scs.sensor_name = 'temperature') AND NOT exists (select 1 from sensor_custom_service scs where scs.service_name = cs.service_name and scs.sensor_name NOT IN ('temperature', 'luminosity')); ```
MariaDB and MySQL have the GROUP\_CONCAT() function to accomplish this task: ``` SELECT service_name, GROUP_CONCAT(sensor_name ORDER BY sensor_name) AS sensors FROM sensor_common_service GROUP BY service_name HAVING sensors='humidity,temperature' ``` <http://sqlfiddle.com/#!9/2f574/1>
Note that `IN` gives you an `OR` semantics when you need an `AND` semantics. Basically if you need all 3 sensors present, you need 3 joins, each representing a separate sensor. Something along the lines of ``` select * from common_service cs where exists (select 1 from sensor_common_service scs where scs.service_name = cs.service_name and scs.sensor_name = 'luminosity') AND exists (select 1 from sensor_common_service scs where scs.service_name = cs.service_name and scs.sensor_name = 'temperature') -- add more sensors along these lines ``` This also means that it's impossible to write a variadic query that works for any number of sensors. (I'd love to be proved wrong!)
sql query with an specific cardinality (many to many)
[ "", "sql", "count", "mariadb", "" ]
While updating data in a table1 with `inner join` returning `O row updated`, both the table are in different database. I got alternate method to update this, but I don't why my `INNER JOIN` query gone wrong. **Not working with inner join** ``` UPDATE DB1.table1 SET t1.column3='value3' from DB1.table1 t1 INNER JOIN DB2.table2 t2 on t1.column2=t2.column2 WHERE (t1.column1 = 'value1') AND (t2.column3 = 'value3') ``` **Working query without using inner join.** ``` UPDATE DB1.table1 SET column3='value3' WHERE (column1 = 'value1') AND (column3 = 'value3') AND (column2 in (select column2 from DB2.table2 where column3='value3' and column3='value3' and column4='value4')) DB1..table1 column1 column2 column3 column4 c1 c2 c3a c4 c1 c2 c3a c4 c1 c2 c3b c4 c1 c2 c3b c4 DB2..table2 column1 column2 column3 column4 c1 c2 c3a c4 c1 c2 c3a c4 c1 c2 c3b c4 ``` Can any body suggest this?
You have some syntax error in your statement. Use alias in SET and UPDATE part or use TableName, don't mix: ``` UPDATE t1 SET column3='value3' from DB1.table1 t1 INNER JOIN DB2.table2 t2 on t1.column2=t2.column2 WHERE (t1.column1 = 'value1') AND (t2.column3 = 'value3') ```
in your first query which does not works you have: ``` UPDATE DB1.table1 SET t1.column3='value3' . . . ``` you mentioned the actual name of the table for `update` but in `set` statement you have used the `alias` so there is a confusion. and in second one you told: ``` UPDATE DB1.table1 SET column3= ... ``` you have used the actual name of table and column and there is nothing wrong here and the code works fine. **So** when you use aliases then try to use it in Update part also **OR** use the actual names everywhere, the below tow query should work for you: ``` UPDATE DB1.table1 SET DB1.table1.column3=DB2.table2.column3 FROM DB1.table1 JOIN DB2.table2 ON DB1.table1.column2=DB2.table2.column2 AND DB1.table1.column1 = 'value1' AND DB2.table2.column3 = 'value3' ``` and ``` UPDATE t1 SET t1.column3=t2.column3 FROM DB1.table1 t1 JOIN DB2.table2 t2 ON t1.column2=t2.column2 AND t1.column1 = 'value1' AND t2.column3 = 'value3' ```
update query with inner join [0 rows updated]
[ "", "sql", "sql-server", "sql-server-2008", "sql-update", "inner-join", "" ]
Can someone explain the performance difference between these 3 queries? `concat()` function: ``` explain analyze select * from person where (concat(last_name, ' ', first_name, ' ', middle_name) like '%Ива%'); Seq Scan on person (cost=0.00..4.86 rows=1 width=15293) (actual time=0.032..0.140 rows=6 loops=1) Filter: (pg_catalog.concat(last_name, ' ', first_name, ' ', middle_name) ~~ '%Ива%'::text) Total runtime: 0.178 ms ``` SQL standard concatenation with `||`: ``` explain analyze select * from person where ((last_name || ' ' || first_name || ' ' || middle_name) like '%Ива%'); Seq Scan on person (cost=0.00..5.28 rows=1 width=15293) (actual time=0.023..0.080 rows=6 loops=1) Filter: ((((((last_name)::text || ' '::text) || (first_name)::text) || ' '::text) || (middle_name)::text) ~~ '%Ива%'::text) Total runtime: 0.121 ms ``` Search fields separately: ``` explain analyze select * from person where (last_name like '%Ива%') or (first_name like '%Ива%') or (middle_name like '%Ива%'); Seq Scan on person (cost=0.00..5.00 rows=1 width=15293) (actual time=0.018..0.060 rows=6 loops=1) Filter: (((last_name)::text ~~ '%Ива%'::text) OR ((first_name)::text ~~ '%Ива%'::text) OR ((middle_name)::text ~~ '%Ива%'::text)) Total runtime: 0.097 ms ``` Why is `concat()` slowest one and why are several `like` conditions faster?
While not a concrete answer, the following might help you to reach some conclusions: 1. Calling `concat` to concatenate the three strings, or using the `||` operator, results in postgres having to allocate a new buffer to hold the concatenated string, then copy the characters into it. This has to be done for each row. Then the buffer has to be deallocated at the end. 2. In the case where you are ORing together three conditions, postgres may only have to evaluate only one or maybe two of them to decide if it has to include the row. 3. It is possible that expression evaluation using the `||` operator might be more efficient, or perhaps more easily optimizable, compared with a function call to `concat`. I would not be surprised to find that there is some special case handling for internal operators. 4. As mentioned in the comments, your sample is too small to make proper conclusions anyway. At the level of a fraction of a millisecond, other noise factors can distort the result.
What you have observed so far is interesting, but only a minor cost overhead for concatenating strings. The **much more important** difference between these expressions does not show in your minimal test case without indexes. The first two examples are not [**sargable**](https://dba.stackexchange.com/a/217983) (unless you build a tailored expression index): ``` where concat(last_name, ' ', first_name, ' ', middle_name) like '%Ива%' where (last_name || ' ' || first_name || ' ' || middle_name) like '%Ива%' ``` While this one is: ``` where last_name like '%Ива%' or first_name like '%Ива%' or middle_name like '%Ива%' ``` I.e., it can use a plain trigram index to great effect (order of columns is unimportant in a GIN index): ``` CREATE INDEX some_idx ON person USING gin (first_name gin_trgm_ops , middle_name gin_trgm_ops , last_name gin_trgm_ops); ``` Instructions: * [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528) ### Incorrect test if `null` is possible [`concat()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) is generally slightly more expensive than simple string concatenation with `||`. It is also **different**: If any of the input strings is `null`, the concatenated result is also `null` in your second case, but not in your first, since `concat()` just ignores `null` input. But you'd still get a useless space character in the result. Detailed explanation: * [Combine two columns and add into one new column](https://stackoverflow.com/questions/12310986/combine-two-columns-and-add-into-one-new-column/12320369#12320369) If you are looking for a clean, elegant expression (about the same cost), use [`concat_ws()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) instead: ``` concat_ws( ' ', last_name, first_name, middle_name) ```
Query performance with concatenation and LIKE
[ "", "sql", "postgresql", "pattern-matching", "concatenation", "postgresql-performance", "" ]
What is the difference between these query ``` SELECT A.AnyField FROM A LEFT JOIN B ON B.AId = A.Id LEFT JOIN C ON B.CId = C.Id ``` and this another query ``` SELECT A.AnyField FROM A LEFT JOIN ( B JOIN C ON B.CId = C.Id ) ON B.AId = A.Id ```
Original answer: They are not the same. For example `a left join b left join c` will return a rows, plus b rows even if there are no c rows. `a left join (b join c)` will never return b rows if there are no c rows. Added later: ``` SQL>create table a (id int); SQL>create table b (id int); SQL>create table c (id int); SQL>insert into a values (1); SQL>insert into a values (2); SQL>insert into b values (1); SQL>insert into b values (1); SQL>insert into c values (2); SQL>insert into c values (2); SQL>insert into c values (2); SQL>insert into c values (2); SQL>select a.id from a left join b on a.id = b.id left join c on b.id = c.id; id =========== 1 1 2 3 rows found SQL>select a.id from a left join (b join c on b.id = c.id) on a.id = b.id; id =========== 1 2 2 rows found ```
The first query is going to take ALL records from table `a` and then only records from table `b` where `a.id` is equal to `b.id`. Then it's going to take all records from table `c` where the resulting records in table `b` have a `cid` that matches `c.id`. The second query is going to first JOIN `b` and `c` on the `id`. That is, records will only make it to the resultset from that join where the `b.CId` and the `c.ID` are the same, because it's an `INNER JOIN`. Then the result of the `b INNER JOIN c` will be LEFT JOINed to table `a`. That is, the DB will take all records from `a` and only the records from the results of `b INNER JOIN c` where `a.id` is equal to `b.id` The difference is that you may end up with more data from `b` in your first query since the DB isn't dropping records from your result set just because `b.cid <> c.id`. For a visual, the following Venn diagram shows which records are available ![enter image description here](https://i.stack.imgur.com/skN59.png)
Differences between forms of LEFT JOIN
[ "", "sql", "" ]
I am trying to use the following code to run a different `join` query, based on a `CASE` statement. So if the `customer CLI` is equal to `84422881` I want it to join based on the `[Extenstion]` field and if the condition is not met i.e. the `ELSE` then to join on `[Customer CLI]` ``` use VoiceflexBilling CASE WHEN [dbo].[MARU15_OWH07579_Calls].[CustomerCLI] = '84422881' THEN UPDATE [dbo].[MARU15_OWH07579_Calls] SET [dbo].[MARU15_OWH07579_Calls].[CustomerLookup] = CLIMapping.[customer id] FROM [BillingReferenceData].[dbo].[CLIMapping] INNER JOIN [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls] on [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls].[Extension] = [BillingReferenceData].[dbo].[CLIMapping].[CLI] ELSE UPDATE [dbo].[MARU15_OWH07579_Calls] SET [dbo].[MARU15_OWH07579_Calls].[CustomerLookup] = CLIMapping.[customer id] FROM [BillingReferenceData].[dbo].[CLIMapping] INNER JOIN [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls] on [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls].[CustomerCLI] = [BillingReferenceData].[dbo].[CLIMapping].[CLI] END ``` At the moment I get the following error: > Msg 156, Level 15, State 1, Line 3 Incorrect syntax near the keyword > 'CASE'. Msg 156, Level 15, State 1, Line 11 Incorrect syntax near the > keyword 'ELSE'. Msg 102, Level 15, State 1, Line 19 Incorrect syntax > near 'END'. Can anyone help with the correct syntax? Thanks,
you can use the `case` statement in the `join condition`: ``` UPDATE [dbo].[MARU15_OWH07579_Calls] SET [dbo].[MARU15_OWH07579_Calls].[CustomerLookup] = CLIMapping.[customer id] FROM [BillingReferenceData].[dbo].[CLIMapping] INNER JOIN [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls] ON [BillingReferenceData].[dbo].[CLIMapping].[CLI] = CASE WHEN [dbo].[MARU15_OWH07579_Calls].[CustomerCLI] = '84422881' THEN [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls].[Extension] ELSE [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls].[CustomerCLI] END ```
Some comments on your query. First, `case` is an *expression*, not control flow. So, it belongs inside a query. Second, you can just use "regular" logic for this query. Third, table aliases would make the query much easier to write and to read: ``` UPDATE c SET c.[CustomerLookup] = CLIMapping.[customer id] FROM [VoiceflexBilling].[dbo].[MARU15_OWH07579_Calls] cm [BillingReferenceData].[dbo].[CLIMapping] c INNER JOIN on (c.[CustomerCLI] = '84422881' AND c.[Extension] = cm.[CLI]) or (c.[CustomerCLI] <> '84422881' AND c.[CustomerCLI] = cm.[CLI]) ```
Different join/update based on case query
[ "", "sql", "sql-server", "join", "sql-update", "case", "" ]
I have two seperate tables which I want to join based on Dates. However, I don't want the dates in the tables to be equal to one another I want the date (and accompanying value) from one table to be joined with the next date *available* after that date in the second table. I've put an example of the problem below: Table 1: ``` Date Value 2015-04-13 A 2015-04-10 B 2015-04-09 C 2015-04-08 D ``` Table 2: ``` Date Value 2015-04-13 E 2015-04-10 F 2015-04-09 G 2015-04-08 H ``` Desired Output Table: ``` Table1.Date Table2.Date Table1.Value Table2.Value 2015-04-10 2015-04-13 B E 2015-04-09 2015-04-10 C F 2015-04-08 2015-04-09 D G ``` I'm at a bit of an ends of where to even get going with this, hence the lack of my current SQL starting point! Hopefully that is clear. I found this related question that comes close but I get lost on incorporating this into a join statment!! [SQL - Select next date query](https://stackoverflow.com/questions/3609671/sql-select-next-date-query) Any help is much appreciated! M. EDIT- There is a consideration that is important in that the day will not always be simply 1 day later. They need to find the next day *available*, which was in the original question but Ive update my example to reflect this.
Since you want the next available date, and that might not necessarily be the following date (eg. date + 1) you'll want to use a correlated subquery with either `min` or `top 1`. This will give you the desired output: ``` ;WITH src AS ( SELECT Date, NextDate = (SELECT MIN(Date) FROM Table2 WHERE Date > t1.Date) FROM table1 t1 ) SELECT src.Date, src.NextDate, t1.Value, t2.Value FROM src JOIN Table1 t1 ON src.Date = t1.Date JOIN Table2 t2 ON src.NextDate = t2.Date WHERE src.NextDate IS NOT NULL ORDER BY src.Date DESC ``` [Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/128bd/1)
``` try this select [Table 1].Date,[Table 1].Value,[Table 2].date,[Table 2].Value from [Table 1] join [Table 1] on dateadd(dd,1,[Table 1].date) = [Table 2].date ```
SQL Join based on dates- Table2.Date=Next date after Table1.Date
[ "", "sql", "sql-server", "t-sql", "date", "join", "" ]
I have the below query which runs ok and returns the string `2,3`. `FN_GET_PLAN_UPGRADE_OPTIONS` is returning a `VARCHAR`. ``` SELECT FN_GET_PLAN_UPGRADE_OPTIONS(1); ``` Now when I add this function call into a WHERE clause using the IN operator it only returns the result where act\_id contains `2` and no results where act\_id contains `3`, there is data for both so both should be returned, any suggestions or ideas, no errors are thrown!? ``` SELECT act_id , act_name FROM account_types WHERE act_id IN (FN_GET_PLAN_UPGRADE_OPTIONS(1)); ```
The query you want to run is this: ``` SELECT ... WHERE act_id IN (2,3) ``` but the query you are actually running is this: ``` SELECT ... WHERE act_id IN ('2,3') ``` because your function returns a STRING (not a list of values), and since you are comparing a string with a NUMBER (act\_id), MySQL will automatically cast this string to a number, so you end up running this: ``` SELECT ... WHERE act_id IN (2) ``` you can use FIND\_IN\_SET to solve your problem: ``` SELECT ... WHERE FIND_IN_SET(act_id, FN_GET_PLAN_UPGRADE_OPTIONS(1))>0 ```
You should use `find_in_set()`: ``` WHERE find_in_set(act_id, FN_GET_PLAN_UPGRADE_OPTIONS(1)) ``` In interprets the returned value, say `'1,2'` as a single string of length three -- rather than two different values.
MySQL Function Oddness When Using IN Operator
[ "", "mysql", "sql", "function", "" ]
Is it expected that when I test if a null value is **not** in a list, the result is always false. That is: ``` select 'Hello world' where null not in(1,2,3); ``` Don't select anything, because null not in(1,2,3) is false. I don't understand this, because the list(1,2,3) don't contain any undefined values(null) so I would expect the null not in(1,2,3) to be true. So why is it false?
Actually `null not in (1,2,3)` returns `null`, not `false`, but it works *like* false a `where` clause (since it's not `true`). Null comparisons `(a = null)`, etc. always return `null`, and since `null not in (1,2,3)` is essentially the same as: `NOT (null = 1 OR null = 2 OR null = 3)` The return value will be `null`. Your best bet is to do explicit null checks and handle those cases separately. Side note: `NULL in (null, 1, 2, 3)` will return `null` as well since `null = null` returns `null`.
Null is always a special case in SQL. See <http://www.postgresql.org/docs/8.3/static/functions-comparison.html> > The ordinary comparison operators yield null (signifying "unknown") > when either input is null. So `Null not In (...) will wield Null` You can always use a CASE WHEN to bypass this behavior. See <http://www.postgresql.org/docs/7.4/static/functions-conditional.html>
Why is null not in(1,2,3) false
[ "", "sql", "postgresql", "" ]
In Oracle SQL Developer when you right click into a query result and select export it brings up the Export Wizard. After selecting the format you want (in this case .xls) the default save as target is (for me)`C:\Documents and Settings\XXXXX\export.xls` What i would like is to change the default output file name from always being `export` to something that is dynamic, a timestamp perhaps? I would even be happy to implement a 3rd part extension to handle this.
As discovered this is not currently possible but is currently down as a feature request with Oracle to be implemented if it gets enough attention from users. **Feature request:** **<https://apex.oracle.com/pls/apex/f?p=43135:7:114031166834052::NO:RP,7:P7_ID:41561>**
I know this is an old thread but I was looking for an answer to this, and wanted to share some information for anyone else who comes along. As far as I know you still can't set the *exact* text of the file name or add a timestamp to it, but if you change the "Save As" setting in the Export Wizard window (this can be made permanent through the Tools - Preferences menu) to "Separate Files" then each export will be given a different file name in the specified folder automatically. This works even if the export is the same query. SQL Dev names each file "TABLE\_EXPORT\_DATA.xls" and will amend a number at the end after each export. So for example: TABLE\_EXPORT\_DATA\_4.xls TABLE\_EXPORT\_DATA\_3.xls TABLE\_EXPORT\_DATA\_2.xls TABLE\_EXPORT\_DATA\_1.xls
Oracle SQL Developer query result Export Wizard default output file name
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I have multiple set of data to insert at once ``` INSERT INTO MyTable VALUES ("John", "Doe", 1234567890, "employee", ""); INSERT INTO MyTable VALUES ("Susen", "Gupta", 1234567890, "leander"); INSERT INTO MyTable VALUES ("Karn", "Share", 1234567890, "employee", "home"); ``` I want to insert multiple rows in a single SQL statement. And can it possible to do it with different number of values.
[Multi-row insert](http://en.wikipedia.org/wiki/Insert_%28SQL%29#Multirow_inserts) has been part of the SQL standard since SQL-92, and many of the modern DBMS' support it. That would allow you to do something like: ``` insert into MyTable ( Name, Id, Location) values ('John', 123, 'Lloyds Office'), ('Jane', 124, 'Lloyds Office'), ('Billy', 125, 'London Office'), ('Miranda', 126, 'Bristol Office'); ``` You'll notice I'm using the *full* form of `insert into` there, listing the columns to use. I prefer that since it makes you immune from whatever order the columns default to. If your particular DBMS does *not* support it, you could do it as part of a transaction which depends on the DBMS but basically looks like: ``` begin transaction; insert into MyTable (Name,Id,Location) values ('John',123,'Lloyds Office'); insert into MyTable (Name,Id,Location) values ('Jane',124,'Lloyds Office'), insert into MyTable (Name,Id,Location) values ('Billy',125,'London Office'), insert into MyTable (Name,Id,Location) values ('Miranda',126,'Bristol Office'); commit transaction; ``` This makes the operation atomic, either inserting all values or inserting none.
Yes you can, but it depends on the SQL taste that you are using :) , for example in mysql, and sqlserver: ``` INSERT INTO Table ( col1, col2 ) VALUES ( val1_1, val1_2 ), ( val2_1, val2_2 ), ( val3_1, val3_2 ); ``` But in oracle: ``` INSERT ALL INTO Table (col1, col2, col3) VALUES ('val1_1', 'val1_2', 'val1_3') INTO Table (col1, col2, col3) VALUES ('val2_1', 'val2_2', 'val2_3') INTO Table (col1, col2, col3) VALUES ('val3_1', 'val3_2', 'val3_3') . . . SELECT 1 FROM DUAL; ```
Insert multiple rows of data in a single SQL statement
[ "", "sql", "insert", "sql-insert", "" ]
The below query gives me results for count greater than 70. ``` SELECT books.name, COUNT(library.staff) FROM (library INNER JOIN books ON library.staff = books.id) GROUP BY library.staff,books.id HAVING COUNT(library.staff) > 70; ``` How do I modify my query to get the result with the *maximum* count?
One method is `order by` and `limit`: ``` SELECT b.name, COUNT(l.staff) as cnt FROM library l INNER JOIN books b ON l.staff = b.id GROUP BY l.staff, b.name ORDER BY cnt DESC LIMIT 1; ``` I find it strange that you are grouping by two columns, but only one is in the `select`. However, if the query is working, then it is just looking for duplicates.
you can do like this ``` SELECT books.name, COUNT(library.staff) FROM (library INNER JOIN books ON library.staff = books.id) GROUP BY library.staff,books.id HAVING COUNT(library.staff) = (select max(library.staff) from library); ```
SQL - getting max value
[ "", "sql", "postgresql", "join", "count", "aggregate-functions", "" ]
I have a SQL Server 2008 table, and I need to select a running balance from it ``` TransDate Credit Debit Datasource ------------------------------------------ 2014-01-01 5000 NULL 3 2014-01-07 NULL 2000 3 2014-01-11 5000 NULL 3 2014-02-03 6000 NULL 4 2014-02-06 NULL 4000 4 2014-02-11 3000 NULL 4 2014-02-21 NULL 1000 3 2014-02-28 2000 NULL 3 2014-03-01 5000 NULL 3 ``` I tried a correlated query ``` Select t.TransDate, t.Credit, t.Debit, (Select sum(coalesce(x.credit, 0) - coalesce(x.debit, 0)) From Transactions x WHERE x.DataSource IN (3,4) AND (x.TransDate >= '2014/02/01' AND x.TransDate <= '2014/02/28' ) AND x.TransDate = t.TransDate) Balance From Transactions t ``` but I get all nulls for the `Balance`. Expected output is ``` TransDate Credit Debit Balance ------------------------------------------ 2014-01-11 NULL NULL 8000 <- opening balance 2014-02-03 6000 NULL 14000 2014-02-06 NULL 4000 10000 2014-02-11 3000 NULL 13000 2014-02-21 NULL 1000 12000 2014-02-28 2000 NULL 15000 <- closing balance ```
You need to self join table. ``` CREATE TABLE Test ( TransDate DATE, Credit INT, Debit INT, ); INSERT INTO Test VALUES ('2014-01-01', 5000, NULL), ('2014-01-07', NULL, 2000), ('2014-01-11', 5000, NULL), ('2014-02-03', 6000, NULL), ('2014-02-06', NULL, 4000), ('2014-02-11', 3000, NULL), ('2014-02-21', NULL, 1000), ('2014-02-28', 2000, NULL), ('2014-03-01', 5000, NULL) WITH CTE AS ( SELECT t2.TransDate, t2.Credit, t2.Debit, SUM(COALESCE(t1.credit, 0) - COALESCE(t1.debit, 0)) AS Balance FROM Test t1 INNER JOIN Test t2 ON t1.TransDate <= t2.TransDate WHERE t1.DataSource IN (3,4) GROUP BY t2.TransDate, t2.Credit, t2.Debit ) SELECT * FROM CTE WHERE (TransDate >= '2014/01/11' AND TransDate <= '2014/02/28' ) ``` **OUTPUT** ``` TransDate Credit Debit Balance 2014-01-11 5000 (null) 8000 2014-02-03 6000 (null) 14000 2014-02-06 (null) 4000 10000 2014-02-11 3000 (null) 13000 2014-02-21 (null) 1000 12000 2014-02-28 2000 (null) 14000 ``` [**SQL FIDDLE**](http://sqlfiddle.com/#!6/6bdf6/33)
I would recommend to doing this: ## Data Set ``` CREATE TABLE Test1( Id int, TransDate DATE, Credit INT, Debit INT ); INSERT INTO Test1 VALUES (1, '2014-01-01', 5000, NULL), (2, '2014-01-07', NULL, 2000), (3, '2014-01-11', 5000, NULL), (4, '2014-02-03', 6000, NULL), (5, '2014-02-06', NULL, 4000), (6, '2014-02-11', 3000, NULL), (7, '2014-02-21', NULL, 1000), (8, '2014-02-28', 2000, NULL), (9, '2014-03-01', 5000, NULL) ``` ## Solution ``` SELECT TransDate, Credit, Debit, SUM(isnull(Credit,0) - isnull(Debit,0)) OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as Balance FROM Test1 order by TransDate ``` ## OUTPUT ``` TransDate Credit Debit Balance 2014-01-01 5000 NULL 5000 2014-01-07 NULL 2000 3000 2014-01-11 5000 NULL 8000 2014-02-03 6000 NULL 14000 2014-02-06 NULL 4000 10000 2014-02-11 3000 NULL 13000 2014-02-21 NULL 1000 12000 2014-02-28 2000 NULL 14000 2014-03-01 5000 NULL 19000 ``` Thank You!
Select running balance from table credit debit columns
[ "", "sql", "sql-server-2008", "select", "cumulative-sum", "" ]
I have two tables: 1. "Sessions" - it have int key identity, "session\_id" - varchar, "device\_category" - varchar and some other colums. There are 149239 rows. 2. Session\_events" - it have int key identity, "session\_id" - uniqueidentifier and some other fields. There are 3140768 rows there. This tables has been imported from not relational database - Cassandra, so I not created any connections in MS SQL Server designer. But real connection between Sessions and Session\_events on column session\_id is Many-To-Many Now I want to delete all web-sessions that was not take place on Personal Computer "device\_category". So I run request `Delete * FROM sessions where device_category != "PC"` that was fast. Now I want to to delete all not PC sessions from Session\_events table. So I run request ``` Delete FROM session_events where session_id Not In (SELECT distinct session_id FROM sessions) ``` That request is currently running for more then 24 hour and I don't know how long it can take... (I have 16 GB ram and Intel Xenon). I know that Left Join can be faster but 20% is not interesting. Do you see the way to finish my task much faster? ``` ---- CREATE TABLE [dbo].[session_events]( [key] [bigint] IDENTITY(1,1) NOT NULL, [session_id] [uniqueidentifier](max) NULL, [visitor_id] [uniqueidentifier] NULL, [shipping_method] [varchar](max) NULL, [shipping_price] [varchar](max) NULL, [site_id] [int] NULL, [stream_data_chunk] [varbinary](max) NULL, [total] [varchar](max) NULL, [total_inc_tax] [varchar](max) NULL, [tracker_ver] [varchar](max) NULL ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY] ``` --- ``` CREATE TABLE [dbo].[visitor_sessions]( [key] [int] IDENTITY(1,1) NOT NULL, [visitor_id] [varchar](max) NULL, [created] [varchar](max) NULL, [session_id] [varchar](max) NULL ) CONSTRAINT [PK_visitor_sessions4] PRIMARY KEY CLUSTERED ( [key] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY] GO SET ANSI_PADDING OFF GO ```
Deleting a huge bunch of data at once means that the DB engine has to do all of that in a single transaction. This means a huge overhead when you don't actually need that (e.g. you don't need to rollback the whole operation, or you don't care about consistency - you just want to delete everything, if it fails in the middle, you'll just run the query again to delete the rest). For your case, you could try deleting in batches. For example: ``` delete top 1000 from session_events where session_id Not In (SELECT distinct session_id FROM sessions) ``` Repeat until the table is empty. Also, you have started from the wrong point. You might have been better off creating a foreign key between the two first, and using "on delete cascade". That would automatically delete all the `session_events` that no longer have a valid `session`. If you can start over, it *might* be significantly faster. No promises, though :D
Why not use a left join? other alternative is to use EXISTS instead of IN: ``` DELETE FROM Session_events WHERE NOT EXISTS( SELECT 1 FROM Session WHERE Session.Session_Id = Session_events.Session_Id ) ```
Extremely long "Not In" request
[ "", "sql", "sql-server", "database", "performance", "query-performance", "" ]
I have a Table (let's say table) with 4 columns - ``` Product_ID, Designer, Exclusive, Handloom ``` ID is the primary key and other 3 columns have values 0 or 1. Eg - `0` in `Designer` means the product is not designer and `1` means it is designer. I want to write down a query to select 6 rows out of the given 16 rows, having `>=4 Designer`, `>=4 Exclusive` and `>=4 Handloom` products. 6 rows starting from the top, not limited to top 6 rows only (As there can be multiple combinations, so we will start from top) I am not able to find out a clear solution to this. Below is the data of the Table table: ``` Code Designer Exclusive Handloom A 1 0 1 B 1 0 0 C 0 0 1 D 0 1 0 E 0 1 0 F 1 0 1 G 0 1 0 H 0 0 0 I 1 1 1 J 1 1 1 K 0 0 1 L 0 1 0 M 0 1 0 N 1 1 0 O 0 1 1 P 1 1 0 ``` If I solve it manually, the result would be rows with `Product_ID: a,f,i,j,n,o`
Is this going to be horribly slow? ``` select t1.Code, t2.Code, t3.Code, t4.Code, t5.Code, t6.Code from Table t1, Table t2, Table t3, Table t4, Table t5, Table t6 where t1.Code < t2.Code and t2.Code < t3.Code and t3.Code < t4.Code and t4.Code < t5.Code and t5.Code < t6.Code and t1.Designer + t2.Designer + t3.Designer + t4.Designer + t5.Designer + t6.Designer >= 4 and t1.Exclusive + t2.Exclusive + t3.Exclusive + t4.Exclusive + t5.Exclusive + t6.Exclusive >= 4 and t1.Handloom + t2.Handloom + t3.Handloom + t4.Handloom + t5.Handloom + t6.Handloom >= 4 order by t1.Code, t2.Code, t3.Code, t4.Code, t5.Code, t6.Code limit 1; ``` It seems you want the alphabetically lowest combination of product codes that satisfy your condition. I don't know if the 6-way cross join is going to be a performance issue but I believe it's correct per the requirement and possibly a reasonable starting point. Depending on how much you know about the data in advance you might improve performance by eliminating rows (and thus the total number of row combinations) where only a single flag is set. I tried this on SQL Server. Without the limit of one row it returns 157 matches. AFIJNP is number 69. There are 7 that include B, the first of which is ABDIJO. If I change the query to sort first on t6.Code then the result is ADEFIJ. So I don't know if I understand that part of your requirement.
``` SELECT Product_ID, Designer, Exclusive, Handloom FROM table-name WHERE Designer <> 0 AND (Exclusive <> 0 OR Handloom <> 0) LIMIT 6 ; ```
MYSQL - Select rows fulfilling many count conditions
[ "", "mysql", "sql", "" ]
I want to figure out if a query result contains exactly some items. I explain: I have a profile\_stores table that contains the managed stores of each profile Let's say i have 5 profiles P1-> P5 and 4 stores s1 -> s4 and my table contains ``` profie | store ------------------ P1 | S1 P2 | S2 P3 | S3 P2 | S2 P1 | S2 P1 | S3 P4 | S1 P5 | S2 ``` Now, we have the following profile 1 P1 is managing the stores S1 and S2 and S3 ``` => P1(S1,S2,S3) P2(S1,S2) P3(S4) P4(S1) P5(S2) ``` What i need to know now is how can i set my query to get the profiles that are managing simultanously and only the stores S1 and S2 ? Ps: In my case the query should return only P2 Any Help will be appreciated
`GROUP BY`-solution: ``` select profile from tablename group by profile having count(distinct store) = 2 and min(store) = 'S1' and max(store) = 'S2' ``` Alternative solution: ``` select profile from tablename t1 where not exists (select 1 from tablename t2 where t1.profile = t2.profile and t2.store not in ('S1', 'S2', ...)) group by profile having count(distinct store) = 2 (or 3 or 4...) ```
``` SELECT DISTINCT profile FROM profile_stores WHERE profile IN (SELECT profile FROM profile_stores WHERE store = 'S1') AND profile IN (SELECT profile FROM profile_stores WHERE store = 'S2') AND profile NOT IN (SELECT profile FROM profile_stores WHERE store NOT IN ('S1','S2')) ``` Alternatively, use the lesser-known set operators (which give a more easily readable SQL code in my opinion): ``` SELECT profile FROM profile_stores WHERE store = 'S1' INTERSECT SELECT profile FROM profile_stores WHERE store = 'S2' EXCEPT SELECT profile FROM profile_stores WHERE store NOT IN ('S1', 'S2') ``` In both cases, we start by filtering out only those profiles that are linked to stores 'S1' and 'S2' simultaneously. From this set, we then remove profiles that are also linked to other stores. Both these set-oriented methods can be easily expanded to a variety of search conditions. For example to find profiles that are linked to any number of stores while not being linked to others (in general, or specific), etc.
How to figure out if a query result contains exactly some items
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
What are the best and the most efficient way of finding null values in multiple columns. For example: ``` Name Location Age Address Mike CLT 19 Null Null NY 28 Null ``` and so on... I just need to find out if there is any NULL value in any of these columns.
if you want to know if there are nulls in any column, this could be a good trick to generate an XML file from the rows containing those nulls; it should work for almost any table, just replace 'yourtable' with the name of the relevant table: ``` SELECT CAST ( (SELECT * FROM yourtable FOR XML path('x'),ELEMENTS XSINIL) AS XML) .query('//.[@xsi:nil="true"]/..') ```
Check this query. Hope this gives you desired result. ``` Select * from YourTableName where Name is null or location is null or age is null or address is null ```
Finding Null values in multiple columns in SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I have 2 tables, `Table1 (Id, Name, Email)` and `Table2 (Id, ItemName, Price, IsShipping)`. I need to join them on `Id` columns, but `IsShipping` column has `NULL` values, I need to change `NULL` to `0`. For now I have: ``` SELECT Table1.Id, Table1.Name, Table1.Email, Table2.ItemName, Table2.Price, Table2.IsShipping FROM Table1 JOIN Table2 ON Table1.Id = Table2.Id --here I need to make something like: WHERE IsShipping IS NULL = 0 -- or etc ```
You can use [ISNULL](https://msdn.microsoft.com/en-us/library/ms184325.aspx) in following: > ISNULL replaces NULL with the specified replacement value. ``` SELECT t1.Id, t1.Name, t1.Email, t2.ItemName, t2.Price, ISNULL(t2.IsShipping, 0) -- Here you pass ISNULL FROM Table1 t1 JOIN Table2 t2 ON t1.Id = t2.Id ```
Use the ANSI standard function `coalesce()`: ``` SELECT t1.Id, t1.Name, t1.Email, t2.ItemName, t2.Price, coalesce(t2.IsShipping , 0) as IsShipping FROM Table1 t1 JOIN Table2 t2 ON t1.Id = t2.Id; ``` I also added table aliases. They make the query easier to write and to read.
How to select data from 2 tables and convert NULL to 0?
[ "", "sql", "sql-server", "" ]
EDIT: This is an example relation! I need it to work on a bigger relation so no workarounds! So I was given a simple task and at first I didn't see what could possibly be wrong and now I just don't understand why it doesnt work. Lets say I have a table of people and their friends and I want to select the ones who have 2 or more friends. **people** ``` ------------------------------ |person | friend | relation | |----------------------------- |ana | jon | friend | |ana | jon | lover | |ana | phillip| friend | |ana | kiki | friend | |mary | jannet | friend | |mary | jannet | lover | |peter | july | friend | ``` --- I would want to do a ``` SELECT person FROM people GROUP BY person HAVING count(distinct friend) > 1; ``` and get ``` ------- | ana | ------- ``` But I get a syntax error when using the '`distinct`' in the HAVING clause. I understand that the '`distinct`' is a part of the projection clause but how do I make '`count`' only count distinct entries without an additional subquery or something? EDIT: The best I could come up with is: ``` SELECT tmp.person FROM (SELECT person, count(distinct friend) AS numfriends FROM people GROUP BY person) AS tmp WHERE tmp.numfriends > 1; ```
From the doc <http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_0162.htm> > The condition in the HAVING clause cannot include a DISTINCT or UNIQUE > aggregate expression. An work-around would be to have the count distinct in the select ``` SELECT person, count(distinct friend) as f_count FROM people GROUP BY person HAVING f_count > 1; ``` **UPDATE :** Checked the document and found the facts > The HAVING clause is evaluated before the SELECT - so the server > doesn't yet know about that alias. So to achieve the goal it could be done as ``` select person, f_count from( SELECT person, count(distinct friend) as f_count FROM people GROUP BY person )x where f_count > 1 ```
you need to write it like this ``` SELECT person FROM people WHERE relation = 'friend' GROUP BY person HAVING count(*) > 1; ```
How to have 'distinct' in having clause
[ "", "sql", "informix", "" ]
Hi I have the following code ``` SELECT entertainer_id, entertainer_groupname FROM casestudy_entertainer INNER JOIN casestudy_availability ON casestudy_entertainer.entertainer_id = CASESTUDY_AVAILABILITY.AVAILABILITY_ENTERTAINERID INNER JOIN casestudy_calendardates ON CASESTUDY_AVAILABILITY.AVAILIBILITY_CALENDARDATEID = casestudy_calendardates.calendar_Id WHERE entertainer_type = '&Entertainer_TYPE' AND casestudy_calendardates.calendar_date = '&Event_date' ``` And I don't seem to be able to figure out what its not liking when I run this. It gives me the following error > ORA-00904: "CASESTUDY\_AVAILIBILITY"."AVAILIBILITY\_CALENDARDATEID": invalid identifier > 00904. 00000 - "%s: invalid identifier" > \*Cause: > \*Action: > Error at Line: 7 Column: 4 I do have all the tables in place with all the correct rows. The only thing is I have no data as of yet, Could this possibly be the issue?
You should try the lower case for the table/column identifiers(like in `from`/`inner join` clauses): ``` SELECT entertainer_id, entertainer_groupname FROM casestudy_entertainer INNER JOIN casestudy_availability ON casestudy_entertainer.entertainer_id = casestudy_availability.availability_entertainerid INNER JOIN casestudy_calendardates ON casestudy_availability.availibility_calendardateid = casestudy_calendardates.calendar_id WHERE entertainer_type = '&Entertainer_TYPE' AND casestudy_calendardates.calendar_date = '&Event_date' ```
This Error is caused by a special character in one of the columns of the database table. DBA will be able to help you.
SQL Queries "00904. 00000 - "%s: invalid identifier"
[ "", "sql", "oracle", "" ]
I would like to know how to replace the a part of a value which is stored into a columns named "keyword\_data" in the table "keyword". The values in the MySQL columns look like this: ``` a:5:{s:11:"{%KEYWORD%}";s:59:"keyword test 1";s:9:"{%PRICE%}";s:6:"279,99";s:12:"{%CATEGORY%}";s:6:"Camera";s:9:"{%IMAGE%}";s:61:"http://exemple.com/images/I/1172oJo2lpL._AA160_.jpg";s:12:"{%REDIRECT%}";s:158:"http://www.exemple.net/keywordtest1/dp/B00EHK7QVK/ref=sr_1_750?s=hi&amp;ie=UTF8&amp;qid=1428056859&amp;sr=1-750&amp;keywords=hd1?tag=mytag";} a:5:{s:11:"{%KEYWORD%}";s:53:"keyword test 2";s:9:"{%PRICE%}";s:6:"397,61";s:12:"{%CATEGORY%}";s:3:"cat";s:9:"{%IMAGE%}";s:61:"exemple.com/images/I/21iYVLu80SL._AA160_.jpg";s:12:"{%REDIRECT%}";s:187:"exemple.com/keywordtest2/dp/B00RDF1GLW/ref=sr_1_6064?s=industrial&amp;ie=UTF8&amp;qid=1428944582&amp;sr=1-6064&amp;keywords=hd2?tag=mytag";} ``` etc. I cannot just make a simple replace value since they are all different. Each row has a different keyword, I only need to replace this for each row in the table. ``` ?tag=mytag ``` into ``` &tag=mytag ``` Any help from the community would be cool :)
Just do: ``` UPDATE keyword SET keyword_data = REPLACE( keyword_data , '?tag=mytag' , '&tag=mytag') WHERE keyword_data LIKE '%?tag=mytag%'; ``` [Working example!](http://sqlfiddle.com/#!9/26e1d/1) Also you should be really careful with this. Always make a copy of your database before you fool around with it like that since you can easily mess everything up just because you missed a little char somewhere.
mysql provides a command called REPLACE. REPLACE has the following structure: ``` REPLACE(text, from_string, to_string) ``` This can be used in an update statement: ``` UPDATE keyword SET keyword_data = REPLACE(keyword_data, '?tag=mytag', '&tag=mytag'); ```
Search and replace part of string
[ "", "mysql", "sql", "" ]
I have 4 tables in my SQL Server 2008 database : * CONTACT * CONTACT\_DETAILS * PLANS * PLANS\_DETAILS Every record is recorded in `CONTACT` and `CONTACT_DATAILS` but a `CONTACT` can have 0, 1, 2 or more records in `PLANS`, Active or Cancelled. So I did this: ``` SELECT * from CONTACT as c left join PLANS as pp on pp.PKEY = c.PKEY left join PLANS_DETAILS as pd on pd.PDKEY = p.PDKEY inner join CONTACT_DETAILS as cd on cd.DKEY = c.DKEY WHERE c.KEY = '267110' and PP.STATUS = 'Active' ``` "267110" have 1 active PLAN so it shows me 1 line, everything I need. But if I put ``` WHERE c.KEY = '100003' and PP.STATUS = 'Active' ``` "100003" have 2 cancelled plans, so the result is empty. If I remove *PP.STATUS = 'Active'* , it returns me 2 identical results, but I need just one. In resume: I need a select that returns me 1 row only. If there is an active plan, return the columns, if not, return the columns null. If someone have 1 cancelled and 1 active plan, return me only the active plan columns.
The answer to your question is to move the condition on `pp` to the `on` clause. ``` SELECT * from CONTACT c inner join CONTACT_DETAILS cd on cd.DKEY = c.DKEY left join PLANS pp on pp.PKEY = c.PKEY AND PP.STATUS = 'Active' left join PLANS_DETAILS pd on pd.PDKEY = p.PDKEY WHERE c.KEY = '267110' ; ``` In addition, when you have a series of inner and left joins, I recommend putting all the inner joins first, followed by the outer joins. That makes it clear which joins are used for keeping records and which for filtering.
Just add an `ORDER BY PP.STATUS DESC` and a `TOP 1` clause, and delete the `and PP.STATUS = 'Active'`, like this ``` SELECT TOP 1 * from CONTACT as c left join PLANS as pp on pp.PKEY = c.PKEY left join PLANS_DETAILS as pd on pd.PDKEY = p.PDKEY inner join CONTACT_DETAILS as cd on cd.DKEY = c.DKEY WHERE c.KEY = '100003' ORDER BY PP.STATUS DESC ```
Join questionn - Select must return only one record
[ "", "sql", "sql-server", "join", "" ]
so I have 3 tables, actor(id, name), movie (id,name,year) and casts(aid, mid) (which are actor id and movie id). My goal is to select all the actors who acted in a film before 1900 and also in a film after 2000. My query is ``` select a.id from actor a, movie m1, casts c1, movie m2, casts c2 where a.id = c1.aid = c2.aid and c1.mid = m1.id and c2.mid = m2.id and m1.year >2000 and m2.year <1900; ``` this query took really long time and didnt seem to produce the right result. So someone could please help me?
I assume the problem is the expression `a.id = c1.aid = c2.aid`. If I am not mistaken, this first compares `c1.aid` with `c2.aid` and then the boolean result with `a.id`. You could try this: ``` select a.id from actor a inner join casts c1 on c1.aid = a.id inner join casts c2 on c2.aid = a.id inner join movie m1 on c1.mid = m1.id inner join movie m2 on c2.mid = m2.id where m1.year >2000 and m2.year <1900; ``` Or, if you better like the `where` syntax of inner joins, just change `a.id = c1.aid = c2.aid` to `a.id = c1.aid and a.id = c2.aid`
To get actors who were in films during two date ranges, use two subqueries. Something like this: ``` select yourFields from yourTables where actorId in (subquery to get actor id's for one date range) and actorId in (subquery to get actor id's for second date range) ``` You can work out the details.
sql list actors who acted in a film before 1900 and also in a film after 2000
[ "", "sql", "database", "sqlite", "select", "" ]
What I like to do is to pull a data set only for busy hour on every Friday for 1 year period. I have used the following query: ``` select to_char(datetimelocal,'DD/MM/YYYY HH24:MI:SS'), colA,colB,colC,colD from Schema_X.Table_Y where DATETIMELOCAL between '1-Apr-2014' and '1-Apr-2015' and to_char(datetimelocal,'D')=6 and to_number(to_char(datetimelocal,'sssss')) between 57600 and 64800 ``` This query worked, but I got the following warning message from the System admin that I have exhausted the system resources. "There is an user xxx running a query over Schema\_X tables and those are scanning the whole table and not doing a partition pruning. So the user should use partitioned field also reduce the date range, which is too big" I found the Table\_X is daily partitioned, but don't know how to use the partitions wisely to reduce the system load. Partitions are like this: ``` PARTITION_NAME,HIGH_VALUE,HIGH_VALUE_LENGTH,TABLESPACE_NAME,COMPRESSION,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,LAST_ANALYZED,AVG_SPACE,SUBPARTITION_COUNT 20121230,TO_DATE(' 2012-12-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'),83,NATIONAL_RPT,DISABLED,,,,,,0 20121231,TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'),83,NATIONAL_RPT,DISABLED,,,,,,0 20130101,TO_DATE(' 2013-01-02 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'),83,NATIONAL_RPT,DISABLED,,,,,,0 20130102,TO_DATE(' 2013-01-03 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'),83,NATIONAL_RPT,DISABLED,,,,,,0 20130103,TO_DATE(' 2013-01-04 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'),83,NATIONAL_RPT,DISABLED,,,,,,0 .... ```
As Jon said, Oracle can not zoom in to specific partitions because of how your where clause is expressed. If you want all fridays you have to give the sql engine specific days. This can be done by creation a table with all fridays you need or generating one on the fly. ``` -- generate table CREATE TABLE friday_table (friday_date DATE); DECLARE v_last_friday_of_period DATE := to_date('2015.04.10','yyyy.mm.dd'); v_particular_friday DATE := v_last_friday_of_period; BEGIN WHILE v_last_friday_of_period - v_particular_friday < 365 LOOP INSERT INTO friday_table VALUES (v_particular_friday); v_particular_friday := v_particular_friday - 7; END LOOP; END; / SELECT * FROM tbl t ,friday_table f WHERE t.datetimelock BETWEEN to_date(to_char(f.friday_date,'yyyy.mm.dd ')||'12:00:00','yyyy.mm.dd hh24:mi:ss') AND to_date(to_char(f.friday_date,'yyyy.mm.dd ')||'13:00:00','yyyy.mm.dd hh24:mi:ss'); -- on the fly SELECT * FROM tbl t ,(SELECT to_date('2015.04.10','yyyy.mm.dd') - rownum * 7 AS friday_date FROM dual CONNECT BY rownum <= 52) f WHERE t.datetimelock BETWEEN to_date(to_char(f.friday_date,'yyyy.mm.dd ')||'12:00:00','yyyy.mm.dd hh24:mi:ss') AND to_date(to_char(f.friday_date,'yyyy.mm.dd ')||'13:00:00','yyyy.mm.dd hh24:mi:ss'); ```
Your query is filtering in stages, first by year then by day, then by time. That means that in principal any record could be counted; the system cannot easily zoom in on the partitions you need. You need to express the filter in terms of specific dates: ``` SELECT * FROM Tbl WHERE DateTimeLock >= DATE'2015-04-03' AND DateTimeLock < DATE'2015-04-04' AND AND EXTRACT(HOUR FROM DateTimeLock)) BETWEEN 16 AND 17 -- Inclusive ``` This will zoom in on the exact partition you need. However, it obviously just gives you the one day's data. You'll probably need to use a loop to query each Friday in a separate query, collecting the results in a table. You could try to keep it as a single query by using OR in the first `DateTimeLock` filter: ``` ( DateTimeLock >= DATE'2015-04-03' AND DateTimeLock < DATE'2015-04-04' OR DateTimeLock >= DATE'2015-03-27' AND DateTimeLock < DATE'2015-03-28' OR DateTimeLock >= DATE'2015-03-20' AND DateTimeLock < DATE'2015-03-21' ) AND EXTRACT(HOUR FROM DateTimeLock)) BETWEEN 16 AND 17 -- Inclusive ``` ...however, I suspect that the query engine will convert this to a table scan, which is what you started out with.
Oracle SQL pull data for a selected date from a table that is daily partitioned
[ "", "sql", "oracle", "overloading", "partition", "" ]
I want to join these two SQL select statements and get both results as a single result. ``` --1st select statement SELECT TA.DevID, TA.Task, TA.AssignDate, TA.DevStart,TA.TaskType, TA.EstimateTime, TA.Status, TA.DevFinish, TAKPI.actAsignTime, TAKPI.actTime, TAKPI.KPI FROM TT_TaskAssign TA INNER JOIN HR_EmployeeMaster1 EM on TA.DevID=EM.EmpNo INNER JOIN dbo.TT_TaskAsignKPI TAKPI ON TAKPI.AssignID = TA.AssignID WHERE TA.DevID='1435' --2nd select statement SELECT TT_TaskAssign.DevID AS DevID ,HR_EmployeeMaster1.EmpFirstName+' '+HR_EmployeeMaster1.EmpMiddleName+' '+HR_EmployeeMaster1.EmpLastName as Developer, SUM( CASE WHEN TT_TaskAssign.Status='Done' THEN 1 ELSE 0 END ) as DoneProjects , SUM( CASE WHEN TT_TaskAssign.Status='Developing' THEN 1 ELSE 0 END ) as DevelopingPrjects, SUM( CASE WHEN TT_TaskAssign.Status='Assign' THEN 1 ELSE 0 END ) as AssignedPending, SUM(CONVERT(int, TAKPI.KPINum)) AS KPINum FROM TT_TaskAssign inner join Project_Master on TT_TaskAssign.ProID=Project_Master.Project_Code Inner Join HR_EmployeeMaster1 on TT_TaskAssign.DevID=HR_EmployeeMaster1.EmpNo INNER JOIN dbo.TT_TaskAsignKPI TAKPI ON TAKPI.AssignID = TAKPI.AssignID WHERE TT_TaskAssign.DevID='1435' GROUP BY TT_TaskAssign.DevID,HR_EmployeeMaster1.EmpFirstName+' '+HR_EmployeeMaster1.EmpMiddleName+' '+HR_EmployeeMaster1.EmpLastName ORDER BY SUM(CONVERT(int, TAKPI.KPINum)) DESC ``` I did it in follwing way ``` SELECT T1.*, T2.* FROM (SELECT TA.DevID, TA.Task, TA.AssignDate, TA.DevStart,TA.TaskType, TA.EstimateTime, TA.Status, TA.DevFinish, TAKPI.actAsignTime, TAKPI.actTime, TAKPI.KPI FROM TT_TaskAssign TA INNER JOIN HR_EmployeeMaster1 EM on TA.DevID=EM.EmpNo INNER JOIN dbo.TT_TaskAsignKPI TAKPI ON TAKPI.AssignID = TA.AssignID WHERE TA.DevID='1435') T1 INNER JOIN (SELECT TT_TaskAssign.DevID AS DevID ,HR_EmployeeMaster1.EmpFirstName+' '+HR_EmployeeMaster1.EmpMiddleName+' '+HR_EmployeeMaster1.EmpLastName as Developer, SUM( CASE WHEN TT_TaskAssign.Status='Done' THEN 1 ELSE 0 END ) as DoneProjects , SUM( CASE WHEN TT_TaskAssign.Status='Developing' THEN 1 ELSE 0 END ) as DevelopingPrjects, SUM( CASE WHEN TT_TaskAssign.Status='Assign' THEN 1 ELSE 0 END ) as AssignedPending, SUM(CONVERT(int, TAKPI.KPINum)) AS KPINum FROM TT_TaskAssign inner join Project_Master on TT_TaskAssign.ProID=Project_Master.Project_Code Inner Join HR_EmployeeMaster1 on TT_TaskAssign.DevID=HR_EmployeeMaster1.EmpNo INNER JOIN dbo.TT_TaskAsignKPI TAKPI ON TAKPI.AssignID = TAKPI.AssignID WHERE TT_TaskAssign.DevID='1435' GROUP BY TT_TaskAssign.DevID,HR_EmployeeMaster1.EmpFirstName+' '+HR_EmployeeMaster1.EmpMiddleName+' '+HR_EmployeeMaster1.EmpLastName ORDER BY SUM(CONVERT(int, TAKPI.KPINum)) DESC) T2 ON T1.DevID=T2.DevID ``` Now It's giving following error ``` Msg 1033, Level 15, State 1, Line 17 The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified. ``` Please somebody help me to fix this error
remove `order by` from second query ``` SELECT T1.*, T2.* FROM (SELECT TA.DevID, TA.Task, TA.AssignDate, TA.DevStart,TA.TaskType, TA.EstimateTime, TA.Status, TA.DevFinish, TAKPI.actAsignTime, TAKPI.actTime, TAKPI.KPI FROM TT_TaskAssign TA INNER JOIN HR_EmployeeMaster1 EM on TA.DevID=EM.EmpNo INNER JOIN dbo.TT_TaskAsignKPI TAKPI ON TAKPI.AssignID = TA.AssignID WHERE TA.DevID='1435') T1 INNER JOIN (SELECT TT_TaskAssign.DevID AS DevID ,HR_EmployeeMaster1.EmpFirstName+' '+HR_EmployeeMaster1.EmpMiddleName+' '+HR_EmployeeMaster1.EmpLastName as Developer, SUM( CASE WHEN TT_TaskAssign.Status='Done' THEN 1 ELSE 0 END ) as DoneProjects , SUM( CASE WHEN TT_TaskAssign.Status='Developing' THEN 1 ELSE 0 END ) as DevelopingPrjects, SUM( CASE WHEN TT_TaskAssign.Status='Assign' THEN 1 ELSE 0 END ) as AssignedPending, SUM(CONVERT(int, TAKPI.KPINum)) AS KPINum FROM TT_TaskAssign inner join Project_Master on TT_TaskAssign.ProID=Project_Master.Project_Code Inner Join HR_EmployeeMaster1 on TT_TaskAssign.DevID=HR_EmployeeMaster1.EmpNo INNER JOIN dbo.TT_TaskAsignKPI TAKPI ON TAKPI.AssignID = TAKPI.AssignID WHERE TT_TaskAssign.DevID='1435' GROUP BY TT_TaskAssign.DevID,HR_EmployeeMaster1.EmpFirstName+' '+HR_EmployeeMaster1.EmpMiddleName+' '+HR_EmployeeMaster1.EmpLastName ) T2 ON T1.DevID=T2.DevID ```
As the error message sayed. `ORDER BY` is not allowed in subquery. If you need the `order by` clause you have to use it for the whole result of the `join`.
Join two SQL select statements and get both results in to single
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I'm trying to extract email addresses from an existing comments field and put it into its own column. The string may be something like this "this is an example comment with an email address of someemail@domain.org" or just literally the email itself "someemail@domain.org". I figure the best thing to do would be to find the index of the '@' symbol and search in both directions until either the end of the string was hit or there was a space. Can anyone help me out with this implementation?
You can search for `'@'` in the string. Then you get the string at the `LEFT` and `RIGHT` side of `'@'`. You then want to `REVERSE` the `LEFT` side and get first occurrence of `' '` then get the `SUBSTRING` from there. Then `REVERSE` it to get the original form. Same principle apply to the `RIGHT` side without doing `REVERSE`. Example string: `'some text someemail@domain.org some text'` 1. `LEFT` = 'some text someemail' 2. `RIGHT` = '@domain.org some text' 3. Reverse LEFT = 'liameemos txet emos' 4. `SUBSTRING` up to the first space = 'liameemos' 5. `REVERSE`(4) = someemail 6. `SUBSTRING` (2) up to the first space = '@domain.org' 7. Combine 5 and 6 = 'someemail@domain.org' Your query would be: ``` ;WITH CteEmail(email) AS( SELECT 'someemail@domain.org' UNION ALL SELECT 'some text someemail@domain.org some text' UNION ALL SELECT 'no email' ) ,CteStrings AS( SELECT [Left] = LEFT(email, CHARINDEX('@', email, 0) - 1), Reverse_Left = REVERSE(LEFT(email, CHARINDEX('@', email, 0) - 1)), [Right] = RIGHT(email, CHARINDEX('@', email, 0) + 1) FROM CteEmail WHERE email LIKE '%@%' ) SELECT *, REVERSE( SUBSTRING(Reverse_Left, 0, CASE WHEN CHARINDEX(' ', Reverse_Left, 0) = 0 THEN LEN(Reverse_Left) + 1 ELSE CHARINDEX(' ', Reverse_Left, 0) END ) ) + SUBSTRING([Right], 0, CASE WHEN CHARINDEX(' ', [Right], 0) = 0 THEN LEN([Right]) + 1 ELSE CHARINDEX(' ', [Right], 0) END ) FROM CteStrings ``` **Sample Data:** ``` email ---------------------------------------- someemail@domain.org some text someemail@domain.org some text no email ``` **Result** ``` --------------------- someemail@domain.org someemail@domain.org ```
I know wewesthemenace already answered the question, but his/her solution seems over complicated. Why concatenate the left and right sides of the email address together? I'd rather just find the beginning and the end of the email address and then use substring to return the email address like so: ## My Table ``` DECLARE @Table TABLE (comment NVARCHAR(50)); INSERT INTO @Table VALUES ('blah MyEmailAddress@domain.org'), --At the end ('blah MyEmailAddress@domain.org blah blah'), --In the middle ('MyEmailAddress@domain.org blah'), --At the beginning ('no email'); ``` ## Actual Query: ``` SELECT comment, CASE WHEN CHARINDEX('@',comment) = 0 THEN NULL ELSE SUBSTRING(comment,beginningOfEmail,endOfEmail-beginningOfEmail) END email FROM @Table CROSS APPLY (SELECT CHARINDEX(' ',comment + ' ',CHARINDEX('@',comment))) AS A(endOfEmail) CROSS APPLY (SELECT DATALENGTH(comment)/2 - CHARINDEX(' ',REVERSE(' ' + comment),CHARINDEX('@',REVERSE(' ' + comment))) + 2) AS B(beginningOfEmail) ``` Results: ``` comment email -------------------------------------------------- -------------------------------------------------- blah MyEmailAddress@domain.org MyEmailAddress@domain.org blah MyEmailAddress@domain.org blah blah MyEmailAddress@domain.org MyEmailAddress@domain.org blah MyEmailAddress@domain.org no email NULL ```
Extract email address from string using tsql
[ "", "sql", "sql-server", "regex", "t-sql", "" ]
I need to create a view in PostgreSQL 9.4 about this table: ``` CREATE TABLE DOCTOR ( Doc_Number INTEGER, Name VARCHAR(50) NOT NULL, Specialty VARCHAR(50) NOT NULL, Address VARCHAR(50) NOT NULL, City VARCHAR(30) NOT NULL, Phone VARCHAR(10) NOT NULL, Salary DECIMAL(8,2) NOT NULL, DNI VARCHAR(10) NOT NULL, CONSTRAINT pk_Doctor PRIMARY KEY (Doc_Number) ); ``` The view will show the rank of the doctors with highest `salary` for each `specialty`, I tried this code but it shows all of the doctors fro each specialty: ``` CREATE VIEW top_specialty_doctors AS (Select MAX(Salary), name, specialty from DOCTOR where specialty = 'family and community' or specialty = 'psychiatry' or specialty = 'Rheumatology' group by name, salary, specialty); ``` How can I do for the view shows only the doctor with highest salary for each specialty.
`DISTINCT ON` is a simple Postgres specific technique to get *one* winner per group. Details: * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) ``` CREATE VIEW top_specialty_doctors AS SELECT DISTINCT ON (specialty) salary, name, specialty FROM doctor WHERE specialty IN ('family and community', 'psychiatry', 'Rheumatology') ORDER BY specialty, salary DESC, doc_number -- as tiebreaker ``` And you do *not* need parentheses around the query for [`CREATE VIEW`](http://www.postgresql.org/docs/current/interactive/sql-createview.html). If multiple docs tie for the highest salary, the one with the smallest `doc_number` is selected. If `salary` can be NULL, use `DESC NULLS LAST`: * [PostgreSQL sort by datetime asc, null first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492) For big tables and certain data distributions other query techniques are superior: * [Optimize GROUP BY query to retrieve latest record per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)
Here's a query that shows the best doctor by salary for each of the specialties: ``` with specialty_ranks as ( select Salary, name, specialty, rank() over ( partition by specialty order by salary desc ) as rank from DOCTOR where specialty in ('family and community', 'psychiatry', 'Rheumatology') ) select specialty, name, salary from specialty_ranks where rank = 1; ``` The query uses a [CTE](http://www.postgresql.org/docs/9.3/static/queries-with.html) and the [RANK() window function](http://www.postgresql.org/docs/9.3/static/functions-window.html) to do the job. You might want to read their docs if you haven't used them before.
create view selecting the max in one column related with another column
[ "", "sql", "postgresql", "max", "greatest-n-per-group", "sql-view", "" ]
table 1: ``` id 1 2 3 5 6 ``` table 2: ``` id 2 3 7 ``` I want only mismatch `id` from table 1 like my result should be `1,5,6`. Please help on this, thanks in advance.
Try this: ``` SELECT * FROM table1 WHERE id NOT IN (SELECT id FROM table2) ```
You can use `NOT IN` ``` select id from table1 where id not in (select id from table2) ```
i have two table find out only mismatch id
[ "", "sql", "" ]
I am trying to to reduce cost and size of a query I use regularly, it looks far too repetitive to be the most efficient way of doing things. A simplified representation can be seen below: ``` SELECT [Year], [Month], COUNT(CASE WHEN Type = 'Quotation' THEN clientID ELSE NULL END), COUNT(CASE WHEN Type = 'Purchase' THEN ClientID ELSE NULL END), SUM(CASE WHEN Type = 'Purchase' THEN Cost ELSE NULL END) ... FROM dbo.Example GROUP BY [Year],[Month] ``` It is the `CASE WHEN ... THEN ELSE NULL END`'s that I think I must be able to trim down, in terms of size and cost. Any recommendations? I am using SQL Server 2008 R2, Thanks.
I think that is the recommended way to 'rotate' a table. Check this article from Microsoft -> <https://support.microsoft.com/en-us/kb/175574> Probably the cost is in the 'Type' column, maybe you can index that -> [How to create an index for a string column in sql?](https://stackoverflow.com/questions/9455568/how-to-create-an-index-for-a-string-column-in-sql)
How about making a view by type ? (Quotation, purchase, etc). In that way, you can optimize view by view and make a join query to get what you want.
Repetitive CASE WHEN in Aggregate SQL SELECT Statements
[ "", "sql", "sql-server", "sql-server-2008", "case", "" ]
I need to write some SQL that will determine if the current month is based on a quarter from a specified date. If I give it '2015-04-09', the function should determine if the month is equal to 1, 4, 7, or 10. If I give it '2015-05-09', it will be 2, 5, 8, 11. Etc... It would work best if the entire solution is self contained within the SELECT statement. I'm not quite sure how to approach this and I couldn't really find anything that fit my situation. I don't care about the day or the year, I just need to verify the month.
It looks like you want to know when this expression is true: ``` (mod(EXTRACT(MONTH FROM SYSDATE()), 3)) = (mod(EXTRACT(MONTH FROM <input date>), 3)) ```
You can extract the month using `month()` and then use modulo arithmetic. Something like this: ``` select (case when mod(month($input) - month($thedate), 3) = 0 then 1 else 0 end) ```
Determine if month of date falls on a quarter of date
[ "", "mysql", "sql", "date", "" ]
I need to display the total amount of hours elapsed for an action within a month and the previous month before it like this: ``` ___________________________________________ | Rank | Action | Month | Prev Month | |------|----------|------------|------------| | 1 | Action1 | 580.2 | 200.7 | | 2 | Action8 | 412.5 | 550.2 | | 3 | Action10 | 405.0 | 18.1 | --------------------------------------------- ``` I have a SQL table in the format of: ``` _____________________________________________________ | Action | StartTime | EndTime | |---------|---------------------|---------------------| | Action1 | 2015-02-03 06:01:53 | 2015-02-03 06:12:05 | | Action1 | 2015-02-03 06:22:16 | 2015-02-03 06:25:33 | | Action2 | 2015-02-03 06:36:07 | 2015-02-03 06:36:49 | | Action1 | 2015-02-03 06:36:46 | 2015-02-03 06:48:10 | | ..etc | 20..-..-.. ...etc | 20..-..-.. ...etc | ------------------------------------------------------- ``` What would the query look like? EDIT: **[A ツ](https://stackoverflow.com/users/4579733/a-%E3%83%84)**'s answer got me headed in the right direction however I solved the problem using a JOIN. See below for my solution.
I'm just revisiting this question after researching a bit more about SQL Server. A temporary table can be created from a query and then used inside another query - a *nested query* if you like. This way the results can `JOIN` together like through any other normal table without nasty `CASE` statements. This is also useful to display other datas required of the first query like `COUNT(DISTINCT ColumnName)` [JOIN two SELECT statement results](https://stackoverflow.com/questions/10538539/join-two-select-statement-results) ``` SELECT TOP 10 t1.Action, t1.[Time], COALESCE(t2.[Time],0) AS [Previous Period 'Time'], COALESCE( ( ((t1.[Time]*1.0) - (t2.[Time]*1.0)) / (t2.[Time]*1.0) ), -1 ) AS [Change] FROM ( SELECT Action, SUM(DATEDIFF(SECOND, StartTime, EndTime)) AS [Time], FROM Actions WHERE StartTime BETWEEN @start AND @end GROUP BY Action ) t1 LEFT JOIN ( SELECT Action, SUM(DATEDIFF(SECOND, StartTime, EndTime)) AS [Time] FROM Actions WHERE StartTime BETWEEN @prev AND @start GROUP BY Action ) t2 ON t1.Action = t2.Action ORDER BY t1.[Time] DESC ``` Hopefully this information is helpful for someone.
i changed the values a bit since only one day is rather boring ``` INSERT INTO yourtable ([Action], [StartTime], [EndTime]) VALUES ('Action1', '2015-02-18 06:01:53', '2015-02-18 06:12:05'), ('Action1', '2015-02-18 06:22:16', '2015-02-18 06:25:33'), ('Action2', '2015-04-03 06:36:07', '2015-04-03 06:36:49'), ('Action1', '2015-03-19 06:36:46', '2015-03-19 06:48:10'), ('Action2', '2015-04-13 06:36:46', '2015-04-13 06:48:10'), ('Action2', '2015-04-14 06:36:46', '2015-04-14 06:48:10') ; ``` now define the date borders: ``` declare @dateEntry datetime = '2015-04-03'; declare @date1 date , @date2 date , @date3 date; set @date1 = @dateEntry; -- 2015-04-03 set @date2 = dateadd(month,-1,@date1); -- 2015-03-03 set @date3 = dateadd(month,-1,@date2); -- 2015-02-03 ``` the selected date will include all action which starts before 2015-04-03 00:00 and starts after 2015-02-03 00:00 ``` select date1 = @date1 , date2 = @date2 , date3 = @date3 , [Action] , thisMonth = sum( case when Starttime between @date2 and @date1 then datediff(second, starttime, endtime)/360.0 end) , lastMonth = sum( case when Starttime between @date3 and @date2 then datediff(second, starttime, endtime)/360.0 end) from yourtable where starttime between @date3 and @date1 group by [Action] ``` <http://sqlfiddle.com/#!6/35784/5>
SQL SUM hours between two dates & GROUP BY Column Name?
[ "", "sql", "sql-server", "datetime", "datediff", "" ]
I am struggling to get this query right. Every time I change it, it breaks something else. ``` SELECT products.id, products.name, COUNT(transactions.id) AS sales, IFNULL(SUM(cart.price_paid), 0) AS amount FROM products LEFT JOIN cart ON cart.product_id = products.id LEFT JOIN transactions ON cart.transaction_id = transactions.id AND transactions.status = 'COMPLETED' WHERE products.user_id = $id AND products.active = 1 AND transactions.id IS NOT NULL GROUP BY products.id ``` The above is trying to get **ALL** the products a user has, regardless if they have sales or not and then show it's sales. Currently, since I have the `AND transactions.id IS NOT NULL` it does not show all products. Only those with sales. If I remove this, then I will see all products but the `amount` field will be incorrect as the amount will be inflated. These are some basic example tables: ``` # Products Table +----+------+---------+--------+ | id | name | user_id | active | +----+------+---------+--------+ | 1 | cup | 4 | 1 | | 2 | ball | 4 | 1 | +----+------+---------+--------+ # Cart Table +----+------------+----------------+-------------+ | id | product_id | transaction_id | Price_paid | +----+------------+----------------+-------------+ | 1 | 1 | 6 | 1.99 | | 2 | 1 | 7 | 1.99 | | 3 | 1 | 8 | 1.99 | | 4 | 1 | 9 | 1.99 | +----+------------+----------------+-------------+ # Transactions Table +----+--------+-----------+ | ID | amount | status | +----+--------+-----------+ | 6 | 1.99 | COMPLETED | | 7 | 1.99 | COMPLETED | | 8 | 2.99 | CREATED | | 9 | 2.99 | CREATED | +----+--------+-----------+ # Result +----+------+-------+--------+ | id | name | sales | amount | +----+------+-------+--------+ | 1 | Cup | 2 | 3.98 | | 2 | Ball | 0 | 0 | +----+------+-------+--------+ ```
A `product` table with a `user_id` column is pretty weird. I can see how a cart belongs to a user, but why would a product belong to a user? For your query, I suppose you'd like to sum the `price_paid` of sold carts only. Where a cart is considered sold if it has a completed transaction. You could do so like this: ``` SELECT products.id , products.name , COUNT(transactions.id) AS sales , SUM(CASE WHEN transactions.id IS NOT NULL THEN cart.price_paid END) AS amount FROM products LEFT JOIN cart ON cart.product_id = products.id LEFT JOIN transactions ON cart.transaction_id = transactions.id AND transactions.status = 'COMPLETED' WHERE products.user_id = $id AND products.active = 1 GROUP BY products.id ``` The `CASE` will filter out the `price_paid` for carts without a completed transaction.
The IFNULL function should be used inside the SUM function. ``` Because NULL+10 is null not 10, IFNULL(NULL,0)+10 is 10 ``` Try This.. ``` SELECT products.id, products.name, COUNT(transactions.id) AS sales, SUM(IFNULL(cart.price_paid, 0)) AS amount FROM products LEFT JOIN cart ON cart.product_id = products.id LEFT JOIN transactions ON cart.transaction_id = transactions.id AND transactions.status = 'COMPLETED' WHERE products.user_id = $id AND products.active = 1 AND transactions.id IS NOT NULL GROUP BY products.id ```
How to correct this SQL query? SUM with JOIN
[ "", "mysql", "sql", "" ]
I have the following table structure : **News** (Id\_news, news\_name) **Comments** (Id\_comment(PK),Id\_news(FK), comment) **Replies**(Id\_Reply(PK), Id\_Comment(FK), reply) --- I want to get the number of replies to comments in the news. This is the query I've tried. ``` SELECT COUNT(*) FROM (News INNER JOIN Comments ON News.Id_news = Commnets.Id_Comment) INNER JOIN Reply ON Commnets.Id_Comment = Reply.Id_Comment WHERE News.Id_news = {0} ``` This this the following situation: I have a web pages with a lot of news, but not all the news have comments and not all the comments have replies. So I need to count if there is any reply in a specific news (e.g. Id\_news = 43), to run one code or another.
As you have mentioned that you want to have a count of replies for any particular news which may or may not have any comment or reply. Here is my try ``` SELECT COUNT(Replys.Id_Reply) FROM NEWS LEFT JOIN Coments ON Coments.ID_News = Coments.Id_News AND NEWS.Id_News = {0} LEFT JOIN Replys ON Replys.Id_Comment = Coments.Id_Comment ``` **Sample code to try** ``` CREATE TABLE NEWS (Id_news INT, news_name varchar(10)) CREATE TABLE Coments (Id_comment INT,Id_news INT, comment varchar(10)) CREATE TABLE Replys(Id_Reply INT, Id_Comment INT , reply varchar(10)) INSERT INTO NEWS VALUES (1,'News1'), (2, 'News2'), (3,'News3') INSERT INTO Coments VALUES (1,1,'Comment1'), (2,1,'Comment2'), (3,2,'Comment3') INSERT INTO Replys VALUES (1,1,'Reply1'),(2,3,'Reply2'), (3,3,'Reply3') ``` **Query**: ``` DECLARE @NewId INT = 2 SELECT COUNT(Id_Reply) FROM NEWS N LEFT JOIN Coments C ON C.ID_News = N.Id_News AND N.Id_News = @NewId LEFT JOIN Replys R ON R.Id_Comment = C.Id_Comment ``` Check on [SQL Fiddle](http://sqlfiddle.com/#!6/e5a59/8)
Try this.. ``` SELECT News.Id_news,COUNT(*) FROM News JOIN Comments ON News.Id_news = Commnets.Id_news JOIN Reply ON Commnets.Id_Comment = Reply.Id_Comment WHERE News.Id_news = {0} group by News.Id_news,Commnets.Id_Comment ```
SQL Server INNER JOIN with three tables
[ "", "sql", "asp.net", "sql-server", "" ]
Let's say I have a table that looks like, ``` id 2 2 3 4 5 5 5 ``` How do I get something like, ``` id count 2 2 3 1 4 1 5 3 ``` where the `count` column is just the count of each id in the `id` column?
You want to use the GROUP BY operation ``` SELECT id, COUNT(id) FROM table GROUP BY id ```
``` select id, count(id) from table_name group by id ``` or ``` select id, count(*) from table_name group by id ```
Count items in column SQL query
[ "", "sql", "sqlite", "" ]
So what I need to do is to select the maximum of column (and increment it with 1 each time) and if this value is null (which will be the case for the first occurence) I must return the constant 0 (which will be the max for the next occurence). My goal is to do something like this : ``` SELECT (Max(id) + 1) FROM my_table WHERE id IS NOT NULL UNION SELECT 0 FROM my_table WHERE id IS NULL ``` However that does not work. What can I do?
The way your query is written, it will return two rows. One for the first query, and one after the union. If you only want a single row, you can just use the MAX() function. If the table is empty, the MAX() function returns null. An option you have then is to use COALESCE() in your select statement, which returns the first non null value. In other words this query: ``` SELECT COALESCE(MAX(id) + 1, 0) AS maxID FROM myTable; ``` Could be explained as: Give me the maximum ID from the table. If the table was empty, just return 0. Here is a [fiddle](http://sqlfiddle.com/#!9/ba4a8/1) example with an empty table and a non-empty table.
Try this ``` SELECT COALESCE(Max(id + 1), 0) FROM my_table WHERE id IS NOT NULL ```
SQL - select an increment value or return a constant
[ "", "mysql", "sql", "select", "null", "union", "" ]
Let's say I have a table like this: ``` OrderId | CustomerId | ProductName ====================================== 73 | 301 | Sponge 74 | 508 | Garbage Bag 75 | 301 | Spoon 76 | 301 | Bacon 77 | 508 | Dog treats 78 | 301 | Paper 79 | 905 | Text book ``` and I want to find a customer who has made two orders in the past. How would I set up the query? For the table above, the query would return the two rows for customer 508. How would I modify it to return customers who have one previous order, so that it would return the row for customer 905?
``` select customerId, count(*) from mytable group by customerId having count(*) >= 2 ```
If you need only `CustomerId` of those who have exactly one order in table (they exist once) then the following query groups customers and counts how many times they appear in a table (here showing only those who appear once, modify as you wish). ``` SELECT CustomerId FROM table GROUP BY 1 HAVING COUNT(*) = 1 ``` Let's say you want to list every customer and number of orders they've placed but no less than 2 then modify above query to add `COUNT(*)` in column list to be selected and the `HAVING` condition like that: ``` SELECT CustomerId COUNT(*) AS no_of_orders FROM table GROUP BY 1 HAVING COUNT(*) > 1 ```
SQL - find rows having n duplicate values
[ "", "sql", "" ]
I have a table with indexes of imported files, with dates and branches of each imported files. Now I need to do a consolidation of multiple branches, so, that I have to duplicate the information from some branches when we have holidays, so that the data is consistent, basically I need to fill these gaps with the latest available information. I tried doing some self-joins with ranking in order to shift between them and get the previous data, but it didn't work. What I have is a table: ``` rundate, branch, imported 2015-04-01, PL1, TRUE 2015-04-01, ES1, TRUE 2015-04-01, CZ4, TRUE 2015-04-02, PL1, TRUE 2015-04-02, ES1, TRUE 2015-04-02, CZ4, TRUE 2015-04-03, ES1, TRUE 2015-04-03, CZ4, TRUE ``` In this example, I would like to make a query that returns: ``` gap_date, branch, real_date 2015-04-03, PL1, 2015-04-02 ``` This table is quite small (couple thousand lines), so, performance shouldn't be a big issue. Any idea on how can I achieve that? Now I am using a function that receives the rundate and branch of the gap dates as parameters, and answers the latest before the date passed as parameter (using max(rundate) where rundate <= '$1') Thanks!
you can use `outer join`, `subquery` and `cross join`: *Schema:* ``` create table tbl(rundate date, branch varchar(10), imported bool); insert into tbl values('2015-04-01', 'PL1', TRUE), ('2015-04-01', 'ES1', TRUE), ('2015-04-01', 'CZ4', TRUE), ('2015-04-02', 'PL1', TRUE), ('2015-04-02', 'ES1', TRUE), ('2015-04-02', 'CZ4', TRUE), ('2015-04-03', 'ES1', TRUE), ('2015-04-03', 'CZ4', TRUE); ``` *Query:* ``` select q.rundate as gap_date,q.branch, (select max(tt.rundate) from tbl tt where tt.rundate<q.rundate and tt.branch=q.branch) as real_date from tbl t right outer join( select rundate,branch from ( select distinct rundate from tbl) t1 cross join ( select distinct branch from tbl)t2 )q on t.rundate=q.rundate and t.branch=q.branch where t.branch is null ``` *Result:* ``` gap_date branch real_date 2015-04-03 PL1 2015-04-02 ```
You have to select all unique dates in one set, all unique branches in another - make a cartesian product of it - and then you can check for which combination you have a gap... this is what I mean: ``` CREATE TEMPORARY TABLE _matrix SELECT t1.rundate, t2.branch (SELECT DISTINCT rundate FROM yourtable) t1, (SELECT DISTINCT branch FROM yourtable) t2 ``` Then you can find gaps using LEFT JOIN and "WHERE ... IS NULL" statement: ``` SELECT m.rundate, m.branch FROM _matrix m LEFT JOIN yourtable yt ON(yt.rundate = m.rundate AND yt.branch = m.branch) WHERE yt.rundate IS NULL ``` Of course the same can be achieved without temporary tables - using just a subquery. Especially when the performance is not crucial.
Postgresql Fill Gaps - Matrix of latest available information for a given branch
[ "", "sql", "postgresql", "window-functions", "gaps-in-data", "" ]
I need to select top 20000 from my table that has several columns (number,color etc...) and i need to get the count of the red and the count of the blue from these 20000 in one query. I know i can get what i want if i insert the top 20000 in a temp table and then select count of the red from the temp table and then select count of the blue from the temp table, but i need to do it in one query. i tried the below but it gives me the count for every number, i need the total.. ``` SELECT top 20000 [number], count(color) FROM [profile] group by number having color='red' ``` output: ## color | count ## red | 15000 ## blue | 5000
You can use nested query: ``` select inner.color, count(*) from (select top 20000 [number], color from [profile]) inner group by inner.color ```
You can use `iif` : ``` select top 20000 [number] , sum(iif([color] = 'red', 1, 0) as red_count , sum(iif([color] = 'blue', 1, 0) as blue_count from [profile] group by [number] ``` or `case`: ``` select top 20000 [number] , sum(case when [color] = 'red' then 1 else 0 end) as red_count , sum(case when [color] = 'blue' then 1 else 0 end) as blue_count from [profile] group by [number] ``` **Edit**. After you've updated your question, I guess your query should look like this: ``` select t.[color] , count(t.[color]) from (select top 20000 [color] from [profile]) t group by t.[color] ```
select 2 different count in sql server
[ "", "sql", "sql-server", "sql-server-2008", "count", "sql-server-2012", "" ]
I am looking to insert some values into a column based on the selection of a specific row. I have a table with columns a, b, c, and d. I want to insert the values 1, 2, and 3 into columns b, c, and d when column `a = X`. I cannot find how to do this. Toad for oracle is my platform and I am looking for SQL code.
You can either update them one at a time: ``` update mytable set b = 1 where a = X; update mytable set c = 2 where a = X; update mytable set d = 3 where a = X; ``` Or update them all in one go: ``` update mytable set b = 1,c = 2,d = 3 where a = X; ``` Alternatively, assuming 'a' is a primary key column or unique index and there is only 1 row `where a = X`, if you only have 4 columns and you want to update 3 of them you could delete your row and re-insert the whole lot: ``` delete from mytable where a = X; insert into mytable values(X, 1, 2, 3); ```
You can use `INSERT INTO...SELECT` and give condition to your insert queries such as: ``` INSERT INTO table_name (b, c, d) VALUES (bValue, cValue, dValue) /* Select Condition */ WHERE a=1 ```
Insert Values into table of specific row
[ "", "sql", "oracle", "sql-insert", "" ]
I have an MS Access database with 40 similar tables. Only a couple are listed below for simplicity. I want to grab 3 cells of data from a row if the date in that row is yesterday. The data comes from some other employee's homebrew spreadsheets that they refuse to give up. So I can't change the column names or get them all on the same workbook template. I've tried some variations on the code below but I can't get the dates to work right. If I drop the WHERE statement it returns everything. But I really need it whittled down. Thanks in advance! ``` SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1081 WHERE [Quote Completed] = date()-1 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1145 WHERE [Quote Completed] = date()-1 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1149 WHERE [Quote Completed] = date()-1 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1228 WHERE [Quote Completed] = date()-1 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1353 WHERE [Quote Completed] = date()-1 ; ```
Do you really need a where condition on each section of your union? Or do you just want the whole results filtered? ``` SELECT * FROM ( SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1081 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1145 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1149 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1228 UNION SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1353) U WHERE [Quote Completed] = date()-1 ```
You may need to use dateadd: ``` SELECT [Customer name],[$ Amount], [Quote Completed] FROM 1081 WHERE [Quote Completed] = DateAdd('d',-1, date()) ``` Added link: <http://www.techonthenet.com/access/functions/date/dateadd.php>
Why won't my "WHERE" statements work in sql?
[ "", "sql", "ms-access", "" ]
In a given Date Range I would like to get Name of the Month and End Date of the month. Based on the given values, I will different counts and calculate the final value. I can do it with CTE, But I dont have idea in how to it completely with CTE. I prefer one more way than doing it using CTE. Version of SQL 2008 R2, 2012. I want the result for the complete date range I provide. Not for one single month Example : Input which I provide is a date Range ``` Declare @Start datetime Declare @End datetime Select @Start = '2014-08-08' Select @End = '2015-04-01' ``` Output required : ``` Month End Date August 2014-08-31 00:00:00.000 September 2014-09-30 00:00:00.000 October 2014-10-31 00:00:00.000 November 2014-11-30 00:00:00.000 December 2014-12-31 00:00:00.000 January 2015-01-31 00:00:00.000 February 2015-02-28 00:00:00.000 March 2015-03-31 00:00:00.000 ```
Works on SLQ SERVER 2008+ ``` Declare @Start datetime Declare @End datetime Select @Start = '2014-08-08' Select @End = '2015-04-01' declare @day DATETIME = @Start declare @tab TABLE(MONTH_NAME varchar(20),LAST_DAY_OF_MONTH DATETIME) while @day <= @end begin INSERT INTO @tab SELECT DATENAME(month, @day), DATEADD(SECOND,-1,DATEADD(MONTH,1+ DATEDIFF(MONTH, 0, @day), 0)) set @day = DATEADD(MONTH,1,@day) end SELECT * FROM @tab ```
For both 2012+ Or 2012- ``` DECLARE @Start DATETIME = '2014-08-08', @End DATETIME = '2015-04-01' SELECT DATENAME(MONTH, DATEADD(MONTH, a.number, @Start)) AS NameOfMonth ,EOMONTH(DATEADD(MONTH, a.number, @Start)) AS EndOfMonthForSQL2012Plus ,DATEADD(DAY, -1, DATEADD(MONTH, 1 + DATEDIFF(MONTH, 0, DATEADD(MONTH, a.number, @Start)), 0)) AS EndOfMonthForSQL2012Minus FROM master.dbo.spt_values a WHERE a.type = 'P' AND a.number <= DATEDIFF(MONTH, @Start, @End); ```
How to get Month name and End Date of Month within a date rage
[ "", "sql", "sql-server", "sql-server-2008-r2", "sql-server-2012", "" ]
I'm a bit stuck with a stored procedure that is executing really slow. The stored procedure basically contains a query that uses an incoming parameter (in\_id) and is put in a cursor like this: ``` open tmp_cursor for select col1, col2, col3 from table1 tab where ((in_id is null) or (tab.id = in_id)); -- tab.id is the PK ``` When I get an execution plan for the SQL query separately with predefined value, I get good results with the query using an index. However when I call the procedure from my application, I see that no index is being used and the table gets full scan, thus giving slow performance. If I remove the first part of WHERE clause "(in\_id is null)" the performance from the application is fast again. How come the index isn't used during the call from my application (in\_id is passed in)?
*Assuming that `in_id` is a query parameter - not a column name:* The query has to have only ONE exec plan, regardless of the input. So if you pass parameter `in_id` as NULL, then it is supposed to return ALL rows. If you pass non-NULL `in_id` is should return only a single PK value. So Oracle chooses the "worst possible" exec. plan to deal with "worst possible" scenario. The "generic" queries are road to hell. Simply split the query into two. ``` select col1, col2, col3 from table1 tab where in_id is null or in_id is not null; ``` This will use FULL table scan, which is the best way how to get all the rows. ``` select col1, col2, col3 from table1 tab where tab.id = in_id; -- tab.id is the PK ``` This will use UNIQUE index scan, which is the best way how to get a single indexed row.
> in\_id is null I have answered a similar question here <https://stackoverflow.com/a/26633820/3989608> Some facts about NULL values and INDEX: * Entirely NULL keys are not entered into a β€˜normal’ B\*Tree in Oracle * Therefore, if you have a concatenated index on say C1 and C2, then you will likely find NULL values in it – since you could have a row where C1 is NULL but C2 is NOT NULL – that key value will be in the index. Some portion of the demonstration by Thomas Kyte regarding the same: ``` ops$tkyte@ORA9IR2> create table t 2 as 3 select object_id, owner, object_name 4 from dba_objects; Table created. ops$tkyte@ORA9IR2> alter table t modify (owner NOT NULL); Table altered. ops$tkyte@ORA9IR2> create index t_idx on t(object_id,owner); Index created. ops$tkyte@ORA9IR2> desc t Name Null? Type ----------------------- -------- ---------------- OBJECT_ID NUMBER OWNER NOT NULL VARCHAR2(30) OBJECT_NAME VARCHAR2(128) ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats(user,'T'); PL/SQL procedure successfully completed. ``` Well, that index can certainly be used to satisfy β€œIS NOT NULL” when applied to OBJECT\_ID: ``` ops$tkyte@ORA9IR2> set autotrace traceonly explain ops$tkyte@ORA9IR2> select * from t where object_id is null; Execution Plan ---------------------------------------------------------- 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=34) 1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=3 Card=1 Bytes=34) 2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=1) ``` In fact – even if the table did not have any NOT NULL columns, or we didn’t want/need to have a concatenated index involving OWNER – there is a transparent way to find the NULL OBJECT\_ID values rather easily: ``` ops$tkyte@ORA9IR2> drop index t_idx; Index dropped. ops$tkyte@ORA9IR2> create index t_idx_new on t(object_id,0); Index created. ops$tkyte@ORA9IR2> set autotrace traceonly explain ops$tkyte@ORA9IR2> select * from t where object_id is null; Execution Plan ---------------------------------------------------------- 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=34) 1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=3 Card=1 Bytes=34) 2 1 INDEX (RANGE SCAN) OF 'T_IDX_NEW' (NON-UNIQUE) (Cost=2 Card=1) ``` Source : [Something about nothing by Thomas Kyte](http://tkyte.blogspot.in/2006/01/something-about-nothing.html)
Stored procedure SQL execution plan
[ "", "sql", "oracle", "sql-execution-plan", "" ]
I just wanted to know the best way I can use in Oracle query, to avoid updating a field , if that is unchanged? ``` Update xtab1 set xfield1='xxx' where xkey='123'; ``` In performance aspect what is best way , with which this update should not be invoked , if the existing value of xfield1 is 'xxx' . Option1 : > * step1:Invoke a SELECT to Fetch the value of xfield1 > * step2:If the above value is not 'xxx', then only invoke UPDATE Option2 : > * Invoke update as below: > > Update xtab1 set xfield1='xxx' where xkey='123' and xfield1 <> 'xxx' Please let me know which of the above 2 is best and ideal way, or is there any other ideal approach to be used? Appreciate your help
> Update xtab1 set xfield1='xxx' where xkey='123' and xfield1 <> 'xxx' The **filter predicate** is applied before doing the update. So, I would go with option 2 and **let Oracle do the job for you rather than doing it manually to first filter out the rows**. Also, it would be an overhead to do it in two different steps. The filtering of rows should be a part of the same step. Regarding the performance, I think **indexes** would play an important role. You can test it and see: **Without index** **Option 1** ``` SQL> EXPLAIN PLAN FOR 2 UPDATE t SET sal = 9999 WHERE deptno = 20; Explained. SQL> SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 931696821 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 5 | 35 | 3 (0)| 00:00:01 | | 1 | UPDATE | T | | | | | |* 2 | TABLE ACCESS FULL| T | 5 | 35 | 3 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- --------------------------------------------------- 2 - filter("DEPTNO"=20) 14 rows selected. SQL> ``` **Option 2** ``` SQL> EXPLAIN PLAN FOR 2 UPDATE t SET sal = 9999 WHERE deptno = 20 AND sal<>9999; Explained. SQL> SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 931696821 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 4 | 28 | 3 (0)| 00:00:01 | | 1 | UPDATE | T | | | | | |* 2 | TABLE ACCESS FULL| T | 4 | 28 | 3 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- --------------------------------------------------- 2 - filter("DEPTNO"=20 AND "SAL"<>9999) 14 rows selected. ``` **With Index** ``` SQL> CREATE INDEX t_idx ON t(deptno,sal); Index created. ``` **Option 1** ``` SQL> EXPLAIN PLAN FOR 2 UPDATE t SET sal = 9999 WHERE deptno = 20; Explained. SQL> SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 1175576152 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 5 | 35 | 1 (0)| 00:00:01 | | 1 | UPDATE | T | | | | | |* 2 | INDEX RANGE SCAN| T_IDX | 5 | 35 | 1 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- --------------------------------------------------- 2 - access("DEPTNO"=20) 14 rows selected. SQL> ``` **Option 2** ``` SQL> EXPLAIN PLAN FOR 2 UPDATE t SET sal = 9999 WHERE deptno = 20 AND sal<>9999; Explained. SQL> SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 1175576152 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 4 | 28 | 1 (0)| 00:00:01 | | 1 | UPDATE | T | | | | | |* 2 | INDEX RANGE SCAN| T_IDX | 4 | 28 | 1 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- --------------------------------------------------- 2 - access("DEPTNO"=20) filter("SAL"<>9999) 15 rows selected. SQL> ``` So, in option 2 in all the cases, the **filter("SAL"<>9999)** is applied.
I don't think there will be significant performance difference between the two options, as both will require looking up rows and performing comparison of the values. And I doubt other options such as pre-update triggers will yield better performance than your option 2. If you really wanted to know how the Oracle optimizer handles your queries, try the [`EXPLAIN PLAN` statement](http://docs.oracle.com/cd/B14117_01/server.101/b10752/ex_plan.htm). For example, to see the plan that the Oracle optimizer formulated to execute your second option, try this: ``` EXPLAIN PLAN FOR UPDATE xtab1 SET xfield1='xxx' WHERE xkey='123' AND xfield1 <> 'xxx' ``` There is more information about what the different columns of a `EXPLAIN PLAN` result means in this SO [post](https://stackoverflow.com/q/860450/1144203). Now, if you are dealing with large number of transactions, I recommend consider other options such as comparing the values at the application level, so as to avoid expensive database I/Os all together where possible :-) or use some form of ETL tools that are optimized to handle large transactions.
What is the best way in Oracle query, to avoid updating a field , if that is unchanged?
[ "", "sql", "oracle", "sql-update", "" ]
I'm wondering what is the best method to store generic classes in a database. For an easy illustration assume the following classes: ``` Interface IShape Center: Point Size: Point Color: Int Class Line : IShape (IShape Members...) LineWidth: Int Orientation: Int Class Circle: IShape (IShape Members...) Fill: Bool FillColor: Int Class Drawing Name: String Elements: List<IShape> ``` I see the following options: 1. Storing all objects in one table which has the columns of all possible IShape objects. Then after getting the lines from the database, check for the type and create an object of the specific class. This will cause an "messy table", no clear seperation between types. 2. Storing the IShape members in one table and the other fields in corresponding tables. Get the IShape elements and check which fields need to be loaded additionally for each object. This will cause a lot of database traffic. 3. Like No.2 but instead of loading each element and look which fields need to be loaded, do one query per type which returns the joined elements with the correct fields. This will also cause more database traffic(or at least queries) than No.1 and also may need resorting. How would you recommend storing them in the database? Is there some best practice? Interestingly I couldn't find any recommendation.
The generic `List<IShape>` is something of a red herring here since it is nothing more than the representation of a one-to-many link, or perhaps a slice of a many-to-many link table; there is no uniquely generics-related mapping complexity in your question. There is really no single best practice here due to the depth of the [object-relational impedance mismatch](http://c2.com/cgi/wiki?ObjectRelationalImpedanceMismatch). There are 3 well-established techniques for mapping inheritance hierarchies, and none of them can fully represent the constraints correctly. 1. **Table per type**, your option 2, with 1-to-1 primary key references between subtype and base-type tables. To express mutually exclusive subtypes requires some denormalization (including a discriminant column as part of the primary key, and incorporating a CHECK constraint on it in each subtype table), potentially large number of joins per query, but strongest referential integrity overall. 2. **Table per hierarchy**, sometimes called a "parking lot" table, your option 1 using a column to discriminate object type. Very messy due to large numbers of uninitialized columns, questionable referential integrity due to inability to express a reference to a subtype, but column applicability can be enforced by CHECK constraints. 3. **Table per concrete type**, which you didn't mention: let `Line` and `Circle` be tables, and let `IShape` be a view of the union of the common columns of the two tables. Questionable referential integrity due to inability to express a reference to a base type, but keeps the number of joins down without introducing nullable columns. To obtain the best fidelity in referential integrity you'd need some variation on the discriminated table-per-type construction, perhaps like this: ``` create table Drawing ( DrawingId int not null primary key, Name varchar not null ) create table ShapeKind ( ShapeKindId int primary key, ShapeKindName varchar ) insert into ShapeKind values (1, 'Line') insert into ShapeKind values (2, 'Circle') create table Shape ( ShapeId int not null, ShapeKindId int not null, DrawingId int not null, CenterX numeric not null, CenterY numeric not null, SizeX numeric not null, SizeY numeric not null, Color int not null, primary key (ShapeId, ShapeKindId), foreign key (ShapeKindId) references ShapeKind (ShapeKindId), foreign key (DrawingId) references Drawing (DrawingId) ) create table Line ( ShapeId int not null, ShapeKindId int not null check (ShapeKindId = 1), LineWidth int not null, Orientation int not null, primary key (ShapeId, ShapeKindId), foreign key (ShapeId, ShapeKindId) references Shape (ShapeId, ShapeKindId) ) create table Circle ( ShapeId int not null, ShapeKindId int not null check (ShapeKindId = 2), FillColor int, -- null if not filled primary key (ShapeId, ShapeKindId), foreign key (ShapeId, ShapeKindId) references Shape (ShapeId, ShapeKindId) ) ```
Your option of number 2 stands out the most to me. At the place I work we have a database system that stores meta information about work flows; there is a central table that stores the basic workflow item information (ID, Item Name, Default Person Responsible, etc) and then each type of item has its own table that stores the extra info (Documents have a table with template locations and document types, Processes have a table with their workflow members etc). It sort of works along the line of inheritance in programming; a base table (the type) and then tables of other types that inherit from the base table. So you could end up with: ``` shapeland.BaseShape ( ShapeId INT, ShapeType INT, ShapeDescription VARCHAR ) shapeland.Circle (ShapeId INT, Fill BIT, FillColor INT) etc etc ``` Or of course if you are a more of a code rather than sql person; you could always store the flattened xml version of a class; although that becomes less readable in the database.
Database design for generics
[ "", "sql", "database", "generics", "database-design", "" ]
``` Column_A Column_B Akash.Bansal Deactivate_User Akash.Bansal Deactivate_Int_User ``` I want the output as : ``` Akash.Bansal | Deactivate_User |Deactivate_Int_User |Delete_User |Delete_Inter_User| ``` I am able to do one column as row . Can any one help to get above out put
I'm not sure what you want to do with this data. Are you looking for a single string, or are you looking for the data from Column\_B to be put in columns for each name (Column\_A) ? If so, you could use a PIVOT statement for this (if Column\_B options are finite) ``` declare @Example table ( Column_A nvarchar(20), Column_B nvarchar(20) ) insert into @Example values ('Akash.Bansal','Deactivate_User') insert into @Example values ('Akash.Bansal','Deactivate_Int_User') insert into @Example values ('Akash.Bansal','Delete_User ') insert into @Example values ('Akash.Bansal','Delete_Inter_User') insert into @Example values ('John.Doe','Deactivate_User') insert into @Example values ('Jane.Doe','Deactivate_Int_User') insert into @Example values ('Jane.Doe','Delete_User ') insert into @Example values ('Jane.Doe','Delete_Inter_User') select * from @Example PIVOT ( MIN(Column_B) FOR Column_B IN (Deactivate_User,Deactivate_Int_User,Delete_User,Delete_Inter_User) ) pivotTable ``` This will give the following resultset: ![Resultset1](https://i.stack.imgur.com/AlM8U.png) You could even, if you wish, make a large string from this if you would need this too: ``` select pivotTable.Column_A, STUFF( COALESCE(' | ' + NULLIF(Deactivate_User, ''), '') + COALESCE(' | ' + NULLIF(Deactivate_Int_User, ''), '') + COALESCE(' | ' + NULLIF(Delete_User, ''), '') + COALESCE(' | ' + NULLIF(Delete_Inter_User , ''), ''), 1, 3, '') AS String from @Example PIVOT ( MIN(Column_B) FOR Column_B IN (Deactivate_User,Deactivate_Int_User,Delete_User,Delete_Inter_User) ) pivotTable ``` This would give the following result: ![Resultset2](https://i.stack.imgur.com/Cb9Ep.png)
You can do this by using `STUFF()` function. Try something like this, ``` CREATE TABLE TestUsr(Column_A VARCHAR(50), Column_B VARCHAR(50)) INSERT INTO TestUsr(Column_A, Column_B) VALUES ('Akash.Bansal', 'Deactivate_User'), ('Akash.Bansal', 'Deactivate_Int_User') SELECT Column_B FROM ( SELECT ROW_NUMBER() OVER(PARTITION BY Column_A ORDER BY Column_A) RN, Column_B = Column_A + ' | ' + STUFF((SELECT ' | ' + Column_B FROM TestUsr b FOR XML PATH('')), 1, 2, '') + ' | ' FROM TestUsr a ) AS E WHERE RN = 1 ``` [Sql Fiddle Demo](http://sqlfiddle.com/#!6/9516c/7)
convert the column data to row data
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I'm attempting to create a short script which will take all of the fields from Table1 and insert them into Table2. The basic fields (col1, col2, col3, etc.) I have got covered by something like this: ``` INSERT INTO Table1 (col1, col2, col3, col4, col5, col6, col7) SELECT col1, col2, col3, col4, col5, SYSDATE, USER FROM Table2 ``` For `col8` and `col9` in Table1 I'm a little stuck though. `col8/col9` for instance are `DATE` datatypes which need to be inserted by combining `col6/col7/col8` and `col9/col10/col11` from Table1 respectively together. Each of these are defined as follows: ``` col6/col9 -- VARCHAR2 (2Byte) // Month col7/col10 -- VARCHAR2 (2Byte) // Day col8/col11 -- VARCHAR2 (4Byte) // Year ``` Looking online I tried the below, but receive `"not a valid month"`: ``` INSERT INTO Table1 (......) SELECT ...., TO_DATE(TOCHAR(col6, '00') || TO_CHAR(col7, '00') || TO_CHAR(col8, '9999'), 'MM/DD/YYYY'), ..... FROM Table 2 ``` Does anyone know how to get what I'm after here and combine 3 specific VARCHAR fields as a DATE for insertion as 1 field in Table1?
If the invalid values in your columns are always zeros, as you said in a comment, then you can use a fixed value - either a dummy date, or perhaps more reasonably leave the new column null - with a case statement: ``` SELECT ...., CASE WHEN TRIM('0' FROM col6) IS NULL OR TRIM('0' FROM col7) IS NULL OR TRIM('0' FROM col8) IS NULL THEN NULL ELSE TO_DATE(LPAD(col6, 2, '0') || LPAD(col7, 2, '0') || LPAD(col8, 4, '0'), 'MMDDYYYY') END, ... ``` The TRIM removes zeros from the value and then checks if anything is left, which catches both 0 and 00 (and 000 and 0000 for col8) in one go. The LPAD left-pads the value with zeros, so for example a value of 7 would become 07 - this is important because you need the concatenated values to match the date format model, and without leading zeros you'd end up with very confused and invalid results. Note that I'm not using TO\_CHAR at all. Since your source columns are already strings, you're actually doing an implicit conversion to a number, and then an explicit conversion back to a string. With the FM modifier or TRIM that would have the same effect as LPAD, but would be doing more work. As a quick demo with some sample values in a CTE: ``` with t as ( select '0' as col6, '0' as col7, '0' as col8 from dual union all select '00', '00', '0000' from dual union all select '08', '31', '2008' from dual union all select '7', '4', '2012' from dual ) select col6, col7, col8, CASE WHEN TRIM('0' FROM col6) IS NULL OR TRIM('0' FROM col7) IS NULL OR TRIM('0' FROM col8) IS NULL THEN NULL ELSE TO_DATE(LPAD(col6, 2, '0') || LPAD(col7, 2, '0') || LPAD(col8, 4, '0'), 'MMDDYYYY') END as my_date from t; COL6 COL7 COL8 MY_DATE ---- ---- ---- ---------- 0 0 0 00 00 0000 08 31 2008 2008-08-31 7 4 2012 2012-07-04 ``` If you have other invalid values this will still trip over them - say, 09/31/2000, where the number of the days in the month is wrong. It isn't clear if that might be the case for your data. If it is then you could write a function to attempt to convert whatever is passed, and silently return null if it's invalid for any reason.
I can't say for sure that this is what you are running into, but when I run `SELECT TO_CHAR(1,'00') FROM dual;` the output has an extra space in front of the expected `01`. I would wrap the call to `TO_CHAR()` in a call to `TRIM()` to remove any such spaces. `SELECT TRIM(TO_CHAR(1,'00') FROM dual;` I also noticed that your concatenation doesn't match the format mask you provide. Your format mask has separating `/`'s, but you are not concatenating those into your string. Your output at best would be `MMDDYYYY`. Consider adding that to your concatenation, or else changing your mask.
Combining multiple VARCHAR fields as 1 DATE when inserting from one table to another in Oracle?
[ "", "sql", "oracle", "sql-insert", "toad", "oracle12c", "" ]
That's my first day in SQL using PostgreSQL 9.4 and I'm lost with some things. I think that I'm close but not enough: Table definition: ``` CREATE TABLE DOCTOR ( Doc_Number INTEGER, Name VARCHAR(50) NOT NULL, Specialty VARCHAR(50) NOT NULL, Address VARCHAR(50) NOT NULL, City VARCHAR(30) NOT NULL, Phone VARCHAR(10) NOT NULL, Salary DECIMAL(8,2) NOT NULL, DNI VARCHAR(10) UNIQUE CONSTRAINT pk_Doctor PRIMARY KEY (Doc_Number) ); CREATE TABLE VISIT ( Doc_Number INTEGER, Pat_Number INTEGER, Visit_Date DATE, Price DECIMAL(7,2), CONSTRAINT Visit_pk PRIMARY KEY (Doc_Number, Pat_Number, Visit_Date), CONSTRAINT Visit_Doctor_fk FOREIGN KEY (Doc_Number) REFERENCES DOCTOR(Doc_Number), CONSTRAINT Visit_PATIENT_fk FOREIGN KEY (Pat_Number) REFERENCES PATIENT(Pat_Number) ); ``` I need how to join these two queries into one: ``` SELECT d.City, d.Name FROM DOCTOR d, VISIT v WHERE d.Specialty = 'family and comunity' ORDER BY d.Name; SELECT * FROM VISIT WHERE DATE (Visit_Date)<'01/01/2012' OR DATE(Visit_Date)>'31/12/2013'; ``` I tried something like this but it doesn't work. I need the doctors of that specialty that didn't do any visit in 2012 and 2013. ``` SELECT City, Name FROM DOCTOR d WHERE d.Specialty = 'family and comunity' AND NOT IN(SELECT * FROM VISIT WHERE Visit_Date BETWEEN '2012-01-01' and '2013-12-31') ORDER BY d.Name; ``` Can anyone help?
I finally found the solution, this is quite similar to your solutions, I post here to help another people with similar problems ``` SELECT City, Name FROM DOCTOR d, VISIT v WHERE d.Specialty = 'family and comunity' AND not exists(SELECT * FROM visit v WHERE v.doc_number = d.doc_number AND v.visit_date BETWEEN '2012-01-01' AND '2013-12-31') GROUP BY name, city ORDER BY d.Name; ``` Thank you all for your help!
``` SELECT d.name, d.city FROM doctor d LEFT JOIN visit v ON v.doc_number = d.doc_number AND v.visit_date BETWEEN '2012-01-01' AND '2013-12-31' WHERE d.specialty = 'family and community' -- or 'family and comunity'? AND v.doc_number IS NULL ORDER BY d.name; ``` * As commented you need a join condition. How are visits connected to doctors? Typically, you would have a `visit.doctor_id` referencing `doctor.doctor_id`. * Using `LEFT JOIN / IS NULL` to rule out doctors who have visits in said period. This is one of several possible techniques: + [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694) * Dates must be greater than the lower bound **`AND`** smaller than the upper bound. `OR` would be wrong here. * It's better to use ISO 8601 date format which is unambiguous regardless of your locale. + [field value between date range](https://stackoverflow.com/questions/10983146/field-value-between-date-range/11173651#11173651)
SELECT rows from a table that don't have related entries in a second table
[ "", "sql", "postgresql", "join", "left-join", "" ]
How can I join multiple tables together using MySQL? I have 3 separate tables. **Projects** ``` ID Project Name 1 Big Project #1 ``` **Projects Contract Managers** ``` ID Project Id Contract Manager Id 1 1 11 ``` **Contract Managers** ``` ID Name 11 John Smith 26 Bill Smith ``` I want to join the 3 tables above to get a list of projects along with contract manager names.
Just two `INNER JOIN`s seems to be enough : ``` SELECT * FROM `Projects Contract Managers` pcm JOIN `Contract Managers` cm ON pcm.`Contract Manager Id` = cm.`ID` JOIN `Projects` p ON pcm.`Project Id` = p.`ID` ```
Another possiblity from the first suggested I always feel reads better. I think the engine would optimize to the same execution plan. You would want to confirm with EXPLAIN if the tables are large and you are worried about performance. ``` SELECT * FROM `Projects Contract Managers` a, `Contract Managers` b, `Projects` c WHERE a.`Contract Manager Id` = b.`ID` a.`Project Id` = c.`ID` ```
Multiple joins in MySQL
[ "", "mysql", "sql", "join", "" ]
I'm trying to select value from second table which ID is in my first table. I've tried this: ``` select FIRST_NAME, LAST_NAME, c.name status from STUDENTS left join PAYER_STATUS c on Status = c.ID where status like N'%'||'active'||'%';! ``` but it displays nothing. Here's just empty everything. But If I'll change *active* to *0* it displays in status column words *active*. for *1* *inactive* etc. [tables here](https://i.stack.imgur.com/xDGw4.png) How can I solve that?
use your column name in where clause. coz you have `status` column present in table. so it will check the passed value with that column instead of c.name ``` select s.FIRST_NAME, s.LAST_NAME, c.name as status from STUDENTS s left join PAYER_STATUS c on s.Status = c.ID where c.name like N'%'||'active'||'%';! ```
You can't use column aliases in `WHERE`/`ON`, Also, when `LEFT JOIN` move `WHERE` clause conditions for outer table to `ON` clause, else it will be a regular `INNER JOIN`. ``` select FIRST_NAME, LAST_NAME, c.name status from STUDENTS s left join PAYER_STATUS c on s.somecolumn = c.ID and c.name like N'%'||'active'||'%';! ```
select value from second table which ID is in my first table
[ "", "sql", "oracle", "plsql", "" ]
So I have a table with 7 items. Lets call them: Car, Boat, House, Bike, Wheels, Sink, Bed. We can simply call the table Table Each with the following assigned numbers: ``` (ID) (Item) (#) (parent_id) 1234 - Car - 1 - null 0000 - Boat - 2 - null 2222 - House - 4 - null 6545 - Bike - 5 - null 6547 - Wheels - 0 - 1234 4442 - Bed - 1 - 2222 1474 - Sink - 0 - 2222 ``` Wheels is a child of car (or bike, doesnt matter) and sink & bed are children of House. The parent information is stored in the column called parent\_id. It is null for everything that doesn't have a parent. I need to use MySQL to sort this table via the assigned number order (called ordinal, the # in my tables here), however, retaining the parent-child order information. So the list should be sorted as follows: ``` (ID) (Item) (#) (parent_id) 1234 - Car - 1 - null 6547 - Wheels - 0 - 1234 0000 - Boat - 2 - null 2222 - House - 4 - null 1474 - Sink - 0 - 2222 4442 - Bed - 1 - 2222 6545 - Bike - 5 - null ``` How can I do this with mysql? We can assume all this information is on one table.
You should better keep `parent_id` but not the parent name. Here is quick solution to order your table <http://sqlfiddle.com/#!9/2a1fb/3> ``` SELECT * FROM table1 ORDER BY CASE WHEN parent_id IS NULL THEN CAST(ID AS CHAR) ELSE CONCAT(CAST(parent_id AS CHAR),'-', CAST(ID AS CHAR)) END ``` **EDIT 1** Variant #2 :-) <http://sqlfiddle.com/#!9/76dcb/23> ``` SELECT t1.* FROM table1 t1 LEFT JOIN table1 t2 ON t2.ID = t1.parent_id ORDER BY CASE WHEN t2.ord_idx IS NULL THEN CAST(t1.ord_idx AS CHAR) ELSE CONCAT(CAST(t2.ord_idx AS CHAR),'-',CAST(t1.ord_idx AS CHAR)) END ``` **EDIT 2** to see how this order works you can just add this field to select part like: ``` SELECT t1.*, CASE WHEN t2.ord_idx IS NULL THEN CAST(t1.ord_idx AS CHAR) ELSE CONCAT(CAST(t2.ord_idx AS CHAR),'-',CAST(t1.ord_idx AS CHAR)) END as my_order FROM table1 t1 LEFT JOIN table1 t2 ON t2.ID = t1.parent_id ORDER BY CASE WHEN t2.ord_idx IS NULL THEN CAST(t1.ord_idx AS CHAR) ELSE CONCAT(CAST(t2.ord_idx AS CHAR),'-',CAST(t1.ord_idx AS CHAR)) END ```
Something like that should work. I assumed the following field id, (#) = order\_num, and parent\_id: ``` SELECT * FROM table as t ORDER BY CASE WHEN parent_id IS NOT NULL THEN (SELECT id FROM table WHERE id = t.parent_id LIMIT 1) ELSE id END, CASE WHEN parent_id IS NULL THEN -1 ELSE order_num END; ```
MySQL Order By Fields & Parents
[ "", "mysql", "sql", "" ]
I have a table that has a the following columns: ``` ID(primary key), USER, ACTION TIME LOCATION ``` I am trying to *remove duplicate entries* using the columns: `USER, ACTION, TIME, LOCATION` together. I wrote the following query : ``` DELETE FROM test.testlogins WHERE id IN (SELECT * FROM (SELECT id FROM test.testlogins GROUP BY USER, ACTION, TIME, LOCATION HAVING (COUNT(*) > 1) ) AS A ); ``` However, when I execute it I am only getting 1 row deleted per run. My test data has approximately 40+ rows that are duplicates each assigned a separate `id`.
``` DELETE t1.* FROM testlogins t1 INNER JOIN testlogins t2 ON t1.user=t2.user AND t1.action=t2.action AND t1.time=t2.time AND t2.location=t2.location AND t1.id>t2.id ``` you can use `t1.id>t2.id` if you want to keep the row with the minimum ID or `t1.id<t2.id` if you want to keep the one with the maximum.
Easiest solution is adding a Unique index to the table using `ALTER IGNORE`. This will avoid the issue in future, if the table size is not huge. ``` ALTER IGNORE TABLE testlogins ADD UNIQUE KEY (USER, ACTION, TIME, LOCATION) ``` **OR** Create a new table in some other database with new unique index and load all the data into new table with INSERT IGNORE
MySQL removing Duplicate Rows with Primary Key
[ "", "mysql", "sql", "primary-key", "" ]
``` Animal Count Color ------ ----- ----- Dog 2 brown Cat 4 black Result Animal Color ------ Dog brown Dog brown Cat black Cat black Cat black Cat black ```
You can achieve It with `Common Table Expression` in following: ``` CREATE TABLE #Test ( Animal NVARCHAR(20), CountAnimals INT, Color NVARCHAR(20) ) INSERT INTO #Test VALUES ('Dog', 2, 'brown'), ('Cat', 4, 'black'); WITH CTE AS ( SELECT Animal,CountAnimals,Color FROM #Test UNION ALL SELECT Animal,CountAnimals-1,Color FROM CTE WHERE CountAnimals >= 2 ) SELECT Animal,Color FROM CTE ORDER BY Animal DESC OPTION (MAXRECURSION 0); DROP TABLE #Test ``` **OUTPUT** ``` Animal Color Dog brown Dog brown Cat black Cat black Cat black Cat black ``` **[SQL FIDDLE](http://sqlfiddle.com/#!6/44c51/1)**
You need to introduce an artificial table (or view) the\_row\_holder\_table with row count>=max count in the initial table. Then just ``` select gr.Animal, gr.Color from grouped gr join the_row_holder_table on gr.count<the_row_holder_table.row ``` UPDATE: Suppose the the\_row\_holder\_table has just one column `row` with values ``` row ----- 1 2 3 ... ``` Then each row of the grouped table is connected (by gr.countcount rows of the the\_row\_holder\_table table
How to reverse a GROUP BY like table?
[ "", "sql", "sql-server-2008", "" ]
``` Select * from TableName where columnA = value1 AND columnB = value2 and column3 IN (list of ids); ``` **How will the above query work in either of the DBs.** How is it different from the following loop query:- ``` for x in list_of_ids: Select * from TableName where columnA = value1 AND columnB = value2 and column3 = x; ```
The second example is simply a loop where select will be done X number of times. The execution first example: 1. If number of values are small IN will be transformed to ORed conditions. 2. If number of values are huge some db engines will optimize it and create a spool with all the values and do a join with your table In any case first example will be faster. Note: SQL queries are not valid in NOSQL databases. The above scenario is for traditional dbs only.
The later example is not a query. You have a loop (probably Python) that is executing a specific query (with a specific value for column3) each time. Is this a fictitious example or taken from a specific DB ?
How does IN query work in SQL
[ "", "sql", "nosql", "" ]
I am trying to combine the multiple rows into a single row , i was able to get the half of it i tried many ways get the remaining half , below is the sql i wrote. any help will be appreciated. ``` DROP TABLE #TEST_DEPT_NAME CREATE TABLE #TEST_DEPT_NAME ([ID] [varchar](255) NULL,[CSN_ID] [varchar](50) NULL,[NOTE_ID] [varchar](50) NULL,[DEPARTMENT_NAME] [varchar](255) NULL, [NOTE_CSN_ID] [varchar](50) NULL,[LINE] [varchar](50) NULL,[NOTE_TEXT] [nvarchar](max) NULL,[AUTHOR_USER_ID] [varchar](50) NULL,[AUTHOR_USER_NAME] [varchar](255) NULL, [NOTE_TYPE_NAME] [varchar](255) NULL,[IS_ARCHIVED_YN] [varchar](255) NULL,[NOTE_STATUS_NAME] [varchar](255) NULL) INSERT INTO #TEST_DEPT_NAME VALUES ('123456','1234567','12345678' ,'TEST','001234' ,1 ,'NOTES_1.1' ,'1234','TEST','Clinic Note','N','Signed') ,('123456','1234567','12345678' ,'TEST','001234' ,1 ,'NOTES_1.1.1' ,'1234','TEST','Clinic Note','N','Signed') ,('123456','1234567','12345678' ,'TEST','0012345' ,2 ,'NOTES_1.2' ,'999999','TEST 1','Clinic Note','N','Signed') ,('123456','1234567','12345678' ,'TEST','00123456' ,3 ,'NOTES_1.3' ,'999999','TEST 1','Clinic Note','N','Signed') ,('123456','1234567','12345678' ,'TEST','66666' ,1 ,'NOTES_2.1' ,'1234','TEST','Clinic Note','N','Signed') ,('123456','1234567','12345678' ,'TEST','66666' ,2 ,'NOTES_2.2' ,'1234','TEST','Clinic Note','N','Signed') ,('123456','1234567','12345678' ,'TEST','66666' ,3 ,'NOTES_2.3' ,'1234','TEST','Clinic Note','N','Signed') SELECT distinct ID,[CSN_ID],[NOTE_ID],[DEPARTMENT_NAME],[NOTE_TYPE_NAME],[IS_ARCHIVED_YN] ,[NOTE_TEXT] = (select ' '+ case when [NOTE_TEXT]= '' then null else [NOTE_TEXT]end from #TEST_DEPT_NAME P1 WHERE P1.[ID] = P2.[ID]AND P1.[CSN_ID] =P2.[CSN_ID] AND P1.NOTE_ID = P2.NOTE_ID AND P1.DEPARTMENT_NAME = P2.DEPARTMENT_NAME AND P1.LINE = P2.LINE AND P1.NOTE_TYPE_NAME =P2.NOTE_TYPE_NAME AND P1.NOTE_CSN_ID = P2.NOTE_CSN_ID AND P1.IS_ARCHIVED_YN =P2.IS_ARCHIVED_YN FOR XML PATH('')) FROM #TEST_DEPT_NAME P2 WHERE [ID] = '123456' GROUP BY ID,CSN_ID,[NOTE_ID],[DEPARTMENT_NAME],[NOTE_TYPE_NAME],[IS_ARCHIVED_YN],LINE,NOTE_CSN_ID ``` when i run above SQL i am getting result set as below ``` ID CSN_ID NOTE_ID DEPARTMENT_NAME NOTE_TEXT 123456 1234567 12345678 TEST NOTES_1.1 NOTES_1.1.1 123456 1234567 12345678 TEST NOTES_1.2 123456 1234567 12345678 TEST NOTES_1.3 123456 1234567 12345678 TEST NOTES_2.1 123456 1234567 12345678 TEST NOTES_2.2 123456 1234567 12345678 TEST NOTES_2.3 ``` But I want the result as shown below ``` ID CSN_ID NOTE_ID DEPARTMENT_NAME NOTE_TEXT 123456 1234567 12345678 TEST NOTES_1.1 NOTES_1.1.1 NOTES_1.2 NOTES_1.3 123456 1234567 12345678 TEST NOTES_2.1 NOTES_2.2 NOTES_2.3 ```
you can do it by using `left` and `stuff` functions with `group by`: ``` select t.ID ,t.CSN_ID,t.NOTE_ID,t.DEPARTMENT_NAME, STUFF(( select ' ' + t1.NOTE_TEXT from #TEST_DEPT_NAME t1 where t.ID =t1.ID and t.CSN_ID=t1.CSN_ID and t.NOTE_ID=t1.NOTE_ID and t.DEPARTMENT_NAME=t1.DEPARTMENT_NAME and left(t.NOTE_TEXT,8)=left(t1.NOTE_TEXT,8) for xml path(''), type ).value('.', 'varchar(max)'), 1, 1, '') NOTE_TEXT_ from #TEST_DEPT_NAME t group by t.ID ,t.CSN_ID,t.NOTE_ID,t.DEPARTMENT_NAME,left(t.NOTE_TEXT,8) ``` *Output:* ``` ID CSN_ID NOTE_ID DEPARTMENT_NAME NOTE_TEXT_ 123456 1234567 12345678 TEST NOTES_1.1 NOTES_1.1.1 NOTES_1.2 NOTES_1.3 123456 1234567 12345678 TEST NOTES_2.1 NOTES_2.2 NOTES_2.3 ``` **Edit:** if you have more than 9 `NOTE_TEXT` I mean `NOTES_1..,NOTES_2...,NOTES_10...,NOTES_100...` then you have to use `charindex('.',t.NOTE_TEXT)` instead of `8` in above query: ``` select t.ID ,t.CSN_ID,t.NOTE_ID,t.DEPARTMENT_NAME, STUFF(( select ' ' + t1.NOTE_TEXT from #TEST_DEPT_NAME t1 where t.ID =t1.ID and t.CSN_ID=t1.CSN_ID and t.NOTE_ID=t1.NOTE_ID and t.DEPARTMENT_NAME=t1.DEPARTMENT_NAME and left(t.NOTE_TEXT,charindex('.',t.NOTE_TEXT)) =left(t1.NOTE_TEXT,charindex('.',t1.NOTE_TEXT)) for xml path(''), type ).value('.', 'varchar(max)'), 1, 1, '') NOTE_TEXT_ from #TEST_DEPT_NAME t group by t.ID ,t.CSN_ID,t.NOTE_ID,t.DEPARTMENT_NAME, left(t.NOTE_TEXT,charindex('.',t.NOTE_TEXT)) ```
I think this query is the desired one: ``` SELECT distinct ID,[CSN_ID],[NOTE_ID],[DEPARTMENT_NAME],[NOTE_TEXT] = ( select ' '+ case when [NOTE_TEXT]= '' then null else [NOTE_TEXT]end from #TEST_DEPT_NAME P1 WHERE P1.[ID] = P2.[ID] AND P1.[CSN_ID] =P2.[CSN_ID] AND P1.NOTE_ID = P2.NOTE_ID AND P1.DEPARTMENT_NAME = P2.DEPARTMENT_NAME AND P1.NOTE_TYPE_NAME =P2.NOTE_TYPE_NAME AND P1.IS_ARCHIVED_YN =P2.IS_ARCHIVED_YN AND SUBSTRING(P1.NOTE_TEXT, 0, CHARINDEX('.', P1.NOTE_TEXT)) = SUBSTRING(P2.NOTE_TEXT, 0, CHARINDEX('.', P2.NOTE_TEXT)) FOR XML PATH('') ) FROM #TEST_DEPT_NAME P2 WHERE [ID] = '123456' GROUP BY ID,CSN_ID,[NOTE_ID],[DEPARTMENT_NAME],[NOTE_TYPE_NAME],[IS_ARCHIVED_YN],LINE,NOTE_CSN_ID,SUBSTRING([NOTE_TEXT], 0, CHARINDEX('.', [NOTE_TEXT])) ``` Result: ``` ID CSN_ID NOTE_ID DEPARTMENT_NAME NOTE_TEXT 123456 1234567 12345678 TEST NOTES_1.1 NOTES_1.1.1 NOTES_1.2 NOTES_1.3 123456 1234567 12345678 TEST NOTES_2.1 NOTES_2.2 NOTES_2.3 ```
Combining the multiple records(rows) into single record(row)
[ "", "sql", "sql-server", "t-sql", "ssms", "" ]
I have a couple of tables which have keywords as column names. I am not able to write a subquery using them. Even if I use non-existing column names, the query works. For example ``` CREATE TABLE tgt("view" int); CREATE TABLE src(id int); select * from tgt where view in (select view from src); view ------ (0 rows) select * from tgt where view in (select "view" from src); view ------ (0 rows) select * from tgt where "view" in (select "view" from src); view ------ (0 rows) select "view" from src; ERROR: column "view" does not exist LINE 1: select "view" from src; insert into tgt values(1); insert into src values(2); select * from tgt where "view" in (select "view" from src); view ------ 1 (1 row) select * from tgt where view in (select id from src); view ------ (0 rows) ``` I have a few questions - when I use a non-existing column name in a subquery, it works. But if I use the same column name in a query, I get an error. How can I use key-word column names in a subquery without running into surprises like this? I know that using keywords as column names is not a good idea, but the system has been around for 2 decades and changing column names is not an option now.
> when I use a non-existing column name in a subquery, it works. Well, but not the way you think it works. It will resolve to the column in some other part of the query (here the other table). ``` select * from tgt where view in (select view from src); ``` is the same as ``` select * from tgt where view in (select tgt.view from src); ``` This is not related to "view" being a bad choice for a column name, the same thing happens if your column is called "x".
Another alternative is to use an inner join instead of the subquery. The ambiguity in column names is more obvious here, and you can use aliasing to remove the ambiguity, e.g.: ``` select DISTINCT t.* from tgt t inner join src s on t.view = s.view; ```
Columns with keywords as names in PostgreSQL
[ "", "sql", "postgresql", "keyword", "" ]
I have a very basic statement, e.g.: ``` SELECT pet, animal_type, number_of_legs FROM table ``` However, where `table` currently is, I want to insert some fake data, along the lines of: ``` rufus cat 3 franklin turtle 1 norm dog 5 ``` Is it possible to "generate" these fake records, associating each value with the corresponding field, from within a query so that they are returned as the result of the query?
``` SELECT pet, animal_type, number_of_legs FROM table union select 'rufus', 'cat', 3 union select 'franklin', 'turtle', 1 union select 'norm', 'dog', 5 ``` This gives you the content of `table` plus the 3 records you want, avoiding duplicates, if duplicates are OK, then replace `union` with `union all` edit: per your comment, for tsql, you can do: ``` select top 110 'franklin', 'turtle', 1 from sysobjects a, sysobjects b -- this cross join gives n^2 records ``` Be sure to chose a table where n^2 is greater than the needed records or cross join again and again
I'm not entirely sure what you're trying to do, but MySQL is perfectly capable of selecting "mock" data and printing it in a table: ``` SELECT "Rufus" AS "Name", "Cat" as "Animal", "3" as "Number of Legs" UNION SELECT "Franklin", "Turtle", "1" UNION SELECT "Norm", "Dog", "5"; ``` Which would result in: ``` +----------+--------+----------------+ | Name | Animal | Number of Legs | +----------+--------+----------------+ | Rufus | Cat | 3 | | Franklin | Turtle | 1 | | Norm | Dog | 5 | +----------+--------+----------------+ ``` Doing this query this way prevents actually having to save information in a temporary table, but I'm not sure if it's the *correct* way of doing things.
Generating "Fake" Records Within A Query
[ "", "mysql", "sql", "" ]
I simply want to prevent users from running UPDATE statement without WHERE clause. I provided my PLSQL below ``` create or replace PROCEDURE secure_update(update_query IN varchar2) IS msg varchar2(30000); flag char(1); qry varchar2(30000); BEGIN IF upper(update_query) LIKE 'UPDATE%SET%WHERE%=%' THEN flag := '1'; ELSE flag := '0'; END IF; IF (flag = '1') THEN --qry := update_query; execute immediate update_query into msg; END IF; dbms_output.put_line(msg); END; ``` That's how I execute it ``` EXEC secure_update (' UPDATE dummy_table SET col1 = ''whatever'' WHERE pk = ''1234'' ') ``` I keep getting this message: > Invalid SQL type: sqlKind = UNINITIALIZED **Can you please help me find out how to overcome this error?**
This works, please see the changes, dont use the into clause in execute immediate ``` create or replace PROCEDURE secure_update(update_query IN varchar2) IS msg varchar2(30000); flag char(1); qry varchar2(30000); BEGIN IF upper(update_query) LIKE 'UPDATE%SET%WHERE%=%' THEN flag := '1'; dbms_output.put_line('updated succesfully'); ELSE flag := '0'; dbms_output.put_line('no where clause in update'); END IF; IF (flag = '1') THEN --qry := update_query; execute immediate update_query ; END IF; END; ``` if you want to use varchar in update then please see this ``` SCOTT@research 16-APR-15> select * from test2; A B ----- ----- a b code to execute procedure : declare lsql varchar2(100):= 'update test2 set a=''z'' where b=''b'' '; begin secure_update(lsql); end; output: updated succesfully SCOTT@research 16-APR-15> select * from test2; A B ----- ----- z b declare lsql varchar2(100):= 'update test2 set a=''z'''; begin secure_update(lsql); end; output no where clause in update ``` another example ``` SCOTT@research 16-APR-15> select * from test1; VAL1 VAL2 VAL3 ---------- ---------- ---------- 2 2 4 3 2 4 123 2 3 42 3 SCOTT@research 16-APR-15> exec secure_update('update test1 set val1=555 where val1=2'); updated succesfully PL/SQL procedure successfully completed. SCOTT@research 16-APR-15> select * from test1; VAL1 VAL2 VAL3 ---------- ---------- ---------- 555 2 4 3 2 4 123 2 3 42 3 SCOTT@research 16-APR-15> exec secure_update('update test1 set val1=555'); no where clause in update PL/SQL procedure successfully completed. ```
SQL Developer (and SQL\*Plus, but possibly not other clients!) needs the `exec` command to be on a single line. You didn't show the earlier errors you would have got from that invocation: ``` EXEC secure_update Error report - ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'SECURE_UPDATE' ORA-06550: line 1, column 7: PL/SQL: Statement ignored 06550. 00000 - "line %s, column %s:\n%s" *Cause: Usually a PL/SQL compilation error. *Action: Error starting at line : 25 in command - (' UPDATE dummy_table SET col1 = ''whatever'' WHERE pk = ''1234'' ') Error at Command Line : 25 Column : 2 Error report - SQL Error: Invalid SQL type: sqlKind = UNINITIALIZED ``` The procedure is being called with no arguments, because there are none on the same line as the actual `exec`, so you get a PLS-00306 from that line. Then the remainder is interpreted as a separate command, which gets the 'invalid SQL type' error you reported. (In SQL Developer, that is; in SQL\*Plus you'd get the same PLS-00306 for the first part, but `ORA-00928: missing SELECT keyword` for the rest). You can either move the whole statement into one line: ``` EXEC secure_update ('UPDATE dummy_table SET col1 = ''whatever'' WHERE pk = ''1234'''); ``` Or use an explicit anonymous block rather than the `exec` shorthand: ``` BEGIN secure_update ('UPDATE dummy_table SET col1 = ''whatever'' WHERE pk = ''1234'' '); dbms_output.put_line('after'); END; / ``` Note also that I've had to move the `UPDATE` up a line anyway, as your check doesn't allow for any whitespace (including a line break) at the start of the command. The `into msg` in your procedure isn't doing anything but doesn't seem to be causing a problem; if you want to see how many rows were updated use SQL%ROWCOUNT after the `execute immediate` instead.
Invalid SQL type: sqlKind = UNINITIALIZED - PLSQL Error
[ "", "sql", "oracle", "plsql", "" ]
Given the following table, ``` PersonID Year ---------- ---------- 1 1991 1 1992 1 1993 1 1993 2 1990 2 1991 3 1991 3 1992 3 1994 ``` Is there a way with a SQL select query to get the PersonID where it has at least 3 rows with consecutive years in the range 1990 to 1995? In this case, it should only find PersonID 1. Thanks for your help.
I think something like the following should work: ``` SELECT personID FROM ( SELECT CASE WHEN year - min(year) OVER (PARTITION BY PersonID ORDER BY YEAR ASC ROWS BETWEEN 2 PRECEDING AND 2 PRECEDING) = 2 THEN 'x' END as ConsecutiveFlag, PersonID FROM (SELECT personID, year FROM table GROUP BY personID,year) subquery WHERE year BETWEEN 1990 and 1995 ) t1 WHERE ConsecutiveFlag = 'x' GROUP BY personID ``` That uses a Windowing Function to order by date and look two rows back. It compares the current year to the value from two rows previous and if the difference is 2 then we know we have consecutive years. The outer query filters records that are flagged and returns the personid.
Try: ``` select a.PersonID from Table a left join Table b on a.Personid = b.Personid and a.year = b.year + 1 left join Table c on a.Personid = c.Personid and b.year = c.year + 1 where b.year is not null and c.year is not null and a.year <= 1995 and c.year >= 1990 ```
Determine if field values meet minimum consecutive years within a year range
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I need some hints.. I got a table which looks like that: ``` id fruit delicious 1 apple 1 2 apple 0 3 apple 1 4 cherry 1 5 cherry 1 6 cherry 1 7 banana 0 8 banana 0 9 banana 0 ``` For this table I want to create a statement which will only return the fruits which are completely delicious but not the fruits which are partially delicious. In this example only "cherry" should be returned. Currently I just got this statement: ``` SELECT * FROM fruits WHERE delicious <> 0 ``` but this would also return apple and banana. I'm a bit clueless how I could archive that and looking forward to some hints. **EDIT: Thank you very much for your replys. Unfortunately the flag "delicious" is a char and not an integer**
Group by the `fruit` and take only those having zero records with `delicious = 0` ``` SELECT fruit FROM fruits GROUP BY fruit HAVING sum(case when delicious = 0 then 1 else 0 end) = 0 ```
Just look for those that have a min delicious value of 1 ``` SELECT Id, Fruit FROM fruits GROUP BY Id, Fruit HAVING min(delicious) = 1 ```
SQL select only rows which have same column value
[ "", "mysql", "sql", "db2", "" ]
I have a table that looks like this: ``` RN ID1 ID2 Name Source 1 W76544 945297 1_W_HO HO 2 W76544 945297 1_W_INT Int 1 W76547 945299 3_W_HO HO 2 W76547 945678 3_W_INT Int 1 W76561 NULL Dev_U_W AD 2 W76561 207283 Dev_W_HO HO 3 W76561 207283 Dev_W_INT Int 1 W76562 207284 Dev_R_HO HO 2 W76562 207284 Dev_R_INT Int 3 W76562 NULL Dev_U_R AD 1 W76563 NULL Prd_U_W AD 2 W76563 NULL Prd_W_HO HO 3 W76563 NULL Prd_W_INT Int ``` I am trying to figure our how to determine exact matches between sets of ID1 and ID2. For instance, this example is an exact match: ``` RN ID1 ID2 Name Source 1 *W76544 945297* 1_W_HO HO 2 *W76544 945297* 1_W_INT Int ``` I would like the results to look like this: ``` RN ID1 ID2 Matched Name Source 1 W76544 945297 Yes 1_W_HO HO 2 W76544 945297 Yes 1_W_INT Int 1 W76547 945299 No 3_W_HO HO 2 W76547 945678 No 3_W_INT Int 1 W76561 NULL No Dev_U_W AD 2 W76561 207283 No Dev_W_HO HO 3 W76561 207283 No Dev_W_INT Int 1 W76562 207284 No Dev_R_HO HO 2 W76562 207284 No Dev_R_INT Int 3 W76562 NULL No Dev_U_R AD 1 W76563 NULL Empty Prd_U_W AD 2 W76563 NULL Empty Prd_W_HO HO 3 W76563 NULL Empty Prd_W_INT Int ``` To claify... Match = 'Yes' when all groups with the same ID1 match up to groups that all have the same ID2. Match = 'No' when Groups with the same ID1 either don't all match up to a like ID2 or some ID1's match but other's in the set do not match to an ID2 at all. Match = 'Empty' when all groups with the same ID1 don't match up to an ID2 at all. p.s. RN is row\_number partitioned by and ordered by ID1 Thanks!!! --- @BaconBits , this seems to fall through your query: ``` 1 W10151820 NULL No DEV_U_W AD 2 W10151820 212405 Yes DEV_W_HO HO 3 W10151820 212405 Yes DEV_W_INTL Int ```
Try with this: ``` Create table t(id int, c char(1)) Insert into t values (1, 'a'), (1, 'a'), (2, 'b'), (2, null), (3, null), (3, null), (4, 'c'), (4, 'd') ;with cte as( select id, count(*) c1, count(c) c2, count(distinct c) c3 from t group by id) select t.id, t.c, ca.m from t Cross apply(select case when c2 = 0 and c3 = 0 then 'empty' when c1 = c2 and c3 = 1 then 'yes' else 'no' end as m from cte where cte.id = t.id) ca ``` Output: ``` id c m 1 a yes 1 a yes 2 b no 2 (null) no 3 (null) empty 3 (null) empty 4 c no 4 d no ```
Can you do something like this: ``` SELECT m.RN, m.ID1, m.ID2, CASE WHEN NOT EXISTS (SELECT 1 FROM MyTable WHERE ID1 = m.ID1 AND ID2 IS NOT NULL) THEN 'Empty' ELSE CASE WHEN COUNT(*) OVER(PARTITION BY m.ID1, m.ID2) > 1 THEN 'Yes' ELSE 'No' END END "Matched", m.Name, m.Source FROM MyTable m ``` Ugly as sin, but I think that should work.
SQL: Compare Columns for Exact Match of sets
[ "", "sql", "sql-server", "t-sql", "analytics", "ssms", "" ]
This query gives me the desired result but i can't run this query every time.The 2 loops is costing me.So i need to implement something like view.But the logic has temp tables involved which isn't allowed in views as well.so, is there any other way to store this result or change the query so that it will cost me less. ``` DECLARE @Temp TABLE ( [SiteID] VARCHAR(100) ,[StructureID] INT ,[row] DECIMAL(4, 2) ,[col] DECIMAL(4, 2) ) DECLARE @siteID VARCHAR(100) ,@structureID INT ,@struct_row INT ,@struct_col INT ,@rows_count INT ,@cols_count INT ,@row INT ,@col INT DECLARE structure_cursor CURSOR FOR SELECT StructureID ,SiteID ,Cols / 8.5 AS Cols ,Rows / 11 AS Rows FROM Structure WHERE SellerID = 658 --AND StructureID = 55 OPEN structure_cursor FETCH NEXT FROM structure_cursor INTO @structureID ,@siteID ,@struct_col ,@struct_row SELECT @rows_count = 1 ,@cols_count = 1 ,@row = 1 ,@col = 1 WHILE @@FETCH_STATUS = 0 BEGIN WHILE @row <= @struct_row BEGIN WHILE @col <= @struct_col BEGIN --PRINT 'MEssage'; INSERT INTO @Temp ( SiteID ,StructureID ,row ,col ) VALUES ( @siteID ,@structureID ,@rows_count ,@cols_count ) SET @cols_count = @cols_count + 1; SET @col = @col + 1; END SET @cols_count = 1; SET @col = 1; SET @rows_count = @rows_count + 1; SET @row = @row + 1; END SET @row = 1; SET @col = 1; SET @rows_count = 1; FETCH NEXT FROM structure_cursor INTO @structureID ,@siteID ,@struct_col ,@struct_row END CLOSE structure_cursor; DEALLOCATE structure_cursor; SELECT * FROM @Temp ``` ![Image 1](https://i.stack.imgur.com/JdELS.png) ![Image 2](https://i.stack.imgur.com/nNgch.png)
You can generate the number of rows and columns and then CROSS APPLY with those, like below. I've left out your SellerID condition. ``` ;WITH Cols AS ( SELECT StructureID, SiteID, CAST(Cols / 8.5 AS INT) AS Col FROM Structure UNION ALL SELECT s.StructureID, s.SiteID, Col - 1 FROM Structure s INNER JOIN Cols c ON s.StructureID = c.StructureID AND s.SiteID = c.SiteID WHERE Col > 1 ) , Rows AS ( SELECT StructureID, SiteID, CAST(Rows / 11 AS INT) AS Row FROM Structure UNION ALL SELECT s.StructureID, s.SiteID, Row - 1 FROM Structure s INNER JOIN Rows r ON s.StructureID = r.StructureID AND s.SiteID = r.SiteID WHERE Row > 1 ) --INSERT INTO @Temp (SiteID, StructureID, row, col) SELECT s.SiteID, s.StructureID, r.Row, c.Col FROM Structure s CROSS APPLY Cols c CROSS APPLY Rows r WHERE s.StructureID = c.StructureID AND s.SiteID = c.SiteID AND s.StructureID = r.StructureID AND s.SiteID = r.SiteID ```
Do this with a set-based operation. I think you just want `insert . . . select`: ``` INSERT INTO @Temp (SiteID, StructureID, row, col) SELECT StructureID, SiteID, Cols / 8.5 AS Cols, Rows / 11 AS Rows FROM Structure WHERE SellerID = 658; ``` You should avoid cursors, unless you really need them for some reason (such as calling a stored procedure or using dynamic SQL on each row). EDIT: Reading the logic, it looks like you want to insert rows for based on the limits in each row. You still don't want to use a cursor. For that, you need a number generator and `master..spt_values` is a convenient one, if it has enough rows. So: ``` with n as ( select row_number() over (order by (select null)) as n from master..spt_values ) INSERT INTO @Temp (SiteID, StructureID, row, col) SELECT StructureID, SiteID, ncol.n / 8.5 AS Cols, nrow.n / 11 AS Rows FROM Structure s JOIN n ncol ON ncol.n <= s.struct_col CROSS JOIN n nrow ON nrow <= s.struct_row WHERE SellerID = 658; ```
How to tune the following query?
[ "", "sql", "performance", "sql-server-2008", "sql-tuning", "query-tuning", "" ]
I have table with such structure ``` CREATE TABLE UsersHistory ( Id INT IDENTITY, UserID INT, StatusId INT, CreateTime DATETIME, ChangedTime DATETIME ) INSERT INTO UsersHistory(UserID, StatusId, CreateTime, ChangedTime) SELECT 1,1,'20150414','20150414' UNION ALL SELECT 1,2,'20150414','20150415' UNION ALL SELECT 1,3,'20150414','20150416' UNION ALL SELECT 2,1,'20150413','20150413' UNION ALL SELECT 2,3,'20150413','20150416' ``` and query ``` ;WITH k AS ( SELECT uh.UserID,MAX(uh.ChangedTime) AS Dt FROM UsersHistory AS uh WHERE uh.ChangedTime<'20150416' GROUP BY uh.UserID ) SELECT k.UserID,uh.StatusId FROM k INNER JOIN UsersHistory AS uh ON k.UserID = uh.UserID AND k.Dt = uh.ChangedTime ``` Query is too easy and not needs more explanation. I want to simplify my it. (Remove join with datetime type column). Any Suggestion?
You can use `ROW_NUMBER()` with `PARTITION` to achieve this. Something like this ``` ;WITH CTE as ( SELECT UserID, StatusId, CreateTime, ChangedTime,ROW_NUMBER()OVER(PARTITION BY UserID ORDER BY ChangedTime DESC) r FROM UsersHistory WHERE ChangedTime < '20150416' ) SELECT UserID, StatusId FROM CTE WHERE r = 1 ```
In SQL Server 2012+, you can use `first_value()`: ``` SELECT uh.UserID, FIRST_VALUE(uh.StatusId) OVER (PARTITION BY uh.UserId ORDER BY ChangedTIme DESC) MAX(uh.ChangedTime) AS Dt FROM UsersHistory AS uh WHERE uh.ChangedTime < '20150416'; ```
Simplify SQL select statement
[ "", "sql", "t-sql", "" ]
I have looked through many of the questions already asked on here but have not yet found one that I can adapt to my own situation. I currently have a table filled with values that are used for dropdowns on various webpages I have assembled. I am attempting to add support for SMS messaging. To do this I have added a column to my table of dropdown values called SMSID. This value needs to be an integer from 0-999 unique to a specific dropdown. So for instance, I have an html select with multiple options, I want each option to have a uniqueID from 0-999 alphabetically, loaded from the database. Is there a way to use an update statement to go through every record and add a uniqueID for each dropdown option? I.E. Table example: ``` FoodType | ingredients | smsID Pizza | pepperoni | 003 Pizza | onions | 002 Pizza | mushrooms | 000 Pizza | olives | 001 Sandwich | ham | 000 Sandwich | roast beef | 001 Sandwich | turkey | 002 ```
This should do it: ``` SELECT FoodType, Ingredients, ROW_NUMBER() OVER (PARTITION BY FoodType ORDER BY FoodType) AS smsID FROM dbo.Dropdowns ```
The CTE update method is much faster than subqueries or self-joins [**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/347fc/2) ``` ;WITH t(old,new) AS ( SELECT smsID ,REPLACE(STR(ROW_NUMBER() OVER(PARTITION BY FoodType ORDER BY ingredients)-1,3),' ','0') FROM MyTable ) UPDATE t SET old = new ```
Increment integer MSSQL
[ "", "sql", "sql-server", "" ]
How to update a row to replace percent character. ``` UPDATE MPOLITICAS SET NOM_POLITICA = REPLACE(NOM_POLITICA, '%' , 'PERCENT '); ``` The row content is `NG DEBT 100% - 2014` but should be `NG DEBT 100 PERCENT - 2014`
Replace function should work fine for this: ``` UPDATE MPOLITICAS SET NOM_POLITICA = REPLACE(NOM_POLITICA, '%' , ' PERCENT'); ``` <http://sqlfiddle.com/#!4/cc2c4/1>
Assuming your locale (NLS settings) have ASCII 37 as the % character: ``` UPDATE MPOLITICAS SET NOM_POLITICA = REPLACE(NOM_POLITICA, CHR(37) , ' '); ```
How to replace percent (%) character
[ "", "sql", "oracle", "replace", "sql-update", "" ]
Let's say you have a model for Items with an attribute of Votes. Here might be some sample data: ``` Item, Votes ----------- Red 4 Blue 5 Black 5 Green 4 Cyan 5 Yellow 4 Orange 4 ``` I want a scope that first sorts it by Votes: ``` Item, Votes ----------- Orange 4 Green 4 Yellow 4 Red 4 Blue 5 Black 5 Cyan 5 ``` And then randomizes it within each vote cohort: ``` Item, Votes ----------- Green 4 Red 4 Orange 4 Yellow 4 Black 5 Blue 5 Cyan 5 ``` And then limits it to the first two results. I've tried this: ``` scope :options, -> { order('random()').order("votes asc").limit 2 } ``` and this: ``` scope :options, -> { order("votes asc").order('random()').limit 2 } ``` But neither works as expected. The first version seems completely random and the second version is not random at all.
In the item model I made this method: ``` def rand_value self.votes + Random.rand end ``` And then in the controller I am using this query: ``` @items = @experiment.items.sort_by { |item| [item.rand_value] } ``` This seems to be working like I wanted. The first items on the list are always the ones with the least votes, but within each cohort they are randomized. The only disadvantage of this is that it's not limited to two results.
Depending on exactly what you want, you have two options. If you want the first two in the complete result (highlighted below), your second solution works. If however, you want the first row of each group, you'll need to modify your query somewhat (as shown below:) --- **First rows of complete result:** ``` scope :options, -> { order('votes ASC, random()').limit(2) } ``` will select the following rows: ``` Item, Votes ----------- Green 4 * Red 4 * Orange 4 Yellow 4 Black 5 Blue 5 Cyan 5 ``` --- **First row of each group:** ``` scope :options, -> { select('DISTINCT ON (votes) *').order('votes ASC, random()').limit(2) } ``` will select the following rows: ``` Item, Votes ----------- Green 4 * Red 4 Orange 4 Yellow 4 Black 5 * Blue 5 Cyan 5 ```
Sort database query by column and then randomly
[ "", "sql", "ruby-on-rails", "postgresql", "" ]
I am working with a database with a structure similar to the illustration below (except with more columns). Basically, each person has a unique person\_id and alt\_id. However, the only thing connecting table A to table C is table B, and table B has one to many rows for each person/alt\_id. I need to get rows with a person\_id, their alt id and their associated shapes. I could do this: ``` SELECT DISTINCT a.person_id, a.color, b.alt_id, c.shape FROM a JOIN b ON a.person_id = b.person_id JOIN c ON b.alt_id = c.alt_id ``` However, that seems inefficient as it will take a Cartesian product of rows from B and C with the same alt\_id before finally using DISTINCT to narrow the results down. What's the best/most efficient way to do this query? ``` Table A +-----------+-------+ | person_id | color | +-----------+-------+ | 10 | red | | 11 | blue | | 12 | green | +-----------+-------+ Table B +-----------+--------+ | person_id | alt_id | +-----------+--------+ | 10 | 225 | | 10 | 225 | | 11 | 226 | | 11 | 226 | | 11 | 226 | | 12 | 227 | +-----------+--------+ Table C +--------+----------+ | alt_id | shape | +--------+----------+ | 225 | square | | 226 | circle | | 226 | rhombus | | 226 | ellipse | | 227 | triangle | +--------+----------+ ```
Join to `(select distinct * from b) b` rather than just the base table `b`. ``` SELECT a.person_id, a.color, b.alt_id, c.shape FROM a INNER JOIN (select distinct * from b) b ON a.person_id = b.person_id INNER JOIN c ON b.alt_id = c.alt_id ```
You could use aggregation along with a common table expression (or subquery, but a CTE might be neater): ``` WITH ab AS ( SELECT a.person_id, a.color, MAX(b.alt_id) AS alt_id FROM a INNER JOIN b ON a.person_id = b.person_id GROUP BY a.person_id, a.color ) SELECT ab.person_id, ab.color, ab.alt_id, c.shape FROM ab INNER JOIN c ON ab.alt_id = c.alt_id; ```
Three table join where middle table has duplicate foreign keys
[ "", "sql", "oracle", "join", "" ]
I have two tables. * Purchase Orders(PURCHASE\_ORDER) * Purchase Order Lines(PURC\_ORDER\_LINE) I need the price and currency for a specific part and I also need the row returned to be the latest record in the database. I figure since I want the latest record I need to select the max(date) the only issue is that since I need the 'group by', my results get grouped and it doesn't just return the single row with the latest date and it's price and currency. ``` SELECT MAX(PURC_ORDER_LINE.LAST_RECEIVED_DATE) AS 'Latest Order', UNIT_PRICE, PURCHASE_ORDER.CURRENCY_ID FROM PURC_ORDER_LINE LEFT JOIN PURCHASE_ORDER ON PURC_ORDER_LINE.PURC_ORDER_ID = PURCHASE_ORDER.ID WHERE PART_ID = 'some part' GROUP BY UNIT_PRICE, PURCHASE_ORDER.CURRENCY_ID ``` Results: ``` Latest Order UNIT_PRICE CURRENCY_ID 2015-01-20 20 CHF 2015-04-06 30 CHF 2012-02-23 40 USD 2012-02-17 50 USD 2011-12-08 20 USD ``` What I'm trying to get ``` UNIT_PRICE CURRENCY_ID 30 CHF ```
Maybe something like this: (see also <https://msdn.microsoft.com/ru-ru/library/ms189463.aspx>) ``` SELECT TOP(1) PURC_ORDER_LINE.LAST_RECEIVED_DATE, UNIT_PRICE, PURCHASE_ORDER.CURRENCY_ID FROM PURC_ORDER_LINE LEFT JOIN PURCHASE_ORDER ON PURC_ORDER_LINE.PURC_ORDER_ID = PURCHASE_ORDER.ID WHERE PART_ID = 'some part' ORDER BY PURC_ORDER_LINE.LAST_RECEIVED_DATE DESC ```
Why use GROUP BY when you only need TOP 1 and ORDER BY? ``` SELECT TOP 1 LAST_RECEIVED_DATE, UNIT_PRICE, CURRENCY_ID FROM PURC_ORDER_LINE LEFT JOIN PURCHASE_ORDER ON(PURC_ORDER_LINE.PURC_ORDER_ID = PURCHASE_ORDER.ID) WHERE PART_ID = 'some part' ORDER BY LAST_RECEIVED_DATE DESC ```
SQL Select line with most recent date
[ "", "sql", "sql-server-2008", "" ]
I have a table like so ``` Fld1 Fld2 Fld3 ------------ 0 1234 ABC 0 1235 DEF 1 1236 GHI 2 1236 JKL 3 1236 MNO 4 1237 PQR 5 1237 STU 6 1237 VWX ``` Note that neither column is unique. There may be many rows with Fld1 = 0, but for all other values Fld1 will be unique and there may be many rows with the same value for Fld2. I need to select a single row for each value of Fld2 with the highest value in Fld 1. So the result based on the above data would be ``` Fld1 Fld2 Fl4 ------------------ 0 1234 ABC 0 1235 DEF 3 1236 MNO 6 1237 VWX ```
An alternative to `GROUP BY` would be to use a windowing function like [`row_number()`](https://msdn.microsoft.com/en-us/library/ms186734.aspx) to get the result. This function creates a unique number by partitioning your data by `Fld2` and ordering it by `Fld1 desc`: ``` select Fld1, Fld2, Fld3 from ( select Fld1, Fld2, Fld3, rn = row_number() over(partition by fld2 order by fld1 desc) from yourtable ) d where rn = 1; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!6/ea3a5/3). There are times using `row_number` will be easier when you have additional columns that are unique. This gives a result: ``` | Fld1 | Fld2 | Fld3 | |------|------|------| | 0 | 1234 | ABC | | 0 | 1235 | DEF | | 3 | 1236 | MNO | | 6 | 1237 | VWX | ```
Use `group by`: ``` select max(fld1), fld2 from table t group by fld2; ```
Selecting single row with no unique key
[ "", "sql", "sql-server", "" ]
I am struggling with a SQL function for a report I am writing. The function polls an audit table for the original value of a particular field (SecondarySchoolCode). If it finds a value in the audit table for the current row it should return the value, if there is no value for the current row then the original parameter I supplied to the function should be returned instead. I have tried using a Case statement but it is not returning the parameter back if there is no match in the audit table. Any suggestions on how to accomplish this? ``` ALTER FUNCTION [dbo].[fn_AuditOriginalHSAttendingCode] ( @StudentID VARCHAR(255), @SecondarySchoolCode VARCHAR(255), @ColumnName VARCHAR(255) ) RETURNS VARCHAR(255) AS BEGIN DECLARE @Result AS VARCHAR(255); RETURN (SELECT TOP (1) CASE WHEN @ColumnName <> 'SecondarySchoolCode' THEN @SecondarySchoolCode ELSE dbo.GDSAuditDetail.ValueBeforeChange END FROM dbo.GDSAuditDetail INNER JOIN dbo.StudentSchool INNER JOIN dbo.Student ON dbo.StudentSchool.StudentId = dbo.Student.ID INNER JOIN dbo.SecondarySchool ON dbo.StudentSchool.SecondarySchoolId = dbo.SecondarySchool.ID INNER JOIN dbo.GDSAudit ON dbo.Student.ID = dbo.GDSAudit.EntityId ON dbo.GDSAuditDetail.GDSAuditId = dbo.GDSAudit.ID WHERE (dbo.Student.ID = @studentID) and dbo.GDSAuditDetail.GDSColumn='SecondarySchoolCode' ORDER BY dbo.GDSAudit.InsertedDate ASC) ``` The call to the function looks like this: ``` dbo.fn_AuditOriginalHSAttendingCode(dbo.Student.ID , dbo.SecondarySchool.SecondarySchoolCode , dbo.GDSAuditDetail.GDSColumn) ```
I'm not sure what all of your relationships are, but based on your stated requirements and your current query, you are joining to some tables you don't need. The reason you don't get anything back when the audit doesn't exist is because there are no rows from which to select a `TOP 1`. This should always return at least one row for an existing Student. ``` SELECT TOP (1) ISNULL(dbo.GDSAuditDetail.ValueBeforeChange, @SecondarySchoolCode) FROM dbo.Student LEFT JOIN dbo.GDSAudit ON dbo.Student.ID = dbo.GDSAudit.EntityId LEFT JOIN dbo.GDSAuditDetail ON dbo.GDSAuditDetail.GDSAuditId = dbo.GDSAudit.ID AND dbo.GDSAuditDetail.GDSColumn='SecondarySchoolCode' WHERE (dbo.Student.ID = @studentID) ORDER BY dbo.GDSAudit.InsertedDate ASC ```
Biscuits provided the easiest to implement solution: I mean, something like ISNULL(dbo.fn\_AuditOriginalHSAttendingCode(...), 'SecondarySchoolCode') – Biscuits 59 mins ago
Multiple SQL or Case statements in Scalar function
[ "", "sql", "sql-server", "function", "" ]
I have a table of people records like this: ``` Tableid PeopleID DeptID StartMonth StartYear -------------------------------------------- 1 101 1 3 2014 2 101 2 2 2015 3 102 2 5 2015 4 102 5 4 2013 5 103 6 8 2015 6 103 7 9 2015 ``` I want only the lastest workhistory for each `peopleid` with the corresponding `tableid`, meaning I want to have the following output: ``` tableid peopleid 2 101 3 102 6 103 ``` Here I require only `tableid` and `peopleid` in output.
Use `row_number()`: ``` select t.* from (select t.*, row_number() over (partition by peopleid order by startyear desc, startmonth desc ) as seqnum from table t ) t where seqnum = 1; ```
You can use `ROW_NUMBER()` with `PARTITION BY`. Something like this. ``` CREATE TABLE People ( Tableid INT, PeopleID INT, DeptID INT, StartMonth INT, StartYear INT ) INSERT INTO People SELECT 1, 101, 1, 3, 2014 UNION ALL SELECT 2, 101, 2, 2, 2015 UNION ALL SELECT 3, 102, 2, 5, 2015 UNION ALL SELECT 4, 102, 5, 4, 2013 UNION ALL SELECT 5, 103, 6, 8, 2015 UNION ALL SELECT 6, 103, 7, 9, 2015 ;WITH CTE as ( SELECT Tableid,PeopleID,ROW_NUMBER()OVER(PARTITION BY PeopleID ORDER BY StartYear DESC,StartMonth DESC) r FROM People ) SELECT Tableid, PeopleID FROM CTE WHERE r = 1 ```
SQL Server Query output
[ "", "sql", "sql-server", "" ]
as you can see I am working in SQL Developer and trying to get some data out of my tables. I'm trying to get out the item with its biggest price, but don't know how to manage it. For price I was trying SELECT MAX(price) FROM price but I get return 95€, when the biggest price is 350€. Can someone help me, that I Will get the biggest price (from table Price) and the name of item (from table Item) ? EDIT: Here are my tables, if that helps you ``` INSERT INTO ITEM VALUES (1, 'Item one', 1); INSERT INTO ITEM VALUES (2, 'Item two', 1); INSERT INTO ITEM VALUES (3, 'Item three', 2); INSERT INTO ITEM VALUES (4, 'Item four', 2); INSERT INTO ITEM VALUES (5, 'Item five', 3); INSERT INTO ITEM VALUES (6, 'Item six', 3); /* CREATE TABLE ITEM( ID INTEGER NOT NULL, NAME VARCHAR2(30) NOT NULL, TK_ORG INTEGER NOT NULL ); */ INSERT INTO PRICE VALUES (1, 'normal', '50 €', 1); INSERT INTO PRICE VALUES (2, 'special offer', '45 €', 1); INSERT INTO PRICE VALUES (3, 'normal', '80 €', 2); INSERT INTO PRICE VALUES (4, 'special offer', '150 €', 2); INSERT INTO PRICE VALUES (5, 'normal', '40 €', 3); INSERT INTO PRICE VALUES (6, 'special offer', '25 €', 3); INSERT INTO PRICE VALUES (7, 'normal', '70 €', 4); INSERT INTO PRICE VALUES (8, 'special offer', '45 €', 4); INSERT INTO PRICE VALUES (9, 'normal', '95 €', 5); INSERT INTO PRICE VALUES (10, 'special offer', '320 €', 5); /* CREATE TABLE PRICE( ID INTEGER NOT NULL, PRICE_TYPE VARCHAR2(50) NOT NULL, PRICE VARCHAR2(10) NOT NULL, TK_ITEM INTEGER NOT NULL ); */ ```
First, change the column type to integer. Right now, I assume, it is VARCHAR or TEXT. As it sorts by the first symbol, of course 9 > 3, that's why you get the weird results. Remove the € from the row's content and move it to another column.
You would seem to be storing the price as a string and not as a number. In MySQL, you can use silent conversion to get what you want: ``` SELECT MAX(price + 0) FROM price; ``` In Oracle, for this example, you can just do: ``` SELECT MAX(to_number(replace(price, '€', ''))) FROM price; ``` although a regular expression would be more flexible if you had more currencies.
How to select item with the biggest price?
[ "", "mysql", "sql", "oracle", "" ]
How can I accomplish something like this in a query. If I have the total amount of 673000 that is passed as a parameter in my stored proc then, I need to do this: ``` Declare @TotalAmount money @TotalAmount = 673000 Col1 Col2 Col3 Test1 45 672955 --(I want to subtract 673000 from Col2 data) 673000-45 Test2 30 672925 --(I want to subtract 30 from the remaining amount of col3) Test3 100 672825 --(I want to subtract 100 from the remaining amount of col3) ``` any help will be appreciated. I am using sql server 2012
Assuming you want to `ORDER` based on `Col1`, in SQL Server 2012 and above you can do something like this ``` DECLARE @TotalAmount money SET @TotalAmount = 673000 ;WITH CTE as ( SELECT 'Test1' col1 ,45 col2 UNION ALL SELECT 'Test2' ,30 UNION ALL SELECT 'Test3' ,100 ) SELECT col1,col2,@TotalAmount - SUM(col2)OVER(ORDER BY col1 ) col3 FROM CTE ORDER BY col1; ``` For any version prior to SQL Server 2012, you can use a correlated query like this ``` DECLARE @TotalAmount MONEY SET @TotalAmount = 673000 ;WITH CTE as ( SELECT 'Test1' col1 ,45 col2 UNION ALL SELECT 'Test2' ,30 UNION ALL SELECT 'Test3' ,100 ) SELECT col1,col2,(SELECT @TotalAmount - SUM(col2) FROM CTE c2 WHERE c2.col1 <= c1.col1 ) col3 FROM CTE c1 ORDER BY col1; ```
``` DECLARE @amount decimal DECLARE db_cursor CURSOR FOR SELECT amount FROM TABLE OPEN db_cursor FETCH NEXT FROM db_cursor INTO @amount WHILE @@FETCH_STATUS = 0 BEGIN @TotalAmount = @TotalAmount - @amount FETCH NEXT FROM db_cursor INTO @amount END CLOSE db_cursor DEALLOCATE db_cursor ```
subtracting the current row remaining balance with previous row balance
[ "", "sql", "sql-server", "" ]
I have a MySQL query with this line ``` ORDER BY -order_line_groups.sequence DESC, order_line_groups.id; ``` I'm trying to find out what the hyphen is for just after the order by. I have tried executing with and without it but it does not seem to make a difference. I have also looked at it with explain and cant find anything different. I was not able to find any MySQL documentation to help. I did find this: <https://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html> where the hyphen is listed under "You use ORDER BY with an expression that includes terms other than the key column name:" It does not give much info. Can someone explain what this does or point me towards some docs? Thank you.
The answer is in the question: > I did find this: <https://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html> where the hyphen is listed under *"You use ORDER BY with an **expression**..."* The keyword here is **"expression"**. You can use expressions in the `ORDER BY` clause. ``` ORDER BY -order_line_groups.sequence DESC ``` is an arithmetical expression and the "hyphen" is not a hyphen but a minus; an unary minus mathematical operator. ### How it works? Regarding the outcome, it works as expected: * If the type of column `order_line_groups.sequence` is a [numeric type](http://dev.mysql.com/doc/refman/5.7/en/numeric-types.html) or a [date & time type](http://dev.mysql.com/doc/refman/5.7/en/date-and-time-types.html) (they are internally stored as numbers too) then the minus reverses the order of the returned rows. The clause is equivalent with ``` ORDER BY order_line_groups.sequence ASC ``` * If the type of the column is a [string type](http://dev.mysql.com/doc/refman/5.7/en/string-types.html) then MySQL attempts to [convert the values](http://dev.mysql.com/doc/refman/5.7/en/type-conversion.html) of the column to numbers (because the subtraction is an arithmetic operation and it is not defined for strings). When MySQL converts strings to numbers it stops on the first character that is not a digit. Depending on the data you store in column `order_line_groups.sequence`, more or less of its string values will convert to number `0`. Regarding the speed, as explained in the documentation page provided in the question MySQL cannot use indexes to resolve the `ORDER BY`. This makes the query slow. ### Conclusion Using `ORDER BY -order_line_groups.sequence` makes the query slow. More, it does not provide a good way to sort the result set when the type of column `sequence` is a string type.
you are negating the value of column 'sequence' and using it in order by .. ``` order by -(number) ``` ordering by negation of a number.
hyphen in MySQL order by clause
[ "", "mysql", "sql", "" ]
I am new to Sql server, I tried datediff several times but failed. All I need is the exact or precise number of minutes including fraction between two dates. E.g if the gap between two datetime fields is 40 seconds than it gives me 0.67 minutes and not 1 minute. Please help.
Take the difference in seconds and divide by 60.0: ``` select datediff(second, date1, date2) / 60.0 as diff_in_minutes ``` This should be close enough for most work. If you really want, you could use milliseconds instead of seconds.
``` select datediff(millisecond, date1, date2) / 60000.0 as diff_in_minutes ```
How can I find exact number of minutes without rounding between two dates in sql server
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
I want to get the unique values from two small lists and use the result to select those from a big list. However, only the first result is returned. What am I doing wrong? Query: ``` SELECT DISTINCT b.keys, b.names FROM biglist b WHERE b.keys = (SELECT keys FROM partial_list_1 UNION SELECT keys FROM partial_list_2) ``` Query with a normal list for testing, this works as expected: ``` SELECT DISTINCT b.keys, b.names FROM biglist b, smalllist s WHERE b.keys = s.keys ``` **//EDIT**: I'm using SpatiaLite.
It is generally much more efficient to represent this as two different operations using `or`: ``` SELECT DISTINCT b.keys, b.names FROM biglist b WHERE b.keys IN (SELECT keys FROM partial_list_1) OR b.keys IN (SELECT keys FROM partial_list_2); ``` These can take advantage of indexes on `partial_list_1(keys)` and `partial_list_2(keys)` -- both of which are recommended. (Some databases might need this expressed as `exists` to take advantage of the index.)
You should use `IN`: ``` SELECT DISTINCT b.keys, b.names FROM biglist b WHERE b.keys IN (SELECT keys FROM partial_list_1 UNION SELECT keys FROM partial_list_2) ```
SQL select from union only returns the first match
[ "", "sql", "select", "union", "" ]
I have this data. ![enter image description here](https://i.stack.imgur.com/SDrJK.png) this is my query ``` SELECT transaction_date FROM tenant1.txn_transaction_record where '2015-04-14' ``` The said query renders empty result. Is it possible to query timestamp field using only date?
When you do: ``` transaction_date = '2015-04-14' ``` PG will convert string '2015-04-14' to timestamp '2015-04-14 00:00:00' value. If you do: ``` transaction_date::date = '2015-04-14' ``` PG will convert both values to date (wich is only date part, without time part), and it'll work. **BUT... BE CAREFUL WHEN CASTING COLUMNS IN WHERE CLAUSE, because PG will not be able to take advantage of an index that contains that column, unless you've created the index with same cast on the column.** If you create only this index: ``` create index i_foo_1 on foo ( timestamp_field ); ``` This query WILL NOT use that index: ``` select * from foo where timestamp_field::date = '2015-04-15'; ``` So, or you'll need to create an aditional index: ``` create index i_foo_2 on foo ( timestamp_field::date ); ``` Or you'll have to change your original "where clause": ``` select * from foo where timestamp_field >= ('2015-04-15'::timestamp) and timestamp_field < (('2015-04-15'::date)+1)::timestamp; ```
No, If you do like `transaction_date = '2015-04-14'` It will automatically search for `transaction_date = '2015-04-14T00:00:00'` So you wont yield any result. Therefore if you want to search the date try `transaction_date::date = '2015-04-14'` So the final query is, ``` SELECT transaction_date FROM tenant1.txn_transaction_record where transaction_date::date = '2015-04-14' ```
Postgresql querying timestamp data with date only
[ "", "sql", "postgresql", "" ]
I'm struggling a little bit with syntax. I'm trying to have the following line of code, with the upper boundary determined by a hard value and a sub-query or two sub-queries, picking the minimum value of the result. ``` AND ELIGTY_MO_YYYYMM BETWEEN '201407' AND '201412' ``` with the intended change being: ``` AND ELIGTY_MO_YYYYMM BETWEEN '201407' AND min('201412', select max(contract boundary) from table blah) ``` or replacing the `'201412'` with another sub-query. I'm hoping this is a simple question. Any insights would be greatly appreciated =)
A couple of things... 1. When comparing two values, use `LEAST` and `GREATEST`. `MIN` and `MAX` are for aggregating only. 2. When using a subquery as a value in a column or expression, enclose the subquery in parentheses. ``` AND ELIGTY_MO_YYYYMM BETWEEN '201407' AND LEAST('201412, (SELECT MAX(whatever) FROM table)) ```
Perhaps you are looking to use `least` instead. Very generic example: ``` select * from yourtable where someval between 0 and least((select min(someval) from yourtable), (select max(someval) from yourtable)) ``` * [Condensed Fiddle](http://sqlfiddle.com/#!4/76c62/5)
Min Value Oracle with results of subqueries
[ "", "sql", "oracle", "" ]
I have an app that has the built in initial Select option and only allows me to enter from the Where section. I have rows with duplicate values. I'm trying to get the list of just one record for each distinct value but am unsure how to get the statement to work. I've found one that almost does the trick but it doesn't give me any rows that had a dup. I assume due to the = so just need a way to get one for each that matches my where criteria. Examples below. Initial Data Set ``` Date | Name | ANI | CallIndex | Duration --------------------------------------------------------- 2/2/2015 | John | 5555051000 | 00000.0001 | 60 2/2/2015 | John | | 00000.0001 | 70 3/1/2015 | Jim | 5555051001 | 00000.0012 | 80 3/4/2015 | Susan | | 00000.0022 | 90 3/4/2015 | Susan | 5555051002 | 00000.0022 | 30 4/10/2015 | April | 5555051003 | 00000.0030 | 35 4/11/2015 | Leon | 5555051004 | 00000.0035 | 10 4/15/2015 | Jane | 5555051005 | 00000.0050 | 20 4/15/2015 | Jane | 5555051005 | 00000.0050 | 60 4/15/2015 | Kevin | 5555051006 | 00000.0061 | 35 ``` What I Want the Query to Return ``` Date | Name | ANI | CallIndex | Duration --------------------------------------------------------- 2/2/2015 | John | 5555051000 | 00000.0001 | 60 3/1/2015 | Jim | 5555051001 | 00000.0012 | 80 3/4/2015 | Susan | 5555051002 | 00000.0022 | 30 4/10/2015 | April | 5555051003 | 00000.0030 | 35 4/11/2015 | Leon | 5555051004 | 00000.0035 | 10 4/15/2015 | Jane | 5555051005 | 00000.0050 | 20 4/15/2015 | Kevin | 5555051006 | 00000.0061 | 35 ``` Here is what I was able to get but when i run it I don't get the rows that did have dups callindex values. duration doesn't mattern and they never match up so if it helps to query using that as a filter that would be fine. I've added mock data to assist. ``` use Database SELECT * FROM table WHERE Date between '4/15/15 00:00' and '4/15/15 23:59' and callindex in (SELECT callindex FROM table GROUP BY callinex HAVING COUNT(callindex) = 1) ``` Any help would be greatly appreciated. Ok with the assistance of everyone here i was able to get the query to work perfectly within SQL. That said apparently the app I'm trying this on has a built in character limit and the below query is too long. This is the query i have to use as far as the restrictions and i have to be able to search both ID's at the same time because some get stamped with one or the other rarely both. I'm hoping someone might be able to help me shorten it? ``` use Database select * from tblCall WHERE flddate between '4/15/15 00:00' and '4/15/15 23:59' and fldAgentLoginID='1234' and fldcalldir='incoming' and fldcalltype='external' and EXISTS (SELECT * FROM (SELECT MAX(fldCallName) AS fldCallName, fldCallID FROM tblCall GROUP BY fldCallID) derv WHERE tblCall.fldCallName = derv.fldCallName AND tblCall.fldCallID = derv.fldCallID) or flddate between '4/15/15 00:00' and '4/15/15 23:59' and '4/15/15 23:59' and fldPhoneLoginID='56789' and fldcalldir='incoming' and fldcalltype='external' and EXISTS (SELECT * FROM (SELECT MAX(fldCallName) AS fldCallName, fldCallID FROM tblCall GROUP BY fldCallID) derv WHERE tblCall.fldCallName = derv.fldCallName AND tblCall.fldCallID = derv.fldCallID) ```
There are two keywords you can use to get non-duplicated data, either `DISTINCT` or `GROUP BY`. In this case, I would use a `GROUP BY`, but you should read up on both. This query groups all of the records by CallIndex and takes the `MAX` value for each of the other columns and should give you the results you want: ``` SELECT MAX(Date) AS Date, MAX(Name) AS Name, MAX(ANI) AS ANI, CallIndex FROM table GROUP BY CallIndex ``` **EDIT** Since you can't use `GROUP BY` directly but you can have any SQL in the `WHERE` clause you can do: ``` SELECT * FROM table WHERE EXISTS ( SELECT * FROM ( SELECT MAX(Date) AS Date, MAX(Name) AS Name, MAX(ANI) AS ANI, CallIndex FROM table GROUP BY CallIndex ) derv WHERE table.Date = derv.Date AND table.Name = derv.Name AND table.ANI = derv.ANI AND table.CallIndex = derv.CallIndex ) ``` This selects all rows from the table where there exists a matching row from the `GROUP BY`. It won't be perfect, if any two rows match exactly, you'll still have duplicates, but that's the best you'll get with your restriction.
If the constraint is that we can only add to the WHERE clause, I don't think it's possible, due to there being 2 absolutely identical rows: ``` 4/15/2015 | Jane | 5555051005 | 00000.0050 4/15/2015 | Jane | 5555051005 | 00000.0050 ``` Is it possible that you can add HAVING or GROUP BY to the WHERE? or possibly UNION the SELECT to another SELECT statement? That may open up some additional possibilities.
SQL Where Query to Return Distinct Values
[ "", "sql", "distinct", "where-clause", "" ]
I am importing working with data imported from excel files. There is a column with a string that can contain multiple numbers. I am trying to extract the largest number in the string or a 0 if there is no string. The strings are in formats similar to: "100% post-consumer recycled paper, 50% post-consumer recycled cover, 90% post-consumer recycled wire." "Paper contains 30% post-consumer content." or sometimes a empty string or null. Given the irregular formatting of the string I am having trouble and any help would be appreciated.
Here's a scalar function that will take a string as an input and return the largest whole number it finds (up to a maximum of 3 digits, but from your question I've assumed you're dealing with percentages. If you need more digits, repeat the IF statements ad infinitum). Paste this into SSMS and run it to create the function. To call it, do something like: ``` SELECT dbo.GetLargestNumberFromString(MyStringField) as [Largest Number in String] FROM MyMessedUpData ``` Function: ``` CREATE FUNCTION GetLargestNumberFromString ( @s varchar(max) ) RETURNS int AS BEGIN DECLARE @LargestNumber int, @i int SET @i = 1 SET @LargestNumber = 0 WHILE @i <= LEN(@s) BEGIN IF SUBSTRING(@s, @i, 3) like '[0-9][0-9][0-9]' BEGIN IF CAST(SUBSTRING(@s, @i,3) as int) > @LargestNumber OR @LargestNumber IS NULL SET @LargestNumber = CAST(SUBSTRING(@s, @i,3) as int); END IF SUBSTRING(@s, @i, 2) like '[0-9][0-9]' BEGIN IF CAST(SUBSTRING(@s, @i,2) as int) > @LargestNumber OR @LargestNumber IS NULL SET @LargestNumber = CAST(SUBSTRING(@s, @i,2) as int); END IF SUBSTRING(@s, @i, 1) like '[0-9]' OR @LargestNumber IS NULL BEGIN IF CAST(SUBSTRING(@s, @i,1) as int) > @LargestNumber SET @LargestNumber = CAST(SUBSTRING(@s, @i,1) as int); END SET @i = @i + 1 CONTINUE END RETURN @LargestNumber END ```
1. Pull the data into SQL as-is 2. Write a query to get a distinct list of options in that column 3. Add a new column to store the desired value 4. Write an update statement to populate the new column As far as determining the largest size, I think you need to look at your data set first, but the update could be as simple as: ``` DECLARE @COUNTER INT=1000 While EXISTS (SELECT * FROM <Table> WHERE NewColumn is NULL) AND @COUNTER>=0 BEGIN UPDATE <Table> SET NewColumn=@COUNTER WHERE <SearchColumn> LIKE '%' + CONVERT(VARCHAR,@COUNTER) + '%' AND NewColumn is NULL SET @COUNTER=@COUNTER-1 END ```
Extract largest number from a string in T-SQL
[ "", "sql", "sql-server", "t-sql", "string-matching", "extract", "" ]
I have the following entries in the table: ``` SQL> select TO_CHAR(startdate, 'DAY') from ragreement; TO_CHAR(STARTDATE,'DAY') ------------------------------------ WEDNESDAY THURSDAY SUNDAY FRIDAY MONDAY SUNDAY MONDAY MONDAY THURSDAY SATURDAY TUESDAY SUNDAY FRIDAY WEDNESDAY FRIDAY SUNDAY THURSDAY MONDAY THURSDAY WEDNESDAY TUESDAY TUESDAY MONDAY SUNDAY WEDNESDAY FRIDAY SUNDAY MONDAY WEDNESDAY 29 rows selected. ``` I would like to count the dates that are 'Mondays', 'Tuesdays' and so on... However, I get the following: ``` SQL> select count(*) from ragreement where UPPER(TO_CHAR(startdate, 'DAY')) = 'TUESDAY'; COUNT(*) ---------- 0 SQL> select count(*) from ragreement where UPPER(TO_CHAR(startdate, 'DAY')) = 'MONDAY'; COUNT(*) ---------- 0 ``` Can someone tell me what am I doing wrong?
Try: ``` SELECT COUNT(*) FROM ragreement WHERE TRIM(TO_CHAR(startdate, 'DAY')) = 'TUESDAY' ``` I think there is space padding space padding. Don't need `UPPER` since the formatting model used already is forcing upper case. Please see [Number Format Models](http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements004.htm#SQLRF00216) > The character elements MONTH, MON, DAY, and DY are padded with > trailing blanks to the width of the longest full month name, the > longest abbreviated month name, the longest full date name, or the > longest abbreviated day name, respectively, among valid names > determined by the values of NLS\_DATE\_LANGUAGE and NLS\_CALENDAR > parameters.
For completeness sake, here is an alternative: ``` SELECT t.day, COUNT(*) FROM (SELECT TRIM(TO_CHAR(startdate, 'DAY')) day FROM ragreement) t GROUP BY t.day ``` Naturally since no string comparison is involved in this solution, this could (and should) be simplified: ``` SELECT TO_CHAR(startdate, 'DAY'), COUNT(*) FROM ragreement GROUP BY TO_CHAR(startdate, 'DAY') ```
Oracle SQL String Compare
[ "", "sql", "oracle", "select", "count", "" ]
I am trying to get the percent and it just shows up as zero. I want to show two decimal places such as 0.65 Here is a piece of the query I am selecting: ``` select count(numbers)/count(othernumbers) decimal(3,2) as"rate" ``` if I use this it shows up as 0 and gets rid of the rest ``` select count(numbers)/count(othernumbers) as"rate" ```
``` SELECT FORMAT(this / that, 2) as rate ``` Meanwhile, I question whether `COUNT(numbers)` is what you want. That counts how many rows have a non-NULL value in the column `numbers`. Also, fraction is usually more like `x / (x+y)` -- meaning the fraction of the total (x+y) that is x. A "percentage" needs `100*` somewhere. 13/20 is the *fraction* 0.65 or the *percentage* 65.00 .
need to convert both of your "count(numbers)" and "count(othernumbers)" to decimal also. ``` select convert(decimal(5,2), count(numbers)) / convert(decimal(5,2), count(othernumbers)) as"rate" ``` Here's an example that works in SSMS (Sql Server): ``` select Convert(decimal(3,2), convert(decimal(4,2), 1.0) / convert(decimal(4,2), 10.0)) as [rate] ```
how to show decimals in sql instead of 0
[ "", "mysql", "sql", "decimal", "" ]
let's say this is the result of my Select query : ``` ID Name Status Edit Date 1 n1 closed edt1 01/01/2005 1 n1 closed edt2 15/01/2005 1 n1 closed edt3 20/01/2005 ``` What I'm trying to do, is to get the diff between the date of edit1 and the date of edit2 (how many days the edit took) and put it on a separate column. keep in mind that I get a result (for testing purpose), something like this : ``` select x.id, x.name, x.status, y.edit, y.date from (table1 x left join table2 y on x.id = y.id) where x.id = 1 ``` So how can I get the number of days spent between each edit and put it on a decicated column? of course there won't be a `where x.id = 1` anymore...my query result should look like this : ``` id name status edit_1(edt2 - edt1) edit_2(edt3 - edt2) 1 n1 closed 14 5 ``` Thanks for helping ^^
You can use the `LEAD()` analytic function: ``` SELECT id, name, status, edit, date, LEAD(date) OVER ( PARTITION BY id ORDER BY date ) - date AS edit_diff FROM mytable; ``` If you want to put a certain number of edits into their own columns, you could then PIVOT the date returned from the above query; if the # of edits is arbitrary then you'll need to use PL/SQL to do it. Note that I'm assuming that `name` and `status` won't vary by `id`; if you want to track edits to these then you'll want to add those columns to the `LEAD()` partition. --- Assuming that you'll always have a fixed number of dates (let's assume three as in your example, giving two date differences), you can pivot by using conditional aggregation. We'll use our query above as a common table expression (CTE); you can also use it as a subquery if that's more your style. I think this way is cleaner though. I'm going to add a call to `ROW_NUMBER()`: ``` WITH mydata AS ( SELECT id, name, status, edit, date , LEAD(date) OVER ( PARTITION BY id ORDER BY date ) - date AS edit_diff , ROW_NUMBER() OVER ( PARTITION BY id ORDER BY date ) AS rn FROM mytable ) SELECT id, name, status, edit, date , MAX(CASE WHEN rn = 1 THEN edit_diff END) AS edit_1 , MAX(CASE WHEN rn = 2 THEN edit_diff END) AS edit_2 FROM mydata GROUP BY id, name, status, edit, date; ``` (By the way, `date` is not a good name for a column. I'm assuming that's just an example and not the actual name.)
I would add an incrementing id column like this: <http://www.orafaq.com/wiki/AutoNumber_and_Identity_columns> and then write your query like this: ``` select x.id, x.name, x.status, y.edit, y.date, y.date - z.date from (table1 x left join table2 y on x.id = y.id left join table2 z on y.newid = z.newid - 1) where x.id = 1 ```
difference between two fields from different rows
[ "", "sql", "oracle", "" ]
I have a table with column data in below format(Row# just to indicate row number). ``` Row#1 :test.doc#delimiter#1234,test1.doc#delimiter#1235,test2.doc#delimiter#1236<br> Row#2 :fil1.txt#delimiter#1456,fil1.txt#delimiter#1457 ``` I want to split the string using comma(,) as delimiter, list all in one single column and insert into a new table. output should be something like this (Row# just to indicate row number): ``` Row#1:test.doc#delimiter#1234<br> Row#2:test1.doc#delimiter#1235<br> Row#3:test2.doc#delimiter#1236<br> Row#4: fil1.txt#delimiter#1456 ``` Can anyone help me do this?
``` WITH data AS ( SELECT 'test.doc#delimiter#1234,test1.doc#delimiter#1235,test2.doc#delimiter#1236' AS "value" FROM DUAL UNION ALL SELECT 'fil1.txt#delimiter#1456,fil1.txt#delimiter#1457' AS "value" FROM DUAL ) SELECT REGEXP_SUBSTR( data."value", '[^,]+', 1, levels.COLUMN_VALUE ) FROM data, TABLE( CAST( MULTISET( SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= LENGTH( regexp_replace( "value", '[^,]+')) + 1 ) AS sys.OdciNumberList ) ) levels; ```
An easy way would be to, for each row, split the string into multiple rows and insert it to the destination table. There are multiple ways to split a single string into multiple rows on this site. A simple search should give you some answers. Assuming your table is called origtable, the column that has those strings is called mycol and destination table is called destTable. ``` BEGIN FOR orig_row IN (SELECT mycol from origTable) LOOP INSERT INTO destTable(outCol) SELECT REGEXP_SUBSTR (orig_row.mycol,'(.*?)(,|$)',1,LEVEL,NULL,1) FROM dual CONNECT BY LEVEL <= REGEXP_COUNT (orig_row.mycol, ',') + 1; END LOOP; END; ```
Split string oracle into a single column and insert into a table
[ "", "sql", "oracle", "oracle10g", "oracle-sqldeveloper", "" ]
I have a two tables with alarming information from multiple different computers. I would like to return a list of specific alarms for a specific computer, and also include the previous alarm for that same computer. Table 1: Computer Info ``` ComputerInfoId| ComputerName| [Desc] 1 | Desktop1 | Front Desk 2 | Laptop1 | Work Station 1 ``` Table 2: AlarmData ``` AlarmDataId|ComputerInfoId|AlarmId|AlarmDate 1 2 1 03Mar2014 01:12:29 2 2 3 03Mar2014 01:12:30 3 1 7 03Mar2014 01:12:33 4 1 2 03Mar2014 01:12:36 5 1 6 03Mar2014 01:14:29 6 2 12 03Mar2014 01:15:30 7 1 2 03Mar2014 01:16:12 8 2 1 03Mar2014 01:19:40 ``` I would like a query for ComputerInfoID 1 and AlarmId 2. A query would return those records as well as the 2 records for that station that happened previous. Expected Data: ``` ComputerInfoId|ComputerName|AlarmDataId|AlarmId| AlarmDate 1 | Desktop1 | 3 | 7 |03Mar2014 01:12:33 --Prev Record 1 | Desktop1 | 4 | 2 |03Mar2014 01:12:36 --Record fits Criteria 1 | Desktop1 | 5 | 6 |03Mar2014 01:14:29 --Prev Record 1 | Desktop1 | 7 | 2 |03Mar2014 01:16:12 --Record fist Criteria ``` The comments after each line are not required in the output data, they are included here for clarity. The query that I have been working with is: ``` SELECT * FROM AlarmData AD LEFT OUTER JOIN (SELECT * FROM AlarmData AD2 INNER JOIN (SELECT MAX(AlarmDatId)[MaxADID] FROM AlarmData AD3 WHERE AD3.AlarmDataId < AD.AlarmDataId AND AD3.ComputerInfoID = AD.ComputerInfoID) AMax ON AD2.AlarmDataId = AMax.MaxADID) PrevAl ON PrevAl.ComputerInfoID = AD.ComputerInfoID WHERE AD.ComputerInfoID = 1 AND AD.AlarmId = 2 ``` But I am getting an error that the AD.AlarmDataId and AD.ComputerInfoID could not be bound on the "AD3.AlarmDataId < AD.AlarmDataId AND AD3.ComputerInfoID = AD.ComputerInfoID" line.
Your sample output data doesn't match the input data,and I can't tell if you want two previous errors for each matching error, or one previous error for each of 2 errors in the sample data. I think that this query will work on a larger dataset where the previous alarm for a machine might not be the previous AlarmDataID. I have indicated with a comment where to change the number of surrounding rows you select. ``` with Errors as ( select *, ROW_NUMBER() over( order by alarmdate) as OrdCnt from alarmdata AD WHERE AD.ComputerInfoID = 1 ) select c.ComputerName, f.* from ComputerInfo c left outer join errors e on c.computerInfoId = e.ComputerInfoID left outer join errors f on f.OrdCnt between (e.OrdCnt - 1 )and (e.OrdCnt + 0) -- Number of rows before and after where (e.AlarmID = 2) ```
I think this work for you : ``` Select ComputerInfo.ComputerInfoId ,ComputerInfo.ComputerName ,AlarmData.AlarmDataId ,AlarmData.AlarmId ,AlarmData.AlarmDate ,TB.AlarmId As PreAlarmId ,TB.AlarmDate As PreAlarmDate From ComputerInfo inner join AlarmData On AlarmData.ComputerInfoId =ComputerInfo.ComputerInfoId Left Outer join AlarmData TB On TB.ComputerInfoId = ComputerInfo.ComputerInfoId And TB.AlarmDataId < AlarmData.AlarmDataId And TB.AlarmDataId in ( Select Max(a_1.AlarmDataId) From AlarmData As a_1 Where a_1.ComputerInfoId = ComputerInfo.ComputerInfoId And a_1.AlarmDataId < AlarmData.AlarmDataId) where ComputerInfo.ComputerInfoId = 1 And AlarmData.AlarmId = 2 ``` this is sql fiddle link : [Sql Fiddle Link](http://sqlfiddle.com/#!6/d2837/11)
SQL Server - Find previous record based upon ID and Computer Name
[ "", "sql", "sql-server-2008", "t-sql", "" ]
Given that each GUID is represented by 16 bytes. So at best there are 2^128 possibilities = 3.4028237e+38 possible GUIDs. Will it be possible that it will all be used up?
No. Even if you assume *extremely* high usage of GUIDs in some area, and *extremely* long time scales, the key point about GUIDs is their uniqueness. As soon as you start to get repetitions with any probability that is of practical relevance, people will stop using GUIDs, and therefore won't use up any more of them. Sure, they might use some numbers which look like GUIDs in some areas, where sufficiently low usage can still help ensure local uniqueness, but those would only be LUIDs, and with a bit of luck people will also call them this way.
To show you how big 2^128 GUIDs is: 1 GUID = 16 bytes. Therefore 2^128 GUIDS would each need 16 bytes of storage, that is: `2^4 * 2^128 = 2^132 bytes` [This works out to be](https://www.wolframalpha.com/input?i2d=true&i=Power%5B2%2C4%5D*Power%5B2%2C128%5D) `5,444,517,870,735,015,415,413,993,718,908,291,383,296 bytes`; that's a number with *40 digits*, `5.445 x 10^39`, number of bytes. There is no unit of storage big enough to quantify this number– the largest standardized unit of data is the "yottabyte", and if we use it, [we still end up with a 16 digit value](https://www.wolframalpha.com/input?i=2**132%20bytes%20to%20yottabytes), or 5.445 *quadrillion* yottabytes (`5.445 x 10^15`). So, first you would need to worry about being able to *store* that many GUIDs, before you could even consider running out of them. **Basically: it is functionally impossible for you to run out.** --- ## Collisions Just because I've heard people worry about collision so many times, I will expand my answer to include an analysis of the possibilities of collisions. There's a caveat to note here– (the majority of) randomly-generated UUIDs that are [RCC-spec compliant](https://datatracker.ietf.org/doc/html/rfc4122#section-4.4) actually have *122* bits of randomness instead of 128, because 6 bits are reserved to denote the [version](https://datatracker.ietf.org/doc/html/rfc4122#section-4.1.3) and [variant](https://datatracker.ietf.org/doc/html/rfc4122#section-4.1.1) of the UUID being generated. This reduces the amount of generatable UUIDs, and increases the potential for collisions, but it doesn't matter very much in practice because there are still so many possible UUIDs (but it does alter the collision math slightly). According to the [birthday paradox](https://en.wikipedia.org/wiki/Birthday_problem#Probability_of_a_shared_birthday_(collision)) (when applied to hash functions), the expected number of hashes you would need to generate until you get a collision can be roughly approximated as `2^(n/2)`, where `n` is the total random bits. Thus, for our UUIDs with 122 random bits: ``` 2^(122 / 2) = 2^61 ``` This means that, by the time you generate 2^61 UUIDs, there is a ~50% chance that you will get a collision with each UUID generation. As a an expressed value this is roughly **2.3 quintillion**: ``` 2,305,843,009,213,693,952 UUIDs ``` This is roughly equivalent to generating 1 billion UUIDs per second for **73 years**: ``` 1,000,000,000 UUIDs/second 60 seconds/minute 60 minutes/hour 24 hours/day 365.25 days/year * 73 years ------------------------------------------ 2,303,704,800,000,000,000 Total UUIDs ``` (This is also a *lot* of storage– 2.304 Exabytes, or 2.3 million Terabytes) Also note that this is all assuming the UUIDs are generated with a good algorithm for Randomness and a good source of Entropy (i.e. all possible UUIDs are equally likely to be generated at any given time). All in all, UUIDs/ GUIDs are designed specifically to minimize the chance of collision, and it's almost certainly not something you need to worry about in practice. It is far, *far* more likely that there's an error in an algorithm or code somewhere to cause collisions than it is to generate a collision "naturally". Related reading: * [Birthday Problem](https://en.wikipedia.org/wiki/Birthday_problem#Probability_of_a_shared_birthday_(collision)) * [Birthday Attack](https://en.wikipedia.org/wiki/Birthday_attack) * [Universally Unique Identifier](https://en.wikipedia.org/wiki/Universally_unique_identifier#Collisions) * [Probability of hash collision | Answer by @templatetypedef](https://stackoverflow.com/a/62667633)
Is there any possibility that GUID will all be used up?
[ "", "sql", "math", "guid", "uuid", "" ]
I've got a table with a few columns, and for each row I want the maximum: ``` -- Table: +----+----+----+----+----+ | ID | C1 | C2 | C3 | C4 | +----+----+----+----+----+ | 1 | 1 | 2 | 3 | 4 | | 2 | 11 | 10 | 11 | 9 | | 3 | 3 | 1 | 4 | 1 | | 4 | 0 | 2 | 1 | 0 | | 5 | 2 | 7 | 1 | 8 | +----+----+----+----+----+ -- Desired result: +----+---------+ | ID | row_max | +----+---------+ | 1 | 4 | | 2 | 11 | | 3 | 4 | | 4 | 2 | | 5 | 8 | +----+---------+ ``` With two or three columns, I'd just write it out in `iif` or a `CASE` statement. ``` select ID , iif(C1 > C2, C1, C2) row_max from table ``` But with more columns this gets cumbersome fast. Is there a nice way to get this row-wise maximum? In R, this is called a "parallel maximum", so I'd love something like ``` select ID , pmax(C1, C2, C3, C4) row_max from table ```
What about unpivoting the data to get the result? You've said tsql but not what version of SQL Server. In SQL Server 2005+ you can use [`CROSS APPLY`](https://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) to convert the columns into rows, then get the max value for each row: ``` select id, row_max = max(val) from yourtable cross apply ( select c1 union all select c2 union all select c3 union all select c4 ) c (val) group by id ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!6/5d60c/3). Note, this could be abbreviated by using a [table value constructor](https://msdn.microsoft.com/en-us/library/dd776382.aspx). This could also be accomplished via the [UNPIVOT](https://msdn.microsoft.com/en-us/library/ms177410(v=sql.105).aspx) function in SQL Server: ``` select id, row_max = max(val) from yourtable unpivot ( val for col in (C1, C2, C3, C4) ) piv group by id ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!6/5d60c/4). Both versions gives a result: ``` | id | row_max | |----|---------| | 1 | 4 | | 2 | 11 | | 3 | 4 | | 4 | 2 | | 5 | 8 | ```
You can use the following query: ``` SELECT id, (SELECT MAX(c) FROM ( SELECT c = C1 UNION ALL SELECT c = C2 UNION ALL SELECT c = C3 UNION ALL SELECT c = C4 ) as x(c)) maxC FROM mytable ``` [**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/7ca50/1)
Row-wise maximum in T-SQL
[ "", "sql", "t-sql", "unpivot", "" ]
Given the following table, ``` PersonID Year ---------- ---------- 1 1991 1 1992 1 1993 1 1994 1 1996 1 1997 1 1998 1 1999 1 2000 1 2001 1 2002 1 2003 2 1999 2 2000 ... ... ``` Is there a way with a SQL select query to get the first year of the most recent range of consecutive years meeting a minimum number, as well as the total consecutive years? In this case, for 4 year minimum, for personID 1, it would return 1996 and 8. This will be joined to another table on personID, so the personID is not specific. Thanks for your help.
Using two CTEs to create row number groupings allows you to group by PersonID and display all personIDs that it applies to: ``` Declare @MinimumConsecutiveYears int=4 ;With YearGroupings as ( Select PersonID ,year ,row_number() over(partition by personid order by year asc) rown From @years ) , ConsecutiveYears as ( Select PersonID ,min(year) as MinYear ,count(rown) as ConsecutiveYears ,row_number() over(partition by PersonID order by count(rown) desc) rown From YearGroupings Group By PersonID,year-rown Having Count(rown)>@MinimumConsecutiveYears ) Select PersonID,MinYear,ConsecutiveYears From ConsecutiveYears Where Rown=1 ``` Alternatively, without CTEs: ``` Declare @MinimumConsecutiveYears int=4 Select PersonID ,year ,row_number() over(partition by personid order by year asc) rown Into #YearGroupings From #years Select PersonID ,min(year) as MinYear ,count(rown) as ConsecutiveYears ,row_number() over(partition by PersonID order by count(rown) desc) rown Into #ConsecutiveYears From YearGroupings Group By PersonID,year-rown Having Count(rown)>@MinimumConsecutiveYears Select PersonID,MinYear,ConsecutiveYears From #ConsecutiveYears Where Rown=1 ```
You can create [islands of years](https://www.simple-talk.com/sql/t-sql-programming/the-sql-of-gaps-and-islands-in-sequences/) in the cte and check your conditions: ``` declare @PersonId int = 1, @cnt int = 4 ;with cte_numbered as ( select PersonID, [Year], row_number() over(partition by PersonID order by [Year]) as rn from Table1 ), cte_grouped as ( select PersonID, min([Year]) as [Year], count(*) as cnt from cte_numbered group by PersonID, [Year] - rn ) select top 1 * from cte_grouped where PersonId = @PersonId and cnt >= @cnt order by [Year] desc ``` **`sql fiddle demo`** You also could do something more optimized, like this ``` declare @PersonId int = 1, @cnt int = 4 ;with cte_numbered as ( select PersonID, [Year], row_number() over(partition by PersonID order by [Year]) as rn from Table1 where personId = @personId ), cte_grouped as ( select row_number() over(partition by [year] - rn order by year) as cnt, year from cte_numbered ) select top 1 cnt, year - cnt + 1 from cte_grouped where cnt >= @cnt order by [Year] desc, cnt desc ``` **`sql fiddle demo`**
Determine first year of minimum consecutive year range and count of consecutive years
[ "", "sql", "sql-server", "sql-server-2012", "" ]
What is the usage of `@@` in SQL Server?
[According to MSDN](https://msdn.microsoft.com/en-us/library/ms187786.aspx), the correct name for these is `system functions`. The naming confusion (global variable, system function, global function) stems from different terminology used throughout SQL Server's history. From the [MSDN Transact-SQL Variables article](https://technet.microsoft.com/en-us/library/ms187953%28v=sql.105%29.aspx): > The names of some Transact-SQL system functions begin with two at > signs (@@). Although in earlier versions of Microsoft SQL Server, the > @@functions are referred to as global variables, they are not > variables and do not have the same behaviors as variables. The > @@functions are system functions, and their syntax usage follows the > rules for functions. Thus, two 'at' symbols (@@) are used to denote some system functions. The use of the phrase "global variable" was deprecated (though you will still see [some people use it](http://www.codeproject.com/Articles/39131/Global-Variables-in-SQL-Server)), most likely because in the programming world a global variable is a single value that is visible everywhere, and as already pointed out that isn't what is happening here (e.g., `@@IDENTITY`). Further confusion is likely caused by the way temporary tables are named. A single hash sign prefixing a table name indicates a locally-scoped temporary table (e.g., `#MyLocalTable`), much like a single at symbol indicates a locally-scoped variable (e.g., `@MyLocalVariable`). Adding a second hash sign to a temporary table makes it globally-scoped (e.g., `##MyGlobalTable`), but trying to add two at symbols to a variable [does not produce the same effect](https://stackoverflow.com/a/22372569/1225845).
`@` is for a local variable `@@` is for a global variable or function. There are several standard global variables or functions, e.g.: `@@IDENTITY`, `@@ROWCOUNT`, `@@TRANCOUNT`
What is the usage of '@@' in SQL Server
[ "", "sql", "sql-server", "special-characters", "" ]
I have two tables. One is the `Course` table and the second is the `Teacher` table. I want to get all `Teacher` who does not teach `'Math'`. How can I do this? Course Table ``` course_id course teacher_id marks 1 Physics 1 60 2 Math 1 60 3 Chemestry 1 60 4 English 2 60 5 Hindi 2 60 6 Physics 2 60 7 Chemestry 3 60 8 English 4 60 9 Math 5 60 10 Math 6 60 ``` Teacher Table ``` teacher_id name salary gender 1 Teacher1 20 1 2 Teacher2 30 1 3 Teacher3 40 2 4 Teacher4 50 2 5 Teacher5 60 1 6 Teacher6 70 2 ```
> I want to get all teacher who does not teachs math. You need to join both the tables on **teacher\_id** and then filter out the rows based on the **course**. ``` SQL> SELECT DISTINCT t.name 2 FROM course c, 3 teacher t 4 WHERE c.teacher_id = t.teacher_id 5 AND c.course <> 'Math'; NAME -------- Teacher2 Teacher1 Teacher4 Teacher3 SQL> ``` **EDIT** Since you have teachers teaching multiple courses, you need to filter out further: ``` SQL> WITH DATA AS 2 (SELECT c.*, 3 t.name 4 FROM course c, 5 teacher t 6 WHERE c.teacher_id = t.teacher_id 7 AND c.course <> 'Math' 8 ) 9 SELECT DISTINCT name 10 FROM data 11 WHERE teacher_id NOT IN 12 (SELECT teacher_id FROM course WHERE course = 'Math' 13 ) 14 / NAME -------- Teacher2 Teacher4 Teacher3 SQL> ``` **NOTE** Please keep in mind that the other solution using **NOT EXISTS** clause is better in terms of performance, since the table scans are less and even index scans. With proper indexes, the not exists query would be an optimal method.
``` select * from teacher t where not exists (select 1 from course c where c.teacher_id = t.teacher_id and c.course = 'Math') ``` @LalitKumarB Ben is absolutely right
Give the name of all the teacher who does not teach math
[ "", "mysql", "sql", "oracle", "" ]