I have a table with 2 fields, `val1` and `val2`, that contain the same type of value. `val1` is mandatory and `val2` is optional -- but if present, it should count as much as `val1`.

```
CREATE TABLE VALS (
  id INT NOT NULL AUTO_INCREMENT,
  val1 INT NOT NULL,
  val2 INT DEFAULT NULL,
  timesign TIMESTAMP);
```

For that, I want to retrieve all values, whether coming from the `val1` or `val2` field, into one field `res`, so that this

```
INSERT INTO VALS (val1, val2)
VALUES (1, null), (2, null), (3, 4), (5, null), (6, 7), (8, null);
```

can be retrieved into one field only:

```
+------+
| res  |
+------+
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
|    6 |
|    7 |
|    8 |
+------+
```

**How to do this?**
Try this:

```
SELECT val1 AS res FROM VALS
UNION
SELECT val2 AS res FROM VALS WHERE val2 IS NOT NULL;
```

You don't need `DISTINCT`; `UNION` by itself returns a set, so duplicates are already removed.
One query with a union:

```
SELECT a AS f FROM t
UNION
SELECT b AS f FROM t
HAVING f IS NOT NULL
ORDER BY f
```

This works even when both columns can be `NULL`.
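Both answers rely on `UNION` deduplicating across the two selects. Below is a runnable sketch of the accepted approach, here driven through Python's built-in `sqlite3` (the SQL is the same in MySQL; the AUTO_INCREMENT column is replaced by SQLite's INTEGER PRIMARY KEY as an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE VALS (id INTEGER PRIMARY KEY, val1 INT NOT NULL, val2 INT)")
conn.executemany("INSERT INTO VALS (val1, val2) VALUES (?, ?)",
                 [(1, None), (2, None), (3, 4), (5, None), (6, 7), (8, None)])

# UNION (without ALL) already removes duplicates, so no DISTINCT is needed;
# the WHERE clause drops the NULLs contributed by the optional val2 column.
rows = conn.execute("""
    SELECT val1 AS res FROM VALS
    UNION
    SELECT val2 AS res FROM VALS WHERE val2 IS NOT NULL
    ORDER BY res
""").fetchall()

res = [r[0] for r in rows]
print(res)  # [1, 2, 3, 4, 5, 6, 7, 8]
```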
Select different fields as one field (no CONCAT)
[ "", "mysql", "sql", "field", "" ]
An application which we have built has undergone a large change in its database schema, particularly in the way financial data is stored. We have functions that calculate the total amount of billing, based on various scenarios; and the change is causing huge performance problems when the functions must be run many times in a row. I'll include an explanation, the function and the relevant schema, and I hope someone sees a much better way to write the function. This is SQL Server 2008. First, the business basis: think of a medical Procedure. The healthcare Provider performing the Procedure sends one or more Bills, each of which may have one or more line items (BillItems). That Procedure is then re-billed to another party. The amount billed to the third party may be: 1. The total of the Provider's billing 2. The total of the Provider's billing plus a Copay amount, or 3. A completely separate amount (a Rebill amount) The current function for calculating the billing for a Procedure looks at all three scenarios:

```
CREATE FUNCTION [dbo].[fnProcTotalBilled] (@PROCEDUREID INT)
RETURNS MONEY
AS
BEGIN
  DECLARE @billed MONEY
  SELECT @billed =
    (SELECT COALESCE((SELECT COALESCE(sum(bi.Amount),0)
        FROM BillItems bi
        INNER JOIN Bills b ON b.BillID=bi.BillID
        INNER JOIN Procedures p ON p.ProcedureID=b.ProcedureID
        WHERE b.ProcedureID=@PROCEDUREID AND p.StatusID=3
          AND b.HasCopay=0 AND b.Rebill=0),0))
    -- the total of the provider's billing, with no copay and not rebilled
    + (SELECT COALESCE((SELECT sum(bi.Amount) + COALESCE(b.CopayAmt,0)
        FROM BillItems bi
        INNER JOIN Bills b ON b.BillID=bi.BillID
        INNER JOIN Procedures p ON p.ProcedureID=b.ProcedureID
        WHERE b.ProcedureID=@PROCEDUREID AND p.StatusID=3
          AND b.HasCopay=1
        GROUP BY b.billid,b.CopayAmt),0))
    -- the total of the provider's billing, plus a Copay amount
    + (SELECT COALESCE((SELECT sum(COALESCE(b.RebillAmt,0))
        FROM Bills b
        INNER JOIN Procedures p ON p.ProcedureID=b.ProcedureID
        WHERE b.ProcedureID=@PROCEDUREID AND p.StatusID=3 AND
b.Rebill=1),0))
    -- the Rebill amount, instead of the provider's billing
  RETURN @billed
END
```

I'll omit the DDL for the Procedure. Suffice it to say, it must have a certain status (shown in the function as p.StatusID = 3). Here are the DDLs for Bills and related BillItems:

```
CREATE TABLE dbo.Bills (
  BillID int IDENTITY(1,1) NOT NULL,
  InvoiceID int DEFAULT ((0)),
  CaseID int NOT NULL,
  ProcedureID int NOT NULL,
  TherapyGroupID int DEFAULT ((0)) NOT NULL,
  ProviderID int NOT NULL,
  Description varchar(1000),
  ServiceDescription varchar(255),
  BillReferenceNumber varchar(100),
  TreatmentDate datetime,
  DateBilled datetime,
  DateBillReceived datetime,
  DateBillApproved datetime,
  HasCopay bit DEFAULT ((0)) NOT NULL,
  CopayAmt money,
  Rebill bit DEFAULT ((0)) NOT NULL,
  RebillAmt money,
  IncludeInDemand bit DEFAULT ((1)) NOT NULL,
  CreateDate datetime DEFAULT (getdate()) NOT NULL,
  CreatedByID int,
  ChangeDate datetime,
  ChangeUserID int,
  PRIMARY KEY (BillID)
);

CREATE TABLE dbo.BillItems (
  BillItemID int IDENTITY(1,1) NOT NULL,
  BillID int NOT NULL,
  ItemDescription varchar(1000),
  Amount money,
  WillNotBePaid bit DEFAULT ((0)) NOT NULL,
  CreateDate datetime DEFAULT (getdate()),
  CreatedByID int,
  ChangeDate datetime,
  ChangeUserID varchar(25),
  PRIMARY KEY (BillItemID)
);
```

I fully realize how complex the function is, but I couldn't find another way to account for all the scenarios. I'm hoping that a far better SQL programmer or DBA will see a more performant solution. Any help will be greatly appreciated. Thanks, Tom **UPDATE:** Thanks to everyone for their replies. I tried to add a little clarification in comments, but I'll do so here, too. First, a definition: a Procedure is a medical service from a Provider on a single Date of Service. We only concern ourselves with the total amount billed for a procedure; multiple persons do not receive bills. A "Case" can have many Procedures. Generally, a single Procedure will have a single Bill - but not always. A Bill may have one or more BillItems.
The Copay (if one exists) is added to the sum of the BillItems. A Rebill Amount trumps everything. The performance issue comes into play at a higher level, when calculating the totals for an entire Case (many Procedures) and when needing to display grid data that shows hundreds of Cases at once. My query was at the Procedure level, because it was simpler to describe the problem. As to sample data, the data in @Serpiton's SQL Fiddle is an excellent, concise example. Thank you very much for it. In reviewing the answers, it seems to me that both the CTE approach of @Serpiton and the view approach of @GarethD are strong improvements on my original. For the moment, I'm going to work with the CTE approach, simply to avoid the necessity of dealing with the multiple results from the SELECT. I have modified @Serpiton's CTE to work at the Case level. If he or others would please take a look at it, I'd appreciate it. It's working well in my testing, but I'd appreciate other eyes on it. It goes like this:

```
WITH Normal As (
  SELECT b.BillID, b.CaseID
       , sum(coalesce(n.Amount * (1 - b.Rebill), 0)) Amount
  FROM Procedures p
  INNER JOIN Bills b ON p.ProcedureID = b.ProcedureID
  LEFT JOIN BillItems n ON b.BillID = n.BillID
  WHERE b.CaseID = 3444
    AND p.StatusID = 3
  GROUP BY b.CaseID, b.BillID, b.HasCopay
)
SELECT Amount = Sum(b.Amount)
              + Sum(Coalesce(c.CopayAmt, 0))
              + Sum(Coalesce(r.RebillAmt, 0))
FROM Normal b
LEFT JOIN Bills c ON b.BillID = c.BillID And c.HasCopay = 1
LEFT JOIN Bills r ON b.BillID = r.BillID And r.Rebill = 1
GROUP BY b.caseid
```
**Answer to the update** To increase the performance you can create a view with the same definition as the CTE, so that the query plan will be stored and reused. If you have to calculate more than one total amount, don't try to get them individually; a better plan is to get all of them with a single query, writing a condition like ``` WHERE b.CaseID IN (list of cases) ``` or some other condition that fits your needs, and adding some more information to the main query, at least the CaseID. **Update** @DRapp pointed out a problem with my previous solution (which I wrote without testing, sorry pals). To remove the trouble I removed BillItems from the main query, which now works only with the Bills.

```
WITH Normal As (
  SELECT b.BillID, b.ProcedureID
       , sum(coalesce(n.Amount * (1 - b.Rebill), 0)) Amount
  FROM Procedures p
  INNER JOIN Bills b ON p.ProcedureID = b.ProcedureID
  LEFT JOIN BillItems n ON b.BillID = n.BillID
  WHERE p.ProcedureID = @PROCEDUREID
    AND p.StatusID = 3
  GROUP BY b.ProcedureID, b.BillID, b.HasCopay
)
SELECT @Billed = Sum(b.Amount)
               + Sum(Coalesce(c.CopayAmt, 0))
               + Sum(Coalesce(r.RebillAmt, 0))
FROM Normal b
LEFT JOIN Bills c ON b.BillID = c.BillID And c.HasCopay = 1
LEFT JOIN Bills r ON b.BillID = r.BillID And r.Rebill = 1
GROUP BY b.ProcedureID
```

***How it works*** The `Normal` CTE gets all the bills related to the ProcedureID and calculates the Bill total; `Amount * (1 - Rebill)` sets the Amount to 0 if the Bill is to be rebilled. In the main query the `Normal` CTE is joined to the special types of bill; as `Normal` already contains all the Bills for the selected `ProcedureID`, the table `Procedures` is not needed there. [Demo](http://www.sqlfiddle.com/#!6/c3c90/24) with random data.
**Old Query** Without data to test against, this query is a blind flight:

```
SELECT @billed = Sum(Coalesce(n.Amount, 0))
               + Sum(Coalesce(c.CopayAmt, 0))
               + Sum(Coalesce(r.RebillAmt, 0))
FROM Procedures p
INNER JOIN Bills b ON p.ProcedureID = b.ProcedureID And b.Rebill = 0
INNER JOIN BillItems n ON b.BillID = n.BillID
INNER JOIN Bills c ON p.ProcedureID = c.ProcedureID And c.HasCopay = 1
INNER JOIN Bills r ON p.ProcedureID = r.ProcedureID And r.Rebill = 1
Where p.ProcedureID = @PROCEDUREID
  AND p.StatusID = 3
```

Where `b` is the alias for the "normal" bill (with `n` for the bill items), `c` for the copayed bill and `r` for the rebilled one. The `JOIN` condition of `b` checks only for `b.Rebill = 0` to get the bill items for both the "normal" bills and the copaid ones. I assume that no bill can have both `HasCopay` and `Rebill` set to 1.
A very quick win is to use a TABLE-VALUED, INLINE function instead of a SCALAR, MULTI-STATEMENT one.

```
CREATE FUNCTION [dbo].[fnProcTotalBilled] (@PROCEDUREID INT)
RETURNS TABLE
AS
RETURN (
  SELECT (sub-query1) + (sub-query2) + (sub-query3) AS amount
);
```

This can then be used as follows:

```
SELECT something.*, totalBilled.*
FROM something
CROSS APPLY -- Or OUTER APPLY
[dbo].[fnProcTotalBilled](something.procedureID) AS totalBilled
```

Over larger data-sets this is *significantly* faster than using scalar functions.

- It must be INLINE *(Not Multi-Statement)*
- It must be TABLE-VALUED *(Not Scalar)*

If you work out better business logic for the calculation, you'll get even more performance benefits again. **EDIT :** This may be functionally the same as you have described, but it's hard to tell. Please add comments if I should investigate further.

```
SELECT SUM(CASE WHEN b.HasCopay = 0 AND b.Rebill = 0 THEN COALESCE(bi.TotalAmount, 0)
                WHEN b.HasCopay = 1 THEN b.CopayAmt + COALESCE(bi.TotalAmount, 0)
                WHEN b.Rebill = 1 THEN b.RebillAmt
                ELSE 0
           END) AS Amount
FROM Procedures p
INNER JOIN Bills b ON b.ProcedureID = p.ProcedureID
LEFT JOIN (SELECT BillID, SUM(Amount) AS TotalAmount
           FROM BillItems
           GROUP BY BillID) AS bi ON bi.BillID = b.BillID
WHERE p.ProcedureID=@PROCEDUREID AND p.StatusID=3
```

The 'trick' that makes this simpler is the sub-query to aggregate all the `BillItems` together into one record per `BillID`. The optimiser won't actually do that for the *whole* table, but only for the relevant records based on your `JOIN`s and `WHERE` clause. This then means that `Bill`:`BillItem` is `1`:`0..1`, and everything simplifies. I believe ;)
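The CASE-based rewrite at the end of this answer can be exercised on a small dataset. Here is a sketch using `sqlite3`, with the table layouts trimmed to the columns the query touches and the sample amounts invented (MONEY becomes plain numerics, and the parameter is inlined):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Procedures (ProcedureID INT, StatusID INT)")
conn.execute("CREATE TABLE Bills (BillID INT, ProcedureID INT, HasCopay INT,"
             " CopayAmt REAL, Rebill INT, RebillAmt REAL)")
conn.execute("CREATE TABLE BillItems (BillID INT, Amount REAL)")

conn.execute("INSERT INTO Procedures VALUES (1, 3)")
# bill 1: normal (items 100 + 50), bill 2: copay 25 (item 200), bill 3: rebill 999
conn.executemany("INSERT INTO Bills VALUES (?, ?, ?, ?, ?, ?)",
                 [(1, 1, 0, None, 0, None),
                  (2, 1, 1, 25, 0, None),
                  (3, 1, 0, None, 1, 999)])
conn.executemany("INSERT INTO BillItems VALUES (?, ?)",
                 [(1, 100), (1, 50), (2, 200)])

# One pass over Bills; the CASE picks the right amount per billing scenario.
(amount,) = conn.execute("""
    SELECT SUM(CASE WHEN b.HasCopay = 0 AND b.Rebill = 0 THEN COALESCE(bi.TotalAmount, 0)
                    WHEN b.HasCopay = 1 THEN b.CopayAmt + COALESCE(bi.TotalAmount, 0)
                    WHEN b.Rebill = 1 THEN b.RebillAmt
                    ELSE 0 END) AS Amount
    FROM Procedures p
    INNER JOIN Bills b ON b.ProcedureID = p.ProcedureID
    LEFT JOIN (SELECT BillID, SUM(Amount) AS TotalAmount
               FROM BillItems GROUP BY BillID) AS bi ON bi.BillID = b.BillID
    WHERE p.ProcedureID = 1 AND p.StatusID = 3
""").fetchone()
print(amount)  # 150 + (25 + 200) + 999 = 1374
```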
Poor performance in SQL Server function
[ "", "sql", "sql-server", "stored-procedures", "" ]
Is there a way to create a new table based on a SELECT query in SQL Server? I do not want to manually enter all the fields (the table should be auto-generated based on the fields in the query). I already tried: ``` CREATE TABLE new_table AS (SELECT * FROM ...); ``` but it did not work.
I had to alias the columns in the select, but I would think your current query already aliases the fields if they don't have a name.

> ```
> select *
> into #tempabc
> from (
>     Select 'a' [test]
>     union
>     Select 'b' [test]
> ) a
>
> select * from #tempabc
> ```
You are using the wrong syntax for SQL Server. Instead, try: ``` SELECT * INTO new_table FROM old_table ```
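For comparison, the `CREATE TABLE ... AS SELECT` form the question tried is the one MySQL, PostgreSQL, and SQLite accept; it is SQL Server that wants `SELECT ... INTO` instead. A quick sketch of the portable form, run through `sqlite3` with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old_table (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO old_table VALUES (?, ?)", [(1, "a"), (2, "b")])

# CREATE TABLE ... AS SELECT copies both structure and data in engines that
# support it (MySQL, PostgreSQL, SQLite); SQL Server instead uses
# SELECT * INTO new_table FROM old_table.
conn.execute("CREATE TABLE new_table AS SELECT * FROM old_table")
rows = conn.execute("SELECT * FROM new_table ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```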
Create table using select query in SQL Server
[ "", "sql", "sql-server", "" ]
I'm drawing a blank and need some help figuring out the right syntax for a BETWEEN statement in MySQL. I'm doing a report and need to display the year-to-date information. I would like the query to figure out the year and put it in the BETWEEN statement. If I do the following, it works: ``` AND aa.entry_date BETWEEN "2014-01-01" AND "2014-04-30" ``` But I want the query to figure out what the year is and use that instead... kind of like this: ``` AND aa.entry_date BETWEEN Year(CURDATE())+ "-01-01" AND "2014-04-30" ``` Except this doesn't work... what is the right syntax to accomplish this in MySQL? Thanks for the help
You need to use proper MySQL syntax. `+` is the SQL Server way to concatenate strings; in MySQL you use `concat()`, and `+` is just numeric addition: ``` AND aa.entry_date BETWEEN date(concat(Year(CURDATE()), '-01-01')) AND date('2014-04-30') ``` Notice that I also switched to using single quotes for the string and date constants. This usually prevents other confusion in the code.
One option is to use the DATE\_FORMAT function, and supply a literal for the month and day. For example: ``` aa.entry_date BETWEEN DATE_FORMAT(NOW(),'%Y-01-01') AND NOW() ```
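The "build the year-start date from the current date" idea behind both answers is easy to check. In SQLite the analogue of `CONCAT`/`DATE_FORMAT` is `strftime` plus the `||` concatenation operator; this is a sketch of the concept, not MySQL syntax:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
# strftime('%Y', 'now') extracts the current year; || appends the literal
# month-day part, mirroring CONCAT(YEAR(CURDATE()), '-01-01') in MySQL.
(year_start,) = conn.execute("SELECT strftime('%Y', 'now') || '-01-01'").fetchone()

# SQLite's 'now' is UTC, so compare against a UTC clock.
expected = f"{datetime.now(timezone.utc).year}-01-01"
print(year_start)
```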
mysql between statement using the current year
[ "", "mysql", "sql", "" ]
I am currently testing something out, and encountering an issue when trying to insert some values in a table. I have 3 tables: Devices, Outdoor, Indoor. ``` **outdoor** ---> **devices** <----- **indoor** net_id(pk) net_id(fk) net_id(pk) ``` I wanted to have a relationship so that a device can be either outdoor or indoor, by having either the net\_id of indoor or outdoor. Thus I did the following:

```
ALTER TABLE devices ADD CONSTRAINT o_relationship
  FOREIGN KEY (net_id) REFERENCES outdoor (net_id);
ALTER TABLE devices ADD CONSTRAINT i_relationship
  FOREIGN KEY (net_id) REFERENCES indoor (net_id);
```

Now my problem is that when I try to insert a value into devices, I am unable to do so unless the net\_id value belongs to both the indoor and outdoor tables. This means that if "net\_001" exists in both the indoor and outdoor tables, it allows me to insert it into devices; but if "net\_001" exists in indoor and "net\_002" exists in outdoor, and I want to add "net\_001" or "net\_002" to devices, it won't allow me to do so. I want to be able to insert into the devices table as explained above. Hope I was clear enough explaining the issue. Thank you in advance!
Maintain a relation table linking devices to the indoor and outdoor records. **Solution 1**: For **Device\_Relation** Table

```
+------------------+--------------+
| Of Table         | Column       |
+------------------+--------------+
| Indoor           | net_id       | -- <--- pk
| Outdoor          | net_id       | -- <--- pk
| Devices          | device_id    | -- <--- pk
| Device_relations | d.device_id, | -- <--- pk
|                  | od.net_id,   | -- <--- fk
|                  | id.net_id    | -- <--- fk
+------------------+--------------+
```

**Solution 2**: For **In\_Out\_Relation** Table

```
+------------------+--------------+
| Of Table         | Column       |
+------------------+--------------+
| Indoor           | net_id       | -- <--- fk
| Outdoor          | net_id       | -- <--- fk
| in_out_id        | in_out_id    | -- <--- pk
+------------------+--------------+
```

For **Devices** Table

```
+------------------+--------------+
| Of Table         | Column       |
+------------------+--------------+
| Devices          | device_id    | -- <--- pk
| In_Out_Relation  | io.in_out_id | -- <--- fk
+------------------+--------------+
```
Use two columns in the devices table, one for indoor and one for outdoor; otherwise you can't distinguish which is indoor and which is outdoor, because they may have the same id, as you describe. (Note that each constraint also needs a distinct name.)

```
ALTER TABLE devices ADD CONSTRAINT i_relationship
  FOREIGN KEY (net_indoor_id) REFERENCES indoor (net_id);
ALTER TABLE devices ADD CONSTRAINT o_relationship
  FOREIGN KEY (net_outdoor_id) REFERENCES outdoor (net_id);
```
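The two-column fix can be verified directly. A sketch with `sqlite3` (foreign keys must be switched on with a PRAGMA there; the column names follow this answer, and the sample net_ids are the question's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE indoor  (net_id TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE outdoor (net_id TEXT PRIMARY KEY)")
# Separate nullable columns: a device references indoor OR outdoor, not both.
conn.execute("""
    CREATE TABLE devices (
        id INTEGER PRIMARY KEY,
        net_indoor_id  TEXT REFERENCES indoor(net_id),
        net_outdoor_id TEXT REFERENCES outdoor(net_id)
    )""")
conn.execute("INSERT INTO indoor  VALUES ('net_001')")
conn.execute("INSERT INTO outdoor VALUES ('net_002')")

# Both inserts now succeed, which the single shared column disallowed:
conn.execute("INSERT INTO devices (net_indoor_id)  VALUES ('net_001')")
conn.execute("INSERT INTO devices (net_outdoor_id) VALUES ('net_002')")

# A dangling reference is still rejected:
try:
    conn.execute("INSERT INTO devices (net_indoor_id) VALUES ('net_999')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True

count = conn.execute("SELECT COUNT(*) FROM devices").fetchone()[0]
print(count, fk_enforced)  # 2 True
```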
Insert issue on foreign key column
[ "", "mysql", "sql", "database", "foreign-keys", "sql-insert", "" ]
I am trying to load data into a table of my MySQL database, and getting this error:

```
LOAD DATA LOCAL INFILE 'C:\Users\Myself\Desktop\Blah Blah\LOAD DATA\week.txt'
INTO TABLE week;
```

Reference: [this](https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html) The path is one hundred percent correct; I copied it by pressing Shift and clicking "copy path as", and checked it many times. **So any tips on this will be much appreciated.** **My research:** Seeing [this](https://stackoverflow.com/questions/9149819/mysql-loader-error-ctest-txt-not-found-errcode-2) answer, I tried changing `C:\Users` to `C:\\Users`. It did not work for me. Secondly, **is there a way to use some kind of relative path (rather than an absolute path) here?**
I don't know what version of MySQL you are using but a quick Google search found possible answers to both your questions. Below are excerpts from the [MySQL 5.1 Reference Manual](http://dev.mysql.com/doc/refman/5.1/en/load-data.html): > The file name must be given as a literal string. On Windows, specify > backslashes in path names as forward slashes or doubled backslashes > > The LOCAL keyword affects where the file is expected to be found: > > If LOCAL is specified, the file is read by the client program on the > client host and sent to the server. The file can be given as a full > path name to specify its exact location. If given as a relative path > name, the name is interpreted relative to the directory in which the > client program was started. Regards.
I spent 2 days on this and finally found my mistake: just changing backslashes to forward slashes, as one contributor said earlier. It finally worked for me, so it was:

```
LOAD DATA LOCAL INFILE 'C:/ProgramData/MySQL/MySQL Server 5.7/Data/menagerie/pet.txt'
INTO TABLE pet;
```

I can just say thanks a lot. P.S. Don't waste time on ytb...
MySQL LOAD DATA Error (Errcode: 2 - "No such file or directory")
[ "", "mysql", "sql", "bulkinsert", "load-data-infile", "" ]
Hi everyone, I need some help, please. I have to do the following with PL/SQL: for every Request\_number, assign the v\_id on each voucher a number starting with 1 and incrementing by 1. For example, if request number 786530 has 3 vouchers associated with it, the first voucher should have v\_id 1, the second voucher should have v\_id 2, and the third voucher should have v\_id 3. The table looks like this:

```
Request_NO  Voucher_no  V_id
200         22211       null
200         22212       null
200         22213       null
201         22214       null
202         22215       null
202         22216       null
203         22217       null
204         22218       null
```
You can use a merge for efficiency:

```
create table vouchers
  (request_no number not null
  ,voucher_no number primary key
  ,v_id       number);

insert into vouchers values (200,22211,null);
insert into vouchers values (200,22212,null);
insert into vouchers values (200,22213,null);
insert into vouchers values (201,22214,null);
insert into vouchers values (202,22215,null);
insert into vouchers values (202,22216,null);
insert into vouchers values (203,22217,null);
insert into vouchers values (204,22218,null);

merge into vouchers t
using (select voucher_no
             ,row_number() over (partition by request_no order by 1) as new_v_id
       from vouchers) s
on (t.voucher_no = s.voucher_no)
when matched then update set v_id = new_v_id;

select * from vouchers;

req vouch v_id
=== ===== ====
200 22211 1
200 22212 2
200 22213 3
201 22214 1
202 22215 1
202 22216 2
203 22217 1
204 22218 1
```
In Oracle, you can do this with a correlated subquery:

```
update table t
set v_id = (select count(*)
            from table t2
            where t2.Request_NO = t.Request_NO
              and t2.Voucher_no <= t.Voucher_no);
```
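Both techniques can be tried on the sample rows. Below, the correlated-subquery variant is run through `sqlite3` (the MERGE version is Oracle-specific), with the table name `vouchers` substituted for the placeholder:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vouchers (request_no INT, voucher_no INT PRIMARY KEY, v_id INT)")
conn.executemany("INSERT INTO vouchers VALUES (?, ?, NULL)",
                 [(200, 22211), (200, 22212), (200, 22213), (201, 22214),
                  (202, 22215), (202, 22216), (203, 22217), (204, 22218)])

# Each row's v_id is the count of vouchers in the same request with a
# voucher_no less than or equal to its own, i.e. a rank starting at 1.
conn.execute("""
    UPDATE vouchers
    SET v_id = (SELECT COUNT(*) FROM vouchers t2
                WHERE t2.request_no = vouchers.request_no
                  AND t2.voucher_no <= vouchers.voucher_no)
""")
v_ids = [r[0] for r in conn.execute(
    "SELECT v_id FROM vouchers ORDER BY voucher_no").fetchall()]
print(v_ids)  # [1, 2, 3, 1, 1, 2, 1, 1]
```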
For every Request_number, assign the v_id on each voucher a number starting with 1 and incrementing by 1
[ "", "sql", "oracle", "plsql", "" ]
I get the error from this line:

```
SELECT table.field FROM table
WHERE table.month = 'october'
AND DATEDIFF(day, table.start_date, table.end_date) < 30
```

The dates in my column are in the format m-d-yy. Do I need to convert this to a different format? If so, how? I am using MariaDB.
According to the documentation for MariaDB [`DATEDIFF`](https://mariadb.com/kb/en/datediff/) only takes *two* arguments: > **Syntax** > > `DATEDIFF(expr1,expr2)` > > **Description** > > `DATEDIFF()` returns `(expr1 – expr2)` > expressed as a value in days from one date to the other. `expr1` and > `expr2` are date or date-and-time expressions. Only the date parts of > the values are used in the calculation.
@alex\_b It is very common to get confused between the syntaxes of the DATEDIFF and TIMESTAMPDIFF functions. The following [link](http://www.w3schools.com/sql/func_datediff_mysql.asp) will help with the syntax of DATEDIFF, and [this page](https://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html) lists all the other date-related functions available in MariaDB's parent, MySQL. Referencing the links above, below is a summary: ``` TIMEDIFF(expr1,expr2) expr1 - '2000:01:01 00:00:00' expr2 - '2000:01:01 00:00:00.000001' ``` > TIMEDIFF() returns expr1 − expr2 expressed as a time value. expr1 and > expr2 are time or date-and-time expressions, but both must be of the > same type. ``` DATEDIFF(expr1,expr2) expr1 - '2007-12-31 23:59:59' expr2 - '2007-12-30' ``` > DATEDIFF() returns expr1 − expr2 expressed as a value in days from one > date to the other. expr1 and expr2 are date or date-and-time > expressions. Only the date parts of the values are used in the > calculation. Below is the scenario I used it for: * using CURDATE for the current date as 'argument1' * using an existing varchar column as 'argument2' * using the SET command to update a column SET output\_date = DATEDIFF(CURDATE(), input\_date), ... The above worked for me. Good luck!
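As a cross-check of the two-argument, date-parts-only behaviour quoted above, here is the same day difference computed in SQLite, which has no DATEDIFF at all and uses `julianday` instead; the expected value matches the MySQL docs example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# MySQL/MariaDB: DATEDIFF('2007-12-31 23:59:59', '2007-12-30') = 1, because
# only the date parts are used. SQLite equivalent: truncate with date(),
# then subtract the Julian day numbers.
(diff,) = conn.execute(
    "SELECT CAST(julianday(date('2007-12-31 23:59:59'))"
    "          - julianday(date('2007-12-30')) AS INTEGER)"
).fetchone()
print(diff)  # 1
```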
Incorrect parameter count in the call to native function 'DATEDIFF'
[ "", "sql", "mariadb", "" ]
In my database I have 2 tables: i.) Employee, ii.) Department. In my `employee` table:

```
NAME   Department
---------------------
John   IT
Siti   Research
Jason  Research
```

In my `Department` table:

```
Name
------------
IT
Research
Computer
```

Using the statement

```
SELECT DEPARTMENT.DNAME
FROM DEPARTMENT, EMPLOYEE
WHERE DEPARTMENT.DNAME = EMPLOYEE.DNAME
AND (SELECT COUNT(*) FROM EMPLOYEE.DNAME)=0;
```

when there are no employees in a department it should display

```
Name
--------------
Computer
```

I keep trying but get an error.
**Two alternatives:**

1. Using `IN`:

```
SELECT name FROM Department
WHERE name NOT IN (SELECT DISTINCT Department FROM Employee)
```

2. Using `LEFT JOIN`:

```
SELECT D.NAME
FROM DEPARTMENT D
LEFT JOIN EMPLOYEE ON D.NAME = EMPLOYEE.Department
WHERE EMPLOYEE.Department IS NULL
```

An example in [**Fiddle**](http://sqlfiddle.com/#!2/189487/6). This method will show higher performance than the other if you have thousands of records in your table.
Try `NOT IN`. The subquery needs `DISTINCT` to avoid performance issues in the future:

```
SELECT name FROM Department
WHERE name NOT IN (SELECT DISTINCT Department FROM Employee);
```

Or `NOT EXISTS`, faster in most cases:

```
SELECT name FROM Department
WHERE NOT EXISTS (SELECT 1 FROM Employee
                  WHERE Employee.Department = Department.name);
```
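All of these variants return the same result on the question's data. A runnable sketch of the `NOT EXISTS` form with `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (Name TEXT, Department TEXT)")
conn.execute("CREATE TABLE Department (Name TEXT)")
conn.executemany("INSERT INTO Employee VALUES (?, ?)",
                 [("John", "IT"), ("Siti", "Research"), ("Jason", "Research")])
conn.executemany("INSERT INTO Department VALUES (?)",
                 [("IT",), ("Research",), ("Computer",)])

# Keep only departments for which no matching employee row exists.
rows = conn.execute("""
    SELECT Name FROM Department d
    WHERE NOT EXISTS (SELECT 1 FROM Employee e WHERE e.Department = d.Name)
""").fetchall()
print(rows)  # [('Computer',)]
```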
SQL count with another table in SELECT clause
[ "", "mysql", "sql", "sql-server", "oracle", "" ]
I'm trying to find the devices that are stuck in a certain status, and group them by their types. I query the database for the `count()` of devices in that status, and then do a couple of checks to see if those devices meet the criteria. However, it is not showing the types that do not have any `count()`. I need all types to be shown, even if there is nothing to `count()`. For example, currently I have this:

```
TYPE   COUNT(DEVICE.NAME)
TYPE1  200
TYPE2  100
TYPE3  50
```

I need it to come back like this:

```
TYPE   COUNT(DEVICE.NAME)
TYPE1  200
TYPE2  100
TYPE3  50
TYPE4  0
TYPE5  0
```

Here's what I have so far:

```
SELECT device.type, count(device.name)
FROM device
LEFT OUTER JOIN (SELECT type, name FROM device) b ON device.name = b.name
WHERE device.changedon < (((SYSDATE - DATE '1970-01-01') * 24 * 60 * 60) - (90 * 24 * 60 * 60))
AND device.status != 6 AND device.status != 8 AND device.status != 13
GROUP BY device.type
ORDER BY count(device.name) DESC;
```

It was later realized that the join on table `b` was unnecessary, but I will leave it in the question so that the answers still accurately reflect the question. I have tried all types of joins on the subquery and the results are the same.
If you use `left outer join`, you need to use it for all the joins in the query (typically). Otherwise, you might be "undoing" the earlier join. However, I don't think that is your only issue. The expression `count(d.name)` has to return at least 1 (unless `name` can be `NULL`).

```
SELECT d.type, count(d.name)
FROM device d
INNER JOIN (SELECT type, name FROM device) b ON d.name = b.name
WHERE d.changedon < (((SYSDATE - DATE '1970-01-01') * 24 * 60 * 60) - (90 * 24 * 60 * 60))
  AND d.status != 6 AND d.status != 8 AND d.status != 13
GROUP BY d.type;
```

When using outer joins, you also have to pay attention to the `where` clause. In this case, though, you are only referring to the `device` table. If the switch to `left outer join` doesn't fix the problem, then the issue is in the `where` clause -- it is filtering out the additional devices. If that is the case, then move the `where` clause to a conditional aggregation:

```
SELECT d.type,
       sum(case when d.changedon < (((SYSDATE - DATE '1970-01-01') * 24 * 60 * 60) - (90 * 24 * 60 * 60))
                 and d.status != 6 AND d.status != 8 AND d.status != 13
                then 1 else 0
           end)
FROM device d
LEFT OUTER JOIN (SELECT type, name FROM device) b ON d.name = b.name
GROUP BY d.type;
```

You might be able to move some of the conditions back to the `where` clause for performance reasons. Also, I don't get the join to table `b`. It would just be multiplying the number of rows (if multiple rows in `device` have the same name) or doing nothing.
I think you need something like:

```
SELECT type, SUM(COUNT_RESULT)
FROM (
  SELECT device.type, 0 COUNT_RESULT
  FROM device
  UNION ALL
  SELECT device.type, count(1)
  FROM device, h471
  WHERE device.recnum = h471.id
    AND device.changedon < (((SYSDATE - DATE '1970-01-01') * 24 * 60 * 60) - (90 * 24 * 60 * 60))
    AND device.status != 6 AND device.status != 8 AND device.status != 13
  GROUP BY device.type
)
GROUP BY type
```
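Both answers keep every type in the result by not filtering rows out of the outer query. Here is the conditional-aggregation variant from the first answer exercised with `sqlite3` on an invented toy table (the status codes follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device (type TEXT, name TEXT, status INT)")
conn.executemany("INSERT INTO device VALUES (?, ?, ?)",
                 [("TYPE1", "a", 1), ("TYPE1", "b", 1),
                  ("TYPE2", "c", 1),
                  ("TYPE3", "d", 6)])  # TYPE3 only has an excluded status

# Filtering inside SUM keeps every type in the output; the same condition
# in a WHERE clause would have dropped TYPE3 entirely instead of showing 0.
rows = conn.execute("""
    SELECT type,
           SUM(CASE WHEN status NOT IN (6, 8, 13) THEN 1 ELSE 0 END) AS stuck
    FROM device
    GROUP BY type
    ORDER BY stuck DESC, type
""").fetchall()
print(rows)  # [('TYPE1', 2), ('TYPE2', 1), ('TYPE3', 0)]
```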
Display 0 when COUNT() returns nothing
[ "", "sql", "oracle", "join", "count", "subquery", "" ]
I am building a system that contains consultant profiles. In this system, each consultant can choose which certifications he or she has. Right now I am expanding the system to contain titles awarded when a consultant has a certain set of certifications. The interesting tables are:

```
Person (stores ID, Firstname etc. of a person)
Certification (stores ID, Name of a certification)
PersonCertification (stores PersonID and CertificationID as a linking table (name?))
Title (stores the title: ID, Shortname and Fullname)
TitleCertifications (stores which certifications are needed for a title)
```

As an example, say we have a title with ID 1 called MCSA. The certifications required for this title are the certifications with ID 1 and 2. The TitleCertifications table could look like this:

```
TitleID | CertificationID
1       | 1
1       | 2
```

If a consultant has both certification 1 and 2, he or she would be granted the title MCSA with ID 1. If the consultant does not have all of the certifications, he does not earn this title. The problem I have is that I do not know how to check if the user has all the required certifications that are defined in the TitleCertifications table. I have started with this, but it lacks any check of whether the person has all the certifications that are needed:

```
SELECT t.Fullname
FROM Title t
JOIN TitleCertifications tc ON t.ID = tc.TitleID
JOIN PersonCertification pc ON tc.CertificationID = pc.CertificationID
```

The query above will yield the title name for each row with a match on certification; as an example, it will yield two rows if a user has two of the three required certifications for a title. Does anyone know how to write a query that will match the required certifications and only give an answer if the user has all the required certifications for a title? I am using SQL Server 2012 with T-SQL (in Azure, if that matters). Sorry if the writing is a bit fuzzy; I am not sure about all the terms in English.
Try this approach:

```
;with pc as (
  select p.personid, tc.titleid, count(*) as cnt_pc
  from person p
  inner join personcertification pc on p.personid = pc.personid
  inner join titlecertifications tc on pc.certificationid = tc.certificationid
  group by p.personid, tc.titleid
), tc as (
  select t.titleid, count(*) as cnt_tc
  from title t
  inner join titlecertifications tc on t.titleid = tc.titleid
  group by t.titleid
)
select p.firstname, t.shortname
from pc
inner join tc on pc.titleid = tc.titleid
inner join person p on pc.personid = p.personid
inner join title t on pc.titleid = t.titleid
where cnt_pc = cnt_tc
```

The general idea is to select how many certs are needed for a certain title, and how many certs one has for that title -- if it's a match, then we assume that one has the title as well. Counting could be done in numerous ways.
There are a number of ways of doing this. Here is one:

```
select p.ID, p.Name, tcount.ID
from Person p
inner join (
    -- Get the count of certs for each title for each person
    select pc.PersonID, t.ID TitleId, count(*) CertCount
    from PersonCertification pc
    inner join Certification c on c.ID = pc.CertificationID
    inner join TitleCertification tc on tc.CertificationId = c.ID
    inner join Title t on t.ID = tc.TitleID
    group by pc.PersonID, t.ID
) cntByTitle on cntByTitle.PersonID = p.ID
left outer join (
    select t.TitleID, count(*) CertCount
    from Title t
    inner join TitleCertification tc on tc.TitleId = t.ID
    group by t.ID
) tcount on tcount.TitleID = cntByTitle.TitleId
        and tcount.CertCount = cntByTitle.CertCount
```

Notes: the cntByTitle sub-query might need to be a left outer rather than an inner join: if a person has no Certs then I think they won't be returned by this unless you make it a left outer. I also assume all indexes are in place. If a Person has no Titles, the query will return the Person.ID and null. You could wrap this up as a View and then just use the View in a more straight-forward query.
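The count-matching idea behind both answers is easy to sanity-check: a person earns a title exactly when the number of that title's certifications they hold equals the number the title requires. A compact sketch with `sqlite3` and invented sample people (table names follow the question; the HAVING formulation is one of the "numerous ways" of comparing the counts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Person (ID INT, Firstname TEXT);
    CREATE TABLE Title (ID INT, Fullname TEXT);
    CREATE TABLE PersonCertification (PersonID INT, CertificationID INT);
    CREATE TABLE TitleCertifications (TitleID INT, CertificationID INT);
    INSERT INTO Person VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO Title VALUES (1, 'MCSA');
    INSERT INTO TitleCertifications VALUES (1, 1), (1, 2);
    INSERT INTO PersonCertification VALUES (1, 1), (1, 2), (2, 1);  -- Bob lacks cert 2
""")

# Alice holds both required certs (2 of 2) -> earns MCSA; Bob holds 1 of 2.
rows = conn.execute("""
    SELECT p.Firstname, t.Fullname
    FROM Person p
    JOIN PersonCertification pc ON pc.PersonID = p.ID
    JOIN TitleCertifications tc ON tc.CertificationID = pc.CertificationID
    JOIN Title t ON t.ID = tc.TitleID
    GROUP BY p.ID, t.ID
    HAVING COUNT(*) = (SELECT COUNT(*) FROM TitleCertifications x
                       WHERE x.TitleID = t.ID)
""").fetchall()
print(rows)  # [('Alice', 'MCSA')]
```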
Matching lists in sql
[ "", "sql", "sql-server", "sql-server-2012", "" ]
If I try the below code -- e.g. I set the date as Fri July 25 2014 10:00 AM, which gives the date in milliseconds as `1402080056000` -- and then try to read the same milliseconds back into a date:

```
long time = 1402080056000L;
Date mydate = new Date(time);
```

the mydate variable shows the date as `Sat Jun 25 00:10:56 IST 2014`.

```
String DateTimeString = DateFormat.getDateTimeInstance().format(new Date(time));
```

With the above statement, in DateTimeString I get the date as `Jun 25, 2014 12:10:56 AM`. How do I read the datetime present in `1402080056000` as `Fri July 25 2014 10:00 AM`?
You just need to work on the format string:

```
String dateTimeString = new SimpleDateFormat("dd-MM-yyyy hh:mm").format(new Date(time));
```

To explicitly set the time zone:

```
SimpleDateFormat dateFormat = new SimpleDateFormat("dd-MM-yyyy hh:mm");
dateFormat.setTimeZone(TimeZone.getTimeZone("GMT"));
String result = dateFormat.format(new Date(millis));
```

Also, this would be useful: [Regarding Timezones and Java](https://stackoverflow.com/questions/230126/how-to-handle-calendar-timezones-using-java)
Try this: ``` String date= DateFormat.format("dd/MM/yyyy hh:mm:ss", new Date(date_in_milis)).toString(); ```
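For reference, the underlying arithmetic: 1402080056000 ms is 2014-06-06 18:40:56 UTC, which is 2014-06-07 00:10:56 in IST (+05:30); the day-of-month in the question looks like a transcription slip. The answers above are Android/Java; this hedged sketch only demonstrates the epoch math and explicit time zone handling, in Python:

```python
from datetime import datetime, timezone, timedelta

millis = 1402080056000
utc = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
ist = utc.astimezone(timezone(timedelta(hours=5, minutes=30)))  # IST = UTC+05:30

# Same instant, two renderings; the format string controls the output,
# the tzinfo controls the wall-clock values.
utc_str = utc.strftime("%d-%m-%Y %H:%M:%S")
ist_str = ist.strftime("%d-%m-%Y %H:%M:%S")
print(utc_str)  # 06-06-2014 18:40:56
print(ist_str)  # 07-06-2014 00:10:56
```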
How to get standard format date in android
[ "", "android", "sql", "date", "android-date", "" ]
This question may be really silly, but I haven't found an answer for it yet. I'm making a program in C# that dynamically writes a script to run on SQL Server. I declared two variables that receive the values returned from two *exec 'procedure\_name'* calls. In the next block of the script, I want these variables to be set to zero. How do I do this using a **SET**? Would it be something like this: **SET @a, @b = 0?**
You can do it via SELECT: ``` SELECT @a = 0, @b = 0 ``` With SET you need 2 SET commands: ``` SET @a = 0; SET @b = 0 ```
Method 1 ``` set @a = 0 set @b = 0 ``` Method 2 ``` Select @a = 0, @b = 0 ```
How to set two local variables with the same value in sql server?
[ "", "sql", "sql-server", "variables", "local", "" ]
I have two tables. Department table: ``` //Department D# DNAME ------------------- 1 SALES 2 ACCOUNTING 3 GAMES 5 SPORTS ``` Project table: ``` //Project P# D# ----------- 1001 1 1002 3 1003 5 1004 5 ``` The output should be something like: ``` Department Total Project --------------------------- 1 1 2 0 3 1 5 2 ``` Currently my statement is ``` SELECT D# FROM DEPARTMENT WHERE (SELECT COUNT(*) FROM PROJECT WHERE DEPARTMENT.D# = PROJECT.D#); ``` but how do I display 0 if there is no project for that D#?
``` SELECT d.D#, COUNT(p.P#) AS TotalProject FROM Department d LEFT JOIN Project p ON p.D# = d.D# GROUP BY d.D#; ```
Try This: ``` SELECT D# , CASE WHEN A.COUNT > 0 THEN D# ELSE 0 END AS TOTAL_PROJECT FROM DEPARTMENT JOIN (SELECT D# , COUNT(*) FROM PROJECT GROUP BY PROJECT.D#) A ON DEPARTMENT.D# = A.D#; ```
SELECT using GROUP BY and display the total with two tables
[ "", "mysql", "sql", "oracle", "" ]
I am using the following line within a Select which returns a number with decimals, e.g. 33.33333. How can I round this within the Select and convert to integers so that I don't have decimals, e.g. in the above example it should return 33 ? ``` 100 * AVG(CASE WHEN col2 = col3 THEN 1.0 ELSE 0.0 END) AS matchPercent ```
You can use the `ROUND` function to round the value to an integer: ``` ROUND(100 * AVG(CASE WHEN col2 = col3 THEN 1.0 ELSE 0.0 END), 0) AS matchPercent ``` This will retain the type, e.g. a rounded `float` will stay a `float`. If you also need to return the `int` data type (or another integer data type), you need to also convert it: ``` CONVERT(INT, ROUND(100 * AVG(CASE WHEN col2 = col3 THEN 1.0 ELSE 0.0 END), 0)) AS matchPercent ```
Use the `round` function to round the number: ``` ROUND(100 * AVG(CASE WHEN col2 = col3 THEN 1.0 ELSE 0.0 END), 0) AS matchPercent ```
SQL Server: round decimal number and convert to int (within Select)
[ "", "sql", "sql-server", "select", "int", "rounding", "" ]
I have a table of employee salaries named **empSalary**. ![empSalary](https://i.stack.imgur.com/mOFVj.png) I want to calculate the sum of salaries issued by each department. What comes to my mind is ``` Select sum(Salary) from empSalary where deptId = (Select deptId from empSalary) ``` This statement gives me **5100**, which is the sum of Salary where **deptId = 1**. How is this possible using only a SQL query? Sorry for the question title, I was unable to find the words.
You want to `GROUP BY` the Department: ``` SELECT Dept_ID ,sum(Salary) Total_Salary FROM empSalary GROUP BY Dept_ID ```
This is exactly what GROUP BY is for: you make groups, and for each group you can calculate sum, average, min, max or count ``` select DeptID, Sum(Salary) as SumSalary From empSalary Group by DeptID ```
Sum of a field with respect to another field
[ "", "sql", "group-by", "" ]
I have two identical tables that need to be updated on a daily basis. Both tables have the exact same structure, with 100+ columns. So, is there a way in SQL to update all the columns of TableA based on TableB without writing a (Set columnname = '') update statement for every single column? Both tables have identical column names, so I thought there might be a smarter way to do it.
It is a bit hacky but it can be done. I have created two identical tables with each 2 records. (Table\_1 and Table\_2) **Table\_1** ![Table_1](https://i.stack.imgur.com/qVgaj.png) **Table\_2** ![Table_2](https://i.stack.imgur.com/ojgBl.png) SQL: ``` DECLARE @LeftTable VARCHAR(MAX) = 'Table_1'; DECLARE @RightTable VARCHAR(MAX) = 'Table_2'; DECLARE @UpdateStatement VARCHAR(MAX); SELECT @UpdateStatement = COALESCE(@UpdateStatement + ', ', '') + 't2.' + COLUMN_NAME + ' = t1.' + COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @LeftTable AND COLUMN_NAME <> 'ID' SET @UpdateStatement = 'UPDATE t2 SET ' + @UpdateStatement + ' ' + 'FROM ' + @RightTable + ' t2 JOIN ' + @LeftTable + ' ' + 't1 ON t2.ID = t1.ID'; EXEC(@UpdateStatement) -- The actual executed query is: -- UPDATE t2 SET t2.Description = t1.Description, -- t2.Extra = t1.Extra -- FROM Table_2 t2 JOIN Table_1 t1 ON t2.ID = t1.ID ``` **Result:** **Table\_1** ![Table_1](https://i.stack.imgur.com/qVgaj.png) **Table\_2** ![Table_2](https://i.stack.imgur.com/qVgaj.png) **Edit** A bit more complex, but this excludes primary key columns: ``` DECLARE @LeftTable VARCHAR(MAX) = 'Table_1'; DECLARE @RightTable VARCHAR(MAX) = 'Table_2'; DECLARE @UpdateStatement VARCHAR(MAX); WITH ColumnNames AS ( SELECT c.name AS COLUMN_NAME from sys.tables t JOIN sys.columns c on t.object_id = c.object_id LEFT JOIN ( SELECT ic.object_id ,ic.column_id ,idx.name AS index_name FROM sys.indexes idx JOIN sys.index_columns ic on idx.index_id = ic.index_id AND idx.object_id = ic.object_id AND idx.is_primary_key = 1 ) idx ON t.object_id = idx.object_id AND c.column_id = idx.column_id WHERE t.name = @LeftTable AND idx.index_name IS NULL ) SELECT @UpdateStatement = COALESCE(@UpdateStatement + ', ', '') + 't2.' + COLUMN_NAME + ' = t1.' 
+ COLUMN_NAME + CHAR(10) FROM ColumnNames SET @UpdateStatement = 'UPDATE t2 SET ' + @UpdateStatement + CHAR(10) + 'FROM ' + @RightTable + ' t2 JOIN ' + @LeftTable + CHAR(10) + 't1 ON t2.ID = t1.ID'; EXEC(@UpdateStatement) ```
The available ways of doing this are the `UPDATE` statement and the `MERGE` statement. Both of them require specifying the column names to update in the `SET` clause. If you want, you can write dynamic SQL that reads the schema of the tables and produces a query with all the column names. This way you wouldn't have to write all the column names manually.
SQL Update Two Identical Tables
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
Consider a table sorted by id. How can I count the number of changes of the values in column 'value'? In the following example, the number of changes is 3 (10 to 20, 20 to 10, 10 to 30). Thanks. ``` id value 1 10 2 10 3 20 4 20 5 10 6 30 7 30 ```
if ids are sequential with no gaps... ``` Select count(*) From table t1 join table t2 on t2.id = t1.id + 1 where t2.value <> t1.value ``` else... ``` Select count(*) From table t1 join table t2 on t2.id = (Select min(id) From table where id > t1.id) where t2.value <> t1.value ```
You can identify the changes by using a correlated subquery. Then add them up. A change occurs when the value is different from the previous value: ``` select count(*) from (select t.*, (select value from table t2 where t2.id < t.id order by t2.id desc limit 1 ) as prev_value from table t ) t where prev_value <> value; ``` Note that due to `prev_value` being `NULL` for the first row, this ignores that one. If you can guarantee that the `id`s are sequential with no gaps, you can do this more efficiently with a join: ``` select count(*) from table t join table tprev on t.id = tprev.id + 1 where t.value <> tprev.value; ```
Counting value changes in a table column
[ "", "sql", "sqlite", "" ]
How do you access another database on the same server? I got the server name using SELECT @@SERVERNAME, and then did servername.dbo.mydatabasename.mytablename, but Query Analyzer says Invalid object name. Any ideas? I am doing ``` insert into Myservername.Mydatabasename.Mytablename (Email,Username1) Values ('h','h') ``` using MS SQL Server 2008, on the same server.
Assuming you're using MS SQL Server, fully qualified references are in the form: ``` [servername].[databasename].[schema].[object] ``` On the same server you do not need the `[servername]` reference. In your case, you reversed the `databasename` and `schema`. It should be: `servername.mydatabasename.dbo.mytablename` Your INSERT should look like: ``` insert into Mydatabasename.Schema.Mytablename (Email,Username1) Values ('h','h') ``` (probably your Schema here is dbo) You would include the `[servername]` component when performing an operation across a linked server, in which case `[servername]` would be the name of the linked server, which incidentally may not actually be the same as the hostname/instance name of the remote server.
Is it not rather `mydatabasename.DBO.mytablename` ? And if your database is on the same server, you normally don't have to use the `servername`.
Access another database from same server?
[ "", "sql", "sql-server", "" ]
How can I get all the dates between two dates? I have a variable `@MAXDATE` which stores the maximum date from the table. Now I want to get all the dates between `@Maxdate` and `GETDATE()` and store these dates in a cursor. So far I have done the following: ``` ;with GetDates As ( select DATEADD(day,1,@maxDate) as TheDate UNION ALL select DATEADD(day,1, TheDate) from GetDates where TheDate < GETDATE() ) ``` This works perfectly, but when I try to store these values in a cursor ``` SET @DateCurSor = CURSOR FOR SELECT TheDate FROM GetDates ``` Compilation Error > Incorrect syntax near the keyword 'SET'. How do I solve this?
My first suggestion would be use your [calendar table](http://web.archive.org/web/20170820091319/http://blog.jontav.com/post/9380766884/calendar-tables-are-incredibly-useful-in-sql), if you don't have one, then create one. They are very useful. Your query is then as simple as: ``` DECLARE @MinDate DATE = '20140101', @MaxDate DATE = '20140106'; SELECT Date FROM dbo.Calendar WHERE Date >= @MinDate AND Date < @MaxDate; ``` If you don't want to, or can't create a calendar table you can still do this on the fly without a recursive CTE: ``` DECLARE @MinDate DATE = '20140101', @MaxDate DATE = '20140106'; SELECT TOP (DATEDIFF(DAY, @MinDate, @MaxDate) + 1) Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.object_id) - 1, @MinDate) FROM sys.all_objects a CROSS JOIN sys.all_objects b; ``` For further reading on this see: * [Generate a set or sequence without loops – part 1](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1) * [Generate a set or sequence without loops – part 2](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2) * [Generate a set or sequence without loops – part 3](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-3) With regard to then using this sequence of dates in a cursor, I would really recommend you find another way. There is usually a set based alternative that will perform much better. So with your data: ``` date | it_cd | qty 24-04-14 | i-1 | 10 26-04-14 | i-1 | 20 ``` To get the quantity on 28-04-2014 (which I gather is your requirement), you don't actually need any of the above, you can simply use: ``` SELECT TOP 1 date, it_cd, qty FROM T WHERE it_cd = 'i-1' AND Date <= '20140428' ORDER BY Date DESC; ``` If you don't want it for a particular item: ``` SELECT date, it_cd, qty FROM ( SELECT date, it_cd, qty, RowNumber = ROW_NUMBER() OVER(PARTITION BY ic_id ORDER BY date DESC) FROM T WHERE Date <= '20140428' ) T WHERE RowNumber = 1; ```
You can use this script to find dates between two dates. [Reference taken from this Article:](http://www.dbrnd.com/2015/08/list-all-dates-between-two-dates-in-sql-server/) ``` DECLARE @StartDateTime DATETIME DECLARE @EndDateTime DATETIME SET @StartDateTime = '2015-01-01' SET @EndDateTime = '2015-01-12'; WITH DateRange(DateData) AS ( SELECT @StartDateTime as Date UNION ALL SELECT DATEADD(d,1,DateData) FROM DateRange WHERE DateData < @EndDateTime ) SELECT DateData FROM DateRange OPTION (MAXRECURSION 0) GO ```
Get all dates between two dates in SQL Server
[ "", "sql", "sql-server", "" ]
1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''comments' ( 'id' int(10) unsigned NOT NULL AUTO\_INCREMENT, 'article\_i' at line 1 ``` CREATE TABLE IF NOT EXISTS 'comments' ( 'id' int(10) unsigned NOT NULL AUTO_INCREMENT, 'article_id' int(10) NOT NULL, 'comment' varchar(45) NOT NULL, 'time' datetime NOT NULL, 'name' varchar(45) NOT NULL, 'email' varchar(45) NOT NULL, PRIMARY KEY ('id'), KEY 'fk_comments_article'('article_id') ); ``` Does anyone see the syntax error?
Remove the single quotes. Try this: ``` CREATE TABLE IF NOT EXISTS comments ( id int(10) unsigned NOT NULL AUTO_INCREMENT, article_id int(10) NOT NULL, comment varchar(45) NOT NULL, time datetime NOT NULL, name varchar(45) NOT NULL, email varchar(45) NOT NULL, PRIMARY KEY (id), KEY fk_comments_article (article_id) ); ``` or try with back ticks: ``` CREATE TABLE IF NOT EXISTS `comments` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `article_id` int(10) NOT NULL, `comment` varchar(45) NOT NULL, `time` datetime NOT NULL, `name` varchar(45) NOT NULL, `email` varchar(45) NOT NULL, PRIMARY KEY (`id`), KEY `fk_comments_article` (`article_id`) ); ``` You are getting this error because when you write `'id'`, it is treated as a string, not the column you intend.
Use backticks instead of single quotes. Single quotes are used for string literals. ``` CREATE TABLE IF NOT EXISTS `comments` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `article_id` int(10) NOT NULL, `comment` varchar(45) NOT NULL, `time` datetime NOT NULL, `name` varchar(45) NOT NULL, `email` varchar(45) NOT NULL, PRIMARY KEY (`id`), KEY fk_comments_article(`article_id`) ); ``` Backticks **`** are used to enable identifiers to be used as a column name / table name if they happen to be keywords in MySQL. It is the recommended way, as it is highly unlikely that we know all the keywords beforehand, and we may end up using one of the keywords as the name for a column/table, as you have done for the column **`time`** in your `CREATE` statement. But you should avoid using keywords known to you as identifiers.
SQL syntax error, Can't find it?
[ "", "mysql", "sql", "ddl", "" ]
I have the following SQL table that keeps track of a user's score at a particular timepoint. A user can have multiple scores per day. ``` +-------+------------+-------+-----+ | user | date | score | ... | +-------+------------+-------+-----+ | bob | 2014-04-19 | 100 | ... | | mary | 2014-04-19 | 100 | ... | | alice | 2014-04-20 | 100 | ... | | bob | 2014-04-20 | 110 | ... | | bob | 2014-04-20 | 125 | ... | | mary | 2014-04-20 | 105 | ... | | bob | 2014-04-21 | 115 | ... | +-------+------------+-------+-----+ ``` Given a particular user (let's say `bob`), **How would I generate a report of each user's score, but only use the highest submitted score per day?** (Getting the specific row with the highest score is important as well, not just the highest score) ``` SELECT * FROM `user_score` WHERE `user` = 'bob' GROUP BY `date` ``` is the base query that I'm building off of. It results in the following result set: ``` +-------+------------+-------+-----+ | user | date | score | ... | +-------+------------+-------+-----+ | bob | 2014-04-19 | 100 | ... | | bob | 2014-04-20 | 110 | ... | | bob | 2014-04-21 | 115 | ... | +-------+------------+-------+-----+ ``` `bob`'s higher score of `125` from `2014-04-20` is missing. I tried rectifying that with `MAX(score)` ``` SELECT *, MAX(score) FROM `user_score` WHERE `user` = 'bob' GROUP BY `date` ``` returns the highest score for the day, but not the row that has the highest score. Other column values on that row are important, ``` +-------+------------+-------+-----+------------+ | user | date | score | ... | max(score) | +-------+------------+-------+-----+------------+ | bob | 2014-04-19 | 100 | ... | 100 | | bob | 2014-04-20 | 110 | ... | 125 | | bob | 2014-04-21 | 115 | ... | 110 | +-------+------------+-------+-----+------------+ ``` Lastly, I tried ``` SELECT *, MAX(score) FROM `user_score` WHERE `user` = 'bob' AND score = MAX(score) GROUP BY `date` ``` But that results in an invalid use of `GROUP BY`. 
* [Selecting a row with specific value from a group?](https://stackoverflow.com/questions/16265054) is on the right track with what I am trying to accomplish, but I don't know the specific score to filter by. EDIT: SQLFiddle: <http://sqlfiddle.com/#!2/ee6a2>
If you want all the fields, the easiest (and fastest) way in MySQL is to use `not exists`: ``` SELECT * FROM `user_score` us WHERE `user` = 'bob' AND NOT EXISTS (SELECT 1 FROM user_score us2 WHERE us2.`user` = us.`user` AND us2.date = us.date AND us2.score > us.score ); ``` This may seem like a strange approach. And, I'll admit that it is. What it is doing is pretty simple: "Get me all rows for Bob from `user_score` where there is no higher score (for Bob)". That is equivalent to getting the row with the maximum score. With an index on `user_score(name, score)`, this is probably the most efficient way to do what you want.
You can use a `JOIN`: ``` SELECT a.* FROM `user_score` as a INNER JOIN (SELECT `user`, `date`, MAX(score) MaxScore FROM `user_score` GROUP BY `user`, `date`) as b ON a.`user` = b.`user` AND a.`date` = b.`date` AND a.score = b.MaxScore WHERE a.`user` = 'bob' ```
SELECT specific rows when using GROUP BY
[ "", "mysql", "sql", "" ]
I'm able to unpivot a table, but null values are not included in the result. ``` create table pivot_task ( age int null, [a] numeric(8,2), [b] numeric(8,2), [c] numeric(8,2), [d] numeric(8,2), [e] numeric(8,2) ); select * from pivot_task; insert into pivot_task values (18, 0.5, null, 0.6, 1.21, 1.52), (19, 7.51, 6.51, 5.51, null, 3.53), (20, 4.52, 4.52, 6.52, 3.53, null); select age, [over], [av] from pivot_task unpivot ( [av] for [over] in ([a], [b], [c], [d], [e]) ) a; ``` You can see the result on <http://sqlfiddle.com/#!6/2ab59/1>: for age 18, [over] b and its null value are missing. I want to include null for every null encountered. I found that the approach of replacing null with a different constant value and then replacing those constants back is not feasible for my work. I want to handle this within the unpivot only.
This is ugly but doesn't rely on having to find an out-of-band replacement for `NULL`: ``` declare @pivot_task table ( age int null, [a] numeric(8,2), [b] numeric(8,2), [c] numeric(8,2), [d] numeric(8,2), [e] numeric(8,2) ); insert into @pivot_task values (18, 0.5, null, 0.6, 1.21, 1.52), (19, 7.51, 6.51, 5.51, null, 3.53), (20, 4.52, 4.52, 6.52, 3.53, null); select a.age, pmu.[over], [av] from (select 'a' as [over] union all select 'b' union all select 'c' union all select 'd' union all select 'e') pmu cross join (select age from @pivot_task) as a left join @pivot_task pt unpivot ( [av] for [over] in ([a], [b], [c], [d], [e]) ) ex on pmu.[over] = ex.[over] and a.age = ex.age ``` Result: ``` age over av ----------- ---- --------------------------------------- 18 a 0.50 18 b NULL 18 c 0.60 18 d 1.21 18 e 1.52 19 a 7.51 19 b 6.51 19 c 5.51 19 d NULL 19 e 3.53 20 a 4.52 20 b 4.52 20 c 6.52 20 d 3.53 20 e NULL ``` But if you're going down this route, you can eliminate the `UNPIVOT` entirely: ``` select a.age, pmu.[over], CASE pmu.[over] WHEN 'a' THEN a.a WHEN 'b' THEN a.b WHEN 'c' THEN a.c WHEN 'd' THEN a.d WHEN 'e' THEN a.e END [av] from (select 'a' as [over] union all select 'b' union all select 'c' union all select 'd' union all select 'e') pmu cross join @pivot_task as a ```
Try this: it replaces all the null values with 10000000 before the unpivot. That is not a representable number in numeric(8,2), so the value cannot already exist in the data. The value is then replaced by null after the unpivot: ``` ;WITH x as ( select age, coalesce(cast(a as numeric(9,2)), 10000000) a, coalesce(cast(b as numeric(9,2)), 10000000) b, coalesce(cast(c as numeric(9,2)), 10000000) c, coalesce(cast(d as numeric(9,2)), 10000000) d, coalesce(cast(e as numeric(9,2)), 10000000) e from pivot_task ) select age, [over], nullif([av], 10000000) av from x unpivot ( [av] for [over] in ([a], [b], [c], [d], [e]) ) a; ```
Handle NULL value in UNPIVOT
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2012", "unpivot", "" ]
I would say my understanding of SQL is average, but I am wondering if there is any way to more efficiently write this statement? The primary key of table `Company` is `uniqentity` and has the company name `nameof` in that row. In the table `Line`, the primary key is `uniqline` and I have columns `entityCompanyBilling` and `entityCompanyIssuing` (both foreign keys to `uniqentity`). This code below works just fine, I am just trying to make it more efficient. Is it possible? ``` SELECT l.uniqline, b.nameof AS billingcompany, l.UniqEntityCompanyBilling, i.nameof AS issuingcompany, l.UniqEntityCompanyIssuing FROM Line l INNER JOIN Company b ON b.uniqentity = l.uniqentitycompanybilling INNER JOIN Company i ON i.uniqentity = l.uniqentitycompanyissuing ```
Changing the structure of the *query* may not make it more efficient (it seems to be as simple as it can be). But changing the structure of the *data* might: * Add indexes (if not already) on `UniqEntityCompanyBilling` and `UniqEntityCompanyIssuing` * ~~Add index on `Company.UniqEntity`~~ (didn't catch that the column is the PK) * If the index on `Company.UniqEntity` isn't clustered, include the `nameof` field in the index on `Company.UniqEntity` so the query doesn't have to go back to the physical table.
This query is perfectly fine as it is. You could make it shorter like below, but this will decrease readability and is generally considered bad practice, so don't. ``` SELECT l.uniqline, b.nameof AS billingcompany, l.UniqEntityCompanyBilling, i.nameof AS issuingcompany, l.UniqEntityCompanyIssuing FROM Line l, Company b, Company i WHERE b.uniqentity = l.uniqentitycompanybilling AND i.uniqentity = l.uniqentitycompanyissuing ```
SQL Combining two joins
[ "", "sql", "join", "" ]
This is how my table looks like: ``` SELECT table2.status, table1.title FROM table1 RIGHT JOIN table2 ON table1.id = table2.id ``` ``` status title EXPIRED USER A EXPIRED USER B ACTIVE USER B EXPIRED USER C ``` And I want to get the titles from the users which status is not ACTIVE, I tried: ``` SELECT table2.status, table1.title FROM table1 RIGHT JOIN table2 ON table1.id = table2.id WHERE table2.status NOT LIKE 'ACTIVE' ``` ``` status title EXPIRED USER A EXPIRED USER B EXPIRED USER C ``` The problem is that User B has a status ACTIVE and EXPIRED and I want a table that shows me everyone who has NO ACTIVE status in the table: ``` status title EXPIRED USER A EXPIRED USER C ```
Here is a statement that gives you one row per non-active user: ``` SELECT CASE WHEN MAX( case when table2.status = 'EXPIRED' then 1 else 0 end ) = 1 THEN 'EXPIRED' WHEN MAX( case when table2.status = 'CANCELLED' then 1 else 0 end ) = 1 THEN 'CANCELLED' WHEN MAX( case when table2.status = 'NEW' then 1 else 0 end ) = 1 THEN 'NEW' ELSE 'NOT ACTIVE' END as status, table1.title FROM table1 INNER JOIN table2 ON table1.id = table2.id GROUP BY table1.title HAVING MAX( case when table2.status = 'ACTIVE' then 1 else 0 end ) = 0; ``` The trick here is to get all rows, group by user, and create a flag indicating whether an ACTIVE status exists for the user. If so, the HAVING clause removes the user from the list. Then we decide which status to show. Again we use aggregated flags to find out if our preferred status 'EXPIRED' exists, otherwise 'CANCELLED', etc. BTW: I removed the right outer join, because it doesn't make sense to me. If it really happens that there are users without an entry in table1, then outer join, but put table2.id in the result columns and group by it instead of by table1.title.
It would help if you showed your table structure. But here is one way to approach this: ``` select t1.* from table1 t1 where not exists (select 1 from table2 t2 where t2.id = t1.id and t2.status = 'ACTiVE' ); ```
MySQL select row with NOT LIKE
[ "", "mysql", "sql", "" ]
``` | user_id | name | =================== | 1 | marc | | 2 | paul | | 3 | glen | | 4 | kyle | | 5 | jayc | | 6 | ken | | 7 | raff | ``` I have a sample table shown above. I want to retrieve the rows for user ids 1, 2 and 5 only, in a single query. Can you guys tell me what the query is?
Try this [**IN**](http://technet.microsoft.com/en-us/library/ms177682.aspx) ``` SELECT * FROM tablename WHERE user_id IN (1,2,5); ``` # [**Working Fiddle**](http://www.sqlfiddle.com/#!2/06bef/1) ![enter image description here](https://i.stack.imgur.com/VyyCr.png)
``` Select * from users where user_id in (1,2,5) ```
Get multiple rows in a single query
[ "", "mysql", "sql", "" ]
I have a query as below. It is on a single table but has too many sub-queries. Could anybody help optimize it? ``` SELECT t.order_no , user_assign , t.busi_code , t.inst_addr4 AS district, t.inst_addr3 AS estate ,listagg(to_char(date_appoint,'yyyy-mm-dd'),',') within group(order by date_appoint asc) as "日期排序" FROM mtce_detail t WHERE order_no IN (SELECT order_no FROM (SELECT m.order_no, user_assign , COUNT(order_no) AS total FROM mtce_detail m WHERE order_no IN (SELECT order_no FROM mtce_detail WHERE date_appoint = to_date('2014-04-17', 'yyyy-mm-dd') ) AND date_appoint <= to_date('2014-04-17', 'yyyy-mm-dd') GROUP BY order_no, user_assign ) WHERE total > 2 ) group by t.order_no,t.user_assign,t.busi_code,t.inst_addr4,t.inst_addr3 ```
The first step you can take, is to remove the subquery with the "total > 2", and replace that with the more elegant HAVING clause, like this: ``` SELECT t.order_no , user_assign , t.busi_code , t.inst_addr4 AS district , t.inst_addr3 AS estate , listagg(to_char(date_appoint,'yyyy-mm-dd'),',') within group(order by date_appoint asc) as "something_chinese" FROM mtce_detail t WHERE order_no IN ( SELECT m.order_no FROM mtce_detail m WHERE order_no IN ( SELECT order_no FROM mtce_detail WHERE date_appoint = to_date('2014-04-17', 'yyyy-mm-dd') ) AND date_appoint <= to_date('2014-04-17', 'yyyy-mm-dd') GROUP BY order_no , user_assign having count(order_no) > 2 ) group by t.order_no , t.user_assign , t.busi_code , t.inst_addr4 , t.inst_addr3 ``` The next step is to eliminate a table access by removing the innermost subquery. Just select all mtce\_details on or before 17th April 2014 and count the number of occurrences on that day and before, in one shot. Now you can use those calculated number in the having clause, like this: ``` SELECT t.order_no , user_assign , t.busi_code , t.inst_addr4 AS district , t.inst_addr3 AS estate , listagg(to_char(date_appoint,'yyyy-mm-dd'),',') within group (order by date_appoint asc) as "something_chinese" FROM mtce_detail t WHERE order_no IN ( SELECT m.order_no FROM mtce_detail m WHERE date_appoint <= date '2014-04-17' GROUP BY order_no , user_assign having count(order_no) > 2 and count(case date_appoint when date '2014-04-17' then 1 end) >= 1 ) group by t.order_no , t.user_assign , t.busi_code , t.inst_addr4 , t.inst_addr3 ``` Performance should have improved slightly, but your query still contains two table accesses of the same table. If you need the last table access out, you can use analytic functions with the partition clause. Your query will be a bit harder to read, but faster. This is left as a fun exercise for the reader :-). (And because I don't have testdata)
Replace `WHERE t.order_no IN` with ``` JOIN (subselect) sub ON t.order_no = sub.order_no ``` Replace the `AND date_appoint <= to_date('2014-04-17', 'yyyy-mm-dd')` calls with `AND date_appoint <= :date_parameter` to avoid the to_date() conversion for each row. Add indexes for the columns used in `WHERE`.
How to optimize this sql in oracle?
[ "", "sql", "oracle", "" ]
I have a problem regarding a select query. I'm using a CTE to execute a query, and my question is how to apply a condition in the CTE before the `Where` clause. Table `Temp`: ``` Id | Title -------------- 1 | ABCD 2 | ABCD 3 | AB 4 | CD 5 | DA Declare @Count int,@search nvarchar(50) Select @search ='AB' set @Count =1 WITH TempResult as ( Select * from Temp ) SELECT * from TempResult ``` Now I want: when @Count = 1, the records should be searched on the basis of the @search variable, otherwise not. Can anyone help me with this? But remember, I want the conditional statement within the CTE, not outside, meaning no repetition of the CTE.
I think this is what you are after: ``` WITH TempResult AS ( SELECT * FROM Temp WHERE (@Count<>1 OR Title LIKE '%'+@Search+'%') ) SELECT * FROM TempResult ``` **Explanation:** The records will be filtered with `Title LIKE '%'+@Search+'%'` **only** when @Count = 1. To be more precise: When `@Count` = 1, it will evaluate the second part of the `WHERE` clause. When `@Count` != 1, it will not evaluate the second part, since the `WHERE` clause already returns `true`.
Looks like ``` WITH TempResult as ( Select * from Temp WHERE @count <> 1 OR Title = @Search ) SELECT * from TempResult ```
How to apply condition before where clause within CTE
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
In Oracle, to retrieve the SQL used to create a Function, Package, etc, the user\_source view can be queried. However, views are not included in this view - nor do they exist in the underlying `sys.source$`. To access the text of views, the `user_views.text` column can be used, but this is not exact because Oracle will re-write some parts of the query, for example it will do glob expansion. How can I retrieve the SQL used to create a view, exactly as it was entered, without glob expansion?
I think the original text is lost: ``` create table t1(id number) / create view t1_vw as select * from t1 / alter table t1 add val varchar2(20) / alter view t1_vw compile / select * from t1_vw / ``` will return only the id column. Interestingly, for materialized views the original text is preserved.
You can use the following query: ``` SELECT VIEW_NAME, TEXT FROM USER_VIEWS; ``` or you can use ALL\_VIEWS, as in ``` SELECT VIEW_NAME, TEXT FROM ALL_VIEWS; ``` **References**: [ALL\_VIEWS on Oracle® Database Reference](http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_2117.htm#i1593583)
How to retrieve the SQL used to create a view in Oracle?
[ "", "sql", "oracle", "" ]
I have tables a, b, c, and d whereby: ``` There are 0 or more b rows for each a row There are 0 or more c rows for each a row There are 0 or more d rows for each a row ``` If I try a query like the following: ``` SELECT a.id, SUM(b.debit), SUM(c.credit), SUM(d.other) FROM a LEFT JOIN b on a.id = b.a_id LEFT JOIN c on a.id = c.a_id LEFT JOIN d on a.id = d.a_id GROUP BY a.id ``` I notice that I have created a cartesian product and therefore my sums are incorrect (much too large). I see that there are other SO questions and answers, however I'm still not grasping how I can accomplish what I want to do in a single query. Is it possible in SQL to write a query which aggregates all of the following data: ``` SELECT a.id, SUM(b.debit) FROM a LEFT JOIN b on a.id = b.a_id GROUP BY a.id SELECT a.id, SUM(c.credit) FROM a LEFT JOIN c on a.id = c.a_id GROUP BY a.id SELECT a.id, SUM(d.other) FROM a LEFT JOIN d on a.id = d.a_id GROUP BY a.id ``` in a single query?
``` SELECT a.ID, debit, credit, other FROM a LEFT JOIN (SELECT a_id, SUM(b.debit) as debit FROM b GROUP BY a_id) b ON a.ID = b.a_id LEFT JOIN (SELECT a_id, SUM(c.credit) as credit FROM c GROUP BY a_id) c ON a.ID = c.a_id LEFT JOIN (SELECT a_id, SUM(d.other) as other FROM d GROUP BY a_id) d ON a.ID = d.a_id ```
Your analysis is correct. Unrelated `JOIN`s create cartesian products. You have to do the sums separately and then do a final addition. This is doable in one query and you have several options for that: * Subqueries in your SELECT: `SELECT a.id, (SELECT SUM(b.debit) FROM b WHERE b.a_id = a.id) + ...` * `CROSS APPLY` with a similar query as the first bullet, then `SELECT a.id, b_sum + c_sum + d_sum` * `UNION ALL` as you suggested, with an outer `SUM` and `GROUP BY` on top of that. * `LEFT JOIN` to similar subqueries as above. And probably more... The performance of the various solutions might differ slightly depending on how many rows in `A` you want to select.
SQL summations with multiple outer joins
[ "", "sql", "" ]
I am trying to show only the highest-selling product per work center; however, it keeps telling me that LIMIT 1 is incorrect syntax. I can't seem to find an answer on here that has helped, so I decided to ask the question. Here's my query. ``` SELECT WorkCenter.WorkCenterCode, Product.Name, SUM(CustomerOrderLine.Quantity*CustomerOrderLine.ActualPrice) AS 'Total Sales' FROM WorkCenter INNER JOIN Product ON WorkCenter.WorkCenterCode = Product.WorkCenterCode INNER JOIN CustomerOrderLine ON Product.ProductID = CustomerOrderLine.ProductID GROUP BY WorkCenter.WorkCenterCode, Product.Name ORDER BY 'Total Sales' DESC LIMIT 1 ```
Here is your query cleaned up a bit: ``` SELECT wc.WorkCenterCode, p.Name, SUM(col.Quantity*col.ActualPrice) AS "Total Sales" FROM WorkCenter wc INNER JOIN Product p ON wc.WorkCenterCode = p.WorkCenterCode INNER JOIN CustomerOrderLine col ON p.ProductID = col.ProductID GROUP BY wc.WorkCenterCode, p.Name ORDER BY "Total Sales" DESC LIMIT 1 ``` Note the important change from single quotes to double quotes. This is especially important for the `order by` clause so the clause actually does something, rather than sorting by a constant. The addition of table aliases makes the query easier to read. If you are using SQL Server, you should use `top` rather than `limit`: ``` SELECT TOP 1 wc.WorkCenterCode, p.Name, SUM(col.Quantity*col.ActualPrice) AS "Total Sales" FROM WorkCenter wc INNER JOIN Product p ON wc.WorkCenterCode = p.WorkCenterCode INNER JOIN CustomerOrderLine col ON p.ProductID = col.ProductID GROUP BY wc.WorkCenterCode, p.Name ORDER BY "Total Sales" DESC; ``` EDIT: For one row per work center, use this as a subquery with `row_number()`: ``` SELECT WorkCenterCode, Name, "Total Sales" FROM (SELECT wc.WorkCenterCode, p.Name, SUM(col.Quantity*col.ActualPrice) AS "Total Sales", row_number() over (partition by wc.WorkCenterCode order by SUM(col.Quantity*col.ActualPrice) desc) as seqnum FROM WorkCenter wc INNER JOIN Product p ON wc.WorkCenterCode = p.WorkCenterCode INNER JOIN CustomerOrderLine col ON p.ProductID = col.ProductID GROUP BY wc.WorkCenterCode, p.Name ) t WHERE seqnum = 1 ORDER BY "Total Sales" DESC; ```
For SQL Server use `SELECT TOP 1` ``` SELECT TOP 1 WorkCenter.WorkCenterCode, Product.Name, SUM(CustomerOrderLine.Quantity*CustomerOrderLine.ActualPrice) AS [Total Sales] FROM WorkCenter INNER JOIN Product ON WorkCenter.WorkCenterCode = Product.WorkCenterCode INNER JOIN CustomerOrderLine ON Product.ProductID = CustomerOrderLine.ProductID GROUP BY WorkCenter.WorkCenterCode, Product.Name ORDER BY [Total Sales] DESC ```
SQL LIMIT 1 DESC order
[ "", "sql", "sql-server", "ssms", "" ]
``` delimiter // CREATE TRIGGER Discount BEFORE INSERT ON ORDER_TABLE FOR EACH ROW BEGIN DECLARE OrderNum INT; DECLARE Membership BOOLEAN; DECLARE Disc DECIMAL(10, 2); DECLARE Cost DECIMAL(10, 2); SELECT OrderCount INTO OrderNum FROM Customer WHERE CustomerID = NEW.CustomerID; SELECT Member INTO Membership FROM Customer WHERE CustomerID = NEW.CustomerID; SELECT Discount INTO Disc FROM Order_Table WHERE OrderID = NEW.OrderID; SELECT Price INTO Cost FROM Order_Table WHERE OrderID = NEW.OrderID; IF(Membership = TRUE) THEN IF(MOD(OrderCount, 10) = 0) THEN SET NEW.Discount = Cost/2; END IF; END IF; END// delimiter; ``` Above is a trigger for the database I am writing for a dry cleaning store. The trigger is supposed to make it so that every customer that has a membership at the dry cleaning store receives a discount with every 10th order they make. However, when we enter the following data ``` insert into order_table values(0, 1, curdate(),20140426, null , 100, 10, 0, 110); ``` we get error 1054: Unknown column 'OrderCount' in 'field list', even though OrderCount exists in the Customer table.
First of all, the error message is coming from using OrderCount in the if block. You should be using the variable you declared and read this value into, OrderNum. The other problem you might have, which I can't tell from this design, is when the Order Number is being updated. It might be best to add this to the trigger, so that the count of orders in the Customer tables is increased by one with the trigger, which can then be checked. I just added this line to the trigger (after the variable declarations): ``` UPDATE `customer` SET `OrderCount`=OrderCount+1 WHERE `CustomerID`=NEW.CustomerID; ```
I work with Oracle and it's quite different, but reading your trigger I suggest you rewrite the 4 queries to 2; this way you can avoid reading each table twice. ``` SELECT OrderCount, Member INTO OrderNum, Membership FROM Customer WHERE CustomerID = NEW.CustomerID; SELECT Discount, Price INTO Disc, Cost FROM Order_Table WHERE OrderID = NEW.OrderID; ```
SQL Trigger Causing Error 1054
[ "", "mysql", "sql", "triggers", "" ]
I have got the following query which counts the number of orders for last week for seven sites. If there is no order for a given site in last week it displays 0 rather than not displaying the site at all, hence the reason I am using the Case statement. All well and good; the challenge is that for a given date, let's say 21/04/2014, if there is more than one order for the site, my query should only consider one record, so for site4 the count value is 4 but it should be 3. The reason it's 4 is because on 21st of this week it got two order entries. Is there any way I could only count 1 per site for a given day if more than one orderdate is found for that date? I was considering using min(orderdate), but SQL gives me errors that I cannot use an aggregate function in a subquery or aggregate function. ``` select SITENAME ,SUM(Case When OrderDate >= dateadd(dd,(datediff(dd,-53690,getdate()-1)/7)*7,-53690) Then 1 Else 0 End) as COMPLETED from TABLE1 where CLIENT in ('SITE1','SITE2','SITE3','SITE4','SITE5','SITE6','SITE7') GROUP BY SITENAME order by SITENAME ``` ``` SITENAME...............COMPLETED SITE1.....................0 SITE2.....................0 SITE3.....................0 SITE4.....................4 *(the count should be 3 if you only consider unique dates for that site) SITE5.....................2 SITE6.....................3 SITE7.....................2 ```
You can get distinct dates in an inner query and then run your query on top: ``` select SITENAME ,SUM(Case When OrderDate >= dateadd(dd,(datediff(dd,-53690,getdate()-1)/7)*7,-53690) Then 1 Else 0 End) as COMPLETED from ( SELECT DISTINCT SITENAME, OrderDate FROM TABLE1 where CLIENT in ('SITE1','SITE2','SITE3','SITE4','SITE5','SITE6','SITE7') ) X GROUP BY SITENAME order by SITENAME ``` Note: the `distinct` in the inner query is what removes duplicate site/date pairs before the outer `GROUP BY`, so each site/day combination is counted only once.
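To see why the `DISTINCT` inner query changes the count, here is a minimal sketch in Python's `sqlite3` (simplified schema with invented rows; the `dateadd` week-window logic from the answer is omitted since it is SQL Server specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (sitename TEXT, orderdate TEXT);
    INSERT INTO orders VALUES
        ('SITE4', '2014-04-21'), ('SITE4', '2014-04-21'),  -- two orders, same day
        ('SITE4', '2014-04-22'), ('SITE4', '2014-04-23');
""")

# Raw count sees 4 orders...
raw = con.execute(
    "SELECT COUNT(*) FROM orders WHERE sitename = 'SITE4'"
).fetchone()[0]

# ...but deduplicating (site, date) pairs first counts each day once.
dedup = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT DISTINCT sitename, orderdate FROM orders
    ) x WHERE sitename = 'SITE4'
""").fetchone()[0]

print(raw, dedup)  # 4 3
```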
A simple and readable way to do that would be using a CTE: ``` ;WITH data AS ( SELECT DISTINCT Sitename, DATEADD(dd,(datediff(dd,-53690,getdate()-1)/7)*7,-53690) Date, ORDERDATE FROM TABLE1 WHERE CLIENT in ('SITE1','SITE2','SITE3','SITE4','SITE5','SITE6','SITE7') ) select SITENAME ,SUM(Case When OrderDate >= Date Then 1 Else 0 End) as COMPLETED from data GROUP BY SITENAME order by SITENAME ```
SQL SERVER Counting one record for a day
[ "", "sql", "sql-server", "" ]
I have a logic-thinking problem. How should I design following database? A manufacturer sells cars. He sells them as they are, or with a package A, or with a package B, or with both packages. I would create a table with the different car models, then a table with the different packages, but I don't know how to create the mapping-table, which links the cars to the different packages and prices? The problem is the packages don't have a price (just the cars and the cars with packages included).
Add an "offer" table. This table will contain the price. Each offer should have at least one car and zero, one, two, many, or all (if there will be more than 2 of them) packages. ![enter image description here](https://i.stack.imgur.com/hiRYu.png)
I would guess that a car would have a manufacturers suggested retail price (MSRP). Each package or option would have its own additional price on top of the MSRP. I'd have a base Car table, a Package table, and another purchased Car table, one to many with a Customer table, where the Customer can associate their base Car with one or more Package instances.
How to design a car-packages-database?
[ "", "sql", "database", "database-design", "" ]
Find the names of all departments located either in BOSTON or in DALLAS, but not in both cities. I have code like this ``` SELECT D.DNAME FROM DEPARTMENT D INNER JOIN DEPTLOC L ON L.DNAME = D.DNAME WHERE L.CITY='BOSTON' OR L.CITY='DALLAS' ; ``` But this will show the departments located in BOSTON OR DALLAS. I just want the ones located in only one of the two cities; what should I put in order to get that result? Example: in my DEPTLOC TABLE ``` //DEPTLOC DNAME CITY ---------------- ACCOUNTING BOSTON ACCOUNTING DALLAS SALES DALLAS TRANSPORT BOSTON TRANSPORT DALLAS ``` So from my DEPARTMENT table I should get output like ``` DNAME ---------- SALES ```
Group them, then calculate the total count for each department, then filter to the departments which have only one location. ``` SELECT D.DNAME FROM DEPARTMENT D INNER JOIN DEPTLOC L ON L.DNAME = D.DNAME WHERE L.CITY='BOSTON' OR L.CITY='DALLAS' GROUP BY D.DNAME HAVING COUNT(1) = 1 ```
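A quick way to sanity-check the `HAVING COUNT(1) = 1` logic is to replay the sample data in an in-memory database. A sketch with Python's `sqlite3`, using the same rows as the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE deptloc (dname TEXT, city TEXT);
    INSERT INTO deptloc VALUES
        ('ACCOUNTING', 'BOSTON'), ('ACCOUNTING', 'DALLAS'),
        ('SALES', 'DALLAS'),
        ('TRANSPORT', 'BOSTON'), ('TRANSPORT', 'DALLAS');
""")

# Departments with exactly one location among the two cities.
rows = con.execute("""
    SELECT dname FROM deptloc
    WHERE city IN ('BOSTON', 'DALLAS')
    GROUP BY dname
    HAVING COUNT(*) = 1
""").fetchall()
print(rows)  # [('SALES',)]
```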
Try this: ``` SELECT D.DNAME FROM DEPARTMENT D INNER JOIN DEPTLOC L ON L.DNAME = D.DNAME GROUP BY D.DNAME HAVING 1 = SUM(CASE WHEN L.CITY IN ('BOSTON', 'DALLAS') THEN 1 ELSE 0 END); ```
Either in not both clause in select sql
[ "", "sql", "oracle", "" ]
For some reason I get the following error when running the code below: ``` #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM postcodes_demographics INNER JOIN latlon1 on postcodes_demographics.postc' at line 3 ``` I don't understand what I'm doing wrong, thanks for any suggestions! ``` INSERT INTO pslatlong SELECT postcodes_demographics.*, latlon1.*, FROM postcodes_demographics INNER JOIN latlon1 on postcodes_demographics.postcode = latlon1.postcodens; ```
You have an errant comma: ``` SELECT postcodes_demographics.*, latlon1.*, <--- HERE ``` Remove it.
I would be very surprised if merely removing the comma fixes the problem. When using `insert`, you should get in the habit of listing all the columns explicitly: ``` INSERT INTO pslatlong(col1, col2, . . . ) SELECT d.col1, l.col2, . . . FROM postcodes_demographics d INNER JOIN latlon1 ll on d.postcode = ll.postcodens; ``` You need to do this to be sure that the right column is assigned the right value, to allow auto incrementing columns to be auto-incremented, and to prevent problems based on the number of columns.
mysql statement returns syntax error
[ "", "mysql", "sql", "syntax-error", "" ]
How do I not include people with a birth date older than 5 years from today's date? Here is what I have; I just don't know how to put it in my WHERE clause. ``` SELECT firstname, lastname, age, gender, birthdate FROM Person WHERE ... ... ... ```
I would use: ``` dateadd(year,-5, getdate()) >= birthdate ```
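The `dateadd(year, -5, getdate()) >= birthdate` condition keeps people born on or before the cutoff, i.e. those at least 5 years old; flip the comparison to exclude them instead. A small Python sketch of the cutoff arithmetic (dates invented; the naive `replace(year=...)` ignores Feb 29 edge cases):

```python
from datetime import date

def cutoff(today):
    # naive "5 years ago": same month/day, year minus five
    return today.replace(year=today.year - 5)

today = date(2014, 4, 25)
people = [date(2012, 1, 1), date(2008, 6, 30), date(2009, 4, 25)]

# mirrors the answer's condition: cutoff >= birthdate
kept = [b for b in people if cutoff(today) >= b]
print(kept)  # [date(2008, 6, 30), date(2009, 4, 25)]
```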
``` datediff(year,yourdate,getdate()) > 5 ```
sql where clause issue with date older than 5 years
[ "", "sql", "sql-server", "t-sql", "" ]
I have a set of data with a `DateTime`, say `CalculatedOn`. What I would like is to start at the current date `getdate()` and get an `x` amount of records from before the current date, and the same amount from after. If `x = 50` then 50 prior to now and 50 in front of now. I was thinking `row_number()` would be perfect for this; however, I cannot think of how to number the rows negative for prior and positive for future. Also there is the issue of what will happen if there are not 50 prior or future, but that will come after. Assume the table has just these columns: ``` create table MyTable ( Id int not null constraint pk_mytable primary key, SomeTextIWant nvarchar(50) not null, CalculatedDate DateTime not null ); ``` Results: If today is 25/04 12:54 then ``` Id, SomeTextIWant, CalculatedDate -- 50 from before now-- -----now here----- -- 50 from after now-- ```
You can use two CTE's, one for past and one for future dates, then use `ROW_NUMBER` with `ASC` and `DESC`, multiply the "before" row numbers by `-1`, and combine all: ``` WITH dataBefore AS ( SELECT d.*, rn = (-1) * row_Number() over (Order By CalculatedOn DESC) FROM dbo.TableName d WHERE CalculatedOn < GetDate() ) , dataAfter AS ( SELECT d.*, rn = row_Number() over (Order By CalculatedOn ASC) FROM dbo.TableName d WHERE CalculatedOn >= GetDate() ) SELECT * FROM ( SELECT db.* FROM dataBefore db UNION ALL SELECT da.* FROM dataAfter da )x WHERE x.rn >= -50 AND x.RN <= 50 ORDER BY x.RN ```
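The before/after numbering scheme itself can be prototyped outside SQL. A Python sketch (sample timestamps invented, window of 5 instead of 50) that mirrors the negative-before / positive-after row numbers:

```python
from datetime import datetime

rows = [datetime(2014, 4, d) for d in range(1, 31)]  # 30 sample timestamps
now = datetime(2014, 4, 25, 12, 54)
n = 5  # window size on each side (50 in the question)

before = sorted(r for r in rows if r < now)[-n:]   # last n strictly before now
after = sorted(r for r in rows if r >= now)[:n]    # first n on/after now

# negative row numbers count back from now, positive count forward
numbered = [(-(len(before) - i), r) for i, r in enumerate(before)] \
         + [(i + 1, r) for i, r in enumerate(after)]
print(numbered[0][0], numbered[-1][0])  # -5 5
```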
If you want to get 50 rows before and after, perhaps this will do what you want: ``` with cte1 as ( select top 50 t.* from table t where CalculatedDate <= getdate() order by CalculatedDate desc ), cte2 as ( select top 50 t.* from table t where CalculatedDate > getdate() order by CalculatedDate ) select * from (select * from cte1 union all select * from cte2) t ``` EDIT: It is not clear to me from the context of the question whether a row number is actually needed. It is easy enough to add, though: ``` (select top 50 t.*, - row_number() over (order by CalculatedDate desc) as rownumber from table t where CalculatedDate <= getdate() order by CalculatedDate desc ) union all (select top 50 t.*, row_number() over (order by CalculatedDate) as rownumber from table t where CalculatedDate > getdate() order by CalculatedDate ) ``` You can actually combine these into one query: ``` select t.*, ((case when CalculatedDate < getdate() then -1 else 1 end) * (row_number() over (partition by (case when CalculatedDate < getdate() then 1 else 0 end) order by (case when CalculatedDate < getdate() then CalculatedDate end) desc, CalculatedDate asc ) )) as rn from table t; ``` You can put this in a subquery and select where `rn` between -50 and 50. However, I'm not sure what to do about row number 0 and the question provides no information on what to do with any records that match `getdate()` (as unlikely as that is). I think the first answer does what the OP needs.
How to get a negative rownumber in sql server
[ "", "sql", "sql-server", "t-sql", "row-number", "windowing", "" ]
How can I represent the following table in a database efficiently, assuming that the number of rows and cols is more than several hundred? (I think creating a database with 100 fields is not a good idea :)) ![enter image description here](https://i.stack.imgur.com/RKy3v.jpg) I'm using Ruby on Rails/SQLite.
What you have here is a classical normalisation task. And the classical approach (not knowing your read-write patterns or other needs) is to make three tables: ``` ID | Name --------- 1 | Gothamist 2 | GigaOm 3 | Quick Online Tips ID | Name ---------- 1 | China 2 | Kids 3 | Music 4 | Yahoo ID1 | ID2 | Value ----------------- 1 | 1 | 0 1 | 2 | 3 ... 3 | 4 | 22 ```
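Here is a minimal sketch of the three-table design in Python's `sqlite3` (a few of the sample values from the answer; the table and column names are invented), showing how one "cell" of the original matrix is looked up via the mapping table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE blogs  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE topics (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE counts (blog_id INTEGER, topic_id INTEGER, value INTEGER,
                         PRIMARY KEY (blog_id, topic_id));
    INSERT INTO blogs  VALUES (1, 'Gothamist'), (2, 'GigaOm');
    INSERT INTO topics VALUES (1, 'China'), (2, 'Kids');
    INSERT INTO counts VALUES (1, 1, 0), (1, 2, 3), (2, 1, 22);
""")

# Look up one "cell" of the original wide matrix.
cell = con.execute("""
    SELECT c.value FROM counts c
    JOIN blogs  b ON b.id = c.blog_id
    JOIN topics t ON t.id = c.topic_id
    WHERE b.name = 'Gothamist' AND t.name = 'Kids'
""").fetchone()[0]
print(cell)  # 3
```

The mapping table only stores cells that actually exist, which also handles sparse matrices gracefully.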
Your data structure is a bipartite non-directional weighted graph. If you can use a graph database, that would be a good fit.
Save two dimensional table to database
[ "", "sql", "ruby-on-rails", "ruby", "database", "sqlite", "" ]
I have a column which is of varchar data type. Some sample values are like ``` abc 56 def 34 ghi jkl mno 78 ``` I wanted to get the numeric values only, like ``` 56 34 78 ``` Thanks in advance.
If you're on TD14 you can simply use a Regular Expression: ``` REGEXP_SUBSTR(col, '[0-9]+') ``` Before TD14, if you have the OTRANSLATE UDF, there's an old trick to remove any character but a list of wanted ones: ``` OTRANSLATE(col,OTRANSLATE(col, '0123456789',''),'') ```
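Outside Teradata, the same extraction is easy to prototype; `REGEXP_SUBSTR(col, '[0-9]+')` corresponds to this Python sketch (sample strings taken from the question):

```python
import re

def first_number(s):
    # grab the first run of digits, like REGEXP_SUBSTR(col, '[0-9]+')
    m = re.search(r"[0-9]+", s)
    return m.group(0) if m else None

samples = ["abc 56", "def 34", "ghi jkl", "mno 78"]
print([first_number(s) for s in samples])  # ['56', '34', None, '78']
```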
``` SEL OTRANSLATE('abc 56 def', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()', '') ``` or column data = 'abc 56 def' '34 ghi' 'jkl mno 78' ``` SEL columnname,OTRANSLATE(columnname, 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()', '') FROM TableA; ```
Extracting numeric value in teradata
[ "", "sql", "teradata", "" ]
I have a table in a DB2 database containing customer information that I need to retrieve along with a count of how many times a specific column in the table is duplicated. Here is an example of the data. ``` CUSTOMERID | CUSTOMERGROUP | PRODUCTID | PRODUCTNAME | ALERTNAME | ALERTLEVEL | XXX | YYY | ZZZ 12345 ABC 987654 ProductA Alert1 4 More Data Here ``` A customer is identified by the CustomerID and CustomerGroup columns. They can have any number of products and these products get different types of alerts (ProductA, ProductC and ProductQ could all get Alert1). I need to retrieve the customer information for each row along with a count of how many times that customer got a specific alertname. In our old MySQL database, this was not too difficult as I would do something like this ``` SELECT customerID, customerGroup, ProductID, ProductName, AlertName, count(AlertName), AlertLevel, more data.... FROM TABLE WHERE customerID = XXX and customerGroup = YYY GROUP BY alertname ORDER BY AlertLevel, AlertName, ProductName ``` The group by did not include every column in the select statement so I would get back rows for the customer that included the customer information and a count of the number of times they received a specific alert. Now that we have migrated to DB2, I am required to put every column from the SELECT into the GROUP BY and this (obviously) makes each row distinct and therefore the count is now returning 1 for every row regardless of whether an alert name matches another row for this customer. Is there a way to recreate this functionality that will not require a significant overhaul of the way data is retrieved? As the developer of the front end, I have the ability to change the way I manipulate data on the PHP side and I can write SQL statements but I have no option for changing the way the data is stored.
You can do what you want with analytic functions: ``` SELECT customerID, customerGroup, ProductID, ProductName, AlertName, AlertCount, AlertLevel, more data.... FROM (SELECT t.*, COUNT(*) OVER (PARTITION BY AlertName) as AlertCount, ROW_NUMBER() OVER (PARTITION BY AlertName ORDER BY customerID) as seqnum FROM TABLE t WHERE customerID = XXX and customerGroup = YYY ) t WHERE seqnum = 1 ORDER BY AlertLevel, AlertName, ProductName; ```
You can compute counts in a separate query and then join it to the original query: ``` SELECT customerID, customerGroup, ProductID, ProductName, AlertName, t2.alert_count as AlertCount, AlertLevel, more data.... FROM TABLE t1 JOIN ( SELECT customerid, customergroup, alertname, count(*) alert_count FROM table GROUP BY customerid, customergroup, alertname) t2 ON t1.customerid = t2.customerid AND t1.customergroup = t2.customergroup AND t1.alertname = t2.alertname WHERE customerID = XXX AND customerGroup = YYY ORDER BY AlertLevel, AlertName, ProductName ```
DB2 SELECT statement using COUNT and GROUP BY
[ "", "sql", "database", "db2", "" ]
I have 4 tables: **[PERSON]** ``` ID_Person Person_NAME 1 First name 1 Last name 1 2 First name 2 Last name 2 3 First name 3 Last name 3 4 First name 4 Last name 4 5 First name 5 Last name 5 6 First name 6 Last name 6 ``` **[MOBILE_NUMBER]** ``` ID Mobile_Number OPERATOR 1 797900010 M 2 797900011 M 3 698900010 I 4 797900012 I 5 698900011 J 6 797900013 T 7 797900011 J 8 698900012 I 9 797900014 L 10 698900013 M 11 797900015 M ``` **[USER_MOBILE]** ``` ID ID_USER ID_MOBILE 1 1 1 2 1 3 3 1 11 4 2 6 5 2 8 6 3 5 7 3 10 8 4 2 9 5 4 10 5 7 11 6 9 ``` **[MESSAGE_ID]** ``` ID_Message ID_MOBILE Message Date 1 1 text 1 12/04/2011 2 1 text 2 07/07/2011 3 1 text 3 05/11/2011 4 2 text 4 01/13/2012 5 2 text 5 17/02/2012 6 2 text 6 13/12/2012 7 3 text 7 25/12/2011 8 4 text 8 11/11/2012 9 4 text 9 03/03/2012 10 5 text 10 30/04/2012 11 5 text 11 28/02/2012 12 6 text 12 01/06/2011 13 7 text 13 19/08/2010 14 8 text 14 22/12/2010 15 8 text 15 14/05/2010 16 9 text 16 09/04/2012 17 10 text 17 11/05/2011 18 11 text 18 15/01/2012 ``` **I have to select all users that have sent SMS in different years. Columns to be displayed: Person Name, Mobile Number, SMSText, Year.** **My solution:** ``` Select a.Person_Name, Mobile_Nr, message, year(Date) as Years from PERSON a inner join USER_MOBILE b on a.ID_PERSON=b.ID_USER inner join MOBILE_NUMBER c on b.ID_MOBILE=C.ID inner join MESSAGE_ID d on c.ID=d.ID_Mobile group by Person_Name, Mobile_Nr,message,year(Date) having count(distinct year(date))>1 ``` But it returns nothing.
If I change it to "`having count(distinct year(date))=1`" I get: ``` **Person_Name Mobile_Nr message Years** Firstname 1 Lastname 1 698900010 text7 2011 Firstname 1 Lastname 1 797900010 text1 2011 Firstname 1 Lastname 1 797900010 text2 2011 Firstname 1 Lastname 1 797900010 text3 2011 Firstname 1 Lastname 1 797900015 text18 2012 Firstname 2 Lastname 2 698900012 text14 2010 Firstname 2 Lastname 2 698900012 text15 2010 Firstname 2 Lastname 2 797900013 text12 2011 Firstname 3 Lastname 3 698900011 text10 2012 Firstname 3 Lastname 3 698900011 text11 2012 Firstname 3 Lastname 3 698900013 text17 2011 Firstname 4 Lastname 4 797900011 text4 2012 Firstname 4 Lastname 4 797900011 text5 2012 Firstname 4 Lastname 4 797900011 text6 2012 Firstname 5 Lastname 5 698900009 text13 2010 Firstname 5 Lastname 5 797900012 text8 2012 Firstname 5 Lastname 5 797900012 text9 2012 Firstname 6 Lastname 6 797900014 text16 2012 ``` **But this is wrong; I want to display only the users that sent SMS in different years.**
Try this -- same base query, plus a WHERE-clause subquery that filters out Persons who did not have calls from multiple years. ``` -- Person, phone number, message, and year, for persons who had messages in multiple years Select a.Person_Name, Mobile_Nr, message, year(Date) as Years from PERSON a inner join USER_MOBILE b on a.ID_PERSON=b.ID_USER inner join MOBILE_NUMBER c on b.ID_MOBILE=C.ID inner join MESSAGE_ID d on c.ID=d.ID_Mobile where a.ID_PERSON in (select a.ID_PERSON from PERSON a inner join USER_MOBILE b on a.ID_PERSON=b.ID_USER inner join MOBILE_NUMBER c on b.ID_MOBILE=C.ID inner join MESSAGE_ID d on c.ID=d.ID_Mobile group by a.ID_PERSON having count(distinct year(d.date)) > 1) ``` (I couldn't check the syntax, but the concept is sound)
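The filtering-subquery pattern can be checked on a reduced schema. A sketch in Python's `sqlite3` (one flattened `msgs` table stands in for the four joined tables; rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE msgs (person_id INTEGER, sent TEXT);
    INSERT INTO msgs VALUES
        (1, '2011-04-12'), (1, '2011-07-07'),             -- one year only
        (2, '2010-12-22'), (2, '2011-06-01'),             -- two years
        (3, '2012-04-30'), (3, '2012-02-28'), (3, '2011-05-11');
""")

# keep only persons whose messages span more than one distinct year
rows = con.execute("""
    SELECT person_id FROM msgs
    WHERE person_id IN (
        SELECT person_id FROM msgs
        GROUP BY person_id
        HAVING COUNT(DISTINCT strftime('%Y', sent)) > 1
    )
    GROUP BY person_id
    ORDER BY person_id
""").fetchall()
print(rows)  # [(2,), (3,)]
```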
Try something like this: ``` Select a.Person_Name, Mobile_Nr, message, year(D.Date) as Years from PERSON a inner join USER_MOBILE b on a.ID_PERSON=b.ID_USER inner join MOBILE_NUMBER c on b.ID_MOBILE=C.ID inner join MESSAGE_ID d on c.ID=d.ID_Mobile WHERE EXISTS( SELECT 1 FROM PERSON a1 INNER JOIN USER_MOBILE b1 on a1.ID_PERSON=b1.ID_USER inner join MOBILE_NUMBER c1 on b1.ID_MOBILE=C1.ID inner join MESSAGE_ID d1 on c1.ID=d1.ID_Mobile WHERE C1.MOBILE_NUMBER = C.MOBILE_NUMBER AND year(D1.Date)<>year(D.DATE) ) ``` I know it could be simplified a lot, but I just wanted to demonstrate the concept here
How to select rows that have more than 1 distinct values in SQL?
[ "", "sql", "" ]
I have the following postgresql syntax that returns values WHERE session\_date matches $date\_string Problem is that sometimes the $date\_string will not be available in the table, so I am looking to return the closest date to the $date\_string ``` $date_string = '2014-04-25'; SELECT year, session_date FROM calendar_dates WHERE session_date='$date_string' ``` Any ideas how I can do this?
If you want the closest date before, do it this way: ``` SELECT year, session_date FROM calendar_dates WHERE session_date < '$date_string' ORDER BY session_date DESC LIMIT 1; ``` The closest date after uses similar logic. For the closest on either side: ``` SELECT year, session_date FROM calendar_dates ORDER BY abs(session_date - date '$date_string') LIMIT 1; ```
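The "closest on either side" query translates directly to SQLite, where `julianday` plays the role of date subtraction. A runnable sketch (invented calendar rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE calendar_dates (year INTEGER, session_date TEXT);
    INSERT INTO calendar_dates VALUES
        (2014, '2014-04-20'), (2014, '2014-04-28'), (2014, '2014-05-10');
""")

target = "2014-04-25"
# order by absolute distance in days, keep the nearest row
row = con.execute("""
    SELECT year, session_date
    FROM calendar_dates
    ORDER BY abs(julianday(session_date) - julianday(?))
    LIMIT 1
""", (target,)).fetchone()
print(row)  # (2014, '2014-04-28'): 3 days away vs 5 and 15
```

Always bind the date as a parameter as shown, rather than interpolating `$date_string` into the SQL text.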
# Using [`btree_gist`](https://www.postgresql.org/docs/current/static/btree-gist.html) and knn Using this method you can find the nearest event with an index. ``` CREATE EXTENSION btree_gist; CREATE TABLE foo ( id serial, ts timestamp ); INSERT INTO foo (ts) VALUES ('2017-06-02 03:09'), ('2016-06-02 03:09'), ('1900-06-02 03:09'), ('1954-06-02 03:09'); CREATE INDEX ON foo USING gist(ts); SELECT * FROM foo ORDER BY '1950-06-02 03:09' <-> ts LIMIT 1; ``` # Pg 11 Coming some time in the distant future... with [knn/btree](https://www.postgresql.org/message-id/flat/ce35e97b-cf34-3f5d-6b99-2c25bae49999@postgrespro.ru#ce35e97b-cf34-3f5d-6b99-2c25bae49999@postgrespro.ru)
PostgreSQL return exact or closest date to queried date
[ "", "sql", "postgresql", "" ]
**The content of this question is long so please bear with me.** I am looking for a way to filter a view from a stored procedure where the stored procedure has a bunch of parameters that can be nullable. Here are the options I've identified for doing this, along with any pros and cons I can see for them: ## Option 1: Delete From A Temp Table ``` ALTER PROCEDURE GetView1Filtered @ForeignKeyID1 int = null, @ForeignKeyID2 int = null, @ForeignKeyID3 int = null, @ForeignKeyID4 int = null, @WildcardString1 varchar(128) = null, @WildcardString2 varchar(128) = null, @WildcardString3 varchar(128) = null, @DateRange1Start date = null, @DateRange1End date = null AS BEGIN SELECT * INTO #TempView1 FROM View1 IF @ForeignKeyID1 IS NOT NULL DELETE #TempView1 WHERE ForeignKeyID1 <> @ForeignKeyID1 IF @ForeignKeyID2 IS NOT NULL DELETE #TempView1 WHERE ForeignKeyID2 <> @ForeignKeyID2 -- ... IF @WildcardString1 IS NOT NULL DELETE #TempView1 WHERE NOT WildcardString1 LIKE '%' + @WildcardString1 + '%' -- ... END ``` > Pros of **Option 1** > > * Easy maintainability, easy to read, easy to modify. > * Deletes are fast and they are only run as needed. > * Execution plan can be cached and optimized. > * Can handle large amount of parameters. > > Cons of **Option 1** > > * Performance is dependent upon the initial select of the view. > * Unnecessary overhead to select rows that are going to be deleted. 
--- ## Option 2: Building Dynamic SQL ``` ALTER PROCEDURE GetView1Filtered @ForeignKeyID1 int = null, @ForeignKeyID2 int = null, @ForeignKeyID3 int = null, @ForeignKeyID4 int = null, @WildcardString1 varchar(128) = null, @WildcardString2 varchar(128) = null, @WildcardString3 varchar(128) = null, @DateRange1Start date = null, @DateRange1End date = null AS BEGIN DECLARE @SqlStatement VARCHAR(MAX); DECLARE @ParamDefinition VARCHAR(MAX); SET @SqlStatement = 'SELECT * FROM View1' IF @ForeignKeyID1 IS NOT NULL SET @SqlStatement = @SqlStatement + ' AND ForeignKeyID1 = @ForeignKeyID1'; IF @ForeignKeyID2 IS NOT NULL SET @SqlStatement = @SqlStatement + ' AND ForeignKeyID2 = @ForeignKeyID2'; -- ... IF @WildcardString1 IS NOT NULL SET @SqlStatement = @SqlStatement + ' AND WildcardString1 LIKE ''%'' + @WildcardString1 + ''%'''; -- ... SET @ParamDefinition = '@ForeignKeyID1 int, @ForeignKeyID2 int, @ForeignKeyID3 int, @ForeignKeyID4 int, @WildcardString1 varchar(128), @WildcardString2 varchar(128), @WildcardString3 varchar(128), @DateRange1Start date, @DateRange1End date' EXECUTE sp_Executesql @SqlStatement, @ParamDefinition, @ForeignKeyID1, @ForeignKeyID2, @ForeignKeyID3, @ForeignKeyID4, @WildcardString1, @WildcardString2, @WildcardString3, @DateRange1Start, @DateRange1End END ``` > Pros of **Option 2** > > * Query is run with the exact parameters needed to filter the view > * Maintainability is not too bad, structure is similar to option 1 but with a bit more work. > * Can handle large amount of parameters. > > Cons of **Option 2** > > * No optimization of the query execution plan. > * Might be unwarranted but I've seen major performance degradation with this on views that are slightly complex. > * *Possible bias:* This approach just doesn't feel like it's too good in the long run. 
--- ## Option 3: Permutations Of SQL Parameters ``` ALTER PROCEDURE GetView1Filtered @ForeignKeyID1 int = null, @ForeignKeyID2 int = null, @ForeignKeyID3 int = null, @ForeignKeyID4 int = null, @WildcardString1 varchar(128) = null, @WildcardString2 varchar(128) = null, @WildcardString3 varchar(128) = null, @DateRange1Start date = null, @DateRange1End date = null AS BEGIN IF @ForeignKeyID1 IS NOT NULL AND @ForeignKeyID2 IS NULL AND @ForeignKeyID3 IS NULL AND @ForeignKeyID4 IS NULL AND @WildcardString1 IS NULL AND @WildcardString2 IS NULL AND @WildcardString3 IS NULL AND @DateRange1Start IS NULL AND @DateRange1End IS NULL BEGIN SELECT * FROM View1 WHERE ForeignKeyID1 = @ForeignKeyID1 END ELSE IF @ForeignKeyID1 IS NOT NULL AND @ForeignKeyID2 IS NOT NULL AND @ForeignKeyID3 IS NULL AND @ForeignKeyID4 IS NULL AND @WildcardString1 IS NULL AND @WildcardString2 IS NULL AND @WildcardString3 IS NULL AND @DateRange1Start IS NULL AND @DateRange1End IS NULL BEGIN SELECT * FROM View1 WHERE ForeignKeyID1 = @ForeignKeyID1 AND ForeignKeyID2 = @ForeignKeyID2 END -- ... END ``` > Pros of **Option 3** > > * Execution plan optimization. > * No unnecessary overhead of row selection. > > Cons of **Option 3** > > * Absolutely a nightmare to maintain. > * Can only work for a very low number of parameters, otherwise the permutations are too high. 
--- ## Option 4: ID Selection Into Temp Table ``` ALTER PROCEDURE GetView1Filtered @ForeignKeyID1 int = null, @ForeignKeyID2 int = null, @ForeignKeyID3 int = null, @ForeignKeyID4 int = null, @WildcardString1 varchar(128) = null, @WildcardString2 varchar(128) = null, @WildcardString3 varchar(128) = null, @DateRange1Start date = null, @DateRange1End date = null AS BEGIN CREATE TABLE #Temp ( PrimaryKeyID int NOT NULL ); IF @ForeignKeyID1 IS NOT NULL BEGIN INSERT INTO #Temp (PrimaryKeyID) SELECT PrimaryKeyID FROM View1 WHERE ForeignKeyID1 = @ForeignKeyID1 END IF @ForeignKeyID2 IS NOT NULL BEGIN INSERT INTO #Temp (PrimaryKeyID) SELECT PrimaryKeyID FROM View1 WHERE ForeignKeyID2 = @ForeignKeyID2 END -- ... SELECT * FROM View1 INNER JOIN (SELECT PrimaryKeyID FROM #Temp GROUP BY PrimaryKeyID) t ON t.PrimaryKeyID = View1.PrimaryKeyID DROP TABLE #Temp END ``` > Pros of **Option 4** > > * Maintainability is not too bad. > * Can handle large amount of parameters. > * Execution plan can be cached and optimized. > > Cons of **Option 4** > > * Performance is degraded because inserts are slow. > * As-needed inserts can still result in large performance degradation. --- ## Option 5: Case Statements ``` ALTER PROCEDURE GetView1Filtered @ForeignKeyID1 int = null, @ForeignKeyID2 int = null, @ForeignKeyID3 int = null, @ForeignKeyID4 int = null, @WildcardString1 varchar(128) = null, @WildcardString2 varchar(128) = null, @WildcardString3 varchar(128) = null, @DateRange1Start date = null, @DateRange1End date = null AS BEGIN SELECT * FROM View1 WHERE (CASE WHEN @ForeignKeyID1 IS NULL THEN 1 ELSE CASE WHEN ForeignKeyID1 = @ForeignKeyID1 THEN 1 ELSE 0 END END = 1) AND (CASE WHEN @ForeignKeyID2 IS NULL THEN 1 ELSE CASE WHEN ForeignKeyID2 = @ForeignKeyID2 THEN 1 ELSE 0 END END = 1) AND (CASE WHEN @WildcardString1 IS NULL THEN 1 ELSE CASE WHEN WildcardString1 LIKE '%' + @WildcardString1 + '%' THEN 1 ELSE 0 END END = 1) -- ... END ``` > Pros of **Option 5** > > * Maintainability is not too bad. 
> * Can technically handle a bunch of parameters. > * Only one select statement and processing only goes to getting the rows needed. > * Execution plan can be cached and optimized. > > Cons of **Option 5** > > * I haven't tested this one extensively but I don't think the performance will be very good. I don't think the SQL engine will like processing a ton of case statements for each row. --- Those are the options I've identified. I don't know how accurate my pros and cons are for them, feel free to correct me. Anyway, I'm hoping I can get some insight into how this is best achieved. I often run into situations where this type of filtering is necessary and I'd like to have a good solution to tackle it. Ideally, I'd like the solution to be able to handle slower views that have lots of rows.
I think you're over-complicating this. A standard way to deal with this type of situation is: ``` SELECT <columns> FROM MyView WHERE (@Param1 IS NULL OR ColA = @Param1) AND (@Param2 IS NULL OR ColB = @Param2) ... ``` There's no need for complicated logic here involving deleting from temporary result sets.
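The `(@Param IS NULL OR Col = @Param)` pattern is easy to demonstrate with an in-memory database; here is a sketch using Python's `sqlite3` named parameters (tiny invented view):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE v (fk1 INTEGER, fk2 INTEGER, name TEXT);
    INSERT INTO v VALUES (1, 10, 'a'), (1, 20, 'b'), (2, 10, 'c');
""")

# a NULL parameter makes its predicate always true, disabling that filter
SQL = """
    SELECT name FROM v
    WHERE (:fk1 IS NULL OR fk1 = :fk1)
      AND (:fk2 IS NULL OR fk2 = :fk2)
"""

all_rows = con.execute(SQL, {"fk1": None, "fk2": None}).fetchall()
one_filter = con.execute(SQL, {"fk1": 1, "fk2": None}).fetchall()
two_filters = con.execute(SQL, {"fk1": 1, "fk2": 20}).fetchall()

print(len(all_rows), len(one_filter), len(two_filters))  # 3 2 1
```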
Plain TSQL with option recompile should be included in this list as well. It isn't right for every situation, but it's where I start from. ``` WHERE (@Param1 IS NULL OR @Param1 = col1) AND (@Param2 IS NULL OR @Param2 = col2) OPTION (RECOMPILE) ``` Recompiling will allow for an optimized query based on the specific parameters. It's plain sql, so it's easier to read and maintain. It doesn't get cached, so there isn't memory bloat. The drawback is the cpu cost of recompilation each time, on complex queries this can become significant. Erland Sommarskog did a writeup of the different approaches to this problem that's worth reviewing. (Not that I've seen anything of his that isn't.) [Dynamic Search Conditions in T-SQL](http://www.sommarskog.se/dyn-search.html).
How can I optimally filter a view from a stored procedure that accepts lots of nullable parameters?
[ "", "sql", "sql-server", "stored-procedures", "" ]
Do I need to issue a commit command after running a stored function with a select query?
COMMIT means "save the changes". A SELECT statement does not change any data. Changing data is done with INSERT, UPDATE and DELETE statements (Data Manipulation Language).
Yes, you do, in some cases (please read the discussion in the link below). The rule is: always commit if you made a change in the DB (after DML commands), even when that change happened alongside a SELECT statement. Use the COMMIT statement to end your current transaction and make permanent all changes performed in the transaction. Read more: [oracle - what statements need to be committed?](https://stackoverflow.com/questions/9541013/oracle-what-statements-need-to-be-committed) Thanks @Ben for the heads up!
Commit statement after Stored Function with select query
[ "", "sql", "oracle", "" ]
I have a primary key in my table something like "abc.001 to abc.100" and "xyz.1000 to xyz.2000". So, there is a range of variable numbers but a constant string value. Now, if I want to find a particular range such as abc.011 to abc.020, what query am I supposed to use? I tried using the BETWEEN clause but it only returns the last result in the range.
I found an answer using `REGEXP_LIKE`. I made the query something like this: `select * from orders where REGEXP_LIKE (order_id, '^abc.(\d{1}|\d{2})$');` This query returns the results from 0-99; to also cover 100 I tried adding a third alternative: `select * from orders where REGEXP_LIKE (order_id, '^abc.(\d{1}|\d{2}|\d{3})$');` The above query returns results up to 999, but I am only after results up to 100. I still need to find the accurate query for that.
Will the string bit be constant? If so you can substring (SUBSTR()) the primary key to just take the number part, to\_number it and then filter by that. e.g. ``` select * from table where to_number(substr(col,5,3)) between 1 and 100 ``` If you want to use the characters for comparison as well you'll have to do something more fancy like convert them to ascii values one at a time and filter by that e.g. ``` select ascii('y') from dual ```
How to find the range of values using the SQL query where the string is constant and the number is variable
[ "", "sql", "oracle11g", "" ]
I have a table in SQL, and it looks like this: ``` FOO BAR DateTime --- --- ------ FOO1 BAR1 4/25/2014 FOO2 BAR2 4/24/2014 ........ ``` to any n number of records in the above format. I am trying to write a query that first returns all records that meet two property conditions, say all the records where FOO = FOO1 and BAR = BAR1, then with those results, return the record that has the most recent value out of the DateTime column. There can be multiple records where FOO = FOO1 and BAR = BAR1, and I am trying to get the most recent based on the datetime field value.
Well if I understood the request correctly (and it is SQL Server) it would be something like this: ``` SELECT TOP 1 FOO, BAR, DateTime FROM Table WHERE FOO='FOO1' AND BAR='BAR1' ORDER BY DateTime DESC ``` WHERE condition limits the set, ORDER BY sorts the set in descending order by date and TOP 1 selects the most recent record.
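As a runnable illustration of the same idea (SQLite spells `TOP 1` as `LIMIT 1`; the data mirrors the question's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (foo TEXT, bar TEXT, dt TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("FOO1", "BAR1", "2014-04-20"),
    ("FOO1", "BAR1", "2014-04-25"),  # the most recent matching record
    ("FOO2", "BAR2", "2014-04-24"),
])

# Filter first, then sort newest-first and keep a single row
# (LIMIT 1 is SQLite's equivalent of SQL Server's TOP 1).
row = conn.execute(
    "SELECT foo, bar, dt FROM t "
    "WHERE foo = 'FOO1' AND bar = 'BAR1' "
    "ORDER BY dt DESC LIMIT 1"
).fetchone()
```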
``` SELECT FOO, BAR, Datetime FROM table WHERE Foo = 'FOO1' and Bar = 'BAR1' and DateTime = (SELECT MAX(DATETIME) from table WHERE Foo = 'FOO1' and Bar = 'BAR1') ``` Probably this...or any other answer...haha
T-SQL, Compare Record Property, Then Subsequent Date Compare
[ "", "mysql", "sql", "sql-server", "t-sql", "" ]
I have designed the following query

```
SELECT v.visitid, 
CASE WHEN vd.DocType = 1 THEN 'y' ELSE 'n' END as 'FinalReportAttached' 
,CASE WHEN vd.DocType = 13 THEN 'y' ELSE 'n' END as 'InspectorReportAttached' 
,CASE WHEN vd.DocType = 2 THEN 'y' ELSE 'n' END as 'Co-ordReportAttached' 
FROM Visits v 
INNER JOIN VisitDocs vd on vd.VisitID = v.VisitID 
WHERE v.VisitID = 79118 
```

![enter image description here](https://i.stack.imgur.com/SzMZC.jpg)

I'd like to show the results in one row. If the report is there then 'y', and if not then 'n'. There is a one-to-many relation between the Visit and VisitDoc tables. VisitDoc can have many different docs for one visit. I need to check whether VisitDoc has doc type 1, 13 or 2 against each visit and then say yes, otherwise no.

```
visitID |FinalReport |InspectorReport |Co-ordReport
------------------------------------------------
79118   |n           |y               |y
```
You could try this: ``` SELECT v.visitid, CASE WHEN SUM(CASE WHEN vd.DocType = 1 THEN 1 ELSE 0 END)>0 THEN 'y' ELSE 'n' END as 'FinalReportAttached' ,CASE WHEN SUM(CASE WHEN vd.DocType = 13 THEN 1 ELSE 0 END)>0 THEN 'y' ELSE 'n' END as 'InspectorReportAttached' ,CASE WHEN SUM(CASE WHEN vd.DocType = 2 THEN 1 ELSE 0 END)>0 THEN 'y' ELSE 'n' END as 'Co-ordReportAttached' FROM Visits v INNER JOIN VisitDocs vd on vd.VisitID = v.VisitID WHERE v.VisitID = 79118 GROUP BY v.VisitID ```
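A quick way to sanity-check the conditional-aggregation trick is to run it against SQLite from Python (column names shortened; the `SUM(CASE ...) > 0` test is the portable part):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visit_docs (visit_id INTEGER, doc_type INTEGER)")
conn.executemany("INSERT INTO visit_docs VALUES (?, ?)",
                 [(79118, 13), (79118, 2), (79118, 2)])

# SUM(CASE ...) counts the rows of each doc type across the group;
# the outer CASE turns "count > 0" into 'y'/'n', giving one row per visit.
row = conn.execute("""
    SELECT visit_id,
           CASE WHEN SUM(CASE WHEN doc_type = 1  THEN 1 ELSE 0 END) > 0
                THEN 'y' ELSE 'n' END AS final_report,
           CASE WHEN SUM(CASE WHEN doc_type = 13 THEN 1 ELSE 0 END) > 0
                THEN 'y' ELSE 'n' END AS inspector_report,
           CASE WHEN SUM(CASE WHEN doc_type = 2  THEN 1 ELSE 0 END) > 0
                THEN 'y' ELSE 'n' END AS coord_report
    FROM visit_docs
    WHERE visit_id = 79118
    GROUP BY visit_id
""").fetchone()
```

This reproduces the single `n / y / y` row the question asks for.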
To get one row, use aggregation functions: ``` SELECT vd.visitid, MAX(CASE WHEN vd.DocType = 1 THEN 'y' ELSE 'n' END) as FinalReportAttached, MAX(CASE WHEN vd.DocType = 13 THEN 'y' ELSE 'n' END) as InspectorReportAttached, MAX(CASE WHEN vd.DocType = 2 THEN 'y' ELSE 'n' END) as [Co-ordReportAttached] FROM VisitDocs vd WHERE vd.VisitID = 79118; ``` Note that this works because `'y'` > `'n'` (at least in the character collations that I am familiar with). I also changed the query in two other ways. I removed the single quotes from the column aliases. Only use single quotes for string and date constants. Using single quotes for identifiers can lead to confusion. Also, the join doesn't seem to be needed, because the original query used an inner join and no other fields from `v`.
How to combine rows based on field value
[ "", "sql", "sql-server-2008", "" ]
I want to copy/update data from Table A to Table B. Table B has some more additional columns. I have tried the following options. ``` 1) `REPLACE INTO `B` (SHOW FIELDS FROM 'A') SELECT * FROM `A 2) `REPLACE INTO `B` (SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='test1' AND `TABLE_NAME`='A') SELECT * FROM `A ``` But it throws errors. Can you guys help me how to select names with select query? **UPDATE:** 3) As suggested by Jerko, I have two tables A(warehouse\_id,long,lat) B(warehouse\_id,long) Applied the following statement. ``` SET @query = CONCAT('REPLACE INTO `A` (SELECT ', (SELECT GROUP_CONCAT(CONCAT('`',column_name, '`')) FROM information_schema.columns WHERE `TABLE_SCHEMA`='test2' AND `table_name` = 'A'), ' FROM `B`)'); PREPARE stmt FROM @query; EXECUTE stmt; ``` This gives me the error > "#1054 - Unknown column 'lat' in 'field list' "
Actually there is a way ``` SET @query = CONCAT('REPLACE INTO `A` (', (SELECT GROUP_CONCAT(CONCAT('`',column_name, '`')) FROM information_schema.columns WHERE `TABLE_SCHEMA`='test1' AND `table_name` = 'A' AND column_name IN (SELECT column_name FROM information_schema.columns WHERE table_schema = 'test1' AND table_name='B')) , ') (SELECT ', (SELECT GROUP_CONCAT(CONCAT('`',column_name, '`')) FROM information_schema.columns WHERE `TABLE_SCHEMA`='test1' AND `table_name` = 'A' AND column_name IN (SELECT column_name FROM information_schema.columns WHERE table_schema = 'test1' AND table_name='B')), ' FROM `B`)'); PREPARE stmt FROM @query; EXECUTE stmt; ```
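The same idea, building the column list from metadata so only shared columns are copied, can be sketched outside of T-SQL. Here is an illustrative Python/SQLite version (SQLite's `PRAGMA table_info` plays the role of `information_schema.columns`, and `INSERT OR REPLACE` the role of MySQL's `REPLACE INTO`; the warehouse tables come from the question's edit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE a (warehouse_id INTEGER PRIMARY KEY, "long" REAL, lat REAL)')
conn.execute('CREATE TABLE b (warehouse_id INTEGER PRIMARY KEY, "long" REAL)')
conn.execute("INSERT INTO b VALUES (1, 12.5)")

def columns(table):
    # SQLite's metadata query; plays the role of information_schema.columns.
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

# Build the column list from the columns the two tables share, so the
# statement never references a column (like lat) that B does not have.
shared = [c for c in columns("a") if c in columns("b")]
col_list = ", ".join(f'"{c}"' for c in shared)
conn.execute(f"INSERT OR REPLACE INTO a ({col_list}) SELECT {col_list} FROM b")

rows = conn.execute('SELECT warehouse_id, "long", lat FROM a').fetchall()
```

Restricting the generated list to the intersection of both tables is what avoids the "Unknown column 'lat'" error from the question.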
You can't do this dynamically in mysql like you are trying to do. MySQL expects your list of column names to be provided directly, not from a subquery. If you want to do this dynamically you'll have to step back upstream to whatever language you are using to interact with MySQL such as PHP or Java.
How to select column name in select statement
[ "", "mysql", "sql", "" ]
I have data like this :

```
ID             VERSION        SEQUENCE
-------------- -------------- ---------------
01-001         01             001
01-002         01             002
02-002         02             002
```

And I want to select only the highest version for each sequence in order to have a result like this :

```
ID             VERSION        SEQUENCE
-------------- -------------- ---------------
01-001         01             001
02-002         02             002
```

I think the request should contain a GROUP BY on Sequence, but I can't manage to make it work. Can someone help?
So filter to include only those rows where the version ***is*** the highest version in its sequence

```
Select id, version, sequence
From DataTable dt
where version = (Select Max(version)
                 From DataTable
                 where Sequence = dt.Sequence)
```
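A runnable check of the correlated-subquery approach, using the question's sample data in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_table (id TEXT, version TEXT, sequence TEXT)")
conn.executemany("INSERT INTO data_table VALUES (?, ?, ?)", [
    ("01-001", "01", "001"),
    ("01-002", "01", "002"),
    ("02-002", "02", "002"),
])

# Keep a row only when its version equals the highest version
# found for the same sequence.
rows = conn.execute("""
    SELECT id, version, sequence
    FROM data_table dt
    WHERE version = (SELECT MAX(version) FROM data_table
                     WHERE sequence = dt.sequence)
    ORDER BY id
""").fetchall()
```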
using a Common Table Expression you could:

```
with HighestSequence(ID, MaxSequence) as
(
  select id, max(Sequence)
  from table
  group by ID
)
select t.*
from table t
inner join HighestSequence hs
   on t.id = hs.ID
  and t.sequence = hs.MaxSequence
```
SQL, Group by one column
[ "", "sql", "oracle", "group-by", "" ]
So this generates Hours, Projects, Descriptions and Client names. But the problem is that all the fields need to be grouped together if they're the same instead of displayed multiple times. I searched around and using the SUM function might work. Here's the SQL statement: ``` SELECT h.hoursworked AS Hours, p.projectname AS DocketName, p.description AS Description, p.archive AS Archived, c.clientname AS Clients FROM hours h JOIN projects p ON h.projectid = p.projectid JOIN clients c ON p.clientid = c.clientid WHERE p.archive = 0 ORDER BY p.projectname ASC; ```
You'll need to add a `GROUP BY` clause. Try the following: ``` SELECT SUM(h.hoursworked) AS TotalHours , p.projectname AS DocketName , p.description AS Description , p.archive AS Archived , c.clientname AS Clients FROM hours h JOIN projects p ON h.projectid = p.projectid JOIN clients c ON p.clientid = c.clientid WHERE p.archive = 0 GROUP BY p.projectname, p.description, p.archive, c.clientname ORDER BY p.projectname ASC ``` If you only want to see the Project Name and the Total Hours, you can do this instead: ``` SELECT SUM(h.hoursworked) AS TotalHours , p.projectname AS DocketName FROM hours h JOIN projects p ON h.projectid = p.projectid JOIN clients c ON p.clientid = c.clientid WHERE p.archive = 0 GROUP BY p.projectname ORDER BY p.projectname ASC ```
Grouping can be done but if there are different combinations then grouping will only work to some extent. Try this, assuming all fields are `Varchar` except `hours`:

```
SELECT p.projectname AS DocketName, 
       p.description AS Description, 
       p.archive AS Archived, 
       c.clientname AS Clients,
       Sum(h.hoursworked) AS Hours
FROM hours h 
JOIN projects p ON h.projectid = p.projectid 
JOIN clients c ON p.clientid = c.clientid 
WHERE p.archive = 0 
GROUP BY DocketName, Description, Archived, Clients
ORDER BY p.projectname ASC;
```
Grouping Project Names and Total Hours together SQL JOIN ONs
[ "", "sql", "database", "sum", "" ]
Using MS-SQL server 2008 r2 I have a table that I'm trying to count the number of groups of duplicates in: ``` id adId 1 a 2 a 3 b 4 c 5 c 6 c ``` I want to find the total number of records that have duplicates in column adId. The result I want here is 2 (as a and c have both been duplicated) These do not work as they count the duplicate rows and not the groups of duplicates ``` SELECT COUNT(*) AS TotalRecords FROM (SELECT DISTINCT adId FROM oc_cal_sc) HAVING COUNT(adId) > 1 SELECT COUNT(DISTINCT (adId)) AS TotalRecords FROM oc_cal_sc HAVING COUNT(adId) > 1 SELECT SUM(temp_1.adId) as TotalRecords FROM (SELECT adId, count(1) AS count_1 FROM oc_cal_sc GROUP BY adId HAVING adId >1) as temp_1 ``` Any ideas on how to go about this as I'm a little out of my depth?
For getting the records which have duplicates: ``` SELECT adId,COUNT(adId) AS TotalRecords FROM oc_cal_sc GROUP BY adId HAVING COUNT(adId) > 1 ``` With the example you have given, result would be: ``` adId TotalRecords a 2 c 3 ``` **EDIT:** For getting the **number of records** which have duplicates: ``` SELECT COUNT(*) as TotDuplicates FROM (SELECT adId,COUNT(adId) AS TotalRecords FROM oc_cal_sc GROUP BY adId HAVING COUNT(adId) > 1) T ``` Result would be: ``` TotDuplicates 2 ```
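The nested count-of-groups query can be verified quickly in SQLite, with the same sample data as the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oc_cal_sc (id INTEGER, ad_id TEXT)")
conn.executemany("INSERT INTO oc_cal_sc VALUES (?, ?)",
                 [(1, "a"), (2, "a"), (3, "b"), (4, "c"), (5, "c"), (6, "c")])

# Inner query: one row per ad_id that occurs more than once.
# Outer query: count those groups (2 here: 'a' and 'c'), not their rows.
(dup_groups,) = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT ad_id
        FROM oc_cal_sc
        GROUP BY ad_id
        HAVING COUNT(ad_id) > 1
    )
""").fetchone()
```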
``` SELECT COUNT(1) TotalRecords FROM ( SELECT adId, COUNT(1) FROM oc_cal_sc GROUP BY adId HAVING COUNT(1) > 1 ) x ```
SELECT COUNT(*) DISTINCT - Finding the total number of times any groups of duplicates appear in a table
[ "", "sql", "sql-server", "" ]
Currently to reference multiple databases from the same query, we do it like so ``` DatabaseName.Tablename.Column ``` I was wondering if you can actually reference the database name from a field. The reason for this is sometimes there are hundreds of these references and when creating new instances for customers the database names always vary slightly, so these have to be updated. What I am looking to do is something like this: ``` (SELECT DatabaseName FROM ConfigurationTable).TableName.ColumnName ``` Would this work at all? And if so can you foresee any problems?
You cannot use syntax like in your question. What you could do is to use dynamic SQL and build the query dynamically based on the database name. For example: ``` declare @sql nvarchar(max) = (SELECT DatabaseName FROM ConfigurationTable) set @sql = N'SELECT ColumnName FROM ' + QUOTENAME(@sql) + N'..TableName' exec sp_executesql @sql ```
You seem to have an architecture where each customer has a separate database. Why are you including the database name in the query? Just add `use DatabaseName` and run the query as you would want. Such an architecture normally copies views, stored procedures, functions, tables and whatever else into each database. If you have a central "code" or "master" database for the application, then the code would often be using dynamic SQL to construct the queries to access the right database. You cannot dynamically reference a database in a single query. You would have to use dynamic SQL for this.
Get database name from field?
[ "", "sql", "sql-server-2012", "" ]
Let's say I have the following tables: ``` Batch Items ---+----- ---+----------+-------- id | size id | batch_id | quality ---+----- ---+----------+-------- 1 | 10 1 | 1 | 9 2 | 2 2 | 1 | 10 3 | 2 | 1 4 | 2 | 2 5 | 2 | 1 6 | 2 | 9 ``` I have batches of items. They are sent by batches of size `batch.size`. An item is broken if it's quality is <= 3. I want to know the number of broken items in the last batches sent: ``` batch_id | broken_item_count ---------+--------------------- 1 | 0 2 | 2 (and not 3) ``` My idea is the following: ``` SELECT batch.id as batch_id, COUNT(broken_items.*) as broken_item_count FROM batch INNER JOIN ( SELECT id FROM items WHERE items.quality <= 3 ORDER BY items.id asc LIMIT batch.size -- invalid reference to FROM-clause entry for table "batch" ) broken_items ON broken_items.batch_id = batch.id ``` (I would `ORDER BY` `items.shipped_at`. But for simplicity, I order by `items.id`) But this query shows me the error I put as the comment. **How can I limit the number of joined items based on the `batch.size` that is different for each row** ? Is there any other way to achieve what I want ?
``` SELECT b.id AS batch_id , count(i.quality < 4 OR NULL) AS broken_item_count FROM batch b LEFT JOIN ( SELECT batch_id, quality , row_number() OVER (PARTITION BY batch_id ORDER BY id DESC) AS rn FROM items ) i ON i.batch_id = b.id AND i.rn <= b.size GROUP BY 1 ORDER BY 1; ``` [**SQL Fiddle** with added examples.](http://sqlfiddle.com/#!15/c62bd/2) This is much like [@Clodoaldos's answer](https://stackoverflow.com/a/23298840/939860), but with a couple of differences. Most importantly: * You want to count the `broken items in the last batches sent`, so we have to `ORDER BY id` **`DESC`** * If there can be batches without items at all you need to use **`LEFT JOIN`** instead of a plain `JOIN` or those batches are excluded. Consequently, the check `i.rn <= b.size` needs to move from the `WHERE` clause to the `JOIN` clause.
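For a runnable sanity check of the `row_number()`-capped join, here is a SQLite sketch (window functions need SQLite 3.25+, which modern Python builds bundle). The data is chosen so the last two items of batch 2 are broken while an older broken item falls outside the cap:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE batch (id INTEGER, size INTEGER);
    CREATE TABLE items (id INTEGER, batch_id INTEGER, quality INTEGER);
    INSERT INTO batch VALUES (1, 10), (2, 2);
    INSERT INTO items VALUES
        (1, 1, 9), (2, 1, 10),            -- batch 1: nothing broken
        (3, 2, 1), (4, 2, 9),             -- batch 2: older items, past the cap
        (5, 2, 1), (6, 2, 2);             -- batch 2: the last 2, both broken
""")

# Number each batch's items newest-first, join only the first `size` of
# them, and count the broken ones (quality <= 3) per batch.
rows = conn.execute("""
    SELECT b.id,
           COUNT(CASE WHEN i.quality <= 3 THEN 1 END)
    FROM batch b
    LEFT JOIN (
        SELECT batch_id, quality,
               ROW_NUMBER() OVER (PARTITION BY batch_id
                                  ORDER BY id DESC) AS rn
        FROM items
    ) i ON i.batch_id = b.id AND i.rn <= b.size
    GROUP BY b.id
    ORDER BY b.id
""").fetchall()
```

Batch 2 reports 2 broken items, not 3, because item 3 is broken but outside its batch's last `size` items.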
[SQL Fiddle](http://sqlfiddle.com/#!15/ddbc1/1) ``` select b.id as batch_id, count(quality <= 3 or null) as broken_item_count from batch b inner join ( select id, quality, batch_id, row_number() over (partition by batch_id order by id) as rn from items ) i on i.batch_id = b.id where rn <= b.size group by b.id order by b.id ```
Join a dynamic number of rows in postgres
[ "", "sql", "postgresql", "join", "" ]
The question can be specific to SQL server. When I write a query such as :

```
SELECT * FROM IndustryData WHERE Date='20131231' 
AND ReportTypeID = CASE WHEN (fnQuarterDate('20131231')='20131231') THEN 1 
                        WHEN (fnQuarterDate('20131231')!='20131231') THEN 4 
                   END; 
```

**Is the function call fnQuarterDate (or any subquery) within CASE inside a WHERE clause executed for EACH row of the table?**

Would it be better if I got the function's (or any subquery's) value beforehand inside a variable, like:

```
DECLARE @X INT
IF fnQuarterDate('20131231')='20131231'
   SET @X=1
ELSE
   SET @X=0

SELECT * FROM IndustryData WHERE Date='20131231'
AND ReportTypeID = CASE WHEN (@X = 1) THEN 1
                        WHEN (@X = 0) THEN 4
                   END;
```

I know that in MySQL, if there is a subquery inside IN(..) within a WHERE clause, it is executed for each row; I just wanted to find out the same for SQL SERVER.

... Just populated the table with about 30K rows and found out the time difference: **Query1 = 70ms; Query2 = 6ms.** I think that explains it, but I still don't know the actual facts behind it.

Also, would there be any difference if instead of a UDF there was a simple subquery?
I think the solution may in theory help you increase the performance, but it also depends on what the scalar function actually does. I think that in this case (my guess is it formats the date to the last day of the quarter) the function's own work would really be negligible. You may want to read this page with suggested workarounds: <http://connect.microsoft.com/SQLServer/feedback/details/273443/the-scalar-expression-function-would-speed-performance-while-keeping-the-benefits-of-functions#>

> Because SQL Server must execute each function on every row, using any function incurs a cursor like performance penalty.

And in Workarounds, there is a comment that

> I had the same problem when I used scalar UDF in join column, the performance was horrible. After I replaced the UDF with temp table that contains the results of UDF and used it in join clause, the performance was order of magnitudes better. MS team should fix UDF's to be more reliable.

So it appears that yes, this may increase the performance. Your solution is correct, but I would recommend considering an improvement of the SQL to use ELSE instead, it looks cleaner to me:

```
AND ReportTypeID = CASE WHEN (@X = 1) THEN 1 ELSE 4 END;
```
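The per-row invocation claim is easy to demonstrate. With a user-defined function registered from Python into SQLite, a call counter shows the function runs once per candidate row when it appears in the WHERE clause, and once when hoisted into a variable first (the function here is a stand-in, not the real `fnQuarterDate`):

```python
import sqlite3

calls = {"n": 0}

def quarter_flag(d):
    # Stand-in for a scalar UDF such as fnQuarterDate; counts invocations.
    calls["n"] += 1
    return 1 if d == "20131231" else 0

conn = sqlite3.connect(":memory:")
conn.create_function("fn_quarter_flag", 1, quarter_flag)
conn.execute("CREATE TABLE industry_data (dt TEXT, report_type_id INTEGER)")
conn.executemany("INSERT INTO industry_data VALUES (?, ?)",
                 [("20131231", 1)] * 5)

# UDF inside the WHERE clause: evaluated once per candidate row.
per_row = conn.execute(
    "SELECT COUNT(*) FROM industry_data "
    "WHERE report_type_id = CASE fn_quarter_flag(dt) WHEN 1 THEN 1 ELSE 4 END"
).fetchone()[0]
calls_in_query = calls["n"]

# Hoisting the call into a variable beforehand: evaluated exactly once.
calls["n"] = 0
x = quarter_flag("20131231")
hoisted = conn.execute(
    "SELECT COUNT(*) FROM industry_data WHERE report_type_id = ?",
    (1 if x == 1 else 4,),
).fetchone()[0]
calls_after_hoist = calls["n"]
```

Both queries return the same rows; only the number of function invocations differs, which mirrors the 70ms vs 6ms observation in the question.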
It depends. See [User-Defined Functions](http://technet.microsoft.com/en-us/library/ms191007.aspx): > The number of times that a function specified in a query is actually executed can vary between execution plans built by the optimizer. An example is a function invoked by a subquery in a WHERE clause. The number of times the subquery and its function is executed can vary with different access paths chosen by the optimizer.
WHERE-CASE clause Subquery Performance
[ "", "sql", "sql-server-2012", "subquery", "case", "database-performance", "" ]
How would I go about updating a table by using another table so it puts in the new data, and if it doesn't match on an id it adds the new id and the data with it? My original table is much bigger than the new table that will update it, and the new table has a few ids that aren't in the old table but need to be added.

for example I have:

Table being updated-

```
+-------------------+
|  Original Table   |
+-------------------+
|  ID  |  Initials  |
|------+------------|
|  1   |  ABC       |
|  2   |  DEF       |
|  3   |  GHI       |
```

and... the table I'm pulling data from to update the other table-

```
+-------------------+
|     New Table     |
+-------------------+
|  ID  |  Initials  |
|------+------------|
|  1   |  XZY       |
|  2   |  QRS       |
|  3   |  GHI       |
|  4   |  ABC       |
```

then I want my Original table to get its values that match up to be updated by the new table if they have changed, and add any new ID rows if they aren't in the original table, so in this example it would look like the New Table.

```
+-------------------+
|  Original Table   |
+-------------------+
|  ID  |  Initials  |
|------+------------|
|  1   |  XZY       |
|  2   |  QRS       |
|  3   |  GHI       |
|  4   |  ABC       |
```
You can use the MERGE statement to put this UPSERT operation in one statement, but there are issues with MERGE, so I would split it into two statements: UPDATE and INSERT.

**UPDATE**

```
UPDATE O
SET O.Initials = N.Initials
FROM Original_Table O
INNER JOIN New_Table N
        ON O.ID = N.ID
```

**INSERT**

```
INSERT INTO Original_Table (ID , Initials)
SELECT ID , Initials
FROM New_Table N
WHERE NOT EXISTS (SELECT 1 
                  FROM Original_Table 
                  WHERE Original_Table.ID = N.ID)
```

*Important Note*

Reason why I suggested avoiding the merge statement: read this article [`Use Caution with SQL Server's MERGE Statement`](http://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/) by *Aaron Bertrand*
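A runnable sketch of the two-statement upsert (SQLite syntax, so the UPDATE uses a correlated subquery instead of SQL Server's `UPDATE ... FROM` join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE original_t (id INTEGER PRIMARY KEY, initials TEXT);
    CREATE TABLE new_t      (id INTEGER PRIMARY KEY, initials TEXT);
    INSERT INTO original_t VALUES (1,'ABC'), (2,'DEF'), (3,'GHI');
    INSERT INTO new_t      VALUES (1,'XZY'), (2,'QRS'), (3,'GHI'), (4,'ABC');
""")

# Step 1 (UPDATE): overwrite rows whose id already exists in the target.
conn.execute("""
    UPDATE original_t
    SET initials = (SELECT n.initials FROM new_t n
                    WHERE n.id = original_t.id)
    WHERE id IN (SELECT id FROM new_t)
""")

# Step 2 (INSERT): add rows whose id is not in the target yet.
conn.execute("""
    INSERT INTO original_t (id, initials)
    SELECT n.id, n.initials FROM new_t n
    WHERE NOT EXISTS (SELECT 1 FROM original_t o WHERE o.id = n.id)
""")

rows = conn.execute("SELECT id, initials FROM original_t ORDER BY id").fetchall()
```

After both steps the target matches the question's expected New Table contents.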
You need to use the MERGE statement for this:

```
MERGE original_table AS Target
USING updated_table AS Source
ON Target.id = Source.id
WHEN MATCHED THEN
UPDATE SET Target.Initials = Source.Initials
WHEN NOT MATCHED THEN
INSERT(id, Initials) VALUES(Source.id, Source.Initials);
```

You have not specified what happens in case the values in the original table are not found in the updated one. But, just in case, you can add this to remove them from the original table:

```
WHEN NOT MATCHED BY SOURCE THEN DELETE
```
update a table from another table and add new values
[ "", "sql", "sql-server", "" ]
This is in an Oracle database. Say I have the following table A:

```
column1    column2
id1        a
id2        a
id3        a
id4        b
id5        b
id6        c
```

So what I want the SQL to do is: first count that there are three a's, two b's and one c, then based on those counts return the smallest of them, in this case 1 (because we only have one c). Can this be achieved somehow by using a combination of MIN and COUNT?
Try this: ``` SELECT MIN(Count) as MinVal FROM (SELECT column2,COUNT(column2) as Count FROM TableA GROUP BY column2) T ``` **Explanation:** Inner query select the counts of `column2` for each value of `column2` in the table. Then with the outer query, the minimum count is selected.
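The subquery form is portable; a quick SQLite check with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (column1 TEXT, column2 TEXT)")
conn.executemany("INSERT INTO table_a VALUES (?, ?)", [
    ("id1", "a"), ("id2", "a"), ("id3", "a"),
    ("id4", "b"), ("id5", "b"), ("id6", "c"),
])

# Inner query: one count per distinct column2 value (3, 2, 1).
# Outer query: the minimum of those counts.
(min_count,) = conn.execute("""
    SELECT MIN(cnt) FROM (
        SELECT column2, COUNT(column2) AS cnt
        FROM table_a
        GROUP BY column2
    )
""").fetchone()
```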
In Oracle you can do this directly; count per group and use MIN on the results to get back one row with the desired value. ``` select min(count(*)) from tablea group by column1; ```
SQL - combination of MIN and COUNT
[ "", "sql", "oracle", "aggregate-functions", "oracle-sqldeveloper", "" ]
I am using Oracle SQL (TeraTerm), and I am trying to join specific information from two tables CONSULTANT, and PROJECT\_CONSULTANT, and I need to retrieve only the employees who worked over 40 hours. Here are the tables Project Consultant ``` PROJECT_ID CONSULTANT_ID NUMBER_HOURS --------------- --------------- ------------ 94738949 49620928 6 45699847 34879223 57 45699847 95928792 44 45699847 04875034 59 19870398 49620928 32 30495394 95928792 57 30495394 07811473 50 62388923 07811473 82 ``` and Consultant ``` CONSULTA NAME ZIP START_DT -------- -------------------------------- ----- --------- CON_TITLE ------------------------- 49620928 Tom Jones 39875 01-SEP-98 Junior Consultant 04875034 Jack Johnson 29087 05-OCT-93 Manager 34879223 Lanny Harris 03944 30-APR-04 Principal CONSULTA NAME ZIP START_DT -------- -------------------------------- ----- --------- CON_TITLE ------------------------- 95928792 Michael Johnson 02953 22-JUN-02 Senior Manager 07811473 Wendy Adams 29087 05-JUL-05 Senior Consultant ``` The code I came up with is ``` select Consultant_ID, Name, Zip, and Number_Hours from Consultant Inner Join project_consultant ON Consultant.Consultant_ID=project_consultant.Consultant_ID WHERE project_consultant.number_Hours>40; ``` I am getting an error ``` ERROR at line 1: ORA-00936: missing expression ``` I just wanna know how to write the join statement correctly any help would be awesome, because I am having trouble knowing how to fix this join statement
You don't use `and` in the `select` clause: ``` select c.Consultant_ID, c.Name, c.Zip, pc.Number_Hours from Consultant c Inner Join project_consultant pc on c.Consultant_ID = pc.Consultant_ID where pc.number_Hours > 40; ``` You also need a table alias in the `select` clause to be clear what table `Consultant_Id` refers to. EDIT: You might actually want to *sum* the hours for employees. If so, you need an aggregation: ``` select c.Consultant_ID, c.Name, c.Zip, sum(pc.Number_Hours) from Consultant c Inner Join project_consultant pc on c.Consultant_ID = pc.Consultant_ID group by c.Consultant_ID, c.Name, c.Zip having sum(pc.number_Hours) > 40; ```
You can't use **and** in the SELECT clause. Try this:

```
SELECT C.Consultant_ID, C.Name, C.Zip, PC.Number_Hours
FROM Consultant C
INNER Join project_consultant PC ON C.Consultant_ID=PC.Consultant_ID
WHERE PC.number_Hours > 40;
```
Having trouble creating join looking for specific data
[ "", "sql", "join", "inner-join", "" ]
I'm storing tweets, twitter users, and categorising twitter users in groups. I have the following 4 tables tweets: ``` tweet_id | user_id | created ------------------------------- 23452345 | 2345 | 2013-08-12 23456094 | 1234 | 2014-03-24 23097777 | 1234 | 2014-04-12 23948798 | 9999 | 2013-09-22 ``` twitter\_users: ``` user_id | screen_name ------------------------ 2345 | michael 1234 | david 9999 | not_interested ``` twitter\_social: ``` user_id | social_id --------------------------- 2345 | 34 9999 | 20 1234 | 80 ``` social\_categories: ``` social_id | category_id ----------------------- 34 | 3 20 | 6 80 | 3 ``` I want to see the oldest tweet per user who appears in a certain social category. The following SQL I've written doesn't seem to work. I have one row per `twitter_user` however I'm not seeing the earliest tweet ``` SELECT tu.screen_name as Handle, tw.created_at as Earliest FROM twitter_users tu LEFT JOIN tweets tw ON tu.user_id = tw.user_id LEFT JOIN twitter_social ts ON ts.user_id = tu.user_id LEFT JOIN social_categories cs ON ts.social_id = cs.social_id WHERE cs.category_id=3 GROUP BY Handle ORDER BY Earliest ASC ``` **EDIT** I wish to have results like the following ``` Handle | Earliest --------------------- david | 2014-03-24 michael | 2013-08-12 ```
``` SELECT tu.screen_name as Handle ,MIN(tw.created_at) as Earliest FROM twitter_users tu LEFT JOIN tweets tw ON tu.user_id = tw.user_id LEFT JOIN twitter_social ts ON ts.user_id = tu.user_id LEFT JOIN social_categories cs ON ts.social_id = cs.social_id WHERE cs.category_id=3 GROUP BY tu.screen_name ORDER BY Earliest ASC ```
If you want the earliest, it's not ASC (ascending) ``` ORDER BY Earliest ASC ``` It should be DESC (descending) ``` ORDER BY Earliest DESC ```
SQL Limit & Order By across joins
[ "", "mysql", "sql", "join", "" ]
Can I do something like this?

```
select * from tableA JOIN tableB ON tableA.id=tableB.id where tableB.someId = select id from otherTable where anotherId = 1
```

I have two WHERE clauses; is that possible?
You can use `=` when the subquery returns only 1 value. When subquery returns more than 1 value, you will have to use `IN` or `EXISTS`: 1. Using `IN`: ``` select * from tableA JOIN tableB ON tableA.id=tableB.id where tableB.someId IN (select id from otherTable where anotherId = 1) ``` `IN` determines whether a specified value matches any value in a subquery or a list. Read more [**here**](http://technet.microsoft.com/en-us/library/ms177682.aspx). 2. USING `EXISTS`: ``` select * from tableA JOIN tableB ON tableA.id = tableB.id where EXISTS (select id from otherTable where anotherId = 1 and tableB.someId = otherTable .id) ```
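A small runnable example of why `IN` is the safe choice when the subquery can return several rows (SQLite here; MySQL would raise an error for `=` with a multi-row subquery):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INTEGER);
    CREATE TABLE table_b (id INTEGER, some_id INTEGER);
    CREATE TABLE other_t (id INTEGER, another_id INTEGER);
    INSERT INTO table_a VALUES (1), (2);
    INSERT INTO table_b VALUES (1, 10), (2, 20);
    INSERT INTO other_t VALUES (10, 1), (30, 1), (20, 2);
""")

# The subquery returns two ids (10 and 30) for another_id = 1,
# so IN is needed to match against the whole set.
rows = conn.execute("""
    SELECT a.id
    FROM table_a a
    JOIN table_b b ON a.id = b.id
    WHERE b.some_id IN (SELECT id FROM other_t WHERE another_id = 1)
    ORDER BY a.id
""").fetchall()
```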
You could use the [`IN` Clause](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_in): ``` select * from tableA JOIN tableB ON tableA.id = tableB.id where tableB.someId IN (select id from otherTable where anotherId = 1) ``` You could also use the [`EXISTS` Condition](http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html): ``` select * from tableA JOIN tableB ON tableA.id = tableB.id where EXISTS (select id from otherTable ot where anotherId = 1 and tableB.someId = ot.id) ``` `=` would also work fine, if the subquery returned a single value. [Difference between EXISTS and IN](https://stackoverflow.com/q/24929/1492578)
SQL where equal to expression
[ "", "mysql", "sql", "" ]
I have a table with close to 3 million rows that has 5-10 updates/inserts every second. Each row is assigned a category, and I want to group by the category to count the total number of rows for each category. ``` Select CategoryId , COUNT(*) as TotalRows FROM Table1 WHERE SaleTypeId = 2 AND CategoryId > 1 GROUP BY CategoryId ``` Table Schema: ``` CREATE TABLE [dbo].[Table1]( [SaleId] INT IDENTITY (1, 1) NOT NULL, [SaleTypeId] INT NOT NULL, [CategoryId] INT NULL) ``` Primary Key: ``` ADD CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ([SaleId] ASC) WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF); ``` I have a non-clustered index on the table: ``` CREATE NONCLUSTERED INDEX [Index1] ON [dbo].[Table1] ( [SaleTypeId] ASC, [CategoryId] ASC ) ``` Query Plan: ![enter image description here](https://i.stack.imgur.com/buu2u.png) The query takes 40 to 60 seconds to run, and it looks like a lot of data is being read in the index seek operation. Is there any way to speed up this query? I have read that count gets slower on bigger data sets and that there are quicker ways to get the count of an entire table, but I need to get the count by the category.
I ended up running this as a nightly aggregate job and storing the result in an aggregate table. It doesn't provide up to date results (which, after deliberation, we can live with) nor is the nightly query any faster, but reading from the aggregate table is a lot faster.
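The summary-table pattern looks roughly like this (an illustrative SQLite sketch; the real job would run on a schedule, e.g. via SQL Server Agent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (sale_id INTEGER PRIMARY KEY,
                        sale_type_id INTEGER, category_id INTEGER);
    INSERT INTO sales (sale_type_id, category_id) VALUES
        (2, 2), (2, 2), (2, 3), (1, 2), (2, 1);
    CREATE TABLE sales_counts (category_id INTEGER PRIMARY KEY,
                               total_rows INTEGER);
""")

# The "nightly job": recompute the expensive GROUP BY into a tiny table.
conn.execute("DELETE FROM sales_counts")
conn.execute("""
    INSERT INTO sales_counts (category_id, total_rows)
    SELECT category_id, COUNT(*)
    FROM sales
    WHERE sale_type_id = 2 AND category_id > 1
    GROUP BY category_id
""")

# Readers query the small pre-aggregated table instead of the big one.
rows = conn.execute(
    "SELECT category_id, total_rows FROM sales_counts ORDER BY category_id"
).fetchall()
```

The trade-off is exactly the one described above: reads become cheap, at the cost of results being only as fresh as the last run of the job.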
Reverse the columns order in the nonclustered index, like this: ``` CREATE NONCLUSTERED INDEX [Index1] ON [dbo].[Table1] ( [CategoryId] ASC, [SaleTypeId] ASC ) ```
SQL Group By with Count is slow
[ "", "sql", "sql-server", "performance", "" ]
``` CREATE TABLE Personn ( pid INTEGER PRIMARY KEY, iname VARCHAR(50), gender CHAR(1), dateOfBirth DATE, CONSTRAINT person_gender CHECK (Gender BETWEEN 'f' AND 'm') ); INSERT INTO Personn VALUES ('1005','john', 'M', 24/apr/1999 ); ``` ``` Error starting at line : 18 in command - INSERT INTO Personn VALUES ('1005','john', 'M', 24/apr/1999 ) Error at Command Line : 18 Column : 52 Error report - SQL Error: ORA-00984: column not allowed here 00984. 00000 - "column not allowed here" *Cause: *Action: ```
You should change your insert statement. Date values should be given as string literals. So change ``` INSERT INTO Personn VALUES ('1005','john', 'M', 24/apr/1999 ); ``` to ``` INSERT INTO Personn VALUES (1005, 'john', 'M', '24/apr/1999'); ``` Since your primary key is an integer, you can leave out the single quotes there.
``` INSERT INTO Personn VALUES ('1005','john', 'm', to_date('24/apr/1999', 'dd/mon/yyyy') ); ```
oracle error: "column not allowed here"
[ "", "sql", "oracle", "" ]
I need to select some columns from a table as XML with namespaces included in them, along with other columns as is. For example, I have the following table layout:

```
ID   C1   X1C1   X1C2   X2C3
1    A    1      2      3
```

What the query should return is:

```
ID   C1   XmlData
1    A    <xmldata1>
2    A    <xmldata2>
```

Where `<xmldata1>` would be:

```
<Root xmlns:xsd="w3.org/2001/XMLSchema" xmlns:xsi="w3.org/2001/XMLSchema-instance" xmlns:mst="microsoft.com/wsdl/types/">
  <Child attrib="C1">
    <ChildValue xsi:type="xsd:integer">1</ChildValue>
  </Child>
  <Child attrib="C2">
    <ChildNode xsi:type="xsd:integer">2</ChildNode>
  </Child>
</Root>
```

and `<xmldata2>` would be:

```
<Root xmlns:xsd="w3.org/2001/XMLSchema" xmlns:xsi="w3.org/2001/XMLSchema-instance" xmlns:mst="microsoft.com/wsdl/types/">
  <Child attrib="C3">
    <ChildNode xsi:type="xsd:integer">3</ChildNode>
  </Child>
</Root>
```

I have a good reference on how to build the XML from [this SO question](https://stackoverflow.com/questions/10357203/selecting-few-columns-of-a-result-set-as-xml) but I'm not able to put in the namespaces. If this is possible, how do I do it?
Edit: I've used the following query attempting to get the required result:

```
select 1 ID, 'A' C1, 1 X1C1, 2 X1C2, 3 X2C3 into #t

;with xmlnamespaces('w3.org/2001/XMLSchema' as xsd,
                    'w3.org/2001/XMLSchema-instance' as xsi,
                    'microsoft.com/wsdl/types/' as mst)
select ID, C1,
       (select (SELECT 'C1' "@attrib", 'xsd:integer' "ChildValue/@xsi:type",
                       t.X1C1 as 'ChildValue'
                FOR XML PATH('Child'), type),
               (SELECT 'C2' "@name", 'xsd:integer' "ChildValue/@xsi:type",
                       t.X1C2 as 'ChildValue'
                FOR XML PATH('Child'), type)
        FOR XML PATH('Root'), type) as property_data
FROM #t t

drop table #t
```

Here is the output of its xml part:

```
<Root xmlns:mst="microsoft.com/wsdl/types/" xmlns:xsi="w3.org/2001/XMLSchema-instance" xmlns:xsd="w3.org/2001/XMLSchema">
  <Child xmlns:mst="microsoft.com/wsdl/types/" xmlns:xsi="w3.org/2001/XMLSchema-instance" xmlns:xsd="w3.org/2001/XMLSchema" attrib="C1">
    <ChildValue xsi:type="xsd:integer">1</ChildValue>
  </Child>
  <Child xmlns:mst="microsoft.com/wsdl/types/" xmlns:xsi="w3.org/2001/XMLSchema-instance" xmlns:xsd="w3.org/2001/XMLSchema" name="C2">
    <ChildValue xsi:type="xsd:integer">2</ChildValue>
  </Child>
</Root>
```

I can't get rid of the namespaces in the `Child` node.
I used this solution: [TSQL for xml add schema attribute to root node](https://stackoverflow.com/questions/12726426/tsql-for-xml-add-schema-attribute-to-root-node)

Basically, I did not put the namespace in at the beginning; instead, after generating the required XML structure I cast the XML to `nvarchar(max)` and replaced the root node with one carrying the desired namespace. I also needed to use a namespace prefix in the attribute. For that I used a pseudo attribute name which I then replaced with a proper XML namespace prefix. Both operations were done using the T-SQL `REPLACE` function. Hacky, but I couldn't find a more proper way to do it.
You need to include `WITH XMLNAMESPACES`. Example:

```
;with xmlnamespaces('w3.org/2001/XMLSchema' as xsd,
'w3.org/2001/XMLSchema-instance' as xsi,
'microsoft.com/wsdl/types/' as mst)
select ID, C1,
(select
(SELECT 'C1' "@name",t.C1 as 'value'FOR XML PATH('Property'), type),
(SELECT 'C2' "@name",t.C2 as 'value'FOR XML PATH('property'), type)
FOR XML PATH('data'), type) as property_data
FROM TableName t
```
Selecting columns as XML with namespace
[ "", "sql", "xml", "t-sql", "sql-server-2008-r2", "" ]
If I have to work with `localdb`, do I need to install it separately? I have SQL Server 2008 R2 Management Studio installed, SQL Server 2012 installed, and the .NET 4.0.2 update installed. But I don't see localdb on the PC yet.
From [MSDN](http://msdn.microsoft.com/ru-ru/library/hh510202.aspx) > The primary method of installing LocalDB is by using the > SqlLocalDB.msi program. LocalDB is an option when installing any SKU > of SQL Server 2012 Express. **Select LocalDB on the Feature Selection > page during installation of SQL Server Express**. There can be only one > installation of the LocalDB binary files for each major SQL Server > Database Engine version. Multiple Database Engine processes can be > started and will all use the same binaries. An instance of the SQL > Server Database Engine started as the LocalDB has the same limitations > as SQL Server Express ![enter image description here](https://i.stack.imgur.com/TWZp0.png) Or you can download standalone **SqlLocalDB.msi** from the list of available downloads on the `SQL Express` downloading [page](http://www.microsoft.com/en-us/download/details.aspx?id=29062) ![enter image description here](https://i.stack.imgur.com/Hyq3q.png) To locate `localdb` instance on your PC simply try to connect to `(localdb)\Projects` ![enter image description here](https://i.stack.imgur.com/fKZd7.png)
If you downloaded **SqlLocalDB.msi**, you can use a silent install:

```
msiexec /i SqlLocalDB.msi /qn IACCEPTSQLLOCALDBLICENSETERMS=YES
```

Remember that installation requires administrative rights!
How to install localdb separately?
[ "", "sql", "sql-server", "localdb", "" ]
I am using the query below to return a row count for paging. It works fine, but it takes very long to return because all of the tables have millions of records. Currently it takes 7 seconds to return the row count; can anyone help me make it faster? I have also tried the same query with a #table and a @table; both are slow. The query is:

```
WITH cte_rowcount AS (
    SELECT p.policyid
    FROM resident (nolock) r
         INNER JOIN resident_policy (nolock) rp ON r.residentid = rp.residentid
         INNER JOIN policy (nolock) p ON p.policyid = rp.policyid
         --INNER JOIN PolicySource (NOLOCK) psourse ON p.PolicySourceID = psourse.PolicySourceId
         INNER JOIN policy_locations (nolock) pl ON pl.policyid = p.policyid
         INNER JOIN location (nolock) l ON pl.locationid = l.locationid
         --INNER JOIN Policy_Status (NOLOCK) ps ON ps.PolicyStatusId = p.PolicyStatusId
         INNER JOIN property (nolock) pr ON pr.propertyid = l.propertyid
         --INNER JOIN dbo.States (NOLOCK) s ON s.StateId = pr.StateId
    WHERE r.primary_resident = 0x1
      AND ( ( @ResidentFirstName IS NULL ) OR R.firstname LIKE @ResidentFirstName + '%' )
      AND ( ( @ResidentLastName IS NULL ) OR R.firstname LIKE @ResidentLastName + '%' )
      AND ( @PropertyAddress IS NULL OR pr.address LIKE @PropertyAddress + '%' )
      AND ( @Policynumber IS NULL OR p.policynumber LIKE @Policynumber + '%' )
      AND ( @LocationAddress IS NULL OR l.address2 LIKE @LocationAddress + '%' )
      AND ( @City IS NULL OR pr.city LIKE @City + '%' )
      AND ( @ZipCode IS NULL OR pr.zipcode = @ZipCode )
      AND ( @StateId IS NULL OR pr.stateid = @StateId )
      AND ( @PolicyStatusId IS NULL OR p.policystatusid = @PolicyStatusId )
)
SELECT @rowcount = Count(*)
FROM cte_rowcount
```
I'd say to look at the indexes, but it probably won't help much, because a) you probably did it already, and b) you can get no seeks with this kind of query, only scans. The idea is to get rid of these ORs and allow the optimizer to produce a sound plan. There are two options.

I don't know which version of SQL Server is in question, but if it's SQL 2008 SP1 CU5 (10.0.2746) or later, or SQL 2008 R2 CU1 (10.50.1702) or later, or anything newer than that, add an `option (recompile)` to the query. This should produce a much better plan, using seeks on relevant indexes. This will, however, add some recompile overhead to every execution, so maybe the second option is better.

You can rewrite the query into a dynamic one and eliminate the NULL parameters before the optimizer even sees the query. I tried to rewrite your query; I don't have your data so I can't test it, and there may be some errors in it, but you'll get my intention nevertheless. And I had to guess the datatypes. (BTW, is there a specific reason for `SELECT p.policyid`?)
Here it is:

```
declare @qry nvarchar(4000), @prms nvarchar(4000);

set @qry = N'
SELECT count(*)
FROM resident (nolock) r
INNER JOIN resident_policy (nolock) rp ON r.residentid = rp.residentid
INNER JOIN policy (nolock) p ON p.policyid = rp.policyid
INNER JOIN policy_locations (nolock) pl ON pl.policyid = p.policyid
INNER JOIN location (nolock) l ON pl.locationid = l.locationid
INNER JOIN property (nolock) pr ON pr.propertyid = l.propertyid
WHERE r.primary_resident = 0x1 '

if @ResidentFirstName IS NOT NULL
    set @qry = @qry + ' AND R.firstname LIKE @ResidentFirstName + ''%'''
if @ResidentLastName IS NOT NULL
    set @qry = @qry + ' AND R.firstname LIKE @ResidentLastName + ''%'''
if @PropertyAddress IS NOT NULL
    set @qry = @qry + ' AND pr.address LIKE @PropertyAddress + ''%'''
if @Policynumber IS NOT NULL
    set @qry = @qry + ' AND p.policynumber LIKE @Policynumber + ''%'''
if @LocationAddress IS NOT NULL
    set @qry = @qry + ' AND l.address2 LIKE @LocationAddress + ''%'''
if @City IS NOT NULL
    set @qry = @qry + ' AND pr.city LIKE @City + ''%'''
if @ZipCode IS NOT NULL
    set @qry = @qry + ' AND pr.zipcode = @ZipCode'
if @StateId IS NOT NULL
    set @qry = @qry + ' AND pr.stateid = @StateId'
if @PolicyStatusId IS NOT NULL
    set @qry = @qry + ' AND p.policystatusid = @PolicyStatusId'

set @prms = N'@PolicyStatusId int, @StateId int, @ZipCode int, @City varchar(50),
              @LocationAddress varchar(50), @Policynumber varchar(50),
              @PropertyAddress varchar(50), @ResidentLastName varchar(50),
              @ResidentFirstName varchar(50)'

exec sp_executesql @qry, @prms,
     @PolicyStatusId = @PolicyStatusId,
     @StateId = @StateId,
     @ZipCode = @ZipCode,
     @City = @City,
     @LocationAddress = @LocationAddress,
     @Policynumber = @Policynumber,
     @PropertyAddress = @PropertyAddress,
     @ResidentLastName = @ResidentLastName,
     @ResidentFirstName = @ResidentFirstName
```

If you check the execution plan you'll see the index seeks, provided you have nonclustered indexes on the WHERE and JOIN columns.
Moreover, the plan will be cached, one for each combination of parameters.
This is hard to answer because with a huge bulk of data many things could happen. In terms of joins, this should perform well.

If this query is just here to perform a count, then I suggest doing it directly with `SELECT count('x')`, without the CTE and without `(nolock)`:

```
SELECT @rowcount = count('x')
FROM resident r
     INNER JOIN resident_policy rp ON r.residentid = rp.residentid
     INNER JOIN policy p ON p.policyid = rp.policyid
     INNER JOIN policy_locations pl ON pl.policyid = p.policyid
     INNER JOIN location l ON pl.locationid = l.locationid
     INNER JOIN property pr ON pr.propertyid = l.propertyid
WHERE r.primary_resident = 0x1
  AND ( ( @ResidentFirstName IS NULL ) OR R.firstname LIKE @ResidentFirstName + '%' )
  AND ( ( @ResidentLastName IS NULL ) OR R.firstname LIKE @ResidentLastName + '%' )
  AND ( @PropertyAddress IS NULL OR pr.address LIKE @PropertyAddress + '%' )
  AND ( @Policynumber IS NULL OR p.policynumber LIKE @Policynumber + '%' )
  AND ( @LocationAddress IS NULL OR l.address2 LIKE @LocationAddress + '%' )
  AND ( @City IS NULL OR pr.city LIKE @City + '%' )
  AND ( @ZipCode IS NULL OR pr.zipcode = @ZipCode )
  AND ( @StateId IS NULL OR pr.stateid = @StateId )
  AND ( @PolicyStatusId IS NULL OR p.policystatusid = @PolicyStatusId )
```

If this CTE is used both to count rows and to retrieve data, be sure that you retrieve only the data for the page in question (only 20 elements, using `ROW_NUMBER() as RC` and `RC > 0 AND RC <= 20`).

On the database side, check that you have indexes for all of your join clauses. It looks like there are only primary keys, so those are already indexed; be sure you also have indexes on the joined columns. If you continue to have trouble, use the "live execution plan" feature to see what is going on.

A `LIKE` condition can be a performance killer depending on the text size and database content. You can also look at the collation of your text columns to gain some speed on text comparison.
Optimized way to get the row count from a query over a large amount of data
[ "", "sql", "sql-server", "t-sql", "rowcount", "" ]
I have three tables called Hours, Projects and Clients. I'm somewhat experienced with SQL statements and can't seem to get my head around why this isn't working. Projects and Hours tables both share a foreign key called projectid and Projects and Clients both share a foreign key called clientid. Here's my query so far: ``` SELECT hoursworked.h, projectname.p, description.p, archive.p, clientname.c FROM hours AS h, projects AS p, clients AS c JOIN h ON projectid.h = projectid.p JOIN p ON clientid.p = clientid.c WHERE archive.p = 0; ``` I seem to be getting an error called "#1066 - Not unique table/alias: 'h' " Not sure where I am going wrong here. Any help would be great. Thanks in advance!
You are mixing implicit joins and explicit joins. A simple rule: don't use commas in `from` clauses. ``` SELECT h.hoursworked, p.projectname, p.description, p.archive, c.clientname FROM hours h join projects p on h.projectid = p.projectid join clients c ON p.clientid = c.clientid WHERE p.archive = 0; ``` In addition, the syntax for using aliases is `<table alias>`.`<column alias>`, not the other way around.
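As a quick sanity check (not part of the original answer), the corrected join can be reproduced against an in-memory SQLite database from Python — SQLite stands in for MySQL here, and the sample data is invented to match the question's schema:

```python
import sqlite3

# In-memory database standing in for the MySQL schema in the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hours    (projectid INTEGER, hoursworked INTEGER);
CREATE TABLE projects (projectid INTEGER, clientid INTEGER,
                       projectname TEXT, description TEXT, archive INTEGER);
CREATE TABLE clients  (clientid INTEGER, clientname TEXT);
INSERT INTO projects VALUES (1, 1, 'Site', 'Build site', 0),
                            (2, 1, 'App',  'Build app',  1);
INSERT INTO hours    VALUES (1, 8), (2, 4);
INSERT INTO clients  VALUES (1, 'Acme');
""")

# Explicit joins, with the <alias>.<column> order fixed; the archived
# project (archive = 1) is filtered out by the WHERE clause.
rows = con.execute("""
    SELECT h.hoursworked, p.projectname, p.description, p.archive, c.clientname
    FROM hours h
    JOIN projects p ON h.projectid = p.projectid
    JOIN clients  c ON p.clientid  = c.clientid
    WHERE p.archive = 0
""").fetchall()
print(rows)  # [(8, 'Site', 'Build site', 0, 'Acme')]
```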
You need to prepend the table name to the field/column, not put it at the end, and usually you would use `AS` for field/column aliases, not for table aliases. Also, I would name the tables in the `JOIN`s, not separated by commas in the `FROM` statement. This is how it should look: ``` SELECT h.hoursworked, p.projectname, p.description, p.archive, c.clientname FROM hours h JOIN projects p ON h.projectid = p.projectid JOIN clients c ON p.clientid = c.clientid WHERE p.archive = 0; ```
SQL JOIN ON issue with aliases or not using aliases
[ "", "sql", "alias", "join", "" ]
```
table 1
_id  sub_id
 1     32
 2     34
 3     42
 4     44
 5     47
 6     50
```

and

```
table 2
_id  sub_id
 1     34
 2     42
```

I want this result:

```
_id  sub_id  count
 1     32      2
 2     34      2
 3     42      1
 4     44      0
 5     47      0
 6     50      0
```

Table 2's sub_id 34 covers the table 1 rows at or below it (32 and 34), so each of those gets count + 1; table 2's sub_id 42 covers 32, 34 and 42, so each of those gets count + 1. So:

```
result
32, 34 = 2
42 = 1
44, 47, 50 = 0
```

I tried outer join, left join, etc., but did not get the correct result. How can I get this result? Please help me.
Try this query ``` SELECT _id ,sub_id ,( SELECT count(*) FROM table2 t2 WHERE t2.sub_id >= t1.sub_id ) count FROM table1 t1 ```
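Since the question is tagged sqlite, the correlated subquery can be checked directly with Python's built-in `sqlite3` module — a sketch using the question's own sample data (the alias `cnt` is used instead of `count` to avoid shadowing the function name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (_id INTEGER, sub_id INTEGER);
CREATE TABLE table2 (_id INTEGER, sub_id INTEGER);
INSERT INTO table1 VALUES (1,32),(2,34),(3,42),(4,44),(5,47),(6,50);
INSERT INTO table2 VALUES (1,34),(2,42);
""")

# For every table1 row, count the table2 rows with sub_id >= that row's sub_id.
rows = con.execute("""
    SELECT _id, sub_id,
           (SELECT count(*) FROM table2 t2
             WHERE t2.sub_id >= t1.sub_id) AS cnt
    FROM table1 t1
    ORDER BY _id
""").fetchall()
print(rows)
# [(1, 32, 2), (2, 34, 2), (3, 42, 1), (4, 44, 0), (5, 47, 0), (6, 50, 0)]
```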
This is what you want : ``` SELECT t1._id ,t1.sub_id ,count(t2._id) as count FROM table1 t1 left join table2 t2 on t2.sub_id >= t1.sub_id GROUP BY t1._id ,t1.sub_id ``` **Here is the [SQLfiddle demo](http://www.sqlfiddle.com/#!7/791ff/6)**
sql outer join count above rows
[ "", "sql", "database", "sqlite", "join", "" ]
I have multiple tables with different numbers of columns and different field names. I want to select `*` from these tables where a specific condition holds (and then insert the data into a different database). For example, if I run the queries below separately, each of them returns a different number of rows, where the number of columns and the field names are totally different. Of course I could export the results of these queries separately into three different files and then insert them into a different database, but my goal is to combine the results so that I can export the data in one file.

```
Select * From table1 Where id>=500;
Select * From table2 Where id>=200;
Select * From table3 Where id>=1500;
```

Please note that `Union all` did not work in this case; MySQL said `#1222 - The used SELECT statements have a different number of columns`. Could you please let me know if you can help with this problem?
I think you're going the wrong way: if you export a union, then you will have problems importing it. Instead, try to merge files, or export the three tables separately into one file, for example using the tool **mysqldump** like this:

```
mysqldump -u root -pyour_password your_database table1 >> /tmp/mysql_dump.sql
mysqldump -u root -pyour_password your_database table2 >> /tmp/mysql_dump.sql
mysqldump -u root -pyour_password your_database table3 >> /tmp/mysql_dump.sql
```

EDIT: You stated that you need a where condition - that is also possible, like this:

```
mysqldump -u root -pyour_password --where="id>=500" your_database table1 >> /tmp/mysql_dump.sql
```
You can use something like this (note there must be no semicolons between the unioned selects):

```
Select ID as ID, '' as Name, '' as Something From table1 where id>=500
union all
Select ID as ID, Name as Name, '' as Something from table2 where id>=200
union all
Select ID as ID, '' as Name, Something as Something from table3 where id>=1500;
```
Combine multiple select * queries for tables with a different number of columns/names for export into a different database
[ "", "mysql", "sql", "" ]
I have two tables,

```
ID    NAME
-------------------
12    Jon Doe
4     Jane Doe
9     Sam Doe
```

AND

```
MemID  Cat#            DateChkOut    DateDue      DateRet
4      T 430.98 1956   15-Mar-2011   14-Jun-2011  31-May-2011
12     B 125.2 2013    15-Mar-2011   14-Jun-2011
4      T 430.98 1956   27-Dec-2012   25-Mar-2013
```

Now I need to list the members' names who have never checked out a book. `SELECT Name FROM MEMBER, CHECKOUT WHERE ID != MemID;` did not work. Any suggestions?
You need to use a `LEFT JOIN` in this case. Try the following: ``` SELECT M.Name FROM MembersTable M LEFT JOIN CheckoutTable C On C.MemId = M.ID WHERE C.MemId IS NULL ``` Edit: `LEFT JOIN` works in this situation because it joins the two tables on a common key - in this case the `MemId`. Since this is a `LEFT JOIN` it will take everything that exists in the Left table (Members) and if a match is found, it will include everything on the Right table (Checkout). But if no match is found in the Checkout table, everything on that side will be `NULL`. So, all you would need to do is check to see if the Right side is `NULL`. Hope this makes sense :)
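Here's the anti-join pattern reproduced with the question's sample data in an in-memory SQLite database (SQLite is just a stand-in here; the table and column names are simplified):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE member   (id INTEGER, name TEXT);
CREATE TABLE checkout (memid INTEGER, cat TEXT);
INSERT INTO member   VALUES (12, 'Jon Doe'), (4, 'Jane Doe'), (9, 'Sam Doe');
INSERT INTO checkout VALUES (4, 'T 430.98'), (12, 'B 125.2'), (4, 'T 430.98');
""")

# Members with no checkout come back with NULL on the right side of the
# LEFT JOIN, and the WHERE clause keeps exactly those rows.
rows = con.execute("""
    SELECT m.name
    FROM member m
    LEFT JOIN checkout c ON c.memid = m.id
    WHERE c.memid IS NULL
""").fetchall()
print(rows)  # [('Sam Doe',)]
```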
``` SELECT name FROM member WHERE id NOT IN (SELECT MemID FROM checkout) ``` I think it should do the job
SELECT query on multiple tables
[ "", "mysql", "sql", "database", "" ]
I have two tables. TableA contains EventID, Date, **Manager**. TableB contains **Manager**, Department, Date Started, Date Ended.

In TableB managers often switch departments, so seeing entries like

```
Manager1, Tulsa, 1-1-2012, 6-1-2012
Manager1, Iowa City, 6-2-2012, 12-31-2012
```

is to be expected.

I want to assign each EventID to a department based on the department the manager was in at the time the event occurred. So, for example, if TableA has

```
000001, 2-1-2012, Manager1
```

I want the return to be `000001, Tulsa`. If TableA has

```
000002, 8-1-2012, Manager1
```

I want the return to be `000002, Iowa City`.

UPDATE: The solution by user2034570 worked! Thanks.
You need to join on an FK and use BETWEEN in the WHERE clause:

```
SELECT A.EventID, B.Department
FROM TableA A
INNER JOIN TableB B ON A.Manager = B.Manager
WHERE A.Date BETWEEN B.DateStarted AND B.DateEnded
```
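A runnable sketch of the same idea, using Python and an in-memory SQLite database as a stand-in for SQL Server. One assumption differs from the question: the dates are stored here as ISO strings (not d-MMM-yyyy) so that plain text comparison with `BETWEEN` orders chronologically:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablea (eventid TEXT, date TEXT, manager TEXT);
CREATE TABLE tableb (manager TEXT, department TEXT,
                     datestarted TEXT, dateended TEXT);
-- ISO dates so that text comparison orders chronologically
INSERT INTO tableb VALUES ('Manager1', 'Tulsa',     '2012-01-01', '2012-06-01'),
                          ('Manager1', 'Iowa City', '2012-06-02', '2012-12-31');
INSERT INTO tablea VALUES ('000001', '2012-02-01', 'Manager1'),
                          ('000002', '2012-08-01', 'Manager1');
""")

# Each event matches the department row whose date range contains it.
rows = con.execute("""
    SELECT a.eventid, b.department
    FROM tablea a
    JOIN tableb b ON a.manager = b.manager
    WHERE a.date BETWEEN b.datestarted AND b.dateended
    ORDER BY a.eventid
""").fetchall()
print(rows)  # [('000001', 'Tulsa'), ('000002', 'Iowa City')]
```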
``` Select Max(EventID),Date, Manager From TableA Group By Date, Manager ```
Join 2 tables based on the date of an event happening in table 1, being between start and end dates of table 2
[ "", "sql", "sql-server", "" ]
In the vb script I have a select statement I am trying to pass a string value with an undetermined length to a SQL in operator the below code works but allows for SQL injection. I am looking for a way to use the ADO createParameter method. I believe the different ways I have tried are getting caught up in my data type (adVarChar, adLongChar, adLongWChar) ``` Dim studentid studentid = GetRequestParam("studentid") Dim rsGetData, dbCommand Set dbCommand = Server.CreateObject("ADODB.Command") Set rsGetData = Server.CreateObject("ADODB.Recordset") dbCommand.CommandType = adCmdText dbCommand.ActiveConnection = dbConn dbCommand.CommandText = "SELECT * FROM students WHERE studentID in (" & studentid & ")" Set rsGetData = dbCommand.Execute() ``` I have tried ``` Call addParameter(dbCommand, "studentID", adVarChar, adParamInput, Nothing, studentid) ``` which gives me this error ADODB.Parameters error '800a0e7c' Problems adding parameter (studentID)=('SID0001','SID0010') :Parameter object is improperly defined. Inconsistent or incomplete information was provided. I have also tried ``` Call addParameter(dbCommand, "studentID", adLongVarChar, adParamInput, Nothing, studentid) ``` and ``` Dim studentid studentid = GetRequestParam("studentid") Dim slength slength = Len(studentid) response.write(slength) Dim rsGetData, dbCommand Set dbCommand = Server.CreateObject("ADODB.Command") Set rsGetData = Server.CreateObject("ADODB.Recordset") dbCommand.CommandType = adCmdText dbCommand.ActiveConnection = dbConn dbCommand.CommandText = "SELECT * FROM students WHERE studentID in (?)" Call addParameter(dbCommand, "studentID", adVarChar, adParamInput, slength, studentid) Set rsGetData = dbCommand.Execute() ``` both of these options don't do anything... no error message and the SQL is not executed. Additional information: studentid is being inputted through a HTML form textarea. 
The design is to be able to have a user input a list of student IDs (up to 1000 lines) and perform actions on those student profiles. In my JavaScript on the previous ASP page I have a function that takes the list and changes it into a comma-delimited list with `''` around each element.
Classic ASP does not have good support for this. You need to fall back to one of the alternatives discussed here: > <http://www.sommarskog.se/arrays-in-sql-2005.html> That article is kind of long, but in a good way: it's considered by many to be the standard work on this subject. It also just so happens that my preferred option is not included in that article. What I like to do is use a holding table for each individual item in the list, such that each item uses an ajax request to insert or remove it from the holding table the moment the user selects or de-selects it. Then I join to that table for my list, so that you end up with something like this: ``` SELECT s.* FROM students s INNER JOIN studentSelections ss on s.StudentID = ss.StudentID WHERE ss.SessionKey = ? ```
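The holding-table idea can be sketched outside classic ASP as well. Below is a minimal Python/SQLite illustration of the pattern (table and session names are invented): each selected ID is inserted with its own parameterized statement, and the final query simply joins to the holding table — no string concatenation, so no injection surface.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students (studentid TEXT PRIMARY KEY, name TEXT);
CREATE TABLE studentselections (sessionkey TEXT, studentid TEXT);
INSERT INTO students VALUES ('SID0001','Ann'),('SID0002','Bob'),('SID0010','Cho');
""")

session = "abc123"
wanted = ["SID0001", "SID0010"]
# One parameterized insert per selected id.
con.executemany("INSERT INTO studentselections VALUES (?, ?)",
                [(session, sid) for sid in wanted])

rows = con.execute("""
    SELECT s.studentid, s.name
    FROM students s
    JOIN studentselections ss ON s.studentid = ss.studentid
    WHERE ss.sessionkey = ?
    ORDER BY s.studentid
""", (session,)).fetchall()
print(rows)  # [('SID0001', 'Ann'), ('SID0010', 'Cho')]
```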
What does your `addParameter()` function do? I don't see it anywhere in your code. You should be able to create and add your string param like so:

```
With dbCommand
    .Parameters.Append .CreateParameter(, vbString, , Len(studentid), studentid)
End With
```

(Small hack here. `vbString` has the same value as `adBSTR`. You'll find that the `VarType`s of all VB "types" have matching ADO counterparts.)

```
Type      VarType (VBScript)  DataTypeEnum (ADO)  Value
--------- ------------------  ------------------  -----
Integer   vbInteger           adSmallInt, 2-byte  2
Long      vbLong              adInteger, 4-byte   3
Single    vbSingle            adSingle            4
Double    vbDouble            adDouble            5
Currency  vbCurrency          adCurrency          6
Date      vbDate              adDate              7
String    vbString            adBSTR              8
Object    vbObject            adIDispatch         9
Error     vbError             adError             10
Boolean   vbBoolean           adBoolean           11
Variant   vbVariant           adVariant           12
Byte      vbByte              adUnsignedTinyInt   17
```

---

**Edit:** Looks like Joel has a good solution for you. I didn't realize `IN` isn't compatible with ADO parameterized queries. I think something like the following would work, but you probably wouldn't want to do it with (potentially) 1000 ID's.

```
' Create array from student IDs entered...
a = Split(studentid, ",")

' Construct string containing the proper number of param placeholders
' (one per array element). Remove final comma.
strParams = Replace(String(UBound(a) + 1, "?"), "?", "?,")
strParams = Left(strParams, Len(strParams) - 1)

With dbCommand
    .CommandText = "select * from students where studentID in (" & strParams & ")"
    Set rsGetData = .Execute(, a)
End With
```
Pass a vbscript String list to a SQL "in"operator
[ "", "sql", "vbscript", "asp-classic", "" ]
I have an insert command whose values are calculated based on a currency value in another table. The SQL code is something like:

```
INSERT INTO table1 (column1,column2,column3,column4,column5,column6)
VALUES ( value1 * (SELECT currency FROM currency_table WHERE date=date1),
         value2 * (SELECT currency FROM currency_table WHERE date=date1),
         value3 * (SELECT currency FROM currency_table WHERE date=date1),
         value4 * (SELECT currency FROM currency_table WHERE date=date2),
         value5 * (SELECT currency FROM currency_table WHERE date=date2),
         value6 * (SELECT currency FROM currency_table WHERE date=date2) )

UPDATE table1
SET column1 = value7 * (SELECT currency FROM currency_table WHERE date=date1),
    column2 = value8 * (SELECT currency FROM currency_table WHERE date=date1),
    column3 = value9 * (SELECT currency FROM currency_table WHERE date=date1),
    column4 = value10 * (SELECT currency FROM currency_table WHERE date=date2),
    column5 = value11 * (SELECT currency FROM currency_table WHERE date=date2),
    column6 = value12 * (SELECT currency FROM currency_table WHERE date=date2)
WHERE column7 = value13
```

So I get the currency value 3 times. Consider this an example, because my actual query does this with 10 calls. How can I change this query to get the currency value only once instead of 3 times? Consider that currency_table has only 1 row. Forgot to mention: this query is for SQL Server 2008.
The insert approach Guneli provided is OK, but syntactically that query is incorrect. Use:

```
INSERT INTO table1 (column1,column2,column3,column4,column5,column6)
SELECT value1*currency1, value2*currency1, value3*currency1,
       value4*currency2, value5*currency2, value6*currency2
FROM (select (select currency from currency_table where date = @date1) currency1,
             (select currency from currency_table where date = @date2) currency2) A;
```

For the update, you can use the following command:

```
UPDATE t1
SET t1.column1 = value1 * c.currency1,
    t1.column2 = value2 * c.currency1,
    t1.column3 = value3 * c.currency1,
    t1.column4 = value4 * c.currency2,
    t1.column5 = value5 * c.currency2,
    t1.column6 = value6 * c.currency2
from table1 t1,
     (select (select currency from currency_table where date = @date1) currency1,
             (select currency from currency_table where date = @date2) currency2) C
WHERE t1.column4 = value4
```

It will make a cross join with the derived currency table, and since that derived table has a single row, it will update correctly.
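To see that the derived-table trick works in principle, here is the INSERT half reproduced in an in-memory SQLite database via Python (a sketch with invented rates; SQLite stands in for SQL Server 2008, so the T-SQL `UPDATE ... FROM` half is omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE currency_table (date TEXT, currency REAL);
INSERT INTO currency_table VALUES ('2013-01-01', 2.0), ('2013-02-01', 3.0);
CREATE TABLE table1 (c1 REAL, c2 REAL);
""")

# Each currency is looked up once in the derived table, then reused.
con.execute("""
    INSERT INTO table1 (c1, c2)
    SELECT 10 * currency1, 20 * currency2
    FROM (SELECT
            (SELECT currency FROM currency_table WHERE date='2013-01-01') AS currency1,
            (SELECT currency FROM currency_table WHERE date='2013-02-01') AS currency2) a
""")

rows = con.execute("SELECT c1, c2 FROM table1").fetchall()
print(rows)  # [(20.0, 60.0)]
```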
Try this for INSERT: ``` INSERT INTO table1 (column1,column2,column3) VALUES SELECT value1*currency, value2*currency, value3*currency FROM currency_table; ```
Using Same Value from Select Statement in an Insert Query
[ "", "sql", "sql-server", "optimization", "insert", "" ]
What would be the best database design for this: I have a list of products and a list of clients. Each product has 9 different prices assigned to it. Each client has one of those prices assigned for every product.

It would be something like this:

```
PRODUCTS
id name price1 price2 price3 price4 price5 price6 price7 price8 price9

CLIENTS
id name price_Code
```

Any idea?
Some information is missing. First, you need a product table ``` Product ------- Product ID Product Description ... ``` Next, you need a product price table ``` ProductPrice ------------ ProductPrice ID Product ID Price ``` Next, you have a client table ``` Client ------ Client ID Client Name ... ``` Finally, you have a client product price table. The missing information is, does the price for the client change if the price changes in the product price table? ``` ClientProductPrice ------------------ Client ID Product ID ??? ``` If the price for the client changes, then the ??? is the Product Price ID. You don't need the Product ID because you can get that from the product price table. If the price for the client does not change, then the ??? is the Price.
This makes no sense. Prices are examples of diachronic properties: meaning they represent a value *at a given time*. What you want is a price table with 2 dates in it. ``` Prices productID:long effectiveDate:Date endDate:Date ``` Then when you want the current price, you have two choices. The most common one is to join the row that has a null for the endDate field. The other reason you want to do this is that you might be asked to change the price of a product on a given date.
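The two-date price history can be sketched quickly with Python and SQLite (invented sample data; column names follow the answer's schema): the current price is simply the row whose end date is still open (NULL).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prices (productid INTEGER, price REAL,
                     effectivedate TEXT, enddate TEXT);
-- The first row was superseded on 2013-07-01; the second is still current.
INSERT INTO prices VALUES (1, 9.99,  '2013-01-01', '2013-06-30'),
                          (1, 11.50, '2013-07-01', NULL);
""")

# Current price = the row whose end date is still open (NULL).
rows = con.execute("""
    SELECT price FROM prices
    WHERE productid = 1 AND enddate IS NULL
""").fetchall()
print(rows)  # [(11.5,)]
```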
Database design for products with multiple prices
[ "", "sql", "database", "" ]
I have a query like the one below.

```
select *
from tbl a
WHERE a.device_cat =
      (CASE (SELECT :VIEW1 FROM DUAL
             WHERE :VIEW1 IN (SELECT DISTINCT version FROM tbl2))
          WHEN NULL THEN NULL
          ELSE DECODE (:device_cat, 'ALL', a.device_cat, :device_cat)
       END)
```

So, when the query below is null, `a.device_cat` should be null; in that case, the query above always returns empty results. But the records definitely exist when `a.device_cat` is null. Please help me! Thanks!

```
(SELECT :VIEW1 FROM DUAL WHERE :VIEW1 IN (SELECT DISTINCT version FROM tbl2))
```
You cannot compare to null like this: ``` a.device_cat =null ``` Try this: ``` select * from tbl a WHERE nvl(a.device_cat, 0) = nvl((CASE (SELECT :VIEW1 FROM DUAL WHERE :VIEW1 IN (SELECT DISTINCT version FROM tbl2)) WHEN null THEN null ELSE DECODE (:device_cat, 'ALL', a.device_cat, :device_cat) END), 0) ```
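The underlying NULL semantics are easy to demonstrate from Python with SQLite (used here only as a stand-in — Oracle's `NVL` plays the role that `coalesce` plays below):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Equality against NULL evaluates to NULL (unknown), never to TRUE:
eq = con.execute("SELECT NULL = NULL").fetchone()[0]
print(eq)  # None, i.e. not true

# Substituting a default on both sides makes the comparison well-defined:
fixed = con.execute("SELECT coalesce(NULL, 0) = coalesce(NULL, 0)").fetchone()[0]
print(fixed)  # 1
```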
You cannot compare `null` to `null` like that. It is like: ``` undefined = undefined ``` Which is not true... Something you don't know isn't something you don't know. At least, in SQL. You can use `coalesce` to circumvent this: ``` WHERE coalesce(a.device_cat, '###') = (CASE (SELECT coalesce(:VIEW1, '###') ```
Why the case when doesn't work in oracle
[ "", "sql", "oracle", "" ]
Okay, so I have kind of a weird issue... the dates in the table have been entered in as string values `MMDDYYYY` and I'm trying to have the displayed as `MM/DD/YYYY` in a report and only select the most recent date pertaining to an ID, because some ID's may have multiple dates. Example of my table: ``` ID | MyDate | ------+----------+ 1 | 01302014 | 1 | 04222014 | 2 | 01302014 | ``` What I want to see when I select and insert into a temp table is this: ``` ID | MyDate | ------+-----------+ 1 | 4/22/2014 | 2 | 1/30/2014 | ``` I know that storing dates as string values is a poor practice especially when storing them as `MMDDYYYY`, but does anyone have a solution to this nightmare? **EDIT** I forgot to mention that some fields might be NULL. Not sure if that makes a difference or not, but I think it does if I try to flip the dates using Right, Left, Convert.
This question is from almost a year ago, but probably someone can find it useful. You need to `CONVERT` your string to a `DATE` format and use the `ROW_NUMBER` function to window your result set.

**Create table**

```
DECLARE @tbl TABLE(Id INT, myDate VARCHAR(8))
```

**Sample data**

```
INSERT @tbl
SELECT 1 , '01302014' UNION ALL
SELECT 1 , '04222014' UNION ALL
SELECT 2 , '01302014'
```

**Query**

```
;WITH C AS(
    SELECT ROW_NUMBER() OVER (PARTITION BY Id
               ORDER BY CONVERT(DATETIME, (SUBSTRING(myDate, 5, 4) + '.' +
                        SUBSTRING(myDate, 1, 2) + '.' +
                        SUBSTRING(myDate, 3, 2)), 101) DESC) AS Rn
          ,Id
          ,CAST(CONVERT(DATETIME, (SUBSTRING(myDate, 5, 4) + '.' +
                SUBSTRING(myDate, 1, 2) + '.' +
                SUBSTRING(myDate, 3, 2)), 101) AS DATE) AS myDate
    FROM @tbl
)
SELECT Id, myDate
FROM C
WHERE Rn = 1
```

[SQLFiddle Demo](http://sqlfiddle.com/#!6/46c23/2)
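The same substring-rearrangement-plus-`ROW_NUMBER` approach can be checked with Python's `sqlite3` module (SQLite 3.25+ is needed for window functions; SQLite stands in for SQL Server here, so the MMDDYYYY string is rebuilt as sortable ISO text rather than a DATETIME):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions require SQLite 3.25+
con.executescript("""
CREATE TABLE t (id INTEGER, mydate TEXT);
INSERT INTO t VALUES (1, '01302014'), (1, '04222014'), (2, '01302014');
""")

# Rebuild MMDDYYYY as YYYY-MM-DD (sortable), then keep the newest row per id.
rows = con.execute("""
    WITH c AS (
        SELECT id,
               substr(mydate,5,4) || '-' || substr(mydate,1,2)
                 || '-' || substr(mydate,3,2) AS iso,
               ROW_NUMBER() OVER (PARTITION BY id
                   ORDER BY substr(mydate,5,4) || substr(mydate,1,2)
                            || substr(mydate,3,2) DESC) AS rn
        FROM t
    )
    SELECT id, iso FROM c WHERE rn = 1 ORDER BY id
""").fetchall()
print(rows)  # [(1, '2014-04-22'), (2, '2014-01-30')]
```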
Using a CONVERT like the following code snippet will work on any SQL Server regardless of language and/or locale configuration. ``` DECLARE @OldDate varchar(8); SELECT @OldDate = '04252012'; SELECT CONVERT(datetime, substring(@OldDate,5,4) + '-' + substring(@OldDate,1,2) + '-' + substring(@OldDate,3,2) + 'T00:00:00') ```
Convert varchar MMDDYYYY to MM/DD/YYYY datetime and select the most recent date only
[ "", "sql", "sql-server", "datetime", "varchar", "" ]
I have a revision table like this:

```
id  revision_id  event_id
-------------------------
1   1            1
2   2            1
3   1            2
4   2            2
5   3            2
```

I want to get the last revision_id for each distinct event_id. I want a MySQL query to do so. Can anyone guide me?
``` SELECT event_id, max(revision_id) FROM mytable GROUP BY event_id ``` See in **[SQLFiddle](http://sqlfiddle.com/#!2/6f2e4/1)**.
``` SELECT MAX(revision_id) , event_id FROM TABLE_NAME GROUP BY event_id ```
How can I get last revision_id of particular event_id?
[ "", "mysql", "sql", "" ]
I have 3 tables: `Student`, `Address` and `StudentAddress`. `Student` stores all the students and `Address` stores all the address details, while `StudentAddress` resolves the many-to-many relationship between `Student` and `Address`; it holds the details of students who have lived at more than one address. I am trying to list the names and address details of students who have changed their address more than 5 times.

```
SELECT a.StudentID, CONCAT(b.FirstName + " " + b.LastName), c.MajorMunicipality, COUNT(a.AddressID) AS count
FROM StudentAddress a
INNER JOIN Member b ON a.StudentID = b.StudentID
INNER JOIN Address c ON a.AddressID = b.AddressID
GROUP BY a.StudentID, a.AddressID
HAVING count > 5;
```

This query has issues with joining. Please help!!
I would prefer `join` since it gives you more possibilities to use the result for your second query. To help you narrow down the actual result set, try something like this:

```
select a.MemberID
     , a.AddressID
     , COUNT(a.AddressID) as countAddress
from MemberAddress a
group by a.MemberID
       , a.AddressID
having countAddress > 3
;
```

EDIT: Try this:

```
select a.memberid
     , concat(b.firstname + " " + b.lastname)
     , c.majormunicipality
     , count(a.addressid) as countAddresses
from memberaddress a
join member b on a.memberid = b.memberid
join address c on a.addressid = b.addressid
group by a.memberid
       , concat(b.firstname + " " + b.lastname)
       , c.majormunicipality
having countAddresses > 5
;
```
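The counting logic can be sanity-checked with Python and an in-memory SQLite database (invented sample data; the aggregate is repeated in `HAVING` for portability across engines):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE memberaddress (memberid INTEGER, addressid INTEGER);
INSERT INTO memberaddress VALUES (1,10),(1,11),(1,12),(1,13),(2,10);
""")

# Member 1 has 4 address rows, member 2 only 1; HAVING keeps member 1.
rows = con.execute("""
    SELECT memberid, COUNT(addressid) AS countaddress
    FROM memberaddress
    GROUP BY memberid
    HAVING COUNT(addressid) > 3
""").fetchall()
print(rows)  # [(1, 4)]
```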
To filter them: ``` select a.MemberID,a.AddressID,COUNT(a.AddressID) as count from MemberAddress a group by a.MemberID HAVING COUNT(a.AddressID) > 3 ```
Use Join or Sub Query?
[ "", "mysql", "sql", "" ]
Imagine that I have three entities (`EntityA`, `EntityB`, `EntityC`) that can have some `images`. So there are two ways: 1. Make an `image` table for each entity. It means that `EntityA` has a `image` table named `AImages` and similarly for `EntityB` and `EntityC`. This method is more intelligent but has more tables. 2. Have an `image` table and another table name `EntityType` that goes between. `EntityType` table has an `EntityTypeId` column and a `name` and has three records: `1,EntityA`, `2,EntityB`, `3,EntityC`. Then in `image` table: If I save a record for `EntityA` the record in `image` table would be this: `1,1,name` First column is `ImageId`, second one is `EntityTypeId` and the third one is `image's filename`. If I save a record for `EntityB` the record in `image` table would be this: `2,2,name` And if I save a record for `EntityA` the record in `image` table would be this: `3,3,name` In this method the number of tables would decrease but the queries would be longer. Which one is optimised or any other way...
> Have an image table and another table name EntityType that goes between. This will require you to enforce the referential integrity manually, which is [trickier than it looks](https://stackoverflow.com/a/20873843/533120). So either: * for each 1:N relationship, have a separate "N" table for each "1" table (as you proposed), * or employ something like [exclusive FK or inheritance](https://stackoverflow.com/a/13317463/533120) if you want to avoid too much structural duplication.
The right way is to store images in `varbinary(max)` columns in their respective tables, and not to separate them. SQL Server will take care of physically storing them in an efficient, off-row manner. Depending on which version of SQL Server is used and the size of the LOBs, you might also consider the `FILESTREAM` or `FILETABLE` [feature](http://technet.microsoft.com/en-us/library/ff929144.aspx).
Which one is correct and optimized in sql?
[ "", "sql", "sql-server", "database-design", "relational-database", "query-optimization", "" ]
How does one create a custom aggregate in SQL Server 2012? I'm trying to get a median aggregate. Something like: ``` ;with cte as ( select top 50 percent val from tbl order by val desc ) select top 1 from cte ```
You should just use the built-in function, `percentile_cont()` or `percentile_disc()` (see [here](http://technet.microsoft.com/en-us/library/hh231473.aspx)), depending on how you define the median for an even number of records. I think the following does what you want (in SQL Server 2012, `PERCENTILE_CONT` is an analytic function, so it needs an `OVER` clause and is computed for every row, hence the `DISTINCT`): ``` select distinct percentile_cont(0.5) within group (order by val) over () as median from tbl; ``` This is much easier than defining your own function.
There is an interesting article that examines a number of different approaches to this problem, and also provides a performance analysis. It claims that percentile\_cont() is the worst-performing, while this is the best: ``` DECLARE @c BIGINT = (SELECT COUNT(*) FROM dbo.EvenRows); SELECT AVG(1.0 * val) FROM ( SELECT val FROM dbo.EvenRows ORDER BY val OFFSET (@c - 1) / 2 ROWS FETCH NEXT 1 + (1 - @c % 2) ROWS ONLY ) AS x; ``` Reference: <http://www.sqlperformance.com/2012/08/t-sql-queries/median>
Custom aggregate functions
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have searched similar problems here on stackoverflow but I could not understand how to make this work, what I'm trying to do... So, I want to get last 7 days transactions from database and get total sales amount and also include empty rows if there is no data for some day. What I have so far: <http://sqlfiddle.com/#!2/f4eda/6> This outputs: ``` | PURCHASE_DATE | AMOUNT | |---------------|--------| | 2014-04-25 | 19 | | 2014-04-24 | 38 | | 2014-04-22 | 19 | | 2014-04-19 | 19 | ``` What I want: ``` | PURCHASE_DATE | AMOUNT | |---------------|--------| | 2014-04-25 | 19 | | 2014-04-24 | 38 | | 2014-04-23 | 0 | | 2014-04-22 | 19 | | 2014-04-21 | 0 | | 2014-04-20 | 0 | | 2014-04-19 | 19 | ``` Any help appreciated :)
Simply put together a subquery with the dates you want and use `left outer join`: ``` select d.thedate, coalesce(SUM(amount), 0) AS amount from (select date('2014-04-25') as thedate union all select date('2014-04-24') union all select date('2014-04-23') union all select date('2014-04-22') union all select date('2014-04-21') union all select date('2014-04-20') union all select date('2014-04-19') ) d left outer join transactions t on t.purchase_date = d.thedate and vendor_id = 0 GROUP BY d.thedate ORDER BY d.thedate DESC; ```
This is not easy. I took help from this thread [generate days from date range](https://stackoverflow.com/questions/2157282/generate-days-from-date-range) and combined it with your query. So the idea was to get the list of dates from last 7 days then left join these dates with a static amount 0 to the query you have and then finally sum them. This could be used for any date range, just need to change them in both the queries ``` select t1.purchase_date, coalesce(SUM(t1.amount+t2.amount), 0) AS amount from ( select DATE_FORMAT(a.Date,'%Y-%m-%d') as purchase_date, '0' as amount from ( select curdate() - INTERVAL (a.a + (10 * b.a) + (100 * c.a)) DAY as Date from (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as a cross join (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as b cross join (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as c ) a where a.Date BETWEEN NOW() - INTERVAL 7 DAY AND NOW() )t1 left join ( SELECT DATE_FORMAT(purchase_date, '%Y-%m-%d') as purchase_date, coalesce(SUM(amount), 0) AS amount FROM transactions WHERE purchase_date BETWEEN NOW() - INTERVAL 7 DAY AND NOW() AND vendor_id = 0 GROUP BY purchase_date )t2 on t2.purchase_date = t1.purchase_date group by t1.purchase_date order by t1.purchase_date desc ``` **[DEMO](http://sqlfiddle.com/#!2/f4eda/59)**
MySql Single Table, Select last 7 days and include empty rows
[ "", "mysql", "sql", "" ]
I appreciate any help. I am creating a database query and I would like to add something. I created two tables(users&videos) and I would like to insert video\_id into users table but however I would like one user to have multiple videos. For example, user\_id 1 should have all three videos. Any help? `create table users ( user_id int, username varchar(50), video_id int, primary key (user_id) );` `insert into users(user_id,username,video_id) values (1,'trex',1&2&3);` This line is incorrect. `create table videos ( video_id int, video_title varchar(100), primary key (video_id) );` `insert into videos(video_id,video_title) values (1,'Math tutor'),(2,'SQL joins'),(3,'Fastest cars');`
I think in that case you should normalize your `users` table: remove `video_id` from it and create one table for the user-to-video mapping, something like this: ``` create table user_videos ( user_id int, video_id int); ``` With this approach, one user can have multiple videos and one video can belong to multiple users.
If a video can only belong to one user then you should do it the other way around, ie. the user\_id field is in the video table. That way one user can have many videos.
SQL query to add multiple values into an attribute
[ "", "mysql", "sql", "" ]
I have a table `A` with two columns that I want to append to another table `B`, how can I do that? They have the exact same rows. Software is `SQL Server 2012`. EDIT (attempted code from comment): ``` INSERT INTO B_table([Column 0], [Column 1]) SELECT [Column 0], [Column 1] FROM [dbo].[A] ; ```
The basic form is: ``` insert into tableB(col1, col2) select col1, col2 from tableA; ``` This may not work if, for instance, you have a unique constraint on the columns and the `insert` violates this constraint. This assumes that you actually want to add the rows to the table. If you just want to see the results together: ``` select col1, col2 from tableB union all select col1, col2 from tableA; ``` EDIT: The goal seems to be to add columns one `tableB`. You can do this by adding the columns and then updating the values: ``` alter table tableB add col1 . . . ; alter table tableB add col2 . . . ; ``` The `. . .` is the definition of the column. Then do: ``` update b set col1 = a.col1, col2 = b.col2 from tableb b join tablea a on b.joinkey = a.joinkey; ``` If you don't have a column for joining, then you have a problem. Tables in SQL are inherently unordered, so there is no way to assign values from a particular row of `A` to a particular row of `B`.
Try this: ``` INSERT INTO tbl1 SELECT * FROM tbl2; ```
Append table to an existing one: SQL Server
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I'm using EXISTS AND NOT EXISTS IN THE same query, but not getting any results. I'm looking for results where there is a particular position and sub position, but want to exclude items where that particular position exists, but the subposition is null. I'm a newbie, so I'm probably missing something here. Here is my query: ``` SELECT a.* FROM dbo.table AS a WHERE EXISTS (SELECT Distinct b.EventID FROM dbo.table AS b WHERE b.EventID = a.EventID AND b.Position = 'ABC' AND b.SubPosition = 'DEF') AND NOT EXISTS (SELECT Distinct b.EventID FROM dbo.table AS b WHERE b.EventID = a.EventID AND b.Position = 'ABC' AND b.SubPosition IS NULL) ``` Thx.
`EXCEPT` is an alternative to `EXISTS` and `NOT EXISTS` in the same query: ``` SELECT * FROM Test T WHERE T.EventId IN ( SELECT EventId FROM Test T WHERE T.Position = 'ABC' AND T.SubPosition = 'DEF' EXCEPT SELECT EventId FROM Test T WHERE T.Position = 'ABC' AND T.SubPosition IS NULL ) ``` `EXCEPT` will yield distinct rows.
First, your English language description is unclear, and does not match the SQL. If you are looking ".. for results where there is a particular position and sub position, ..." then you would simply write ``` SELECT * FROM dbo.table a Where Position = 'ABC' AND SubPosition = 'DEF' ``` and obviously, none of those rows will have a null subposition ... So I assume you meant to write: "*looking for results of all rows for those eventIds where there is a row for that eventId with a particular position and sub position, ...*" then, from within that resultset you wish to exclude those rows with null SubPosition? If that is accurate, then try this: ``` SELECT * FROM dbo.table a WHERE EXISTS (SELECT * FROM dbo.table WHERE EventID = a.EventID AND Position = 'ABC' AND SubPosition = 'DEF') AND SubPosition IS NULL ``` Another way to write this is with the `IN` operator (in which case the sub-query is not correlated, so you don't need any aliases): ``` SELECT * FROM dbo.table WHERE EventID In (SELECT EventID FROM dbo.table WHERE Position = 'ABC' AND SubPosition = 'DEF') AND SubPosition IS NULL ``` EDIT: After your comment, I am guessing what you meant was actually: "*looking for results of all rows for those eventIds where there exists a row for that eventId with a particular position and sub position, but there does NOT EXIST any row with that position and a null subposition.*" This is different from eliminating rows with a null subposition from the output! This eliminates rows for EventIds where there is any row in the table with a null subposition. Is this what you want?
If so, then try this: ``` SELECT * FROM dbo.table a WHERE EXISTS (SELECT * FROM dbo.table WHERE EventID = a.EventID AND Position = 'ABC' AND SubPosition = 'DEF') AND Not EXISTS (SELECT * FROM dbo.table WHERE EventID = a.EventID AND Position = 'ABC' AND SubPosition IS NULL) ``` OK, try these queries and tell me what you get: ``` SELECT * FROM dbo.table WHERE EventID = 2 AND Position = 'ABC' AND SubPosition = 'DEF' -- should return one row ``` and... ``` SELECT * FROM dbo.table WHERE EventID = 2 AND Position = 'ABC' AND SubPosition IS NULL -- should return NO rows ``` If this is true, then the main query should also return the one row with EventId = 2. FURTHER EDIT: Execute this to create a test table with test data and run the query: ``` CREATE TABLE [dbo].[TEST]( [EventId] [int] NULL, [Pos] [char](3) NULL, [subPos] [char](3) NULL ) Insert Test(EventId, Pos, SubPos) Values (1, 'ABC', 'DEF') Insert Test(EventId, Pos, SubPos) Values (1, 'ABC', null) Insert Test(EventId, Pos, SubPos) Values (2, 'ABC', 'DEF') Insert Test(EventId, Pos, SubPos) Values (3, 'ABC', 'DEF') Insert Test(EventId, Pos, SubPos) Values (3, 'ABC', null) SELECT * FROM dbo.test a WHERE EXISTS (SELECT * FROM dbo.test WHERE EventID = a.EventID AND Pos = 'ABC' AND SubPos = 'DEF') AND Not EXISTS (SELECT * FROM dbo.test WHERE EventID = a.EventID AND Pos = 'ABC' AND SubPos IS NULL) ``` BTW, you don't need an alias for a table or resultset unless you need to reference that table/resultset from outside that part of the query. The only place in your query where that is the case is the main table, which needs to be referenced from the two correlated sub-queries. The correlated subqueries themselves are not referenced by anything.
MS SQL On Using WHERE EXISTS and WHERE NOT EXISTS
[ "", "sql", "sql-server", "" ]
How do I select rows in the past starting from yesterday in Oracle DB where a field like `created_date` is a `timestamp(6)`? I don't want to compare time, just date.
From the [Oracle documentation on SELECT](http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_10002.htm#i2105152) : ``` SELECT * FROM orders WHERE created_date < TO_DATE('2014-04-28', 'YYYY-MM-DD'); ``` I can pass this date format from my application, worked like a charm.
If you want exactly one day prior to the current time: ``` select * from table t where created_date < sysdate - 1; ``` If you want times before today: ``` select * from table t where created_date <= trunc(sysdate); ```
Select rows where day is less than today
[ "", "sql", "oracle", "" ]
I have to two table ``` table 1 table 2 id a b c id a 1 2 3 a 1 r 2 4 5 b 4 d 3 6 7 c 5 s 4 8 9 d 5 1 2 e 6 2 3 f ``` I want the table2 override table 1. below is the result I want ***I want to create a view table*** ``` table override id a b c 1 r 3 a 2 4 5 b 3 6 7 c 4 d 9 d 5 s 2 e 6 2 3 f ``` How am I able to do that? Thx
``` UPDATE table1 INNER JOIN table2 ON table1.id=table2.id SET table1.a=table2.a ``` If you want a view, try this one: ``` select t1.id, ifnull(t2.a, t1.a), t1.b from table_1 as t1 left join table_2 as t2 on t2.id = t1.id ``` **Creating the view:** `CREATE VIEW tbl_vw AS SELECT t1.id, COALESCE(t2.a, t1.a), b, c FROM table_1 AS t1 LEFT JOIN table_2 AS t2 ON t1.id = t2.id;` [Here is a sample](http://sqlfiddle.com/#!2/1849d/2/0)
Try this: ``` UPDATE table1 t1 JOIN table2 t2 ON t1.id = t2.id SET t1.a = t2.a ``` **Change for a view:** ``` CREATE VIEW v AS SELECT t1.id, COALESCE(t2.a, t1.a), b, c FROM t1 LEFT JOIN t2 ON t1.id = t2.id; ``` [SQL FIDDLE](http://sqlfiddle.com/#!2/d20d7/1)
How can I make one smaller table that overrides another bigger table?
[ "", "mysql", "sql", "" ]
I wanted to use `MOD` function in `SQL Server 2008R2` and followed this [link](http://technet.microsoft.com/en-us/library/ee634767(v=sql.105).aspx) but still got the message: > 'MOD' is not a recognized built-in function name. ``` DECLARE @m INT SET @m = MOD(321,11) SELECT @m ``` Error: > Msg 195, Level 15, State 10, Line 2 > 'MOD' is not a recognized built-in function name. Why I can't use this function from the link above?
The `MOD` keyword only exists in the `DAX` language (tabular dimensional queries), not `TSQL` Use `%` instead. Ref: [Modulo](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/modulo-transact-sql?view=sql-server-ver15)
In T-SQL, modulo is done with the percent sign: `SELECT 38 % 5` returns 3, the remainder.
'MOD' is not a recognized built-in function name
[ "", "sql", "sql-server-2008", "t-sql", "sql-server-2008-r2", "modulo", "" ]
Following command is used to insert data from one table into another. ``` insert into school_record (phone_id) select phone_id from students where name="abc" order by phone_id asc; ``` But what if I want to insert all the phone\_id values for the names "abc", "def", "ghi" and so on... That is how to insert data from one table to another table, after selecting values from more than one records?
Use `IN()`: list all the names you need to insert inside the parentheses. ``` insert into school_record (phone_id) select phone_id from students where name in ('abc', 'def', 'ghi', ...) ; ``` Or, if you need to insert all of them: ``` insert into school_record (phone_id) select phone_id from students ; ```
If you want all the phone\_ids of the students table you can simply do: ``` insert into school_record (phone_id) select phone_id from students order by phone_id asc; ```
How do we insert "data from more than one records", from one table to another?
[ "", "mysql", "sql", "" ]
I have a table with 56 rows. Adding new rows starts from ID 59. What can I do to continue with ID 57?
This happens when you delete rows from the table, roll back inserts, or reseed the identity to a higher number. It is possible to reset the identity, but unless you have a very good reason to do so (a good reason may be deleting most of the table contents), I don't recommend it. Details of how to reseed the identity can be found [here](http://technet.microsoft.com/en-us/library/ms176057.aspx), and the way to use it is ``` DBCC CHECKIDENT ( table_name, RESEED, new_reseed_value ) ``` where new_reseed_value should be 56 for this specific case (the next inserted row will then get identity 57). However, note that if there is already something in the table with an ID of 57 or higher, later inserts will fail with key violations. Alternatively, you can use: ``` SET IDENTITY_INSERT Table_Name ON INSERT INTO Table_Name(ID, Other_Columns,You_Are_Inserting,You_Must_Specify_All_Of_Them) VALUES (57, 'SomeData','OtherData',...) SET IDENTITY_INSERT Table_Name OFF ``` This allows you to insert a specific identity column value without reseeding. Also, note that inserting a value higher than the current identity seed will move the seed up to that value.
This is expected behaviour of `IDENTITY` columns. If you want consecutive numbers, you shouldn't be using them, because the identity value will increase every time you insert a new row, but will *not* decrease if you delete the last row, nor will it decrease if you begin a transaction, insert a new row, and then roll back the transaction. You can use the `DBCC CHECKIDENT` command to change the identity value, but really, you should just change the column to a regular non-identity column and manage the value from your own code.
Identity column skipping some values when adding a new row
[ "", "sql", "sql-server", "ssms", "" ]
This is an example of how my source xml looks ``` <Catalog xmlns="http://schemas.example.com/stuff/stuff"> <String Key="Name" Tag="22a41320-bb66-41a9-8806-760d13679c6c">Document Title 1</String> <String Key="Name" Tag="023463cf-9237-45b6-ac3f-621b9b09f609">Title for document 2</String> </Catalog> ``` I plan to loop through the String nodes and transform them into Document nodes (this only shows the first iteration of the loop). However, when I insert the new node, it inserts an empty namespace. This is the result I get: ``` <Catalog xmlns="http://schemas.example.com/stuff/stuff"> <String Key="Name" Tag="023463cf-9237-45b6-ac3f-621b9b09f609">Title for document 2</String> <Document xmlns="" Key="Document Title 1" Handle="22a41320-bb66-41a9-8806-760d13679c6c" /> </Catalog> ``` Notice the empty namespace. I want to omit the namespace on the Document node entirely. Here's the result I want ``` <Catalog xmlns="http://schemas.example.com/stuff/stuff"> <String Key="Name" Tag="023463cf-9237-45b6-ac3f-621b9b09f609">Title for document 2</String> <Document Key="Document Title 1" Handle="22a41320-bb66-41a9-8806-760d13679c6c" /> </Catalog> ``` --- Here is a full query you can play with: ``` declare @temp xml, @newNode xml; set @temp = cast( '<Catalog xmlns="http://schemas.example.com/stuff/stuff"> <String Key="Name" Tag="22a41320-bb66-41a9-8806-760d13679c6c">Document Title 1</String> <String Key="Name" Tag="023463cf-9237-45b6-ac3f-621b9b09f609">Title for document 2</String> </Catalog>' as xml) select 'before', @temp set @newNode = CAST( '<Document Key="' + @temp.value('declare default element namespace "http://schemas.example.com/stuff/stuff"; (/Catalog/String/text())[1]', 'varchar(max)') + '" Handle="' + @temp.value('declare default element namespace "http://schemas.example.com/stuff/stuff"; (/Catalog/String/@Tag)[1]', 'varchar(50)') + '" />' as xml) set @temp.modify('declare default element namespace "http://schemas.example.com/stuff/stuff"; insert sql:variable("@newNode") 
into (/Catalog)[1] ') set @temp.modify('declare default element namespace "http://schemas.example.com/stuff/stuff"; delete (/Catalog/String)[1]') select 'after', @temp ```
I tried various methods to get around this * using with xmlnamespaces: no change * using explicitly defined namespaces: no change * using the same namespace as the parent: results in the default namespace being inserted into the Document node * deleting the empty namespace with modify/delete: would not remove the xmlns attribute * inserting the values dynamically inside the modify/insert : "The argument 1 of the XML data type method "modify" must be a string literal." **Solution** So the last error got me thinking, it will insert the node I want with no namespace so long as it's a string literal.. So I did just that. 1. Insert empty node with empty attributes 2. Use modify/replace to fill in the values of the attributes after insertion And here's the example of what it looks like ``` declare @temp xml set @temp = cast( '<Catalog xmlns="http://schemas.example.com/stuff/stuff"> <String Key="Name" Tag="22a41320-bb66-41a9-8806-760d13679c6c">Document Title 1</String> <String Key="Name" Tag="023463cf-9237-45b6-ac3f-621b9b09f609">Title for document 2</String> </Catalog>' as xml) select 'before', @temp while (@temp.value('declare default element namespace "http://schemas.example.com/stuff/stuff"; count(/Catalog/String)', 'int') > 0) begin SET @temp.modify('declare default element namespace "http://schemas.example.com/stuff/stuff"; insert <Document Key="" Handle="" /> into (/Catalog)[1] ') SET @temp.modify('declare default element namespace "http://schemas.example.com/stuff/stuff"; replace value of (/Catalog/Document[@Handle=""]/@Handle)[1] with (/Catalog/String/@Tag)[1]') SET @temp.modify('declare default element namespace "http://schemas.example.com/stuff/stuff"; replace value of (/Catalog/Document[@Key=""]/@Key)[1] with (/Catalog/String/text())[1]') SET @temp.modify('declare default element namespace "http://schemas.example.com/stuff/stuff"; delete (/Catalog/String)[1]') end select 'after', @temp ```
Instead of looping and using `modify` with `INSERT` and `DELETE`, why not just replace the desired nodes: ``` declare @temp xml, @newNode xml; set @temp = cast( '<Catalog xmlns="http://schemas.example.com/stuff/stuff"> <String Key="Name" Tag="22a41320-bb66-41a9-8806-760d13679c6c">Document Title 1</String> <String Key="Name" Tag="023463cf-9237-45b6-ac3f-621b9b09f609">Title for document 2</String> </Catalog>' as xml) SELECT CAST(REPLACE(CAST(@temp AS NVARCHAR(MAX)), 'String', 'Catalog') AS XML) ```
How to insert xml node in Sql Server without inserting empty namespace?
[ "", "sql", "sql-server", "xml", "xpath", "xml-dml", "" ]
I am trying to query the last time a file was imported from a SQL table "import", given a month string integer (Jan is '01', Feb is '02', March is '03'..). I have pasted my solution below but I was wondering if there is a more elegant way of doing so. ``` SELECT DISTINCT months.month_string, MAX(import.process_date) FROM import import, ( select '01' month_string from dual union select '02' month_string from dual union select '03' month_string from dual union select '04' month_string from dual union select '05' month_string from dual union select '06' month_string from dual union select '07' month_string from dual union select '08' month_string from dual union select '09' month_string from dual union select '10' month_string from dual union select '11' month_string from dual union select '12' month_string from dual ) months WHERE import.process_month (+) = months.month_string GROUP BY months.month_string ORDER BY months.month_string; ```
I don't know if you will find this more "elegant", but here is a better way to write the query: ``` SELECT months.month_string, MAX(import.process_date) FROM (select '01' as month_string from dual union all select '02' as month_string from dual union all select '03' as month_string from dual union all select '04' as month_string from dual union all select '05' as month_string from dual union all select '06' as month_string from dual union all select '07' as month_string from dual union all select '08' as month_string from dual union all select '09' as month_string from dual union all select '10' as month_string from dual union all select '11' as month_string from dual union all select '12' as month_string from dual ) months LEFT OUTER JOIN import on import.process_month = months.month_string GROUP BY months.month_string ORDER BY months.month_string; ``` Here are the changes: * Replaced the (uninterpretable) Oracle syntax for outer joins with an explicit, ANSI standard outer join. * Reversed the order of the tables, to use a `left outer join` rather than a `right outer join`. * Changed `select distinct` to `select`. `select distinct` is almost never needed with `group by`. * Changed `union` to `union all`. `union` expends effort to remove duplicates, which is not needed. * Added `as` for the column aliases. This makes it more apparent that the name is being assigned to the column, and helps prevent wandering commas from messing up the query. You could also use a `connect by` or recursive CTE to actually generate the month numbers, but I'm not sure that would be as clear as this version. EDIT: I was making the assumption that you need to get `NULL` values out because not all months would be present in `import`. That is why you would use a `months` table. 
If not, just do: ``` SELECT i.process_month, MAX(i.process_date) FROM import i GROUP BY i.process_month ORDER BY i.process_month; ``` If you are concerned about the range, ``` SELECT i.process_month, MAX(i.process_date) FROM import i WHERE i.process_month in ('01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12' ) GROUP BY i.process_month ORDER BY i.process_month; ```
How about ``` WITH SUMMARY_DATA AS (SELECT CASE WHEN PROCESS_MONTH IN ('01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12') THEN PROCESS_MONTH ELSE NULL END AS SUMMARY_MONTH, PROCESS_DATE FROM IMPORT) SELECT SUMMARY_MONTH, MAX(PROCESS_DATE) FROM SUMMARY_DATA GROUP BY SUMMARY_MONTH ORDER BY SUMMARY_MONTH ``` [SQLFiddle here](http://sqlfiddle.com/#!4/23b0a/1) Share and enjoy.
SQL Query from list of strings
[ "", "sql", "oracle", "database-design", "" ]
I have two SQL queries, as follows. 1st query: ``` SELECT TC.TCName, sum(BS.BLDOS) as BLDOS, sum(BS.CollectedAmount) as CollectedAmount, (0.35*sum(BS.BLDOS)) as TargetAmount FROM [Customer] C INNER JOIN [dbo].[BillingStatus] BS ON BS.CustomerID = C.CustomerID INNER JOIN [dbo].[TCDetails] TC ON TC.CustomerID = BS.CustomerID WHERE TC.TCName in (Select distinct(TCName) FROM [dbo].[TCDetails] ) GROUP BY TCName order by [TCName] ASC ``` I am getting this result: ``` ||TCName || BLDOS || CollectedAmount || TargetAmount|| | Aarti | 81234.2 | 1678.76 | 789065 | | Dev | 181234.2| 678.76 | 2389065 | ``` 2nd query: ``` SELECT COUNT(*) as count FROM [dbo].[TCDetails] where TCName in (Select distinct(TCName) FROM [dbo].[TCDetails] ) AND CallDate is NOT NULL GROUP BY TCName ``` and I am getting: ``` ||count|| 231 123 ``` I want to print it like this: ``` ||TCName || BLDOS || CollectedAmount || TargetAmount|| count || | Aarti | 81234.2 | 1678.76 | 789065 | 231 | | Dev | 181234.2| 678.76 | 2389065 | 123 | ``` I am confused because the two queries have different WHERE clauses.
Try this (note that an `ORDER BY` inside a derived table is not allowed in SQL Server, so the ordering is applied to the outer query): ``` SELECT * FROM ( SELECT TC.TCName, sum(BS.BLDOS) as BLDOS, sum(BS.CollectedAmount) as CollectedAmount, (0.35*sum(BS.BLDOS)) as TargetAmount FROM [Customer] C INNER JOIN [dbo].[BillingStatus] BS ON BS.CustomerID = C.CustomerID INNER JOIN [dbo].[TCDetails] TC ON TC.CustomerID = BS.CustomerID WHERE TC.TCName in (Select distinct(TCName) FROM [dbo].[TCDetails] ) GROUP BY TCName ) S JOIN ( SELECT TCName,COUNT(*) as count FROM [dbo].[TCDetails] where TCName in (Select distinct(TCName) FROM [dbo].[TCDetails] ) AND CallDate is NOT NULL GROUP BY TCName ) T ON S.TCName=T.TCName ORDER BY S.TCName ASC; ``` Output: ``` +--------------------------------------------------------------+ |TCName | BLDOS | CollectedAmount | TargetAmount| count | +--------------------------------------------------------------+ | Aarti | 81234.2 | 1678.76 | 789065 | 231 | | Dev | 181234.2| 678.76 | 2389065 | 123 | +--------------------------------------------------------------+ ```
In both queries you use TCDetails and group by TCName. You don't have to do this twice only to count non-null call dates. COUNT(CallDate) does that for you. I also removed the IN clause, which doesn't add anything to your query. So the query is simply: ``` SELECT TC.TCName, sum(BS.BLDOS) as BLDOS, sum(BS.CollectedAmount) as CollectedAmount, 0.35 * sum(BS.BLDOS) as TargetAmount, count(TC.CallDate) as cnt FROM [Customer] C INNER JOIN [dbo].[BillingStatus] BS ON BS.CustomerID = C.CustomerID INNER JOIN [dbo].[TCDetails] TC ON TC.CustomerID = BS.CustomerID GROUP BY TC.TCName ORDER BY TC.TCName ASC; ```
Merging two tables into one both the table having different where clause
[ "", "sql", "sql-server-2008", "" ]
I have created a calendar table that contains all the calendar dates of a year, incl. the corresponding quarter / week / month / day etc. information. The following Select gives me a specific date, here the 17th of March. How can I extend the below to check if this falls on a Saturday or Sunday (`weekDayCal = 7 or 1`) and, if true, return the date for the following Monday, otherwise return the 17th ? ``` SELECT * FROM Calendar WHERE (yearCal = 2014) AND (monthCal = 3) AND (dayCal = 17) ``` Many thanks in advance for any help with this, Mike.
Assuming you have a `day_of_calendar` style `id` field, where every date is sequentially in order, then this works... ``` SELECT * FROM Calendar WHERE id = (SELECT id + CASE weekDayCal WHEN 7 THEN 2 WHEN 1 THEN 1 ELSE 0 END FROM Calendar WHERE (yearCal = 2014) AND (monthCal = 3) AND (dayCal = 17) ) ``` If not, then you're going to have to return to using dates in one way or another. For example... ``` SELECT * FROM Calendar WHERE realDate = (SELECT realDate + CASE weekDayCal WHEN 7 THEN 2 WHEN 1 THEN 1 ELSE 0 END FROM Calendar WHERE (yearCal = 2014) AND (monthCal = 3) AND (dayCal = 17) ) ``` But then you may as well just use real date calculations.
This fetches rows that fall on a Sunday or Saturday: ``` SELECT * FROM Calendar WHERE DATEPART(dw, CAST( CAST(monthCal as VARCHAR(2)) + '-' + CAST(dayCal as VARCHAR(2)) + '-' + CAST(yearCal as VARCHAR(4)) AS DATETIME ) ) IN (1, 7) ```
SQL Server: check if certain date in calendar table falls on a weekend
[ "", "sql", "sql-server", "date", "stored-procedures", "calendar", "" ]
I have a table with the following data (paypal transactions): ``` txn_type | date | subscription_id ----------------+----------------------------+--------------------- subscr_signup | 2014-01-01 07:53:20 | S-XXX01 subscr_signup | 2014-01-05 10:37:26 | S-XXX02 subscr_signup | 2014-01-08 08:54:00 | S-XXX03 subscr_eot | 2014-03-01 08:53:57 | S-XXX01 subscr_eot | 2014-03-05 08:58:02 | S-XXX02 ``` I want to get the average subscription length overall for a given time period (`subscr_eot` is the end of a subscription). In the case of a subscription that is still ongoing (`'S-XXX03'`) I want it to be included from it's start date until *now* in the average. How would I go about doing this with an SQL statement in Postgres?
[SQL Fiddle](http://sqlfiddle.com/#!15/c862e/2). Subscription length for each subscription: ``` select subscription_id, coalesce(t2.date, current_timestamp) - t1.date as subscription_length from ( select * from t where txn_type = 'subscr_signup' ) t1 left join ( select * from t where txn_type = 'subscr_eot' ) t2 using (subscription_id) order by t1.subscription_id ``` The average: ``` select avg(coalesce(t2.date, current_timestamp) - t1.date) as subscription_length_avg from ( select * from t where txn_type = 'subscr_signup' ) t1 left join ( select * from t where txn_type = 'subscr_eot' ) t2 using (subscription_id) ```
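The same shape can be checked on any engine; here is a sketch of the accepted query on Python's built-in `sqlite3`, where `julianday()` differences (in days) stand in for Postgres interval arithmetic, using the sample rows from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (txn_type TEXT, date TEXT, subscription_id TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("subscr_signup", "2014-01-01 07:53:20", "S-XXX01"),
    ("subscr_signup", "2014-01-05 10:37:26", "S-XXX02"),
    ("subscr_signup", "2014-01-08 08:54:00", "S-XXX03"),
    ("subscr_eot",    "2014-03-01 08:53:57", "S-XXX01"),
    ("subscr_eot",    "2014-03-05 08:58:02", "S-XXX02"),
])

# Left-join signups to (possibly missing) end-of-term rows; an ongoing
# subscription falls back to "now" via coalesce, exactly as in the answer.
avg_days = conn.execute("""
    SELECT avg(julianday(coalesce(t2.date, datetime('now'))) - julianday(t1.date))
    FROM (SELECT * FROM t WHERE txn_type = 'subscr_signup') t1
    LEFT JOIN (SELECT * FROM t WHERE txn_type = 'subscr_eot') t2
           USING (subscription_id)
""").fetchone()[0]
```

The two finished subscriptions contribute roughly 59 days each; the ongoing one is counted up to the current date, pulling the average well above that.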
I used a couple of common table expressions; you can take the pieces apart pretty easily to see what they do. One of the reasons this SQL is complicated is because you're storing what is effectively metadata as data: the values `subscr_signup` and `subscr_eot` describe what kind of row each row is. This is a SQL anti-pattern; expect it to cause you much pain.

```
with subscription_dates as (
  select p1.subscription_id,
         p1.date as subscr_start,
         coalesce((select min(p2.date)
                   from paypal_transactions p2
                   where p2.subscription_id = p1.subscription_id
                     and p2.txn_type = 'subscr_eot'
                     and p2.date > p1.date), current_date) as subscr_end
  from paypal_transactions p1
  where txn_type = 'subscr_signup'
),
subscription_days as (
  select subscription_id,
         subscr_start,
         subscr_end,
         (subscr_end - subscr_start) + 1 as subscr_days
  from subscription_dates
)
select avg(subscr_days) as avg_days
from subscription_days
-- add your date range here.

avg_days
--
75.6666666666666667
```

I didn't add your date range as a WHERE clause, because it's not clear to me what you mean by "a given time period".
Get average interval between pairs of rows in a table
[ "", "sql", "postgresql", "aggregate-functions", "psql", "" ]
Here is the query that I have. For my User table, the schema is:

```
CREATE TABLE [dbo].[User](
    [UserId] [int] IDENTITY(1,1) NOT NULL,
    [UserFirstName] [nchar](33) NOT NULL,
    [UserSurname] [nchar](33) NOT NULL,
    [UserDob] [date] NULL,
 CONSTRAINT [PK_User] PRIMARY KEY CLUSTERED
(
    [UserId] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```

What I wish to do is have the UserId set automatically when I insert the UserFirstName, UserSurname and UserDob for a new record. I then wish to use the UserId that it generates in another table. What would the SQL Server syntax be for this? I plan to call the above from C# and ASP.NET.
Something like this? Use `SCOPE_IDENTITY`, not `IDENTITY`.

```
DECLARE @NewID int

INSERT INTO User ([UserFirstName], [UserSurname], [UserDob])
Values ('Bob', 'Smith', '1979-01-01')

Select @NewID = SCOPE_IDENTITY()

INSERT INTO OtherTable ([UserId])
Values (@NewID)
```
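For a quick feel of the insert-then-reuse-the-key pattern outside SQL Server, here is a sketch on Python's built-in `sqlite3`, where `cursor.lastrowid` (SQLite's `last_insert_rowid()`) plays the role of `SCOPE_IDENTITY()`; table names follow the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User (UserId INTEGER PRIMARY KEY AUTOINCREMENT,"
             " UserFirstName TEXT, UserSurname TEXT, UserDob TEXT)")
conn.execute("CREATE TABLE OtherTable (UserId INTEGER)")

# Insert the row; the auto-increment key is generated by the engine...
cur = conn.execute(
    "INSERT INTO User (UserFirstName, UserSurname, UserDob) VALUES (?, ?, ?)",
    ("Bob", "Smith", "1979-01-01"))
new_id = cur.lastrowid  # ...and read back, scoped to this connection/statement

# ...then reused as the foreign key in the second table.
conn.execute("INSERT INTO OtherTable (UserId) VALUES (?)", (new_id,))
stored_id = conn.execute("SELECT UserId FROM OtherTable").fetchone()[0]
```

The important property, on both engines, is that the retrieved value is scoped to your own session, so concurrent inserts by other users cannot leak into it.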
If you are looking to do bulk insertions, I suggest using `OUTPUT INSERTED`:

```
CREATE TABLE #Ids ( Id INT )

INSERT INTO [User] ([UserFirstName], [UserSurname], [UserDob])
OUTPUT INSERTED.UserId INTO #Ids
VALUES ('Bob', 'Smith', '1979-01-01'),
       ('john', 'will', '1979-01-01'),
       ('jerry', 'david', '1979-01-01')

INSERT INTO OtherTable ([UserId])
select Id from #Ids

drop TABLE #Ids
```
Return Auto Increment value after inserting a record on a table
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a **messages table**, and I want to retrieve only each **user ID** together with his last message. I tried nesting two SQL statements inside each other, but it keeps looping without stopping:

```
sqlite3_stmt *statement;
NSMutableArray * messages = [[NSMutableArray alloc]init];

const char *dbpath = [_databasePath UTF8String];

if (sqlite3_open(dbpath, &_chatDB) == SQLITE_OK)
{
    NSString *querySQL = [NSString stringWithFormat: @"SELECT DISTINCT FROMID , USERNAME from CHATCOMPLETE"];

    const char *query_stmt = [querySQL UTF8String];

    if (sqlite3_prepare_v2(_chatDB, query_stmt, -1, &statement, NULL) == SQLITE_OK)
    {
        while (sqlite3_step(statement) == SQLITE_ROW)
        {
            int userID = [[[NSString alloc] initWithUTF8String: (const char *) sqlite3_column_text(statement, 0)] integerValue];

            NSString *querySQL2 = [NSString stringWithFormat: @"SELECT MESSAGE , USERNAME from CHATCOMPLETE where FROMID=\"%d\"",userID];

            const char *query_stmt2 = [querySQL2 UTF8String];

            if (sqlite3_prepare_v2(_chatDB, query_stmt2, -1, &statement, NULL) == SQLITE_OK)
            {
                while (sqlite3_step(statement) == SQLITE_ROW)
                {
                    NSLog(@"LAST MESSAGE %@",[[NSString alloc] initWithUTF8String: (const char *) sqlite3_column_text(statement, 0)]);
                    sqlite3_reset(statement);
                }
            }
        }
        sqlite3_reset(statement);
    }
}
return messages;
```

**UPDATE:**

This is the insert message:

```
-(void)saveData:(NSString *)message toID:(int)toID fromID:(int)fromID isRead:(BOOL)read date:(NSDate *)date messageID:(int)messageID userName:(NSString*)userName
{
    sqlite3_stmt *statement;

    const char *dbpath = [_databasePath UTF8String];

    if (sqlite3_open(dbpath, &_chatDB) == SQLITE_OK)
    {
        NSString *insertSQL = [NSString stringWithFormat: @"INSERT INTO CHATCOMPLETE (MESSAGE, TOID, FROMID, READ, date, MESSAGEID, USERNAME) VALUES (\"%@\", \"%d\", \"%d\", \"%c\", \"%@\", \"%d\", \"%@\")", message, toID, fromID, read, date,messageID,userName];

        const char *insert_stmt = [insertSQL UTF8String];

        sqlite3_prepare_v2(_chatDB, insert_stmt, -1, &statement, NULL);

        if (sqlite3_step(statement) == SQLITE_DONE)
        {
            NSLog(@"DONE");
            /* status.text = @"Contact added";
               name.text = @"";
               address.text = @"";
               phone.text = @""; */
        }
        else
        {
            // status.text = @"Failed to add contact";
        }

        sqlite3_finalize(statement);
        sqlite3_close(_chatDB);
    }
}
```
There are a few issues:

1. You have only one `sqlite3_stmt` variable for your two nested queries. You want a separate `sqlite3_stmt` for each.
2. You are calling `sqlite3_reset`. That is only used when binding new values to `?` placeholders in your prepared statement, which is not applicable here. Worse, you're calling it inside your loop.
3. Unrelated to the problem at hand, but for each prepared statement, don't forget to call `sqlite3_finalize` when done looping through the results, in order to release the memory used when preparing the statements.

---

Thus, you might want something like:

```
sqlite3_stmt *userStatement;
sqlite3_stmt *messageStatement;

int rc; // the return code

NSMutableArray * messages = [[NSMutableArray alloc]init];
const char *dbpath = [_databasePath UTF8String];

if (sqlite3_open(dbpath, &_chatDB) == SQLITE_OK)
{
    const char *query_stmt = "SELECT DISTINCT FROMID , USERNAME from CHATCOMPLETE";

    if (sqlite3_prepare_v2(_chatDB, query_stmt, -1, &userStatement, NULL) != SQLITE_OK)
    {
        NSLog(@"%s: prepare userStatement failed: %s", __PRETTY_FUNCTION__, sqlite3_errmsg(_chatDB));
    }
    else
    {
        while ((rc = sqlite3_step(userStatement)) == SQLITE_ROW)
        {
            int userID = [[[NSString alloc] initWithUTF8String: (const char *) sqlite3_column_text(userStatement, 0)] integerValue];

            const char *query_stmt2 = "SELECT MESSAGE , USERNAME from CHATCOMPLETE where FROMID=? ORDER BY timestamp DESC LIMIT 1"; // change the `ORDER BY` to use whatever field you want to sort by

            if (sqlite3_prepare_v2(_chatDB, query_stmt2, -1, &messageStatement, NULL) != SQLITE_OK)
            {
                NSLog(@"%s: prepare messageStatement failed: %s", __PRETTY_FUNCTION__, sqlite3_errmsg(_chatDB));
            }
            else
            {
                if (sqlite3_bind_int(messageStatement, 1, userID) != SQLITE_OK)
                {
                    NSLog(@"%s: bind userID %d failed: %s", __PRETTY_FUNCTION__, userID, sqlite3_errmsg(_chatDB));
                }

                while ((rc = sqlite3_step(messageStatement)) == SQLITE_ROW)
                {
                    NSLog(@"LAST MESSAGE %@",[[NSString alloc] initWithUTF8String: (const char *) sqlite3_column_text(messageStatement, 0)]);
                }
                if (rc != SQLITE_DONE)
                {
                    NSLog(@"%s: step messageStatement failed: %s", __PRETTY_FUNCTION__, sqlite3_errmsg(_chatDB));
                }

                sqlite3_finalize(messageStatement);
            }
        }
        if (rc != SQLITE_DONE)
        {
            NSLog(@"%s: step userStatement failed: %s", __PRETTY_FUNCTION__, sqlite3_errmsg(_chatDB));
        }

        sqlite3_finalize(userStatement);
    }
}
else
{
    NSLog(@"%s: open %@ failed", __PRETTY_FUNCTION__, _databasePath);
}

return messages;
```

Note that this code sample, in addition to addressing my three points above, also:

1. Logs errors using `sqlite3_errmsg` if `sqlite3_prepare_v2` fails.
2. Checks the return codes from `sqlite3_step`, too, again logging `sqlite3_errmsg` on failure.
3. Logs if `sqlite3_open` failed.
4. Uses `sqlite3_bind_int()` rather than building the SQL using `stringWithFormat`. In this case, because `userID` is numeric, this isn't critical, but if you ever use string values in your `WHERE` clauses, using the `sqlite3_bind_text()` function becomes critical, so I just wanted to show the pattern. For example, look at your save routine and try saving a message that happens to have a double quotation mark in it (e.g. `I spoke with Bob and he says "hello" to you.`). Your `stringWithFormat` construct will fail. If you use `sqlite3_bind_text`, it will solve that problem.
BTW, as you can see, when you add all of the proper validation of results, binding of values, etc., the code becomes a bit unwieldy. You might consider using [FMDB](https://github.com/ccgus/fmdb), which greatly simplifies your SQLite Objective-C code.
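As an aside, on the SQL side the whole "last message per distinct user" job can often be collapsed into a single statement, so only one prepare/step/finalize cycle is needed. A sketch in Python's `sqlite3` against a toy version of `CHATCOMPLETE` (only the relevant columns; assumes `MESSAGEID` grows with time):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CHATCOMPLETE (MESSAGE TEXT, FROMID INT, MESSAGEID INT, USERNAME TEXT)")
conn.executemany("INSERT INTO CHATCOMPLETE VALUES (?, ?, ?, ?)", [
    ("hi",    7, 1, "alice"),
    ("bye",   7, 3, "alice"),
    ("hello", 9, 2, "bob"),
])

# SQLite's "bare columns with min()/max()" rule: with a single max() aggregate,
# the non-aggregated columns are taken from the row holding the maximum.
last_messages = conn.execute("""
    SELECT FROMID, USERNAME, MESSAGE, max(MESSAGEID)
    FROM CHATCOMPLETE
    GROUP BY FROMID
    ORDER BY FROMID
""").fetchall()
```

That bare-column behaviour is SQLite-specific (documented since 3.7.11); on other engines a correlated subquery or window function would be needed instead.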
This is the query to get the last message with a given `fromID`: `SELECT * FROM chatting WHERE fromID=9999 ORDER BY id DESC LIMIT 1`
How to get the last message by distinct userID using SQLite and iOS?
[ "", "ios", "sql", "sqlite", "xcode5", "" ]
Having these two tables, DEPARTMENT

```
//DEPARTMENT
D#    DNAME
-------------------
1     RESEARCH
2     IT
3     SCIENCE
```

and PROJECT

```
//PROJECT
P#      D#     BUDGET
-------------------------
1001    1      22500
1002    1      22500
1003    3      50000
```

when I execute this command

```
SELECT d.D#,DNAME,sum(budget) as "TOTAL BUDGETS"
FROM DEPARTMENT d,PROJECT p
WHERE d.D# = p.D#
GROUP BY d.D#;
```

I thought the result would be

```
D#    DNAME       TOTAL BUDGETS
------------------------------
1     RESEARCH    45000
2     IT          0
3     SCIENCE     50000
```

but why do I get this error?

```
//Oracle: NOT A GROUP BY EXPRESSION ERROR
```
**All column names in the SELECT list must appear in the GROUP BY clause unless the name is used only in an aggregate function.**

Change the query to:

```
SELECT d.D#,DNAME,sum(budget) as "TOTAL BUDGETS"
FROM DEPARTMENT d,PROJECT p
WHERE d.D# = p.D#
GROUP BY d.D#, DNAME;
```

---

Update for the comment (a `LEFT OUTER JOIN` plus `NVL` is needed to get the 0 row for IT):

```
SELECT d.D#, d.DNAME, NVL(SUM(p.budget), 0) as "TOTAL BUDGETS"
FROM DEPARTMENT d LEFT OUTER JOIN PROJECT p ON d.D#=p.D#
GROUP BY d.D#, d.DNAME;
```
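The corrected grouping and the LEFT JOIN variant can be checked on any engine; below is a sketch using Python's built-in `sqlite3`, with `COALESCE` standing in for Oracle's `NVL` and plain column names (`dno`, `pno`) replacing `D#`/`P#`, which SQLite would not accept unquoted.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DEPARTMENT (dno INT, DNAME TEXT)")
conn.execute("CREATE TABLE PROJECT (pno INT, dno INT, BUDGET INT)")
conn.executemany("INSERT INTO DEPARTMENT VALUES (?, ?)",
                 [(1, "RESEARCH"), (2, "IT"), (3, "SCIENCE")])
conn.executemany("INSERT INTO PROJECT VALUES (?, ?, ?)",
                 [(1001, 1, 22500), (1002, 1, 22500), (1003, 3, 50000)])

# LEFT OUTER JOIN keeps IT (which has no projects); COALESCE turns its NULL sum into 0.
totals = conn.execute("""
    SELECT d.dno, d.DNAME, COALESCE(SUM(p.BUDGET), 0) AS total_budgets
    FROM DEPARTMENT d
    LEFT OUTER JOIN PROJECT p ON d.dno = p.dno
    GROUP BY d.dno, d.DNAME
    ORDER BY d.dno
""").fetchall()
```

This reproduces exactly the three-row result the question hoped for, including the 0 for IT.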
```
SELECT d.D#,max(DNAME),sum(budget) as "TOTAL BUDGETS"
FROM DEPARTMENT d,PROJECT p
WHERE d.D# = p.D#
GROUP BY d.D#;
```

You should either include the field in the `GROUP BY` clause or wrap it in an aggregate function (`MAX` or `MIN` is fine for this case).
SUM with group by return error
[ "", "sql", "oracle", "sqlplus", "" ]
I am using SQL Server 2005. I have a variable @var that can take the values 0, 1, or NULL. I only care about whether it is 1 or not; 0 and NULL are the same for me. But when I test it for being not 1, I lose the NULL values:

```
DECLARE @var INT
SET @var = NULL

IF @var <> 1
PRINT 'not 1'
ELSE
PRINT 'equals 1'
```

The output is "equals 1". What would be the correct code to get "not 1"?
You could also use `coalesce` for this: ``` if coalesce(@var, 0) <> 1 PRINT 'not 1' ELSE PRINT 'equals 1' ```
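The three-valued logic behind this is easy to observe directly; here is a sketch with Python's built-in `sqlite3`, which follows the same SQL rule as SQL Server: comparing `NULL` to anything yields `NULL` (unknown), so a plain `<>` test never treats it as "not 1" until `coalesce` maps it to 0 first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL <> 1 evaluates to NULL (surfaced to Python as None), not to true...
raw_compare = conn.execute("SELECT NULL <> 1").fetchone()[0]

# ...while coalescing NULL to 0 first makes the comparison genuinely true (1).
fixed_compare = conn.execute("SELECT coalesce(NULL, 0) <> 1").fetchone()[0]
```

Since the `IF` branch only fires on a result that is true, the unknown result of the raw comparison falls through to the `ELSE`, which is exactly the behaviour in the question.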
```
IF @var = 1
PRINT 'equals 1'
ELSE
PRINT 'not 1'
```

or

```
IF @var <> 1 or @var is null
PRINT 'not 1'
ELSE
PRINT 'equals 1'
```

since comparing to `null` results in *unknown*, which is not *true*. Use the `IS` operator for `null` values.
SQL Conditional loses NULL values
[ "", "sql", "sql-server", "null", "conditional-statements", "" ]
```
class Room < ActiveRecord::Base
  belongs_to :hotel
  belongs_to :layout
  has_many :visits

  validates :number, presence: true
  validates :rate, presence: true
  #validates :smoking, presence: true

  def self.occupied_rooms(from_date, to_date) #24-26
    self.joins(:visits).where('date >= ? and date <= ?', from_date, to_date).uniq
  end

  def self.vacant_rooms(from_date, to_date)
    self.where('rooms.id not in (?)', Room.occupied_rooms(from_date, to_date).select(:id).to_a)
  end

  def self.find_rooms_with(occupancy_count, smoking, from_date, to_date)
    Room.vacant_rooms(from_date, to_date).joins(:layout).where('occupancy_count >= ?', occupancy_count).where('smoking = ?', smoking)
  end
end
```

The method vacant_rooms does not work when occupied_rooms returns nothing. For example, let's say a room is booked Apr 29th and we want to find occupied rooms between Apr 28-30. It will return our occupied room, and vacant_rooms will return all rooms but our one booked room. However, if we try to find vacant_rooms between May 1-3, for example, it won't return anything, because occupied_rooms is not returning anything and that breaks...

```
self.where('rooms.id not in (?)', Room.occupied_rooms(from_date, to_date).select(:id).to_a)
```

What's the proper way to do this?
There's a very nice thing that got introduced in Rails 4 called a `WhereChain`. You can use it like this: ``` self.where.not(id: Room.occupied_rooms(from_date, to_date).select(:id)) ``` Though I'm pretty sure it can be even simpler with: ``` self.where.not(id: Room.occupied_rooms(from_date, to_date)) ```
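The underlying SQL pitfall is worth seeing in isolation: when the occupied list is empty, the interpolated `NOT IN (?)` ends up as `NOT IN (NULL)`, and that predicate is unknown for every row, so nothing matches. A sketch with Python's built-in `sqlite3` (the three-room table is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rooms (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO rooms VALUES (?)", [(1,), (2,), (3,)])

# "id NOT IN (NULL)" is NULL for every row -> zero results: the bug in question.
broken = conn.execute("SELECT id FROM rooms WHERE id NOT IN (NULL)").fetchall()

# Against a genuinely empty set, NOT IN is true for every row -> all rooms vacant.
fixed = conn.execute(
    "SELECT id FROM rooms WHERE id NOT IN (SELECT id FROM rooms WHERE 0)"
).fetchall()
```

Passing the relation itself (rather than a Ruby array) lets ActiveRecord render a subquery, keeping the empty case a genuinely empty set instead of `(NULL)`, which is roughly why the `where.not` form behaves.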
You need to do: ``` self.where('rooms.id not in (?)', Room.occupied_rooms(from_date, to_date).select(:id).to_a.join(',')) ```
In Rails, how can you use a .where('rooms.id not in (?), ____), when ____ doesn't return anything?
[ "", "sql", "ruby-on-rails", "activerecord", "null", "" ]
I want to delete all rows apart from the top x rows, but I'm not too sure what I'm doing wrong. My query:

```
DELETE FROM dbo.cake
where dbo.cake.pie not in
(select top 500 * from dbo.cake
where createdDate >= '2007-01-01'
and createdDate < '2008-01-01')
```

The error I get is:

```
Msg 116, Level 16, State 1, Line 4
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
```

Any and all help is appreciated!
Change to ``` DELETE FROM dbo.cake where dbo.cake.pie not in (select top 500 pie from dbo.cake where createdDate >= '2007-01-01' and createdDate < '2008-01-01') ```
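To see the corrected shape run end to end, here is a sketch on Python's built-in `sqlite3`, where `LIMIT` plays the role of `TOP`; note an `ORDER BY` is added so that "top" is deterministic (without one, which rows `TOP 500` keeps is arbitrary). The ten-row table is made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cake (pie INT, createdDate TEXT)")
conn.executemany("INSERT INTO cake VALUES (?, ?)",
                 [(i, "2007-06-01") for i in range(10)])

# Keep only the "top 3" pies from 2007; delete everything else.
conn.execute("""
    DELETE FROM cake
    WHERE pie NOT IN (SELECT pie FROM cake
                      WHERE createdDate >= '2007-01-01' AND createdDate < '2008-01-01'
                      ORDER BY pie
                      LIMIT 3)
""")
survivors = [row[0] for row in conn.execute("SELECT pie FROM cake ORDER BY pie")]
```

The key point from the error message carries over unchanged: the subquery must select the single column being compared (`pie`), not `*`.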
Try putting the field in the subquery ``` DELETE FROM dbo.cake where dbo.cake.pie not in (select top 500 pie from dbo.cake where createdDate >= '2007-01-01' and createdDate < '2008-01-01') ```
SQL Delete all but top x
[ "", "sql", "sql-server", "" ]
I am trying to query my PostgreSQL db to return results where a date is in a certain month and year. In other words, I would like all the values for a month-year. The only way I've been able to do it so far is like this:

```
SELECT user_id
FROM user_logs
WHERE login_date BETWEEN '2014-02-01' AND '2014-02-28'
```

The problem with this is that I have to calculate the first date and last date before querying the table. Is there a simpler way to do this?

Thanks
With dates (and times) many things become simpler if you use `>= start AND < end`.

For example:

```
SELECT user_id
FROM user_logs
WHERE login_date >= '2014-02-01'
  AND login_date <  '2014-03-01'
```

In this case you still need to calculate the start date of the month you need, but that should be straightforward in any number of ways.

The end date is also simplified; just add exactly one month. No messing about with the 28th, 30th, 31st, etc.

This structure also has the advantage of being able to keep using indexes.

Many people may suggest a form such as the following, but they ***do not*** use indexes:

```
WHERE date_part('year', login_date) = 2014
  AND date_part('month', login_date) = 2
```

This involves calculating the conditions for every single row in the table (a scan) rather than using an index to find the range of rows that will match (a range-seek).
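The practical difference between `BETWEEN` and the half-open form shows up as soon as `login_date` carries a time component. A sketch with Python's built-in `sqlite3` (ISO-8601 strings compare lexicographically here, which mirrors the timestamp-vs-date comparison in Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_logs (user_id INT, login_date TEXT)")
conn.executemany("INSERT INTO user_logs VALUES (?, ?)", [
    (1, "2014-02-01 09:00:00"),
    (2, "2014-02-28 23:30:00"),  # last day of February, after midnight
    (3, "2014-03-01 00:10:00"),  # March: should be excluded either way
])

# BETWEEN '...02-28' cuts off at midnight on the 28th, silently dropping user 2.
between_hits = conn.execute(
    "SELECT count(*) FROM user_logs"
    " WHERE login_date BETWEEN '2014-02-01' AND '2014-02-28'"
).fetchone()[0]

# The half-open range keeps the whole month and still excludes March.
half_open_hits = conn.execute(
    "SELECT count(*) FROM user_logs"
    " WHERE login_date >= '2014-02-01' AND login_date < '2014-03-01'"
).fetchone()[0]
```

The half-open query counts both February logins; `BETWEEN` misses the late-night one on the 28th.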
From PostgreSQL 9.2, [Range Types](https://www.postgresql.org/docs/9.2/static/rangetypes.html) are supported, so you can write this like (note the half-open `)` upper bound, so March 1st itself is excluded):

```
SELECT user_id
FROM user_logs
WHERE '[2014-02-01,2014-03-01)'::daterange @> login_date
```

This should be more efficient than the string comparison.
Postgresql query between date ranges
[ "", "sql", "postgresql", "date", "intervals", "between", "" ]