I am trying to make a drop down list that is populated by a sql database. The sql call is populating the list, but when I try to get the value= part it just doesn't work right. I want its value= to be the location\_id then what is displayed to the user to be the location\_description. However, when I do the code below, the value= is the location\_description and what is displayed to the user is the location\_id. If i reverse the order, it doesn't help. ``` <select name="building" id="building"> ~[tlist_sql;SELECT DISTINCT location_description, location_id FROM u_locations ORDER BY location_description] <option value="~(location_id)" >~(location_description)</option> [/tlist_sql] </select> ``` The result is: ``` <select name="building" id="building"> <option value="ADAM">1</option> <option value="ADMIN">0</option> <option value="BRON">12</option> <option value="CLA">3</option> <option value="CLATT">15</option> <option value="COQ">18</option> <option value="DAR">19</option> </select> ``` But I need it to be the reverse.
Does this work? ``` <select name="building" id="building"> ~[tlist_sql;SELECT DISTINCT location_id,location_description FROM u_locations ORDER BY location_description] <option value="~(location_id)" >~(location_description)</option> [/tlist_sql] </select> ``` I just reversed the fields in the SELECT list.
Looks good to me. I would suspect one of two things: * The table actually contains the description in the location\_id column * Caching problems in either the web server or the browser. Try restarting your web server and clearing the cache in your browser.
Populating list box values with sql + html
[ "", "html", "sql", "" ]
The following is a custom converter function, which converts a string value (a complete date with offset) to datetime, extracts only the date part, and converts the extracted date part to varchar. I know this is mental. However, this is part of a data extraction and reporting (SSRS) task. ``` ISNULL(CONVERT(varchar(16), CONVERT(datetime,dt, 103), 20), '') ``` Where `dt` is my column where I have the “complete date time with offset” stored as a string. It works as expected except in one case: if `dt` has no value or an empty string, then a part of the above function (`CONVERT(datetime,dt, 103)`) returns a value, which is `1900/01/01`. And this `1900/01/01` is subsequently converted to a string. This is wrong, and I don’t want any data in the final converted value if the value of `dt` is empty. Question: Can someone help me spot the issue (if any) in the above converter function? If not, which part is the culprit here that returns 1900/01/01?
Replace `dt` with `NULLIF(dt, '')` (Also, stop storing dates as strings. Please.)
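The effect of the `NULLIF` replacement can be sketched outside SQL Server too. Here is a minimal illustration using Python's built-in `sqlite3` (SQLite rather than SQL Server, but `NULLIF` is standard SQL; the sample date is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# NULLIF(x, y) returns NULL when x = y, otherwise x.
# Replacing dt with NULLIF(dt, '') therefore feeds NULL (not '') into the
# date conversion, so no default 1900-01-01 date is produced.
empty_result = cur.execute("SELECT NULLIF('', '')").fetchone()[0]
nonempty_result = cur.execute("SELECT NULLIF('2013-11-20', '')").fetchone()[0]
conn.close()
```

The empty string becomes NULL, which then propagates through the conversion chain and is finally caught by the outer `ISNULL`.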
Use the [ISDATE()](http://technet.microsoft.com/en-us/library/ms187347.aspx) function. If you have SQL Server 2012, use the [TRY\_CONVERT()](http://technet.microsoft.com/en-us/library/hh230993.aspx) function. These will defend against all bad inputs instead of just the ones you thought of in advance.
Sql Server- Date conversion
[ "", "sql", "sql-server", "date", "datetime", "reporting-services", "" ]
we want to speed up the run of the parallel insert statement below. We are expecting to insert around 80M records and it is taking around 2 hours to finish. ``` INSERT /*+ PARALLEL(STAGING_EX,16) APPEND NOLOGGING */ INTO STAGING_EX (ID, TRAN_DT, RECON_DT_START, RECON_DT_END, RECON_CONFIG_ID, RECON_PM_ID) SELECT /*+PARALLEL(PM,16) */ SEQ_RESULT_ID.nextval, sysdate, sysdate, sysdate, '8a038312403e859201405245eed00c42', T1.ID FROM PM T1 WHERE STATUS = 1 and not exists(select 1 from RESULT where T1.ID = RECON_PM_ID and CREATE_DT >= sysdate - 60) and UPLOAD_DT >= sysdate - 1 and (FUND_SRC_TYPE = :1) ``` We think that caching the results of the not exist column will speed up the inserts. How do we perform the caching? Any ideas how else to speed up the insert? Please see below for plan statistics from Enterprise Manager. Also we noticed that the statements are not being run in parallel. Is this normal? ![insert-select plan statistics from oracle em](https://i.stack.imgur.com/aplI8.png) Edit: btw, the sequence is already cached to 1M
Try using more bind variables, especially where nested loops might happen. I've noticed that you can use it in cases like ``` CREATE_DT >= :YOUR_DATE instead of CREATE_DT >= sysdate - 60 ``` I think this would explain why you have 180 million executions in the lowest part of your execution plan even though the whole other part of the update query is still at 8 million out of your 79 million.
**Improve statistics.** The estimated number of rows is 1, but the actual number of rows is over 7 million and counting. This causes the execution plan to use a nested loop instead of a hash join. A nested loop works better for small amounts of data and a hash join works better for large amounts of data. Fixing that may be as easy as ensuring the relevant tables have accurate, current statistics. This can usually be done by gathering statistics with the default settings, for example: `exec dbms_stats.gather_table_stats('SIRS_UATC1', 'TBL_RECON_PM');`. If that doesn't improve the cardinality estimate try using a dynamic sampling hint, such as `/*+ dynamic_sampling(5) */`. For such a long-running query it is worth spending a little extra time up-front sampling data if it leads to a better plan. **Use statement-level parallelism instead of object-level parallelism.** This is probably the most common mistake with parallel SQL. If you use object-level parallelism the hint must reference the *alias* of the object. Since 11gR2 there is no need to worry about specifying objects. This statement only needs a single hint: `INSERT /*+ PARALLEL(16) APPEND */ ...`. Note that `NOLOGGING` is not a real hint.
Oracle 11g - How to optimize slow parallel insert select?
[ "", "sql", "performance", "oracle", "parallel-processing", "oracle11g", "" ]
I have 2 tables: **people** ``` person_id int FirstName varchar LastName varchar ``` **people\_codes** ``` person_id int code varchar primary bit ``` If I join the tables I get something like this: ``` firstName lastName code primary ------------------------------------- John Smith GEN 0 John Smith VAS 1 Aaron Johnson ANE 0 Allison Hunt HOS 0 ``` Ok, so here's the question. How do I query for only the people that have a primary bit of only a 0? In the above results I only want Aaron and Allison to return, because John Smith has a primary occurrence of 1. Essentially, I can't just say `where primary = 0` because I would still get John. Thank you, Trout
`select * from people p, people_codes c where p.person_id = c.person_id and p.person_id not in (select person_id from people_codes where [primary] = 1)` Replace \* with the desired columns, including the alias (e.g. p.FirstName).
``` SELECT FirstName, LastName FROM dbo.people AS p WHERE EXISTS ( SELECT 1 FROM dbo.people_codes WHERE person_id = p.person_id GROUP BY person_id HAVING MAX(CONVERT(TINYINT, [primary])) = 0 ); ```
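The same "no row with primary = 1" filter can be sanity-checked with a tiny script. This sketch uses Python's bundled `sqlite3` (not SQL Server, so `CONVERT` is dropped and a plain `NOT EXISTS` is used; the sample rows are from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE people (person_id INTEGER, firstName TEXT, lastName TEXT);
CREATE TABLE people_codes (person_id INTEGER, code TEXT, "primary" INTEGER);
INSERT INTO people VALUES (1, 'John', 'Smith'), (2, 'Aaron', 'Johnson'), (3, 'Allison', 'Hunt');
INSERT INTO people_codes VALUES (1, 'GEN', 0), (1, 'VAS', 1), (2, 'ANE', 0), (3, 'HOS', 0);
""")
# People that have no code row at all with primary = 1.
rows = cur.execute("""
    SELECT firstName FROM people p
    WHERE NOT EXISTS (
        SELECT 1 FROM people_codes c
        WHERE c.person_id = p.person_id AND c."primary" = 1
    )
    ORDER BY firstName
""").fetchall()
names = [r[0] for r in rows]
conn.close()
```

John is excluded because one of his rows has primary = 1, exactly as the question asks.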
SQL Server : query for instances of a bit that is 0 for all codes in a table
[ "", "sql", "sql-server", "" ]
``` SELECT p1.last_name, p1.first_name, p1.city, p1.state FROM president AS p1 INNER JOIN president AS p2 ON p1.city = p2.city AND p1.state = p2.state WHERE (p1.last_name <> p2.last_name OR p1.first_name <> p2.first_name) ORDER BY state, city, last_name; ``` As the script says it's supposed to display different values of names that have the same cities and state. Then the same first names OR last names from p1 and p2 will be ignored. however i am getting this on the output. ``` last_name first_name City State -------------------------------------------- 'Adams', 'John Quincy', 'Braintree', 'MA' 'Adams', 'John', 'Braintree', 'MA' 'Obama', 'Barack', 'New York', 'NY' 'Roosevelt', 'Theodore', 'New York', 'NY' 'Bush', 'George', 'Westmoreland', 'VA' 'Bush', 'George', 'Westmoreland', 'VA' 'Monroe', 'James', 'Westmoreland', 'VA' 'Monroe', 'James', 'Westmoreland', 'VA' 'Washington', 'George', 'Westmoreland', 'VA' 'Washington', 'George', 'Westmoreland', 'VA' ``` It displays two values of George Bush, James Monroe and George Washington. I checked my database and i am positive that there are no other duplicate values of these names.
Why don't you group the records? ``` SELECT p1.last_name, p1.first_name, p1.city, p1.state FROM president AS p1 INNER JOIN president AS p2 ON p1.city = p2.city AND p1.state = p2.state WHERE (p1.last_name <> p2.last_name OR p1.first_name <> p2.first_name) GROUP BY p1.last_name, p1.first_name, p1.city, p1.state ORDER BY state, city, last_name ```
The easiest solution is a simple DISTINCT ``` SELECT DISTINCT p1.last_name, p1.first_name, p1.city, p1.state FROM president AS p1 INNER JOIN president AS p2 ON p1.city = p2.city AND p1.state = p2.state WHERE (p1.last_name <> p2.last_name OR p1.first_name <> p2.first_name) ORDER BY state, city, last_name; ``` (A group by on all selected fields produces the same result but is less readable and less maintainable.)
Mysql self join produces duplicate entries
[ "", "mysql", "sql", "" ]
I have two tables : Employee and Customer. Customer has customer ID, name, cust state, cust rep# and employee has employee first name, last name, employee phone number, employee number. **Employee number = Cust Rep#.** I'm trying to extract employee first name, last name and employee phone number who serve customers that live in CA. This is what I had as a code but i get an error saying it returns more than one row ``` SELECT EMP_LNAME, EMP_FNAME, EMP_PHONE FROM employee WHERE EMP_NBR = (SELECT CUST_REP FROM customer WHERE CUST_STATE='CA') ; ```
What's happening is that your inner query returns multiple rows, so change it to `where EMP_NBR in` to retrieve all matches. The problem your query has is that it doesn't make sense to say `= (a set which returns multiple rows)`, since it's unclear what exactly should be matched.
Use `IN()` instead of `=` when expecting a set of results to be returned in the subquery (rather than a single scalar result): ``` select EMP_LNAME,EMP_FNAME,EMP_PHONE from employee where EMP_NBR IN ( select CUST_REP from customer where CUST_STATE='CA' ); ``` Alternatively, you can use an INNER JOIN (or CROSS JOIN with a WHERE filter) to possibly do this more efficiently: ``` SELECT EMP_LNAME, EMP_FNAME, EMP_PHONE FROM employee INNER JOIN customer ON employee.EMP_NBR = customer.CUST_REP WHERE customer.CUST_STATE = 'CA'; ```
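A quick way to convince yourself that `IN()` and the join return the same employees is a toy run in SQLite via Python (table and column names follow the question; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE employee (EMP_NBR INTEGER, EMP_FNAME TEXT, EMP_LNAME TEXT, EMP_PHONE TEXT);
CREATE TABLE customer (CUST_ID INTEGER, CUST_REP INTEGER, CUST_STATE TEXT);
INSERT INTO employee VALUES (1, 'Ann', 'Lee', '555-1'), (2, 'Bob', 'Ray', '555-2');
INSERT INTO customer VALUES (10, 1, 'CA'), (11, 1, 'CA'), (12, 2, 'NY');
""")
# Subquery form: IN() accepts however many rows the subquery returns.
via_in = cur.execute("""
    SELECT EMP_FNAME FROM employee
    WHERE EMP_NBR IN (SELECT CUST_REP FROM customer WHERE CUST_STATE = 'CA')
""").fetchall()
# Join form: DISTINCT collapses the duplicate caused by the two CA customers.
via_join = cur.execute("""
    SELECT DISTINCT EMP_FNAME FROM employee e
    JOIN customer c ON e.EMP_NBR = c.CUST_REP
    WHERE c.CUST_STATE = 'CA'
""").fetchall()
conn.close()
```

Note that the plain join would list an employee once per matching customer; `DISTINCT` (or sticking with `IN`) avoids that.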
MYSQL Selecting data from two different tables and no common variable name
[ "", "mysql", "sql", "select", "" ]
The table: `Flight (flight_num, src_city, dest_city, dep_time, arr_time, airfare, mileage)` I need to find the cheapest fare for unlimited stops from any given source city to any given destination city. The catch is that this can involve **multiple flights**, so for example if I'm flying from Montreal->KansasCity I can go from Montreal->Washington and then from Washington->KansasCity and so on. How would I go about generating this using a Postgres query? Sample Data: ``` create table flight( flight_num BIGSERIAL PRIMARY KEY, src_city varchar, dest_city varchar, dep_time int, arr_time int, airfare int, mileage int ); insert into flight VALUES (101, 'Montreal', 'NY', 0530, 0645, 180, 170), (102, 'Montreal', 'Washington', 0100, 0235, 100, 180), (103, 'NY', 'Chicago', 0800, 1000, 150, 300), (105, 'Washington', 'KansasCity', 0600, 0845, 200, 600), (106, 'Washington', 'NY', 1200, 1330, 50, 80), (107, 'Chicago', 'SLC', 1100, 1430, 220, 750), (110, 'KansasCity', 'Denver', 1400, 1525, 180, 300), (111, 'KansasCity', 'SLC', 1300, 1530, 200, 500), (112, 'SLC', 'SanFran', 1800, 1930, 85, 210), (113, 'SLC', 'LA', 1730, 1900, 185, 230), (115, 'Denver', 'SLC', 1500, 1600, 75, 300), (116, 'SanFran', 'LA', 2200, 2230, 50, 75), (118, 'LA', 'Seattle', 2000, 2100, 150, 450); ```
[this answer is based on Gordon's] I changed arr\_time and dep\_time to `TIME` datatypes, which makes calculations easier. Also added result columns for total\_time and waiting\_time. **Note**: if there are any loops possible in the graph, you will need to avoid them (possibly using an array to store the path) ``` WITH RECURSIVE segs AS ( SELECT f0.flight_num::text as flight , src_city, dest_city , dep_time AS departure , arr_time AS arrival , airfare, mileage , 1 as hops , (arr_time - dep_time)::interval AS total_time , '00:00'::interval as waiting_time FROM flight f0 WHERE src_city = 'SLC' -- <SRC_CITY> UNION ALL SELECT s.flight || '-->' || f1.flight_num::text as flight , s.src_city, f1.dest_city , s.departure AS departure , f1.arr_time AS arrival , s.airfare + f1.airfare as airfare , s.mileage + f1.mileage as mileage , s.hops + 1 AS hops , s.total_time + (f1.arr_time - f1.dep_time)::interval AS total_time , s.waiting_time + (f1.dep_time - s.arrival)::interval AS waiting_time FROM segs s JOIN flight f1 ON f1.src_city = s.dest_city AND f1.dep_time > s.arrival -- you can't leave until you are there ) SELECT * FROM segs WHERE dest_city = 'LA' -- <DEST_CITY> ORDER BY airfare desc ; ``` FYI: the changes to the table structure: ``` create table flight ( flight_num BIGSERIAL PRIMARY KEY , src_city varchar , dest_city varchar , dep_time TIME , arr_time TIME , airfare INTEGER , mileage INTEGER ); ``` And to the data: ``` insert into flight VALUES (101, 'Montreal', 'NY', '05:30', '06:45', 180, 170), (102, 'Montreal', 'Washington', '01:00', '02:35', 100, 180), (103, 'NY', 'Chicago', '08:00', '10:00', 150, 300), (105, 'Washington', 'KansasCity', '06:00', '08:45', 200, 600), (106, 'Washington', 'NY', '12:00', '13:30', 50, 80), (107, 'Chicago', 'SLC', '11:00', '14:30', 220, 750), (110, 'KansasCity', 'Denver', '14:00', '15:25', 180, 300), (111, 'KansasCity', 'SLC', '13:00', '15:30', 200, 500), (112, 'SLC', 'SanFran', '18:00', '19:30', 85, 210), (113, 'SLC', 'LA', '17:30', '19:00', 185, 230), (115, 'Denver', 'SLC', '15:00', '16:00', 75, 300), (116, 'SanFran', 'LA', '22:00', '22:30', 50, 75), (118, 'LA', 'Seattle', '20:00', '21:00', 150, 450); ```
You want to use a recursive CTE for this. However, you will have to make a decision about how many flights to include. The following (untested) query shows how to do this, limiting the number of flight segments to 5: ``` with recursive segs as ( select cast(f.flight_num as varchar(255)) as flight, src_city, dest_city, dep_time, arr_time, airfare, mileage, 1 as numsegs from flight f where src_city = <SRC_CITY> union all select cast(s.flight||'-->'||cast(f.flight_num as varchar(255)) as varchar(255)) as flight, s.src_city, f.dest_city, s.dep_time, f.arr_time, s.airfare + f.airfare as airfare, s.mileage + f.mileage as mileage, s.numsegs + 1 from segs s join flight f on f.src_city = s.dest_city where s.numsegs < 5 ) select * from segs where dest_city = <DEST_CITY> order by airfare asc limit 1; ```
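The recursive-CTE idea carries over to other engines. Here is a stripped-down sketch in Python's `sqlite3` (SQLite also supports `WITH RECURSIVE`; only a few columns and a made-up subset of the question's flights are kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE flight (flight_num INTEGER, src_city TEXT, dest_city TEXT, airfare INTEGER);
INSERT INTO flight VALUES
  (102, 'Montreal', 'Washington', 100),
  (105, 'Washington', 'KansasCity', 200),
  (111, 'KansasCity', 'SLC', 200),
  (113, 'SLC', 'LA', 185);
""")
cheapest = cur.execute("""
    WITH RECURSIVE segs(dest_city, airfare, hops) AS (
        SELECT dest_city, airfare, 1 FROM flight WHERE src_city = 'Montreal'
        UNION ALL
        SELECT f.dest_city, s.airfare + f.airfare, s.hops + 1
        FROM segs s JOIN flight f ON f.src_city = s.dest_city
        WHERE s.hops < 5  -- cap the number of segments to avoid runaway recursion
    )
    SELECT airfare, hops FROM segs
    WHERE dest_city = 'LA'
    ORDER BY airfare
    LIMIT 1
""").fetchone()
conn.close()
```

The anchor member seeds the search with direct flights from the source, and the recursive member extends each partial itinerary by one leg, accumulating the fare.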
Recursive/Hierarchical Query Using Postgres
[ "", "sql", "postgresql", "" ]
I have a table with a string which contains several delimited values, e.g. `a;b;c`. I need to split this string and use its values in a query. For example I have following table: ``` str a;b;c b;c;d a;c;d ``` I need to group by a single value from `str` column to get following result: ``` str count(*) a 1 b 2 c 3 d 2 ``` Is it possible to implement using single select query? I can not create temporary tables to extract values there and query against that temporary table.
From your comment to @PrzemyslawKruglej [answer](https://stackoverflow.com/a/20079928/997660) > Main problem is with internal query with `connect by`, it generates astonishing amount of rows The amount of rows generated can be reduced with the following approach: ``` /* test table populated with sample data from your question */ SQL> create table t1(str) as( 2 select 'a;b;c' from dual union all 3 select 'b;c;d' from dual union all 4 select 'a;c;d' from dual 5 ); Table created -- number of rows generated will solely depend on the most longest -- string. -- If (say) the longest string contains 3 words (wont count separator `;`) -- and we have 100 rows in our table, then we will end up with 300 rows -- for further processing , no more. with occurrence(ocr) as( select level from ( select max(regexp_count(str, '[^;]+')) as mx_t from t1 ) t connect by level <= mx_t ) select count(regexp_substr(t1.str, '[^;]+', 1, o.ocr)) as generated_for_3_rows from t1 cross join occurrence o; ``` Result: *For three rows where the longest one is made up of three words, we will generate 9 rows*: ``` GENERATED_FOR_3_ROWS -------------------- 9 ``` Final query: ``` with occurrence(ocr) as( select level from ( select max(regexp_count(str, '[^;]+')) as mx_t from t1 ) t connect by level <= mx_t ) select res , count(res) as cnt from (select regexp_substr(t1.str, '[^;]+', 1, o.ocr) as res from t1 cross join occurrence o) where res is not null group by res order by res; ``` Result: ``` RES CNT ----- ---------- a 2 b 2 c 3 d 2 ``` [**SQLFIddle Demo**](http://sqlfiddle.com/#!4/41fae/7) Find out more about [regexp\_count()](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions147.htm#SQLRF20014)(11g and up) and [regexp\_substr()](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions150.htm#SQLRF06303) regular expression functions. 
**Note:** Regular expression functions are relatively expensive to compute, and when it comes to processing a very large amount of data, it might be worth considering a switch to plain PL/SQL. [Here is an example](https://stackoverflow.com/questions/18787116/oracle-sql-the-insert-query-with-regexp-substr-expression-is-very-long-split/18788096#18788096).
This is ugly, but seems to work. The problem with the `CONNECT BY` splitting is that it returns duplicate rows. I managed to get rid of them, but you'll have to test it: ``` WITH data AS ( SELECT 'a;b;c' AS val FROM dual UNION ALL SELECT 'b;c;d' AS val FROM dual UNION ALL SELECT 'a;c;d' AS val FROM dual ) SELECT token, COUNT(1) FROM ( SELECT DISTINCT token, lvl, val, p_val FROM ( SELECT regexp_substr(val, '[^;]+', 1, level) AS token, level AS lvl, val, NVL(prior val, val) p_val FROM data CONNECT BY regexp_substr(val, '[^;]+', 1, level) IS NOT NULL ) WHERE val = p_val ) GROUP BY token; ``` ``` TOKEN COUNT(1) -------------------- ---------- d 2 b 2 a 2 c 3 ```
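If the splitting can happen outside the database, the same word count is a few lines of Python (the sample strings are from the question; this sidesteps the `CONNECT BY` row-explosion entirely):

```python
from collections import Counter

# Each row holds a ';'-delimited list of tokens.
rows = ["a;b;c", "b;c;d", "a;c;d"]
# Split every row and count how many rows each token appears in.
counts = Counter(token for row in rows for token in row.split(";") if token)
result = sorted(counts.items())
```

Both SQL answers above produce the same counts (note `a` occurs in two of the three sample rows).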
split string into several rows
[ "", "sql", "oracle", "oracle11g", "" ]
Can anyone explain what will happen in the following scenario? ``` SELECT * FROM A, B LEFT JOIN C ON B.FIELD1=C.FIELD1 WHERE A.FIELD1='SOME VALUE' ``` Here table A and table B are not joined with any condition. So my question is: what kind of join will be applied between A and B?
A cross join (cartesian product, if you prefer) will be applied between the results of A and B left join C: each row in the first set will be tied to each row in the second set.
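A tiny sketch in Python's `sqlite3` shows the cartesian product at work (invented tables with 3 and 2 rows; every row of the first is paired with every row of the second):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE a (x INTEGER);
CREATE TABLE b (y INTEGER);
INSERT INTO a VALUES (1), (2), (3);
INSERT INTO b VALUES (10), (20);
""")
# Comma-separated tables with no join condition form a cartesian product:
# 3 rows x 2 rows = 6 rows.
n = cur.execute("SELECT COUNT(*) FROM a, b").fetchone()[0]
conn.close()
```

In the question's query, this product of A and B is then left-joined to C, which is why an explicit join condition between A and B is usually what you want.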
A cross join applies. If you don't use a join condition, you can also get irrelevant results.
left join with three tables
[ "", "sql", "left-join", "" ]
``` SELECT DEPTMST.DEPTID, DEPTMST.DEPTNAME, DEPTMST.CREATEDT, COUNT(USRMST.UID) FROM DEPTMASTER DEPTMST INNER JOIN USERMASTER USRMST ON USRMST.DEPTID=DEPTMST.DEPTID WHERE DEPTMST.CUSTID=1000 AND DEPTMST.STATUS='ACT' ``` I have tried several combinations but I keep getting the error > Column 'DEPTMASTER.DeptID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause I also added a group by but it's not working
When using COUNT like that you need to group on the selected columns, i.e.: ``` SELECT DEPTMST.DEPTID, DEPTMST.DEPTNAME, DEPTMST.CREATEDT, COUNT(USRMST.UID) FROM DEPTMASTER DEPTMST INNER JOIN USERMASTER USRMST ON USRMST.DEPTID=DEPTMST.DEPTID WHERE DEPTMST.CUSTID=1000 AND DEPTMST.STATUS='ACT' GROUP BY DEPTMST.DEPTID, DEPTMST.DEPTNAME, DEPTMST.CREATEDT ```
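The grouped count can be sanity-checked with Python's `sqlite3` (shortened, invented table and column names standing in for the question's DEPTMASTER/USERMASTER):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dept (deptid INTEGER, deptname TEXT);
CREATE TABLE usr (uid INTEGER, deptid INTEGER);
INSERT INTO dept VALUES (1, 'IT'), (2, 'HR');
INSERT INTO usr VALUES (10, 1), (11, 1), (12, 2);
""")
# Every non-aggregated column in the SELECT list must appear in GROUP BY.
rows = cur.execute("""
    SELECT d.deptid, d.deptname, COUNT(u.uid)
    FROM dept d JOIN usr u ON u.deptid = d.deptid
    GROUP BY d.deptid, d.deptname
    ORDER BY d.deptid
""").fetchall()
conn.close()
```

Each department appears once, with a count of its joined user rows.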
You're missing a group by: ``` SELECT DEPTMST.DEPTID, DEPTMST.DEPTNAME, DEPTMST.CREATEDT, COUNT(USRMST.UID) FROM DEPTMASTER DEPTMST INNER JOIN USERMASTER USRMST ON USRMST.DEPTID=DEPTMST.DEPTID WHERE DEPTMST.CUSTID=1000 AND DEPTMST.STATUS='ACT' group by DEPTMST.DEPTID, DEPTMST.DEPTNAME, DEPTMST.CREATEDT ```
how to use count with where clause in join query
[ "", "sql", "" ]
Here is my query: ``` Select ClaimId ,InterestSubsidyClaimId ,BankId,BankName ,UpdatedPrincipalAmountofOutStanding ,[date] From InterestSubsidyReviseClaim Where IsActive = 1 and InterestSubsidyReviseClaim.InterestSubsidyClaimId=1 ``` that gives me a result like ![gggggg](https://i.stack.imgur.com/7QK6m.jpg) Now I want records number 3 and 10 only, and I only have "InterestSubsidyClaimId". The resulting records should be ![trytt](https://i.stack.imgur.com/r1HE0.jpg) So how can this be done?
I have solved it myself (note the `in`, since the grouped subquery returns one row per bank): ``` Select ClaimId ,InterestSubsidyClaimId ,BankId,BankName ,UpdatedPrincipalAmountofOutStanding ,[date] From InterestSubsidyReviseClaim Where ClaimId in ( Select max(ClaimId) From InterestSubsidyReviseClaim Where IsActive = 1 and InterestSubsidyReviseClaim.InterestSubsidyClaimId=1 group by BankId ) ```
You can do this using [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx) function. For example: ``` ;WITH DataSource AS ( Select ROW_NUMBER() OVER (PARTITION BY BankName ORDER BY ClaimID DESC) AS [RowID] ,ClaimId ,InterestSubsidyClaimId ,BankId,BankName ,UpdatedPrincipalAmountofOutStanding ,[date] From InterestSubsidyReviseClaim Where IsActive = 1 and InterestSubsidyReviseClaim.InterestSubsidyClaimId=1 ) SELECT ClaimId ,InterestSubsidyClaimId ,BankId,BankName ,UpdatedPrincipalAmountofOutStanding ,[date] FROM DataSource WHERE [RowID] = 1 ```
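The `ROW_NUMBER()` pattern is portable; here is a minimal sketch using Python's `sqlite3` (window functions need SQLite 3.25+, which ships with current Python builds; the table is a simplified, invented stand-in for `InterestSubsidyReviseClaim`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE claims (ClaimId INTEGER, BankName TEXT);
INSERT INTO claims VALUES (1, 'SBI'), (3, 'SBI'), (2, 'HDFC'), (10, 'HDFC');
""")
# One row per bank: the row with the highest ClaimId wins.
latest = cur.execute("""
    SELECT BankName, ClaimId FROM (
        SELECT ClaimId, BankName,
               ROW_NUMBER() OVER (PARTITION BY BankName ORDER BY ClaimId DESC) AS rn
        FROM claims
    )
    WHERE rn = 1
    ORDER BY BankName
""").fetchall()
conn.close()
```

`PARTITION BY` restarts the numbering for each bank, so filtering on `rn = 1` keeps exactly the latest claim per bank.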
get a latest record in group by
[ "", "sql", "sql-server", "group-by", "" ]
Is there any way to alter the column width of a resultset in SQL Server 2005 Management Studio? I have a column which contains a sentence, which gets cut off although there is screen space. ``` | foo | foo2 | description | | foo | foo2 | description | |--------------------------| TO |----------------------------------| | x | yz | An Exampl.. | | x | yz | An Example sentence | ``` I would like to be able to set the column size via code so this change migrates to other SSMS instances with the code.
No, the width of each column is determined at runtime, and there is no way to override this in any version of Management Studio I've ever used. In fact I think the algorithm got worse in SQL Server 2008, and has been essentially the same ever since - you can run the same resultset twice, and the grid is inconsistent in the same output (this is SQL Server 2014 CTP2): ![enter image description here](https://i.stack.imgur.com/OpQl2.png) I reported this bug in 2008, and it was promptly closed as "Won't Fix": * SSMS : Grid alignment, column width seems arbitrary (sorry, no link) If you want control over this, you will either have to create an add-in for Management Studio that can manhandle the results grid, or you'll have to write your own query tool. **Update 2016-01-12**: This grid misalignment issue should have been fixed in some build of Management Studio (well, the UserVoice item had been updated, but they admit it might still be imperfect, and I'm not seeing any evidence of a fix). **Update 2021-10-13:** I updated this item in 2016 when Microsoft unplugged Connect and migrated some of the content to UserVoice. Now they have unplugged UserVoice as well, so I apologize the links above had to be removed, but this issue hasn't been fixed in the meantime anyway (just verified in SSMS 18.10).
What you can do is alias the selected field like this: ``` SELECT name as [name .] FROM ... ``` The spaces and the dot will expand the column width.
Resultset column width in Management Studio
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "ssms", "" ]
Hello I have the following two tables: TableA: ``` Field1 | Field2 --------------- 9911-4 | 4800 9911-6 | 400 9911-9 | 480 785-25 | 455 6523-1 | 221 ``` And in the TableB I have: ``` ID | Name ------------ 9911 | A 785 | B ``` So, Field1 in TableA has the ID-number, and it must be jointed with Field ID of TableB. Output must be: ``` ID | Name ------------ 9911 | A 785 | B ``` but ID must be JOINT with Field1 of TableA. Field1 in TableA have NUMBER-NUMBER, where first number is the ID of TableB Thanks in advance
you could try this: ``` select * from TableA join TableB on id=substring(Field1,1,instr(Field1,'-')-1) ```
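The prefix-extraction join can be tried out with Python's `sqlite3`, whose `instr()`/`substr()` behave like the MySQL functions used above (data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE TableA (Field1 TEXT, Field2 INTEGER);
CREATE TABLE TableB (ID TEXT, Name TEXT);
INSERT INTO TableA VALUES ('9911-4', 4800), ('785-25', 455), ('6523-1', 221);
INSERT INTO TableB VALUES ('9911', 'A'), ('785', 'B');
""")
# Extract the part of Field1 before the first '-' and join it to TableB.ID.
rows = cur.execute("""
    SELECT DISTINCT b.ID, b.Name
    FROM TableA a
    JOIN TableB b ON b.ID = substr(a.Field1, 1, instr(a.Field1, '-') - 1)
    ORDER BY b.Name
""").fetchall()
conn.close()
```

Rows in TableA whose prefix has no match in TableB (6523 here) simply drop out of the inner join.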
``` SELECT * from TableA join TableB on id=SUBSTRING_INDEX(field1,'-',1) ```
SQL select field using a part of other field in another table
[ "", "mysql", "sql", "join", "inner-join", "" ]
I have a web app I'm trying to profile and optimize and one of the last things is to fix up this slow running function. I'm not an SQL expert by any means, but know that doing this in a one step SQL query would be far faster than doing it the way I'm doing now, with multiple queries, sorting, and iterating over loops. The problem is basically this - I want the rows of data from the "users" table, represented by the UserData object, where there are no entries for that user in the "bids" table for a given round. In other words, which bidders in my database haven't submitted a bid yet. In SQL pseudo-code, this would be ``` SELECT * FROM users WHERE users.role='BIDDER' AND users.user_id CANNOT BE FOUND IN bids.user_id WHERE ROUND=? ``` (Obviously that's not valid SQL, but I don't know SQL well enough to put it together properly). Thanks!
You can do this with a LEFT JOIN. The LEFT JOIN creates a link between two tables, just like the INNER JOIN, but it will also include the records from the left table (users here) that have no association with the right table. Doing this, we can add a WHERE clause to specify that we want only the records with no association with the right table. The right and left tables are determined by the order in which you write the join: the left table is the first part, and the right table (bids here) is the right part. ``` SELECT * FROM users u LEFT JOIN bids b ON u.user_id = b.user_id WHERE u.role='BIDDER' AND b.bid_id IS NULL ```
``` SELECT u.* FROM users u LEFT OUTER JOIN bids b on b.user_id = u.user_id WHERE u.role = 'BIDDER' AND b.user_id IS NULL ``` See [this great explanation of joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
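The anti-join pattern can be sketched end to end with Python's `sqlite3` (table names follow the question; the rows are invented, and the per-round filter the question asks for is included):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (user_id INTEGER, name TEXT, role TEXT);
CREATE TABLE bids (bid_id INTEGER, user_id INTEGER, round INTEGER);
INSERT INTO users VALUES (1, 'Alice', 'BIDDER'), (2, 'Bob', 'BIDDER'), (3, 'Carol', 'ADMIN');
INSERT INTO bids VALUES (100, 1, 1);
""")
# Bidders with no bid in round 1. The round filter lives in the ON clause so
# unmatched users survive the LEFT JOIN with NULLs instead of being dropped.
missing = cur.execute("""
    SELECT u.name FROM users u
    LEFT JOIN bids b ON b.user_id = u.user_id AND b.round = 1
    WHERE u.role = 'BIDDER' AND b.user_id IS NULL
""").fetchall()
conn.close()
```

Putting `b.round = 1` in the WHERE clause instead would silently turn the LEFT JOIN back into an inner join, which is a common pitfall with this pattern.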
SELECT Statement Across 2 Tables
[ "", "mysql", "sql", "" ]
I'm trying to do the sum when: ``` date_ini >= initial_date AND final_date <= date_expired ``` And if it is not in the range will show the last net\_insurance from insurances ``` show last_insurance when is not in range ``` Here my tables: ``` POLICIES ID POLICY_NUM DATE_INI DATE_EXPIRED TYPE_MONEY 1, 1234, "2013-01-01", "2014-01-01" , 1 2, 5678, "2013-02-01", "2014-02-01" , 1 3, 9123, "2013-03-01", "2014-03-01" , 1 4, 4567, "2013-04-01", "2014-04-01" , 1 5, 8912, "2013-05-01", "2014-05-01" , 2 6, 3456, "2013-06-01", "2014-06-01" , 2 7, 7891, "2013-07-01", "2014-07-01" , 2 8, 2345, "2013-08-01", "2014-08-01" , 2 INSURANCES ID POLICY_ID INITIAL_DATE FINAL_DATE NET_INSURANCE 1, 1, "2013-01-01", "2014-01-01", 100 2, 1, "2013-01-01", "2014-01-01", 200 3, 1, "2013-01-01", "2014-01-01", 400 4, 2, "2011-01-01", "2012-01-01", 500 5, 2, "2013-01-01", "2014-01-01", 600 6, 3, "2013-01-01", "2014-01-01", 100 7, 4, "2013-01-01", "2014-01-01", 200 ``` I should have ``` POLICY_NUM NET 1234 700 5678 600 9123 100 4567 200 ``` Here is what i tried ``` SELECT p.policy_num AS policy_num, CASE WHEN p.date_ini >= i.initial_date THEN SUM(i.net_insurance) ELSE (SELECT max(id) FROM insurances GROUP BY policy_id) END as net FROM policies p INNER JOIN insurances i ON p.id = i.policy_id AND p.date_ini >= i.initial_date GROUP BY p.id ``` Here is my query <http://sqlfiddle.com/#!2/f6077b/16> Somebody can help me with this please? Is not working when is not in the range it would show my last\_insurance instead of sum
Try it this way ``` SELECT p.policy_num, SUM(i.net_insurance) net_insurance FROM policies p JOIN insurances i ON p.id = i.policy_id AND p.date_ini >= i.initial_date AND p.date_expired <= i.final_date GROUP BY i.policy_id, p.policy_num UNION ALL SELECT p.policy_num, i.net_insurance FROM ( SELECT MAX(i.id) id FROM policies p JOIN insurances i ON p.id = i.policy_id AND (p.date_ini < i.initial_date OR p.date_expired > i.final_date) GROUP BY i.policy_id ) q JOIN insurances i ON q.id = i.id JOIN policies p ON i.policy_id = p.id ``` Output: ``` | POLICY_NUM | NET_INSURANCE | |------------|---------------| | 1234 | 700 | | 5678 | 600 | | 9123 | 100 | | 4567 | 200 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/f6077b/43)** demo
This will ensure that you only get the last insurance if the policy has no insurances within the policy period. ``` SELECT policy_num, SUM(IF(p.cnt > 0, p.net_insurance, i.net_insurance)) AS net_insurance FROM ( SELECT p.id, p.policy_num, SUM(IF(p.date_ini >= i.initial_date AND p.date_expired <= i.final_date, i.net_insurance, 0)) AS net_insurance, SUM(IF(p.date_ini >= i.initial_date AND p.date_expired <= i.final_date, 1, 0)) AS cnt, MAX(IF(p.date_ini >= i.initial_date AND p.date_expired <= i.final_date, 0, i.id)) AS max_i_id FROM policies p INNER JOIN insurances i ON p.id = i.policy_id GROUP BY p.id ) as p LEFT JOIN insurances i ON i.id = p.max_i_id GROUP BY p.id ``` Here is my [SQLFiddle](http://sqlfiddle.com/#!2/4a712/42 "SQLFiddle")
How can I sum columns using conditions?
[ "", "mysql", "sql", "" ]
In my database I have a table with a rather large data set that users can perform searches on. So for the following table structure for the `Person` table that contains about 250,000 records: ``` firstName|lastName|age ---------|--------|--- John | Doe |25 ---------|--------|--- John | Sams |15 ---------|--------|--- ``` the users would be able to perform a query that can return about 500 or so results. What I would like to do is allow the user see his search results 50 at a time using pagination. I've figured out the client side pagination stuff, but I need somewhere to store the query results so that the pagination uses the results from his unique query and not from a `SELECT *` statement. Can anyone provide some guidance on the best way to achieve this? Thanks. Side note: I've been trying to use temp tables to do this by using the `SELECT INTO` statements, but I think that might cause some problems if, say, User A performs a search and his results are stored in the temp table then User B performs a search shortly after and User A's search results are overwritten.
In SQL Server the `ROW_NUMBER()` function is great for pagination, and may be helpful depending on what parameters change between searches. For example, if searches were just for different firstName values you could use: ``` ;WITH search AS (SELECT *,ROW_NUMBER() OVER (PARTITION BY firstName ORDER BY lastName) AS RN FROM YourTable) SELECT * FROM search WHERE RN BETWEEN 51 AND 100 AND firstName = 'John' ``` You could add additional `ROW_NUMBER()` columns, altering the `PARTITION BY` clause based on which fields are being searched.
Historically, for us, the best way to manage this is to create a complete new table, with a unique name. Then, when you're done, you can schedule the table for deletion. The table, if practical, simply contains an index id (a simple sequenece: 1,2,3,4,5) and the primary key to the table(s) that are part of the query. Not the entire result set. Your pagination logic then does something like: ``` SELECT p.* FROM temp_1234 t, primary_table p WHERE t.pkey = p.primary_key AND t.serial_id between 51 and 100 ``` The serial id is your paging index. So, you end up with something like (note, I'm not a SQL Server guy, so pardon): ``` CREATE TABLE temp_1234 ( serial_id serial, pkey number ); INSERT INTO temp_1234 SELECT 0, primary_key FROM primary_table WHERE <criteria> ORDER BY <sort>; CREATE INDEX i_temp_1234 ON temp_1234(serial_id); // I think sql already does this for you ``` If you can delay the index, it's faster than creating it first, but it's a marginal improvement most likely. Also, create a tracking table where you insert the table name, and the date. You can use this with a reaper process later (late at night) to DROP the days tables (those more than, say, X hours old). Full table operations are much cheaper than inserting and deleting rows in to an individual table: ``` INSERT INTO page_table SELECT 'temp_1234', <sequence>, primary_key... DELETE FROM page_table WHERE page_id = 'temp_1234'; ``` That's just awful.
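For completeness, many engines can also page directly with `LIMIT`/`OFFSET` (SQL Server 2012+ spells it `OFFSET ... FETCH`), re-running the filtered query per page instead of materializing results. A sketch with Python's `sqlite3` and 500 invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, firstName TEXT)")
cur.executemany("INSERT INTO person VALUES (?, ?)",
                [(i, f"name{i:03d}") for i in range(1, 501)])
# Page 2 of the (stable, ordered) result set: rows 51-100.
page2 = cur.execute(
    "SELECT id FROM person ORDER BY id LIMIT 50 OFFSET 50"
).fetchall()
first_id, last_id = page2[0][0], page2[-1][0]
conn.close()
```

A deterministic ORDER BY is what keeps the pages consistent between requests; without it, two runs may slice the results differently.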
Store results of SQL Server query for pagination
[ "html", "sql" ]
I have a table containing item data that in its simplest form consists of an item number and a category. An item number can be in multiple categories but not all items are in every category: ``` Item category 1111 A 1111 B 1111 C 2222 A 3333 B 3333 C ``` I have to put this data into a feed for a 3rd party in the form of a single item number and its associated categories. Feed layout cannot be changed. So for the above the feed would have to contain the following ``` 1111,A,B,C 2222,A 3333,B,C ``` Does anyone know how to do this? I have spiralled into a group by, roll up, pivoting mess and could use some assistance. Thanks.
You have to use the `GROUP_CONCAT` function. It allows you to concatenate values from multiple (grouped) rows when using a `GROUP BY` statement. Read more [here](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html). OK, as you are using MS SQL Server, I think that you will have to take the following approach: [Simulating group\_concat MySQL function in Microsoft SQL Server 2005?](https://stackoverflow.com/questions/451415/simulating-group-concat-mysql-function-in-microsoft-sql-server-2005)
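For anyone wanting to try the MySQL-style behavior locally, SQLite ships its own `GROUP_CONCAT` and can reproduce the asker's expected feed in memory (note that the order of values inside each concatenated list is not guaranteed by the engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item_category (item TEXT, category TEXT);
INSERT INTO item_category VALUES
  ('1111','A'),('1111','B'),('1111','C'),
  ('2222','A'),('3333','B'),('3333','C');
""")

# one row per item, categories collapsed into a comma-separated list
rows = conn.execute("""
    SELECT item, GROUP_CONCAT(category, ',')
    FROM item_category
    GROUP BY item
    ORDER BY item
""").fetchall()
```

This yields one feed line per item; if the order of categories matters, sort them before or after concatenation rather than relying on the engine.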
This is how you can take that list. ``` SELECT item,category FROM item_category WHERE item="1111" ``` But I doubt with your db design and your requirement. Listing like this [1111,A,B,C], [2222,B]... you have to manipulate in your business logic
Turning multiple data records into a single record
[ "sql" ]
I don't know how to phrase my question title for what I'm about to ask. I have a SELECT query that must not return any rows if the **combination** of my `where` clause is true. Here is my example code: ``` SELECT * FROM MyTable m1 WHERE (m1.User != '1' AND m1.Status != '1') ``` But what I am trying to ask SQL is: *"only return rows when the User is not '1' AND his status is not '1' at the same time. If this **combination** is not true, then its okay to return those rows".* So if the User is "1" and Status is "2" then that is fine to return those rows. Seems simple but I can't visualize how to do it... help please?
Just answered my own question.... here is the answer. 'OR' doesn't test for combination of both being true. Solution: ``` SELECT * FROM MyTable m1 WHERE NOT (m1.User = '1' AND m1.Status = '1') ``` Because both conditions have to be true for it not to return the rows. Both = AND, Either = OR.
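The accepted fix is just De Morgan's law: `NOT (A AND B)` is equivalent to `(NOT A) OR (NOT B)`, while the question's original `(NOT A) AND (NOT B)` filters too much. A small Python check with the User/Status combinations from the question:

```python
# every (User, Status) combination from the question
rows = [("1", "1"), ("1", "2"), ("2", "1"), ("2", "2")]

# accepted answer: exclude only the row where BOTH are '1'
correct = [r for r in rows if not (r[0] == "1" and r[1] == "1")]

# question's original WHERE: User != '1' AND Status != '1'
too_strict = [r for r in rows if r[0] != "1" and r[1] != "1"]
```

The `("1", "2")` row — User 1 with Status 2, which the asker wanted returned — survives the correct predicate but is dropped by the original one.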
You may try this using `OR` instead of `AND`: ``` SELECT * FROM MyTable m1 WHERE !(m1.User = '1' OR m1.Status = '1') ```
SQL Where Clause To Match Against Both Conditions Simultaneously
[ "sql", "sql-server" ]
I have an existing query that outputs current data, and I would like to insert it into a Temp table, but am having some issues doing so. Would anybody have some insight on how to do this? Here is an example: ``` SELECT * FROM (SELECT Received, Total, Answer, ( CASE WHEN application LIKE '%STUFF%' THEN 'MORESTUFF' END ) AS application FROM FirstTable WHERE Recieved = 1 AND application = 'MORESTUFF' GROUP BY CASE WHEN application LIKE '%STUFF%' THEN 'MORESTUFF' END) data WHERE application LIKE isNull('%MORESTUFF%', '%') ``` This seems to output my data currently the way that I need it to, but I would like to pass it into a Temp Table. My problem is that I am pretty new to SQL queries and have not been able to find a way to do so, or whether it is even possible. If it is not possible, is there a better way to get the data that I am looking for `WHERE application LIKE isNull('%MORESTUFF%','%')` into a temp table?
``` SELECT * INTO #Temp FROM (SELECT Received, Total, Answer, (CASE WHEN application LIKE '%STUFF%' THEN 'MORESTUFF' END) AS application FROM FirstTable WHERE Recieved = 1 AND application = 'MORESTUFF' GROUP BY CASE WHEN application LIKE '%STUFF%' THEN 'MORESTUFF' END) data WHERE application LIKE isNull( '%MORESTUFF%', '%') ```
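The same pattern can be exercised locally with SQLite, whose analogue of SQL Server's `SELECT ... INTO #Temp` is `CREATE TEMP TABLE ... AS SELECT` (the sample data below is made up, and the column is spelled `Received` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE FirstTable (Received INT, Total INT, Answer INT, application TEXT);
INSERT INTO FirstTable VALUES (1, 10, 5, 'SOMESTUFF'), (0, 3, 1, 'OTHER');
""")

# materialize the query result into a session-scoped temp table
conn.execute("""
    CREATE TEMP TABLE TempResult AS
    SELECT Received, Total, Answer,
           CASE WHEN application LIKE '%STUFF%' THEN 'MORESTUFF' END AS application
    FROM FirstTable
    WHERE Received = 1
""")
rows = conn.execute("SELECT * FROM TempResult").fetchall()
```

Like `#Temp` in SQL Server, the temp table disappears when the connection (session) closes.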
SQL Server R2 2008 needs the `AS` clause as follows: ``` SELECT * INTO #temp FROM ( SELECT col1, col2 FROM table1 ) AS x ``` The query failed without the `AS x` at the end. --- **EDIT** It's also needed when using SS2016, had to add `as t` to the end. ``` Select * into #result from (SELECT * FROM #temp where [id] = @id) as t //<-- as t ```
Insert Data Into Temp Table with Query
[ "sql", "sql-server", "ssms" ]
I am working on a "Leaderboard" for a tool and I need to pull some numbers together and get the count of records across multiple rows. What you will see in this Stored Procedure is me trying to order the records by the sum of 2 columns. Any tips on how to accomplish this? ``` AS BEGIN SET NOCOUNT ON; BEGIN SELECT DISTINCT(whoAdded), count(tag) as totalTags, count(DISTINCT data) as totalSubmissions FROM Tags_Accounts GROUP BY whoAdded ORDER BY SUM(totalTags + totalSubmissions) DESC FOR XML PATH ('leaderboard'), TYPE, ELEMENTS, ROOT ('root'); END END ```
You can accomplish this by putting it in a derived table: ``` SELECT * FROM ( SELECT DISTINCT(whoAdded) AS whoAdded, count(tag) as totalTags, count(DISTINCT data) as totalSubmissions FROM Tags_Accounts GROUP BY whoAdded ) a ORDER BY totalTags + totalSubmissions DESC FOR XML PATH ('leaderboard'), TYPE, ELEMENTS, ROOT ('root') ``` Alternatively, you can order by the aggregates, but I think the above is a bit cleaner/less redundant: ``` SELECT DISTINCT(whoAdded) as whoAdded, count(tag) as totalTags, count(DISTINCT data) as totalSubmissions FROM Tags_Accounts GROUP BY whoAdded ORDER BY SUM(count(tag) + count(DISTINCT data)) DESC FOR XML PATH ('leaderboard'), TYPE, ELEMENTS, ROOT ('root') ```
``` SET NOCOUNT ON; BEGIN ;WITH CTE AS( SELECT DISTINCT(whoAdded), count(tag) as totalTags, count(DISTINCT data) as totalSubmissions FROM Tags_Accounts GROUP BY whoAdded) SELECT * FROM CTE ORDER BY SUM(totalTags + totalSubmissions) DESC FOR XML PATH ('leaderboard'), TYPE, ELEMENTS, ROOT ('root'); END END ```
SQL Count / sum in order clause
[ "sql", "sql-server", "t-sql" ]
I'm new to MySQL Workbench, and I've tried so many things to get my script working but I simply can't... I've got these tables: ``` CREATE TABLE Utilizador (email varchar(40) not null, nome varchar(50) not null, dataNascimento date, profissao varchar(50) not null, reputacao double(3,2) unsigned not null, constraint pk_Utilizador primary key(email)) ``` This is the first table created! ``` CREATE TABLE POI (email varchar(40) not null, designacaoPOI varchar(10) not null, coordenadaX int, coordenadaY int, descricaoPOI varchar(200), constraint pk_POI primary key(email, designacaoPOI), constraint fk_POI foreign key(email) references Utilizador(email) on delete cascade) ``` This is the second table created! ``` CREATE TABLE Utilizador_POI (email varchar(40) not null, designacaoPOI varchar(10) not null, constraint pk_Utilizador_POI primary key(email, designacaoPOI), constraint fk1_Utilizador_POI foreign key(email) references Utilizador(email) on delete cascade, constraint fk2_Utilizador_POI foreign key(designacaoPOI) references POI(designacaoPOI) on delete cascade) ``` This table gives me the error: Error Code: 1215. Cannot add foreign key constraint I did some tests and I'm almost sure that the problem is in the foreign key "designacaoPOI". The other FK ("email") doesn't give me any error, so maybe the problem is in the table POI? Thanks in advance!
The problem here is twofold: **1/ Use `ID`s for `PRIMARY KEY`s** You should be using `ID`s for primary keys rather than `VARCHAR`s or anything that has any real-world "business meaning". If you want the `email` to be unique within the `Utilizador` table, the combination of `email` and `designacaoPOI` to be unique in the `POI` table, and the same combination (`email` and `designacaoPOI`) to be unique in `Utilizador_POI`, you should be using `UNIQUE KEY` constraints rather than `PRIMARY KEY` constraints. **2/ You cannot `DELETE CASCADE` on a `FOREIGN KEY` that doesn't reference the `PRIMARY KEY`** In your third table, `Utilizador_POI`, you have two `FOREIGN KEY`s referencing `POI`. Unfortunately, the `PRIMARY KEY` on `POI` is a composite key, so MySQL has no idea how to handle a `DELETE CASCADE`, as there is not a one-to-one relationship between the `FOREIGN KEY` in `Utilizador_POI` and the `PRIMARY KEY` of `POI`. If you change your tables to all have a `PRIMARY KEY` of `ID`, as follows: ``` CREATE TABLE blah ( id INT(9) AUTO_INCREMENT NOT NULL .... PRIMARY KEY (id) ); ``` Then you can reference each table by `ID`, and both your `FOREIGN KEY`s and `DELETE CASCADE`s will work.
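The single-column-key advice can be sketched with SQLite (which needs foreign keys switched on per connection); the schema below is a simplified stand-in for the asker's DDL, not the exact tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite
conn.executescript("""
CREATE TABLE utilizador (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL);
CREATE TABLE poi (
  id INTEGER PRIMARY KEY,
  utilizador_id INTEGER NOT NULL
    REFERENCES utilizador(id) ON DELETE CASCADE
);
INSERT INTO utilizador (id, email) VALUES (1, '[email protected]');
INSERT INTO poi (id, utilizador_id) VALUES (10, 1);
""")

# deleting the parent row cascades to the dependent child row
conn.execute("DELETE FROM utilizador WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM poi").fetchone()[0]
```

Because the child references a single-column primary key, the engine knows exactly which rows to cascade — the ambiguity the answer describes with composite keys never arises.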
I think the problem is that `Utilizador_POI.email` references `POI.email`, which *itself* references `Utilizador.email`. MySQL is probably upset at the double-linking. Also, since there seems to be a many-to-many relationship between `Utilizador` and `POI`, I think the structure of `Utilizador_POI` isn't what you really want. Instead, `Utilizador_POI` should reference a primary key from `Utilizador`, and a matching primary key from `POI`.
MySQL error 1215 Cannot add Foreign key constraint - FK in different tables
[ "mysql", "sql", "foreign-keys", "create-table" ]
This might have been answered already, but I couldn't find anything. In my Oracle DB I have two tables S\_PIPE\_N ``` G3E_ID |G3E_FID 380 | 1181024 2064 | 1188176 ``` S\_PCONPT\_N ``` G3E_ID| G3E_FID| BOTTOM_HEIGHT 783 | 1181025| 253.4 4173 | 1188175| 364.51 4174 | 1188178| 366.76 17106 | 1379384| 253.11 ``` and the table that is connecting these two S\_MANY\_PCP\_N ``` G3E_ID | G3E_FID | G3E_OWNERFID |G3E_CID 2539 | 1181025 | 1181024 |1 68507 | 1379384 | 1181024 |2 15444 | 1188178 | 1188176 |1 15448 | 1188175 | 1188176 |2 ``` I want to get as the result of the select statement the following: ``` C.g3e_fid | A.bottom_height_1 | D.bottom_height_2 1181024 | 253.4 | 253.11 1188176 | 366.76 | 364.51 ``` I tried it with the following statement: `select C.G3E_FID, A.BOTTOM_HEIGHT AS "bottom_height_1", D.BOTTOM_HEIGHT "bottom_height_2" FROM S_PIPE_N C, S_MANY_PCP_N B LEFT OUTER JOIN S_PCONPT_N A ON A.G3E_FID=B.G3E_FID AND B.G3E_CID=1 LEFT OUTER JOIN S_PCONPT_N D ON A.G3E_FID=D.G3E_FID AND B.G3E_CID=2 WHERE C.G3E_FID=B.G3E_OWNERFID` Though this way I get the following: ``` C.g3e_fid | A.bottom_height_1 | D.bottom_height_2 1181024 | 253.4 | null 1181024 |null | 253.11 1188176 | 366.76 | null 1188176 | null | 364.51 ``` How can I change the statement so that I get just one result per g3e\_fid?
``` SELECT A.G3E_FID, MAX (CASE WHEN B.G3E_CID = 1 THEN C.BOTTOM_HEIGHT END), MAX (CASE WHEN B.G3E_CID = 2 THEN C.BOTTOM_HEIGHT END) FROM S_PIPE_N a INNER JOIN S_MANY_PCP_N b ON A.G3E_FID = B.G3E_OWNERFID LEFT OUTER JOIN S_PCONPT_N c ON B.G3E_FID = C.G3E_FID GROUP BY A.G3E_FID; ```
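The conditional-aggregation trick in this answer can be verified end-to-end with the question's own sample data; SQLite is used here purely as a stand-in for Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE S_PIPE_N (G3E_ID INT, G3E_FID INT);
INSERT INTO S_PIPE_N VALUES (380,1181024),(2064,1188176);
CREATE TABLE S_PCONPT_N (G3E_ID INT, G3E_FID INT, BOTTOM_HEIGHT REAL);
INSERT INTO S_PCONPT_N VALUES
  (783,1181025,253.4),(4173,1188175,364.51),
  (4174,1188178,366.76),(17106,1379384,253.11);
CREATE TABLE S_MANY_PCP_N (G3E_ID INT, G3E_FID INT, G3E_OWNERFID INT, G3E_CID INT);
INSERT INTO S_MANY_PCP_N VALUES
  (2539,1181025,1181024,1),(68507,1379384,1181024,2),
  (15444,1188178,1188176,1),(15448,1188175,1188176,2);
""")

# conditional aggregation: route the CID-1 and CID-2 heights into two columns
rows = conn.execute("""
    SELECT a.G3E_FID,
           MAX(CASE WHEN b.G3E_CID = 1 THEN c.BOTTOM_HEIGHT END),
           MAX(CASE WHEN b.G3E_CID = 2 THEN c.BOTTOM_HEIGHT END)
    FROM S_PIPE_N a
    JOIN S_MANY_PCP_N b ON a.G3E_FID = b.G3E_OWNERFID
    LEFT JOIN S_PCONPT_N c ON b.G3E_FID = c.G3E_FID
    GROUP BY a.G3E_FID
    ORDER BY a.G3E_FID
""").fetchall()
```

The `GROUP BY` collapses the two join rows per pipe into one, which is exactly what removes the `null`-striped duplicates the asker was seeing.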
This should work ``` select C.G3E_FID, A.BOTTOM_HEIGHT AS "bottom_height_1", D.BOTTOM_HEIGHT "b.bottom_height_2" FROM S_PIPE_N C, S_MANY_PCP_N B LEFT OUTER JOIN S_PCONPT_N A ON A.G3E_FID=C.G3E_FID AND G3E_CID=1 L LEFT OUTER JOIN S_PCONPT_N D ON A.G3E_FID=C.G3E_FID AND G3E_CID=2 WHERE C.G3E_FID=B.G3E_OWNERFID and A.BOTTOM_HEIGHT IS NOT NUL ```
Link two tables using third one with two entries of second table that relate to one entry in first table
[ "sql", "oracle", "left-join" ]
In my table I have the following values: ``` ProductId Type Value Group 200 Model Chevy Chevy 200 Year 1985 Chevy 200 Year 1986 Chevy 200 Model Ford Ford 200 Year 1986 Ford 200 Year 1987 Ford 200 Year 1988 Ford ``` In my query, I want to know if my product is compatible with a certain model in a given year. I'm trying to build a function that returns true or false, depending on the parameters ProductId, Model and Value I pass to it. To be true, the function has to match both parameters (Model and Year), along with the ProductId in the table, but they must belong to the same group. For example, if I pass to the function the values 200, Chevy, 1988 it must return False. Notice that the 3 values are found in the table, but they belong to different groups. On the other hand, if I pass to the function the values 200, Ford, 1986 it must return True, because all 3 values match and belong to the same group. I think a way of doing that is in multiple steps, like: 1. Select all records that match the model then all that match the year and insert them into a temporary table; 2. Select the distinct groups into another temp table; 3. Loop through each group checking if I find all the matches in that group, returning true when found, or false at the end of the function. I wonder if there's a better way of doing this in 1 step using only one SELECT command.
To get both `Model` and `Value` in **one** query, you can join the table on itself: *(I'll assume that the table is called `products`)* ``` select * from products as models inner join products as years on models.productid = years.productid and models.group = years.group where models.type = 'Model' and years.type = 'Year' ``` This will give you rows with `Chevy, 1985`, `Chevy, 1986`, `Ford, 1986` and so on. Then you just need to put your values (e.g. `200, Ford, 1986`) into the `WHERE` clause. So the final query for `200, Ford, 1986` will look like this: ``` select * from products as models inner join products as years on models.productid = years.productid and models.group = years.group where models.type = 'Model' and years.type = 'Year' and models.productid = 200 and models.value = 'ford' and years.value = '1986' ```
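Here is the accepted self-join wrapped into a checkable true/false function, using SQLite as a stand-in (the `Group` column is renamed `GroupName` here only to avoid the reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (ProductId INT, Type TEXT, Value TEXT, GroupName TEXT);
INSERT INTO products VALUES
  (200,'Model','Chevy','Chevy'),(200,'Year','1985','Chevy'),(200,'Year','1986','Chevy'),
  (200,'Model','Ford','Ford'),(200,'Year','1986','Ford'),
  (200,'Year','1987','Ford'),(200,'Year','1988','Ford');
""")

def compatible(product_id, model, year):
    # self-join: the model row and the year row must share ProductId AND group
    row = conn.execute("""
        SELECT 1
        FROM products m
        JOIN products y
          ON m.ProductId = y.ProductId AND m.GroupName = y.GroupName
        WHERE m.Type = 'Model' AND y.Type = 'Year'
          AND m.ProductId = ? AND m.Value = ? AND y.Value = ?
    """, (product_id, model, year)).fetchone()
    return row is not None
```

The join on the group column is what makes `(200, 'Chevy', '1988')` fail even though all three values exist somewhere in the table.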
You may be overcomplicating it. This should be enough: ``` select exists(select * from Products where ProductId = 200 and Type = 'Year' and Value = 1986 and Group = 'Ford') ```
In SQL Server, how to SELECT rows that share a common column value?
[ "sql", "sql-server", "t-sql" ]
I'm looking for the specification of SQL:2011 (ISO/IEC 9075:2011). Where can I find it? (I could find only the older one: [SQL 92](http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt))
Go to the ISO web site - you can search for `9075 2011` and you'll get 13 hits for the 13 parts of the standard that can be ordered from their online store: <http://www.iso.org/iso/home/search.htm?qt=9075%202011&sort=rel&type=simple&published=on> I don't know of any source where this standard would be available for free, unfortunately.
Reference from [Developer FAQ - PostgreSQL wiki](https://wiki.postgresql.org/wiki/Developer_FAQ#Where_can_I_get_a_copy_of_the_SQL_standards.3F) (some of them are broken, take remarks in 【】 symbol) : > You are supposed to buy them from ISO/IEC JTC 1/SC 32 or ANSI. Search for ISO/ANSI 9075. ANSI's offer is less expensive, but the contents of the documents are the same between the two organizations. > > Since buying an official copy of the standard is quite expensive, most developers rely on one of the various draft versions available on the Internet. Some of these are: > > * SQL-92 [http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt](http://www.contrib.andrew.cmu.edu/%7Eshadow/sql/sql1992.txt) > * SQL:1999 [http://web.cs.ualberta.ca/~yuan/courses/db\_readings/ansi-iso-9075-2-1999.pdf](http://web.cs.ualberta.ca/%7Eyuan/courses/db_readings/ansi-iso-9075-2-1999.pdf) 【404 Now, an alternative [SQL-99 Complete, Really -- SQL99](https://crate.io/docs/sql-99/en/latest//)】 > * SQL:2003 <http://www.wiscorp.com/sql_2003_standard.zip> > * SQL:2011 (preliminary) <http://www.wiscorp.com/sql20nn.zip> > * No free copy of SQL:2016 appears to exist. > > ... > > Some further web pages about the SQL standard are: > > * <http://troels.arvin.dk/db/rdbms/links/#standards> > * <http://www.wiscorp.com/SQLStandards.html> > * [http://www.contrib.andrew.cmu.edu/~shadow/sql.html#syntax](http://www.contrib.andrew.cmu.edu/%7Eshadow/sql.html#syntax) (SQL-92) 【404】 > * <http://dbs.uni-leipzig.de/en/lokal/standards.pdf> (paper) 【Redirect】
Where can I get/read SQL standard SQL:2011?
[ "sql", "standards", "iso" ]
Table Employee: ``` CREATE TABLE Employee( ID CHAR(10) PRIMARY KEY, SSN CHAR(15) NOT NULL, FNAME CHAR(15), LNAME CHAR(15), DOB DATE NOT NULL ); INSERT INTO Employee VALUES('0000000001','078-05-1120','George','Brooks', '24-may-85'); INSERT INTO Employee VALUES('0000000002','917-34-6302','David','Adams', '01-apr-63'); INSERT INTO Employee VALUES('0000000003','078-05-1123','Yiling','Zhang', '02-feb-66'); INSERT INTO Employee VALUES('0000000004','078-05-1130','David','Gajos', '10-feb-65'); INSERT INTO Employee VALUES('0000000005','079-04-1120','Steven','Cox', '11-feb-79'); INSERT INTO Employee VALUES('0000000006','378-35-1108','Eddie','Gortler', '30-may-76'); INSERT INTO Employee VALUES('0000000007','278-05-1120','Henry','Kung', '22-may-81'); INSERT INTO Employee VALUES('0000000008','348-75-1450','Harry','Leitner', '29-oct-85'); INSERT INTO Employee VALUES('0000000009','256-90-4576','David','Malan', '14-oct-88'); INSERT INTO Employee VALUES('0000000010','025-45-1111','John','Brooks', '28-nov-78'); INSERT INTO Employee VALUES('0000000011','025-59-1919','Michael','Morrisett', '04-nov-85'); INSERT INTO Employee VALUES('0000000012','567-45-2351','David','Nelson', '10-nov-54'); INSERT INTO Employee VALUES('0000000013','100-40-0011','Jelani','Parkes', '20-dec-44'); ``` When I use queries like: ``` SELECT * FROM EMPLOYEE WHERE DOB < '01-jan-80'; ``` I did not get the record of '0000000013','100-40-0011','Jelani','Parkes', '20-dec-44'. I think that might be a date format problem, but I am not sure. Anyone have an idea? Thanks!
You provided the year in two-digit format, so Oracle's RR rules apply: a two-digit year below 50 is interpreted as 20xx and a year of 50 or above as 19xx. That means 56 becomes 1956 but 44 becomes 2044, which is why that row was excluded from your `DOB < '01-jan-80'` filter.
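The same two-digit-year pivot exists outside Oracle. Python's `strptime`, for instance, expands `%y` values 00–68 to 20xx and 69–99 to 19xx (Oracle's RR format pivots at 50 instead of 69, but the effect on '44' and '85' is the same):

```python
from datetime import datetime

# two-digit years expand around a pivot: 00-68 -> 20xx, 69-99 -> 19xx
d44 = datetime.strptime("20-dec-44", "%d-%b-%y")  # becomes year 2044, not 1944
d85 = datetime.strptime("24-may-85", "%d-%b-%y")  # becomes year 1985
```

Using explicit four-digit years (as the other answer shows with `TO_DATE(..., 'dd-mon-yyyy')`) sidesteps the pivot entirely.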
Well, you have a DATE datatype for your DOB column, so, that's a good thing. However, when using strings to specify dates, you should use explicit TO\_DATE function, and 4 digit years, to be certain of exactly what's going on. Try this: ``` INSERT INTO Employee VALUES('0000000001','078-05-1120','George','Brooks', to_date('24-may-1985','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000002','917-34-6302','David','Adams', to_date('01-apr-1963','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000003','078-05-1123','Yiling','Zhang', to_date('02-feb-1966','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000004','078-05-1130','David','Gajos', to_date('10-feb-1965','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000005','079-04-1120','Steven','Cox', to_date('11-feb-1979','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000006','378-35-1108','Eddie','Gortler', to_date('30-may-1976','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000007','278-05-1120','Henry','Kung', to_date('22-may-1981','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000008','348-75-1450','Harry','Leitner', to_date('29-oct-1985','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000009','256-90-4576','David','Malan', to_date('14-oct-1988','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000010','025-45-1111','John','Brooks', to_date('28-nov-1978','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000011','025-59-1919','Michael','Morrisett', to_date('04-nov-1985','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000012','567-45-2351','David','Nelson', to_date('10-nov-1954','dd-mon-yyyy')); INSERT INTO Employee VALUES('0000000013','100-40-0011','Jelani','Parkes', to_date('20-dec-1944','dd-mon-yyyy')); SELECT * FROM EMPLOYEE WHERE DOB < to_date('01-jan-1980','dd-mon-yyyy'); ```
Using less than in Oracle to compare date can not get expected answer
[ "sql", "oracle", "date-arithmetic" ]
I need to create a query to insert some records; each record must be unique. If it exists I need the existing record's ID, else if it doesn't exist I want to insert it and get the new ID. I wrote this query but it doesn't work. ``` SELECT id FROM tags WHERE slug = 'category_x' WHERE NO EXISTS (INSERT INTO tags('name', 'slug') VALUES('Category X','category_x')); ```
It's called [UPSERT](http://www.wordnik.com/words/upsert) (i.e. UPdate or inSERT). ``` INSERT INTO tags ('name', 'slug') VALUES('Category X','category_x') ON DUPLICATE KEY UPDATE 'slug' = 'category_x' ``` MySql Reference: **13.2.5.3.** [INSERT ... ON DUPLICATE KEY UPDATE Syntax](http://dev.mysql.com/doc/refman/5.1/en/insert-on-duplicate.html)
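If the goal is specifically "insert if missing, then read the id either way", an ignore-on-duplicate insert followed by a plain SELECT also works. A sketch with SQLite's `INSERT OR IGNORE` (MySQL spells it `INSERT IGNORE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE tags (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT,
  slug TEXT UNIQUE
)""")

def get_or_create_tag(name, slug):
    # the insert is a no-op when the unique slug already exists
    conn.execute("INSERT OR IGNORE INTO tags (name, slug) VALUES (?, ?)",
                 (name, slug))
    return conn.execute("SELECT id FROM tags WHERE slug = ?",
                        (slug,)).fetchone()[0]

first = get_or_create_tag("Category X", "category_x")
second = get_or_create_tag("Category X", "category_x")
```

The second call hits the ignore path and the SELECT returns the same id as the first, which is exactly the asker's requirement.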
Try something like... ``` IF (NOT EXISTS (SELECT id FROM tags WHERE slug = 'category_x')) BEGIN INSERT INTO tags('name', 'slug') VALUES('Category X','category_x'); END ELSE BEGIN SELECT id FROM tags WHERE slug = 'category_x' END ``` But you can leave the ELSE part and SELECT the id, this way the query will always return the id, irrespective of the insert...
Insert a record only if it is not present
[ "mysql", "sql" ]
I have a varchar field like: ``` 195500 122222200 ``` I need to change these values to: ``` 1955.00 1222222.00 ```
If you want to add a '.' before the last two digits of your values you can do: ``` SELECT substring(code,0,len(code)-1)+'.'+substring(code,len(code)-1,len(code)) FROM table1; ``` `sqlfiddle demo`
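Outside the database the same transformation is a two-slice one-liner, which also makes the intent of the `SUBSTRING` arithmetic easier to see:

```python
def insert_decimal_point(s: str) -> str:
    # place a '.' before the last two characters of the string
    return s[:-2] + "." + s[-2:]

results = [insert_decimal_point(v) for v in ("195500", "122222200")]
```

Same idea as the SQL version: everything up to the last two characters, a dot, then the last two characters.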
`try this` ``` Declare @s varchar(50) = '1234567812333445' Select Stuff(@s, Len(@s)-1, 0, '.') --> 12345678123334.45 ``` [fiddle demo](http://www.sqlfiddle.com/#!6/5169d/2)
How can I add a character into a specified position into string in SQL SERVER?
[ "sql", "sql-server" ]
I'm passing three parameters in the URL: `&p1=eventID`, `&p2=firstItem`, `&p3=numberOfItems`. The first parameter is a column of the table. The second parameter is the first item that I'm looking for. The third parameter says how many items to pick after `firstItem`. For example, in the first query the user sends `&p1=1`, `&p2=0`, `&p3=20`. Therefore, I need the first 20 items of the list. Next time, if the user sends `&p1=1`, `&p2=21`, `&p3=20`, then I should return the second 20 items (from item 21 to item 41). The PHP file is able to get the parameters. Currently, I'm using the following string to query the database: ``` public function getPlaceList($eventId, $firstItem, $numberOfItems) { $records = array(); $query = "select * from {$this->TB_ACT_PLACES} where eventid={$eventId}"; ... } ``` The result is a long list of items. If I add LIMIT 20 at the end of the string, then since I'm not using a token the result is always the same. How do I change the query in a way that meets my requirement? Any suggestion would be appreciated. Thanks. **Update:** I can get the whole list of items and then select from my first item to the last item via a `for(;;)` loop. But I want to know: is it possible to do a similar thing via SQL? I guess that way is more efficient.
I found my answer here: [stackoverflow.com/a/8242764/513413](https://stackoverflow.com/a/8242764/513413) For others' reference, I changed my query based on what Joel and the above link said: ``` $query = "select places.title, places.summary, places.description, places.year, places.date from places where places.eventid = $eventId order by places.date limit $numberOfItems offset $firstItem"; ```
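Whichever query shape is used, the URL parameters should be bound rather than interpolated into the SQL string (the interpolated version above is open to injection, as the other answer warns). A parameterized sketch with SQLite and simplified stand-in columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE places (eventid INT, title TEXT, date TEXT);
INSERT INTO places VALUES
  (1,'p0','2013-01-01'),(1,'p1','2013-01-02'),(1,'p2','2013-01-03'),
  (1,'p3','2013-01-04'),(2,'other','2013-01-01');
""")

def get_place_list(event_id, first_item, number_of_items):
    # bound parameters keep user input out of the SQL text entirely
    return conn.execute("""
        SELECT title FROM places
        WHERE eventid = ?
        ORDER BY date
        LIMIT ? OFFSET ?
    """, (event_id, number_of_items, first_item)).fetchall()

page = get_place_list(1, 2, 2)  # skip the first 2 rows, take the next 2
```

The `ORDER BY` matters: without a stable ordering, consecutive pages are not guaranteed to line up.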
I would construct the final query to be like this: ``` SELECT <column list> /* never use "select *" in production! */ FROM Table WHERE p1ColumnName >= p2FirstItem ORDER BY p1ColumnName LIMIT p3NumberOfItems ``` The `ORDER BY` is important; according to my reading of the [documentation](http://www.postgresql.org/docs/8.1/static/queries-limit.html), PostGreSql won't guarantee which rows you get without it. I know Sql Server works much the same way. Beyond this, I'm not sure how to build this query safely. You'll need to be *very careful* that your method for constructing that sql statement does not leave you vulnerable to sql injection attacks. At first glance, it looks like I could very easily put a malicious value into your column name url parameter.
SQL, how to pick portion of list of items via SELECT, WHERE clauses?
[ "sql", "where-clause", "postgresql-8.4" ]
These two `SQL` syntaxes produce the same result; which one is better to use and why? 1st: ``` SELECT c.Id,c.Name,s.Id,s.Name,s.ClassId FROM dbo.ClassSet c,dbo.StudentSet s WHERE c.Id=s.ClassId ``` 2nd: ``` SELECT c.Id,c.Name,s.Id,s.Name,s.ClassId FROM dbo.ClassSet c JOIN dbo.StudentSet s ON c.Id=s.ClassId ```
**The 2nd one is better**. The way you're joining in the first query is considered outdated. Avoid using `,` and use `JOIN`. "In terms of precedence, a JOIN's `ON` clause happens before the `WHERE` clause. This allows things like a `LEFT JOIN b ON a.id = b.id WHERE b.id IS NULL` to check for cases where there is NOT a matching row in b." "Using , notation is similar to processing the `WHERE` and ON conditions at the same time" Found the details about it here, [MySQL - SELECT, JOIN](https://stackoverflow.com/questions/8720833/mysql-select-join) Read more about SQL standards <http://en.wikipedia.org/wiki/SQL-92>
``` SELECT c.Id,c.Name,s.Id,s.Name,s.ClassId FROM dbo.ClassSet c JOIN dbo.StudentSet s ON c.Id=s.ClassId ``` Without any doubt the above one is better when comparing to your first one. In the precedence table "ON" sits second and "WHERE" is fourth. But for a simpler query you don't want to break your head like this; at the project level "JOIN" is always recommended.
Proper way to write a SQL syntax?
[ "sql" ]
I have a table with DateTime and Value columns. I need to select the max value from the last (newest) day. The best I could come up with is a 3-step process: 1. `SELECT MAX(DateTime) FROM MyTable` 2. get rid of the time part in the datetime, store the date 3. `SELECT MAX(Value) FROM MyTable WHERE DateTime>date` Is there a better way to do this?
Your three steps in one query would be ``` SELECT MAX(Value) FROM MyTable WHERE DateTime >= CAST( (SELECT MAX(DateTime)FROM MyTable) AS DATE) ``` Now finding the max date could be quite an expensive query, so if you're actually after yesterday's max value then you should use `CURRENT_DATE` instead, ie ``` WHERE DateTime >= ( CURRENT_DATE - 1 ) AND DateTime < CURRENT_DATE ```
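With ISO-8601 date strings the same two-step logic can be tried in SQLite, whose `date()` plays the role of the `CAST(... AS DATE)` above (Firebird syntax differs; this is only a stand-in with made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (DateTime TEXT, Value INT);
INSERT INTO MyTable VALUES
  ('2013-05-01 09:00',10),('2013-05-02 08:00',7),('2013-05-02 17:30',12);
""")

# max Value among rows falling on the newest day present in the table
top = conn.execute("""
    SELECT MAX(Value) FROM MyTable
    WHERE DateTime >= (SELECT date(MAX(DateTime)) FROM MyTable)
""").fetchone()[0]
```

`date(MAX(DateTime))` truncates the newest timestamp to midnight, so both of 2013-05-02's rows qualify and the earlier day's row does not.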
If you mean the highest value of **today**, then you can use: ``` SELECT MAX(value) FROM MyTable WHERE CAST(DateTime AS DATE) = CURRENT_DATE ```
SQL select MAX value from last day
[ "sql", "firebird" ]
I'm trying to do an inner join between 3 tables in an ERD I created. I've successfully built 3 - 3 layer sub-queries using these tables, and when I researched this issue, I can say that in my DDL, I didn't use double quotes so the columns aren't case sensitive. Joins aren't my strong suit so any help would be much appreciated. This is the query I'm putting in, and the error it's giving me. All of the answers I've seen for inner joins use the 'INNER JOIN' syntax, but I wasn't taught this. Is my method still okay? ``` SQL> SELECT regional_lot.location, rental_agreement.vin, rental_agreement.contract_ID FROM regional_lot, rental_agreement WHERE regional_lot.regional_lot_id = vehicle1.regional_lot_ID AND vehicle1.vin = rental_agreement.vin; * ERROR at line 1: ORA-00904: "VEHICLE1"."VIN": invalid identifier ```
For starters, you don't have `vehicle1` in your `FROM` list. You should give ANSI joins a try. For one, they're a lot more readable and you don't pollute your `WHERE` clause with join conditions ``` SELECT regional_lot.location, rental_agreement.vin, rental_agreement.contract_ID FROM rental_agreement INNER JOIN vehicle1 ON rental_agreement.vin = vehicle1.vin INNER JOIN regional_lot ON vehicle1.regional_lot_ID = regional_lot.regional_lot_id; ```
You forgot to add table vehicle1 to the "from" section of your query: ``` from regional_lot, rental_agreement, vehicle1 ```
ORA-00904 INVALID IDENTIFIER with Inner Join
[ "sql", "oracle10g", "inner-join", "ora-00904" ]
Assume tables ``` team: id, title team_user: id_team, id_user ``` I'd like to select teams with only the specified members. In this example I want team(s) where the only users are those with id 1 and 5, no one else. I came up with this SQL, but it seems to be a little overkill for such a simple task. ``` SELECT team.*, COUNT(`team_user`.id_user) AS cnt FROM `team` JOIN `team_user` user0 ON `user0`.id_team = `team`.id AND `user0`.id_user = 1 JOIN `team_user` user1 ON `user1`.id_team = `team`.id AND `user1`.id_user = 5 JOIN `team_user` ON `team_user`.id_team = `team`.id GROUP BY `team`.id HAVING cnt = 2 ``` EDIT: Thank you all for your help. If you want to actually try your ideas, you can use the example database structure and data found here: <http://down.lipe.cz/team_members.sql>
How about ``` SELECT * FROM team t JOIN team_user tu ON (tu.id_team = t.id) GROUP BY t.id HAVING (SUM(tu.id_user IN (1,5)) = 2) AND (SUM(tu.id_user NOT IN (1,5)) = 0) ``` I'm assuming a unique index on team\_user(id\_team, id\_user).
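The boolean-sum trick is easy to sanity-check: in MySQL (and in SQLite, used here as a stand-in with made-up teams) a comparison like `id_user IN (1,5)` evaluates to 0 or 1, so `SUM` counts matching rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE team (id INT, title TEXT);
INSERT INTO team VALUES (1,'pair'),(2,'trio'),(3,'other');
CREATE TABLE team_user (id_team INT, id_user INT);
INSERT INTO team_user VALUES (1,1),(1,5),(2,1),(2,5),(2,9),(3,5);
""")

# teams whose member set is exactly {1, 5}:
# both wanted members present, and zero unwanted members
rows = conn.execute("""
    SELECT t.id
    FROM team t
    JOIN team_user tu ON tu.id_team = t.id
    GROUP BY t.id
    HAVING SUM(tu.id_user IN (1,5)) = 2
       AND SUM(tu.id_user NOT IN (1,5)) = 0
""").fetchall()
```

Team 2 has both members 1 and 5 but also member 9, so the second `SUM` disqualifies it; only team 1 survives.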
You can use ``` SELECT DISTINCT id, COUNT(tu.id_user) as cnt FROM team t JOIN team_user tu ON ( tu.id_team = t.id ) GROUP BY t.id HAVING count(tu.user_id) = count( CASE WHEN tu.user_id = 1 or tu.user_id = 5 THEN 1 ELSE 0 END ) AND cnt = 2 ``` Not sure why you'd need the `cnt = 2` condition, the query would get only those teams where all of users having the `ID` of either `1 or 5`
(My)SQL JOIN - get teams with exactly specified members
[ "mysql", "sql" ]
**Problem** I need to better understand the rules about when I can reference an outer table in a subquery and when (and why) that is an inappropriate request. I've discovered a duplication in an Oracle SQL query I'm trying to refactor but I'm running into issues when I try and turn my referenced table into a grouped subQuery. The following statement works appropriately: ``` SELECT t1.* FROM table1 t1, INNER JOIN table2 t2 on t1.id = t2.id and t2.date = (SELECT max(date) FROM table2 WHERE id = t1.id) --This subquery has access to t1 ``` Unfortunately table2 sometimes has duplicate records so I need to aggregate t2 first before I join it to t1. However when I try and wrap it in a subquery to accomplish this operation, suddenly the SQL engine can't recognize the outer table any longer. ``` SELECT t1.* FROM table1 t1, INNER JOIN (SELECT * FROM table2 t2 WHERE t1.id = t2.id --This loses access to t1 and t2.date = (SELECT max(date) FROM table2 WHERE id = t1.id)) sub on t1.id = sub.id --Subquery loses access to t1 ``` I know these are fundamentally different queries I'm asking the compiler to put together but I'm not seeing why the one would work but not the other. I know I can duplicate the table references in my subquery and effectively detach my subquery from the outer table but that seems like a really ugly way of accomplishing this task (what with all the duplication of code and processing). **Helpful References** * I found this fantastic description of the order in which clauses are executed in SQL Server: ([INNER JOIN ON vs WHERE clause](https://stackoverflow.com/questions/1018822/inner-join-on-vs-where-clause/1944492#1944492)). I'm using Oracle but I would think that this would be standard across the board. There is a clear order to clause evaluation (with FROM being first) so I would think that any clause occurring further down the list would have access to all information previously processed. 
I can only assume my 2nd query somehow changes that ordering so that my subquery is being evaluated too early? * In addition, I found a similar question asked ([Referencing outer query's tables in a subquery](https://stackoverflow.com/questions/2645485/referencing-outer-querys-tables-in-a-subquery) ) but while the input was good they never really explained why he couldn't do what he is doing and just gave alternative solutions to his problem. I've tried their alternate solutions but it's causing me other issues. Namely, that subquery with the date reference is fundamental to the entire operation so I can't get rid of it. **Questions** * I want to understand what I've done here... Why can my initial subquery see the outer table but not after I wrap the entire statement in a subquery? * That said, if what I'm trying to do can't be done, what is the best way of refactoring the first query to eliminate the duplication? Should I reference table1 twice (with all the duplication that requires)? Or is there (probably) a better way of tackling this problem? Thanks in advance! **------EDIT------** As some have surmised these queries above are not the actual query I'm refactoring but an example of the problem I'm running into. The query I'm working with is a lot more complicated so I'm hesitant to post it here as I'm afraid it will get people off track. **------UPDATE------** So I ran this by a fellow developer and he had one possible explanation for why my subquery is losing access to t1. Because I'm wrapping this subquery in a parenthesis, he thinks that this subquery is being evaluated before my table t1 is being evaluated. This would definitely explain the 'ORA-00904: "t1"."id": invalid identifier' error I've been receiving. It would also suggest that, like arithmetic order of operations, adding parens to a statement gives it priority within certain clause evaluations. 
I would still love for an expert to weigh in on whether they agree or disagree that this is a logical explanation for what I'm seeing here.
So I figured this out based on the comment that Martin Smith made above (THANKS MARTIN!) and I wanted to make sure I shared my discovery for anyone else who trips across this issue. **Technical Considerations** Firstly, it would certainly help if I used the proper terminology to describe my problem: My first statement above uses a **correlated subquery**: * <http://en.wikipedia.org/wiki/Correlated_subquery> * <http://www.programmerinterview.com/index.php/database-sql/correlated-vs-uncorrelated-subquery/> This is actually a fairly inefficient way of pulling back data as it reruns the subquery for every line in the outer table. For this reason I'm going to look for ways of eliminating these types of subqueries in my code: * <https://blogs.oracle.com/optimizer/entry/optimizer_transformations_subquery_unesting_part_1> My second statement, on the other hand, was using what is called an **inline view** in Oracle, also known as a **derived table** in SQL Server: * <http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries007.htm> * <http://www.programmerinterview.com/index.php/database-sql/derived-table-vs-subquery/> An inline view / derived table creates a temporary unnamed view at the beginning of your query and then treats it like another table until the operation is complete. Because the compiler needs to create a temporary view when it sees one of these subqueries on the FROM line, *those subqueries must be entirely self-contained* with no references outside the subquery. **Why what I was doing was stupid** What I was trying to do in that second statement was essentially create a view based on an ambiguous reference to another table that was outside the knowledge of my statement. It would be like trying to reference a field in a table that you hadn't explicitly stated in your query. **Workaround** Lastly, it's worth noting that Martin suggested a fairly clever but ultimately inefficient way to accomplish what I was trying to do. 
The APPLY operator is proprietary SQL Server syntax, but it allows you to talk to objects outside of your derived table: * <http://technet.microsoft.com/en-us/library/ms175156(v=SQL.105).aspx> Likewise, this functionality is available in Oracle through different syntax: * [What is the equivalent of SQL Server APPLY in Oracle?](https://stackoverflow.com/questions/1476191/what-is-the-equivalent-of-sql-server-apply-in-oracle) Ultimately I'm going to re-evaluate my entire approach to this query, which means I'll have to rebuild it from scratch (believe it or not I didn't create this monstrosity originally - I swear!). **A big thanks to everyone who commented** - this was definitely stumping me but all of the input helped put me on the right track!
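For anyone landing here later, the APPLY rewrite mentioned above can be sketched against the simplified `table1`/`table2` schema from the question; this is an illustrative sketch only, not the original production query (SQL Server syntax; recent Oracle releases accept a similar CROSS APPLY / LATERAL form):

```sql
-- The derived table after APPLY may reference t1, because APPLY
-- evaluates the inner query once per outer row.
SELECT t1.*
FROM table1 t1
CROSS APPLY (
    SELECT MAX(t2.date) AS max_date
    FROM table2 t2
    WHERE t2.id = t1.id
) latest
INNER JOIN table2 t2
    ON t2.id = t1.id
   AND t2.date = latest.max_date
```

Like the correlated subquery, this can still be expensive at scale, which is why a windowed-ranking rewrite is often preferred.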
How about the following query: ``` SELECT t1.* FROM ( SELECT * FROM ( SELECT t2.id, RANK() OVER (PARTITION BY t2.id, t2.date ORDER BY t2.date DESC) AS R FROM table2 t2 ) WHERE R = 1 ) sub INNER JOIN table1 t1 ON t1.id = sub.id ```
SQL - Relationship between a SubQuery and an Outer Table
[ "", "sql", "subquery", "correlated-subquery", "derived-table", "inline-view", "" ]
In MySQL, I have the following: Structure `Table`: ``` id(int primary key) name(varchar 100 unique) ``` Values: ``` id name 1 test 2 test1 ``` I have two queries: 1) `SELECT count(*) FROM Table WHERE name='test'` 2) if the `count` of selected rows == 0, run a second query: `INSERT INTO Table (name) VALUES ('test')` I know I could do it like this: ``` $res = mysql(SELECT count(*) as count FROM Table WHERE name='test'); // where the mysql function runs a query against the db $i = $res -> fetch_assoc(); if($i['count'] < 1 ){$res = mysql(INSERT INTO Table (name) VALUES ('test'));} ``` But I would like to know how to do this with a single query. How can I combine these two queries into one?
Assuming your `name` column has a `UNIQUE` constraint, just add `IGNORE` to the `INSERT` statement. ``` INSERT IGNORE INTO Table (name) VALUES ('test') ``` This will skip the insertion if a record already exists for a particular value and return 0 affected rows. Note that a primary key is also considered a `UNIQUE` constraint. If the `name` column doesn't have such a constraint, I would advice that you add one: ``` ALTER TABLE `Table` ADD UNIQUE(name) ``` See also the documentation for [`INSERT`](http://dev.mysql.com/doc/refman/5.5/en/insert.html)
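A closely related pattern, in case you later need to update the existing row rather than silently skip it: `INSERT ... ON DUPLICATE KEY UPDATE` relies on the same unique constraint (sketch using the question's table and column):

```sql
-- No-op update when 'test' already exists; plain insert otherwise.
INSERT INTO `Table` (name) VALUES ('test')
ON DUPLICATE KEY UPDATE name = VALUES(name);
```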
You can do it with a simple trick, like this: ``` insert into Table1(name) select 'test' from dual where not exists(select 1 from Table1 where name='test'); ``` This will even work if you do not have a primary key on this column. Explanation: `DUAL` is a special dummy table that is only referenced here to enable the `WHERE` clause. You would not be able to have a statement without a FROM clause (like `select 'test' where not exists(select 1 from Table1 where name='test')`) as it will be incomplete.
How make insert if select rows == '0' in one query?
[ "", "mysql", "sql", "" ]
I'm trying to write a SQL query that will sum total production from the following two example tables: ``` Table: CaseLots DateProduced kgProduced October 1, 2013 10000 October 1, 2013 10000 October 2, 2013 10000 Table: Budget OperatingDate BudgetHours October 1, 2013 24 October 2, 2013 24 ``` I would like to output a table as follows: ``` TotalProduction TotalBudgetHours 30000 48 ``` Here is what I have for code so far: ``` SELECT Sum(kgProduced) AS TotalProduction, Sum(BudgetHours) AS TotalBudgetHours FROM dbo.CaseLots INNER JOIN dbo.Budget ON dbo.CaseLots.DateProduced = dbo.Budget.OperatingDate WHERE dbo.Budget.OperatingDate BETWEEN '2013-10-01' AND '2013-10-02' ``` It seems that the query is double summing the budget hour in instances where more than one case lot is produced in a day. The table I'm getting is as follows: ``` Total Production BudgetHours 30000 72 ``` How do I fix this?
Think about what the INNER JOIN is doing. For every row in CaseLots, it's finding any row in Budget that has a matching date. If you were to remove your aggregation statements in SQL, and just show the inner join, you would see the following result set: ``` DateProduced | kgProduced | OperatingDate | BudgetHours October 1, 2013 | 10000 | October 1, 2013 | 24 October 1, 2013 | 10000 | October 1, 2013 | 24 October 2, 2013 | 10000 | October 2, 2013 | 24 ``` Running your aggregation on top of that, it is easy to see how you get the 72 hours in your result. The correct query needs to aggregate the CaseLots table first, then join onto the Budget table. ``` SELECT DateProduced, TotalKgProduced, SUM(BudgetHours) AS TotalBudgetHours FROM ( SELECT DateProduced, SUM(kgProduced) AS TotalKgProduced FROM CaseLots GROUP BY DateProduced ) AS TotalKgProducedByDay INNER JOIN Budget ON TotalKgProducedByDay.DateProduced = Budget.OperatingDate WHERE DateProduced BETWEEN '1 Oct 2013' AND '2 Oct 2013' GROUP BY DateProduced, TotalKgProduced ```
The problem is in the `INNER JOIN` produces a 3 row table since the keys match on all. So there is three '24's with a sum of 72. To fix this, it would probably be easier to split this into two queries. ``` SELECT Sum(kgProduced) AS TotalProduction FROM dbo.CaseLots WHERE dbo.CaseLots.OperatingDate BETWEEN '2013-10-01' AND '2013-10-02' LEFT JOIN SELECT Sum(BudgetHours) AS TotalBudgetHours FROM dbo.Budget WHERE dbo.Budget.OperatingDate BETWEEN '2013-10-01' AND '2013-10-02' ```
SQL Sum with non unique date
[ "", "sql", "" ]
I have a table structure like this : ``` ID commission transdate token 1 10 2013-11-22 08:24:00 token1 2 10 2013-11-22 08:24:00 token1 3 10 2013-11-22 08:24:00 token1 4 10 2013-11-22 08:24:00 token1 5 10 2013-11-22 08:24:00 token1 6 29 2013-11-22 06:24:00 token2 7 29 2013-11-22 06:24:00 token2 ``` The thing is that I have duplicate entries in my table for some data and I need to filter it out while taking the SUM. If the transdate is same and the token is same, I only need to consider one entry for the sum. The sum I need is sum of all the commission for all token grouped by date. How can I do this through query ?
Use [`DATE`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date) with [`GROUP BY`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html): ``` SELECT SUM(comission), token, DATE(transdate) FROM t GROUP BY token, DATE(transdate) ```
try below query, To consider only one entry for date for a particular token: ``` SELECT commission, date(transdate), token FROM tokens GROUP BY date(transdate), token; ``` To get a sum of commission for date ``` SELECT SUM(commission), DATE(transdate), token FROM tokens GROUP BY DATE(transdate), token; ``` [Sql Fiddle Example](http://sqlfiddle.com/#!2/614c3/8)
Sum of unique entries per day
[ "", "mysql", "sql", "group-by", "" ]
I am using perl and DBI to query a mysql table. I need to retrieve all rows (approx. 75,000 rows from 3 separate databases) within the past 24 hours, ideally between 12:00 am and 11:59 pm or 00:00:00 and 23:59:59. I was using a `WHERE` date condition like this: ``` SELECT * FROM table_name WHERE insert_date >= DATE_SUB(NOW(), INTERVAL 1 DAY); ``` Then I would run my script at midnight using cron. This worked well enough, but due to a regular large volume of traffic at midnight and the size of the queries, the execution time scheduled with cron is now 3:00 am. I changed my SQL to try and get the same 24 hour period from an offset like this: ``` SELECT * FROM table_name WHERE insert_date BETWEEN DATE_SUB(DATE_SUB(NOW(), INTERVAL 3 HOUR), INTERVAL 1 DAY) AND DATE_SUB(NOW(), INTERVAL 3 HOUR); ``` This seems to work fine for my purposes but I want to ask, is there a more readable and more accurate way, using mysql, to get all rows from the past 24 hours (the 00:00:00 to 23:59:59 time window) once a day while running the query from an offset time? I am generally new to all of this so any critiques on my overall approach are more than welcome.
I presume `insert_date` is a `DATETIME`? It seems pointless to go to all the trouble of building two limits and using `BETWEEN`. I would simply check that `DATE(insert_date)` is yesterday's date. So ``` WHERE DATE(insert_date) = CURDATE() - INTERVAL 1 DAY ```
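One caveat with the answer above: wrapping the column in `DATE()` prevents MySQL from using an index on `insert_date`. If that matters at ~75,000 rows, an equivalent sargable half-open range is (a sketch, assuming `insert_date` is a `DATETIME`):

```sql
-- Half-open range over "yesterday": index-friendly, no function on the column.
SELECT *
FROM table_name
WHERE insert_date >= CURDATE() - INTERVAL 1 DAY  -- yesterday 00:00:00
  AND insert_date <  CURDATE();                  -- up to, not including, today
```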
``` BETWEEN DATE_FORMAT(DATE_SUB(NOW(), INTERVAL 1 DAY), "%Y-%M-%d 00:00:00") AND DATE_FORMAT(DATE_SUB(NOW(), INTERVAL 1 DAY), "%Y-%M-%d 23:59:59") ``` You could also use Perl date formatting functions to produce the same date-time strings, and interpolate them into the query.
date condition to retrieve data in a 24 hour time window mysql
[ "", "mysql", "sql", "perl", "cron", "" ]
This is my current table with some rows in it: ``` CREATE TABLE `user_versus` ( `id_user_versus` int(11) NOT NULL AUTO_INCREMENT, `id_user_winner` int(10) unsigned NOT NULL, `id_user_loser` int(10) unsigned NOT NULL, `id_user` int(10) unsigned NOT NULL, `date_versus` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (`id_user_versus`), KEY `id_user_winner` (`id_user_winner`,`id_user_loser`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=14 ; INSERT INTO `user_versus` (`id_user_versus`, `id_user_winner`, `id_user_loser`, `id_user`, `date_versus`) VALUES (1, 6, 7, 1, '2013-10-25 23:02:57'), (2, 6, 8, 1, '2013-10-25 23:02:57'), (3, 6, 9, 1, '2013-10-25 23:03:04'), (4, 6, 10, 1, '2013-10-25 23:03:04'), (5, 6, 11, 1, '2013-10-25 23:03:10'), (6, 6, 12, 1, '2013-10-25 23:03:10'), (7, 6, 13, 1, '2013-10-25 23:03:18'), (8, 6, 14, 1, '2013-10-25 23:03:18'), (9, 7, 6, 2, '2013-10-26 04:02:57'), (10, 8, 6, 2, '2013-10-26 04:02:57'), (11, 9, 8, 2, '2013-10-26 04:03:04'), (12, 9, 10, 2, '2013-10-26 04:03:04'), (13, 9, 11, 2, '2013-10-26 04:03:10'); ``` Now, I want to insert or replace the last row with this query: ``` REPLACE INTO user_versus (id_user_winner, id_user_loser, id_user) VALUES (9, 14, 2) ``` When I do that, instead of replacing the last row (that holds those exact values) it is adding a new row. Why? Thanks!
This is because you are not specifying the primary key, which is `id_user_versus` in your insert. Perhaps you intend to have a unique key (rather than just a standard key) across `id_user_winner` and `id_user_loser` in which case your replace would work to replace the row with those values.
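If the winner/loser pair is what should define a duplicate, adding the unique key described above could look like this (the index name `uq_winner_loser` is made up; pick whatever fits your conventions):

```sql
-- After this, REPLACE (keyed on the winner/loser pair) will delete the
-- old row and insert the new one instead of appending a duplicate.
ALTER TABLE user_versus
  ADD UNIQUE KEY uq_winner_loser (id_user_winner, id_user_loser);
```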
`REPLACE` uses `UNIQUE CONSTRAINT`'s (unique indexes) to determine which rows are 'duplicate'. In your table the only unique constraint is the PRIMARY KEY, and it is not specified in the `INSERT`.
Using REPLACE in MySQL is not actually replacing row
[ "", "mysql", "sql", "replace", "" ]
I have a table containing an "ID" column. Different numeric ids from 1000 to 9999 are present in this column (1000, 1001, ..., 5000, 5001, ..., 8000, 8001, ..., 9000, ..., 9999). Now I need a "**generic**" query which will give me the distinct starting number of each id series present in this column. For example, the desired output: ``` 1000, 5000, 8000, 9000 ``` If a 7000 series is added to the ID column in the future, then the output of the same query should include 7000 as well, like ``` 1000,5000,7000,8000,9000 ```
Try This ``` SELECT min(id) from tablename group by SUBSTRING(id, 1, 1) ```
Another way to do it if your series start with every thousand ``` SELECT id FROM table1 WHERE id % 1000 = 0 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/3ae01/2)** demo --- Now here is a generic way which treats every "island" in id values (a sequence of values where next = prev + 1) as a series. This way a series can start and stop with any value ``` SELECT t1.id FROM table1 t1 LEFT JOIN table1 t2 ON t1.id = t2.id + 1 WHERE t2.id IS NULL ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/e0693/2)** demo
Need SQL Query for accessing different serial numbers present in a table
[ "", "mysql", "sql", "sql-server", "" ]
I am trying to get a day name like Friday, Saturday, Sunday, Monday, etc. from a given date. I know there is a built-in function which returns the day name, for example: ``` SELECT DATENAME(dw,'09/23/2013') as theDayName ``` this SQL query returns: **'Monday'** This is all OK. But I would like to pass `Month, Day and Year` individually. I am using the built-in DATEPART function to retrieve month, day and year from a date so I can pass them to the DATENAME function: ``` SELECT DATEPART(m, GETDATE()) as theMonth -- returns 11 SELECT DATEPART(d, GETDATE()) as theDay -- returns 20 SELECT DATEPART(yy, GETDATE()) as theYear -- returns 2013 ``` Now that I have Month, Day, Year values individually, I pass them to `DATENAME` to get the `Weekname` of the date I want: ``` --my SQL query to return dayName SELECT (DATENAME(dw, DATEPART(m, GETDATE())/DATEPART(d, myDateCol1)/ DATEPART(yy, getdate()))) as myNameOfDay, FirstName, LastName FROM myTable ``` This returns an incorrect Day Name. I tried replacing / with - so that in the DATENAME function my SQL query becomes: ``` SELECT DATENAME(dw,'09/23/2013') --becomes SELECT DATENAME(dw,'09-23-2013') ``` but it still returns an incorrect dayName from my SQL query. Am I missing something here? Please advise.
You need to construct a date string. You're using `/` or `-` operators which do MATH/numeric operations on the numeric return values of DATEPART. Then `DATENAME` is taking that numeric value and interpreting it as a date. You need to convert it to a string. For example: ``` SELECT ( DATENAME(dw, CAST(DATEPART(m, GETDATE()) AS VARCHAR) + '/' + CAST(DATEPART(d, myDateCol1) AS VARCHAR) + '/' + CAST(DATEPART(yy, getdate()) AS VARCHAR)) ) ```
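On SQL Server 2012 and later, `DATEFROMPARTS` avoids both the string building and any regional date-format ambiguity; a sketch with the question's column names, if your server version allows it:

```sql
SELECT DATENAME(dw,
           DATEFROMPARTS(DATEPART(yy, GETDATE()),  -- year
                         DATEPART(m,  GETDATE()),  -- month
                         DATEPART(d,  myDateCol1)) -- day
       ) AS myNameOfDay,
       FirstName,
       LastName
FROM myTable
```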
Tested and works on SQL 2005 and 2008. Not sure if this works in 2012 and later. The solution uses DATENAME instead of DATEPART ``` select datename(dw,getdate()) --Thursday select datepart(dw,getdate()) --2 ``` This is work in sql 2014 also.
Get week day name from a given month, day and year individually in SQL Server
[ "", "sql", "sql-server", "function", "date", "" ]
Imagine we have a table like this: ``` id value 1 a 2 b 3 a 4 a 5 b ``` Query like this ``` SELECT * , COUNT( * ) FROM test GROUP BY value ``` gives us a table like this: ``` id value COUNT(*) 1 a 3 2 b 2 ``` which tells us that there are three 'a' and two 'b' in our table. The question is: is it possible to make a query (without nested SELECT's), which would yield a table like ``` id value count_in_col 1 a 3 2 b 2 3 a 3 4 a 3 5 b 2 ``` The goal is to avoid collapsing columns and to add quantity of 'value' elements in the whole column to each row.
Yes, it's possible to return the specified resultset using only a single SELECT keyword. ``` SELECT t.id , t.value , COUNT(DISTINCT u.id) AS count_in_col FROM mytable t JOIN mytable u ON u.value = t.value GROUP BY t.id ``` to setup test case: ``` CREATE TABLE `mytable` (`id` INT, `value` VARCHAR(1)); INSERT INTO `mytable` VALUES (1,'a'), (2,'b'),(3,'a'),(4,'a'),(5,'b'); ``` returns: ``` id value count_in_col ------ ------ -------------- 1 a 3 2 b 2 3 a 3 4 a 3 5 b 2 ``` NOTE: This assumes that `id` is unique in the table, as would be enforced by a primary key or unique key constraint. In terms of performance, depending on cardinality, an index ... `ON (value,id)` may improve performance. This approach (using a JOIN to match rows on the `value` column) does have the potential to produce a very large intermediate resultset, if there are a "lot" of rows that match on `value`. For example, if there are 1,000 rows with `value='a'`, the intermediate resultset for those rows will be 1,000\*1,000 = 1,000,000 rows. Adding a predicate (in the ON clause) may also improve performance, but reducing the number of rows in the intermediate result. ``` ON u.value = t.value AND u.id >= t.id ``` (There's no real magic; the "trick" is to use `COUNT(DISTINCT id)` to avoid the same `id` value from being counted more than once.)
I don't think that is possible without a subquery to count the number of occurrences of the different values: ``` SELECT a.*,b.valCount FROM test a INNER JOIN ( SELECT value,COUNT(*) AS valCount FROM test GROUP BY value ) b ON b.value = a.value ORDER BY a.id; ``` `sqlfiddle demo`
mysql - how to display count of a value in an extra column
[ "", "mysql", "sql", "" ]
Environment: SQL Server 2012 I have a transaction table that contains a group of records with a group id column. In the example illustrated below, group 2 records were copied from group 1 records, except for the sideId, sideSort, topId and topSort columns. Is there a way to cascade that down from group 1 to group 2 `for just topSort and sideSort`? The hard part is that topId and sideId aren't the same because of Identity fields in parent tables. Here is a [sqlfiddle of the example](http://sqlfiddle.com/#!6/328c0/29) ![enter image description here](https://i.stack.imgur.com/D7wep.png)
Assuming that `Id` is an order field in the Group: ``` WITH C as ( select Tracker.*, ROW_NUMBER() OVER (PARTITION BY GroupId ORDER BY Id) as RN FROM Tracker ) UPDATE CT SET CT.topSort=CTU.topSort, CT.sideSort=CTU.sideSort FROM C as CT JOIN C as CTU ON (CT.rn=CTU.rn) and (CTU.GroupID=1) WHERE CT.GroupID=2 ``` `SQLFiddle demo`
You can do it this way. This is based on the assumption that you copy all the records for one group (`@copiedFromGroupId`) to another (`@copiedToGroupId`), i.e. that the ID's will be shifted by the number of records in the first group. ``` declare @copiedFromGroupId int = 1 declare @copiedToGroupId int = 2 declare @shift int select @shift = (select max(id) from Tracker where GroupId = @copiedToGroupId) - (select max(id) from Tracker where GroupId = @copiedFromGroupId) UPDATE T1 SET T1.TopSort = T2.TopSort, T1.SideSort = T2.SideSort FROM Tracker T1 INNER JOIN Tracker T2 ON T1.ID = T2.ID + @shift ``` Check this [SQL Fiddle](http://sqlfiddle.com/#!6/328c0/37)
sql query - update fields in existing with record values in the same table
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am writing this statement in MS ACCESS ``` SELECT tblTrades.Tick, Sum(tblBbgDivData.BBG_Div_Forecast) AS SumOfBBG_Div_Forecast, tblBbgDivData.Div_Declared_Date FROM tblBbgDivData INNER JOIN tblTrades ON tblBbgDivData.Tick = tblTrades.Tick GROUP BY tblTrades.Tick HAVING (((tblBbgDivData.Div_Declared_Date) Between [tblTrades].[vd_off] And [tblTrades].[vd_on])); ``` --> you tried to execute a query that does not include the specified expression 'Div\_Declared\_Date' as an aggregate function I have tried to change the <> with Between, same thing!
You can see your problems highlighted below. From the [documentation](http://office.microsoft.com/en-us/access-help/group-by-clause-HA001231482.aspx): > Summary values are omitted if there is no SQL aggregate function in > the SELECT statement. > > Null values in GROUP BY fields are grouped and are not omitted. > However, Null values are not evaluated in any SQL aggregate function. > > **Use the WHERE clause to exclude rows you do not want grouped, and use > the HAVING clause to filter records after they have been grouped.** > > Unless it contains Memo or OLE Object data, a field in the GROUP BY > field list can refer to any field in any table listed in the FROM > clause, even if the field is not included in the SELECT statement, > provided the SELECT statement includes at least one SQL aggregate > function. The Microsoft Access database engine cannot group on Memo or > OLE Object fields. > > **All fields in the SELECT field list must either be included in the > GROUP BY clause or be included as arguments to an SQL aggregate > function.** So your query should be: ``` SELECT tblTrades.Tick, Sum(tblBbgDivData.BBG_Div_Forecast) AS SumOfBBG_Div_Forecast, tblBbgDivData.Div_Declared_Date FROM tblBbgDivData INNER JOIN tblTrades ON tblBbgDivData.Tick = tblTrades.Tick WHERE (((tblBbgDivData.Div_Declared_Date) BETWEEN [tblTrades].[vd_off] AND [tblTrades].[vd_on])) GROUP BY tblTrades.Tick, tblBbgDivData.Div_Declared_Date ```
Having is used for checking aggregated columns (after aggregation). So if you wanted to know that the sum of tblBbgDivData.BBG\_Div\_Forecast was greater than 500, you would use `HAVING SUM(tblBbgDivData.BBG_Div_Forecast) > 500` You need to move it to your where clause: ``` SELECT tblTrades.Tick, Sum(tblBbgDivData.BBG_Div_Forecast) AS SumOfBBG_Div_Forecast, tblBbgDivData.Div_Declared_Date FROM tblBbgDivData INNER JOIN tblTrades ON tblBbgDivData.Tick = tblTrades.Tick WHERE (((tblBbgDivData.Div_Declared_Date) Between [tblTrades].[vd_off] And [tblTrades].[vd_on])) GROUP BY tblTrades.Tick ; ```
Aggregate function
[ "", "sql", "ms-access", "" ]
I have the feeling this should be really simple but can't get my head around it, I'd appreciate any advice on the following (which I've simplified to focus on what I can't do). I'm using Azure SQL. I have a table of results (called testanswers) as follows: ``` id | PupilId | NumMark ---------------------- 1 | 10 | 1 2 | 20 | 2 3 | 10 | 2 4 | 20 | 0 5 | 10 | 1 6 | 20 | 2 ``` I would like to count the number of pupils (i.e. distinct PupilId) who have achieved a number of marks between two boundaries. I tried: ``` SELECT COUNT(DISTINCT PupilId) FROM testanswers WHERE SUM(NumMark) >= 2 AND SUM(NumMark) <= 5 ``` The error message says > An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference I'd appreciate any suggestions about how to code this query correctly. Thanks, Steph
To find the `SUM(NumMArk)` for every pupil, you have to `GROUP BY PupilID`: ``` SELECT PupilId, SUM(NumMark) FROM testanswers GROUP BY PupilId ; ``` To then keep only the rows where this sum is between `2` and `5`, you can use `HAVING`: ``` SELECT PupilId, SUM(NumMark) FROM testanswers GROUP BY PupilId HAVING SUM(NumMark) >= 2 AND SUM(NumMark) <= 5 ; ``` If you then want to count how many pupils pass this condition (and not show their IDs), you have to wrap the above in another query: ``` SELECT COUNT(*) AS result FROM ( SELECT PupilId FROM testanswers GROUP BY PupilId HAVING SUM(NumMark) BETWEEN 2 AND 5 ) AS tmp ; ``` --- This would also work in SQL-Server - but not sure about the Azure version: ``` SELECT DISTINCT COUNT(*) OVER () AS result FROM testanswers GROUP BY PupilId HAVING SUM(NumMark) BETWEEN 2 AND 5 ; ``` Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!3/ac241/2)**
``` SELECT COUNT(PupilId) AS NUMBEROFPUPILS FROM (SELECT PupilId FROM testanswers GROUP BY PupilId HAVING SUM(NumMark) BETWEEN 2 AND 5) AS TEMP; ```
Count distinct ids based on sum of columns
[ "", "sql", "azure-sql-database", "" ]
I have a query which outputs the following: ![enter image description here](https://i.stack.imgur.com/795J0.jpg) I need to get it to provide a running total, so for March it would give what's been paid in Feb and Mar, then for April Feb, Mar & Apr, and so on. I've never come across needing this kind of aggregation before in SQL.
``` select [monthid], [month], ( select sum([paid]) from tbl t2 where t2.[monthid] <= t1.[monthid] ) as paid from tbl t1 ```
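For reference, if an upgrade is ever an option: from SQL Server 2012 onward the same running total is a single windowed `SUM`, which usually outperforms both the correlated subquery and a recursive CTE (column names mirror the answer above):

```sql
SELECT [monthid],
       [month],
       [paid],
       SUM([paid]) OVER (ORDER BY [monthid]
                         ROWS UNBOUNDED PRECEDING) AS running_paid
FROM tbl
```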
You can check [this question](https://stackoverflow.com/questions/860966/calculate-a-running-total-in-sqlserver) and my [answer](https://stackoverflow.com/questions/860966/calculate-a-running-total-in-sqlserver/13744550#13744550) on it. Turns out that recursive common table expression is the fastest method to get running total in SQL Server < 2012. So in your case it could be something like: ``` with cte as ( select T.MonthID, T.Month, T.Paid, T.Paid as Running_Paid from Table1 as T where T.MonthID = 118 union all select T.MonthID, T.Month, T.Paid, T.Paid + C.Running_Paid as Running_Paid from cte as C inner join Table1 as T on T.MonthID = C.MonthID + 1 ) select * from cte option (maxrecursion 0) ```
aggregating data to getting running total
[ "", "sql", "sql-server", "sql-server-2008", "" ]
This may sound like an odd question, but I'm curious to know if it's possible... Is there a way to simulate MySQL records using inline data? For instance, if it is possible, I would expect it to work something like this: ``` SELECT inlinedata.* FROM ( ('Emily' AS name, 26 AS age), ('Paul' AS name, 56 AS age) ) AS inlinedata ORDER BY age ```
Unfortunately MySQL does not support the standard `values` row-constructor for this kind of things, so you need to use a "dummy" select for each row and combine the rows using `UNION ALL` ``` SELECT * FROM ( select 'Emily' AS name, 26 AS age union all select 'Paul', 56 ) AS inlinedata ORDER BY age ``` The `UNION ALL` serves two purposes 1. It preserves any duplicate you might have on purpose 2. It's a (tiny) bit faster than a plain `UNION` (because it does not check for duplicates)
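For what it's worth, MySQL did eventually add a standard-style row constructor (in 8.0.19, as far as I recall), so on a modern server something close to the original idea works; treat this as a version-dependent sketch:

```sql
SELECT *
FROM (VALUES ROW('Emily', 26),
             ROW('Paul', 56)) AS inlinedata(name, age)
ORDER BY age
```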
No, not without making it complicated, but you can create a **temporary table** and query that instead. Temporary tables are deleted when the current client session terminates. You can query them and insert data into them just like with other tables. When you create them, you have to use the TEMPORARY keyword, like so: ``` CREATE TEMPORARY TABLE ... ``` This way, you can also reuse the data for multiple queries if needed, no data gets stored, and all records that you query have the right structure (whereas the syntax you give in your example would create problems when you spell a column name wrong)...
Simulate MySQL records using inline data
[ "", "mysql", "sql", "" ]
We have an Oracle 10g database and most of our applications are running Oracle Forms 6i. I found that all of the queries written in views/packages/procedures/functions are JOINING tables at the WHERE clause level. Example ``` SELECT * FROM TABLE_A A, TABLE_B B, TABLE_C C, TABLE_D D WHERE A.ID=B.ID(+) AND B.NO=C.NO(+) AND C.STATUS=D.ID AND C.STATUS NOT LIKE 'PENDING%' ``` This query applies only to ORACLE since it has the `(+)` join qualifier which is not acceptable in other SQL platforms. The above query is equivalent to: ``` SELECT * FROM TABLE_A A LEFT JOIN TABLE_B B ON A.ID=B.ID LEFT JOIN TABLE_C C ON B.NO=C.NO JOIN TABLE_D D ON C.STATUS=D.ID WHERE C.STATUS NOT LIKE 'PENDING%' ``` None of the queries I have seen are written with the join taking place in the FROM clause. My question can be divided into three parts: Q: Assuming that I have the same Oracle environment, which query is better in terms of performance, cache, CPU load, etc.? The first one (joining at WHERE) or the second (joining at FROM)? Q: Is there any other implementation of SQL that accepts the `(+)` join qualifier other than Oracle? If yes, which? Q: Maybe having the join written in the WHERE clause makes the query more readable, but it compromises the ability to LEFT/RIGHT join, which is what the `(+)` was for. Where can I read more about the origin of this `(+)` and why it was invented specifically for Oracle?
Q1. No difference. You can check it by profiling and comparing execution plans. Q2. As far as I know, only Oracle supports it. But it is not recommended even in recent versions of the Oracle RDBMS: > Oracle recommends that you use the FROM clause OUTER JOIN syntax > rather than the Oracle join operator. Outer join queries that use the > Oracle join operator (+) are subject to the following rules and > restrictions, which do not apply to the FROM clause OUTER JOIN syntax: Q3. Oracle invented (+) before the outer join syntax was specified in ANSI SQL.
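One concrete restriction worth illustrating: the `(+)` operator cannot express a full outer join, while the ANSI FROM-clause syntax can (sketch using the question's table names):

```sql
-- Keeps unmatched rows from BOTH sides; with (+) this historically
-- required a UNION of two one-sided outer join queries.
SELECT A.ID, B.ID
FROM TABLE_A A
FULL OUTER JOIN TABLE_B B
    ON A.ID = B.ID
```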
There should be no performance difference. Assuming you're on a vaguely recent version of Oracle, Oracle will implicitly convert the SQL 99 syntax to the equivalent Oracle-specific syntax. Of course, there are bugs in all software so it is possible that one or the other will perform differently because of some bug. The more recent the version of Oracle, the less likely you'll see a difference. The `(+)` operator (and a variety of other outer join operators in other databases) were created because the SQL standard didn't have a standard way of expressing an outer join until the SQL 99 standard. Prior to then, every vendor created their own extensions. It took Oracle a few years beyond that to support the new syntax. Between the fact that bugs were more common in the initial releases of SQL 99 support (not common but more common than they are now), the fact that products needed to continue to support older database versions that didn't support the new syntax, and people being generally content with the old syntax, there is still plenty of code being written today that uses the old Oracle syntax.
What is the difference between join in FROM clause and WHERE clause?
[ "", "sql", "oracle", "join", "inner-join", "outer-join", "" ]
I have a view in SQL Server 2008 and would like to view its definition in Management Studio. Example: ``` --the underlying query for the view Example_1 select * from table_aView ``` View name: `Example_1` How do I get the query for the corresponding view (the query used to create the view)?
In Management Studio, open the Object Explorer. * Go to your database * There's a subnode `Views` * Find your view * Choose `Script view as > Create To > New query window` and you're done! ![enter image description here](https://i.stack.imgur.com/fNe8Y.png) If you want to retrieve the SQL statement that defines the view from T-SQL code, use this: ``` SELECT m.definition FROM sys.views v INNER JOIN sys.sql_modules m ON m.object_id = v.object_id WHERE name = 'Example_1' ```
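A shorter T-SQL alternative to the `sys.views`/`sys.sql_modules` join is `OBJECT_DEFINITION` (assuming the view lives in the `dbo` schema):

```sql
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.Example_1')) AS ViewDefinition;
```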
Use `sp_helptext` before the `view_name`. Example: ``` sp_helptext Example_1 ``` Hence you will get the query: ``` CREATE VIEW dbo.Example_1 AS SELECT a, b, c FROM dbo.table_name JOIN blah blah blah WHERE blah blah blah ``` sp\_helptext will give stored procedures.
How to get a view table query (code) in SQL Server 2008 Management Studio
[ "", "sql", "sql-server", "view", "" ]
So I have a database with many tables that have a column that contains a GL Account value (for financial purposes). The column name varies by table (i.e. in one table the column is called "gldebitaccount" and in another table it's called "glcreditaccount"). I was able to find all combinations of table / column pairs using the following query: ``` SELECT c.name AS ColName, t.name AS TableName FROM sys.columns c JOIN sys.tables t ON c.object_id = t.object_id WHERE c.name LIKE '%gl%acc%' ``` This query returns close to 100 pairs of tables/columns. I am trying to find any value in any of those table/column pairs that exceeds 25 chars in length. For an individual table/column, I'd typically use: ``` SELECT * FROM tableName WHERE LEN(columnName)>25 ``` I want to avoid having to run that query 100 times with each pair. Is there any way I can do a "for each" (which I know is frowned upon in SQL since everything should be set-based). I've done sub-SELECT statements before, but not any that involved change the table in the FROM clause. Any ideas or help would be greatly appreciated! Thanks in advance!
Yet another solution with dynamic SQL, but now without cursors. It uses the FOR XML trick and should be much faster. ``` DECLARE @sqlstatement VARCHAR(MAX); SET @sqlstatement = REPLACE ( STUFF ( ( SELECT 'UNION ALL SELECT ''' + t.name + ''' as TableName, ''' + c.name + ''' AS ColumnName, ' + c.name + ' AS Value FROM ' + t.name + ' WHERE LEN (' + c.name + ') ' + CHAR(62) + ' 25' FROM sys.columns c INNER JOIN sys.tables t ON c.object_id = t.object_id WHERE c.name LIKE '%gl%acc%' FOR XML PATH('') ), 1, 10, '') , '&gt;', '>') EXEC (@sqlstatement) ``` You may want to add an extra filter on the columns by their type and max\_length: ``` INNER JOIN sys.types ty ON c.system_type_id = ty.system_type_id AND ( ty.name IN ('text', 'ntext') OR ( ty.name IN ('varchar', 'char', 'nvarchar', 'nchar') AND (c.max_length > 25 OR c.max_length = -1) ) ) ```
As the previous answer, the solution will need dynamic SQL. Here is a way that uses both dynamic SQL and cursors, and you can expect slow performance, so use at your own risk: ``` DECLARE @TableName NVARCHAR(128), @ColumnName NVARCHAR(128) DECLARE @Query NVARCHAR(4000) DECLARE CC CURSOR LOCAL FAST_FORWARD FOR SELECT QUOTENAME(t.name), QUOTENAME(c.name) FROM sys.columns c INNER JOIN sys.tables t ON c.object_id = t.object_id WHERE c.collation_name IS NOT NULL AND c.max_length > 25 AND c.name LIKE '%gl%acc%'; CREATE TABLE #Results(TableName NVARCHAR(128), ColumnName NVARCHAR(128)); OPEN CC FETCH NEXT FROM CC INTO @TableName, @ColumnName WHILE @@FETCH_STATUS = 0 BEGIN SET @Query = 'IF EXISTS(SELECT 1 FROM '+@TableName+' WHERE LEN('+@ColumnName+') > 25) INSERT INTO #Results VALUES(@TableName,@ColumnName)' EXEC sp_executesql @Query, N'@TableName NVARCHAR(128),@ColumnName NVARCHAR(128)', @TableName, @ColumnName; FETCH NEXT FROM CC INTO @TableName, @ColumnName END CLOSE CC DEALLOCATE CC SELECT * FROM #Results ```
Search table columns for values over a certain length
[ "", "sql", "sql-server", "sql-server-2008", "" ]
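The dynamic-SQL idea in the entry above (discover matching columns from the catalog, then probe each table) can be sketched outside SQL Server as well. This is a hypothetical Python/sqlite3 version; the table names and data are invented, and SQLite's catalogs (`sqlite_master`, `PRAGMA table_info`) stand in for `sys.tables`/`sys.columns`.

```python
import sqlite3

# Hypothetical schema: two tables whose GL-account columns have different names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (id INTEGER, gldebitaccount TEXT);
    CREATE TABLE payments (id INTEGER, glcreditaccount TEXT);
    INSERT INTO invoices VALUES (1, 'SHORT'),
                                (2, 'THIS-GL-ACCOUNT-VALUE-IS-FAR-TOO-LONG');
    INSERT INTO payments VALUES (1, 'OK');
""")

# Discover matching columns from the catalog, then probe each one dynamically.
# (Names come from the catalog itself, so the f-strings are safe here.)
offenders = []
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    for _, col, *_ in conn.execute(f"PRAGMA table_info({table})"):
        if "gl" in col.lower() and "acc" in col.lower():
            hits = conn.execute(
                f"SELECT {col} FROM {table} WHERE LENGTH({col}) > 25"
            ).fetchall()
            offenders += [(table, col, v) for (v,) in hits]
print(offenders)
```

The result lists each (table, column, value) triple whose value exceeds 25 characters, which is the same output shape the dynamic T-SQL builds with its UNION ALL string.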
Hey folks I'll do my best to explain this. I have a table of vehicles, lets say there is ID, year, make, model. ``` |----------------------------| |ID | year | make | model | |----------------------------| |1 | 2004 | Chevy | Silve..| |2 | 2004 | Ford | F-150 | |3 | 2003 | Chevy | Silve..| |4 | 2002 | Chevy | Silve..| |5 | 2002 | Chevy | Silve..| |6 | 2002 | Chevy | Silve..| |----------------------------| ``` I want to query the table and return distinct 'year' and a count of how many vehicles there are for that year. ``` 2004 (2) 2003 (1) 2002 (3) ``` Alternatively I would like to select distinct make and return how many vehicles are also that make. ``` Chevy (5) Ford (1) ``` At the end of the day I'd like to be able to put that all together to get something like this ``` 2004 Chevy Silve.. (1) 2004 Ford F-150 (1) 2003 Chevy Silve.. (1) 2002 Chevy Silve.. (3) ``` I'd appreciate any help
First query: ``` SELECT `year`, COUNT(DISTINCT `make`) N FROM YourTable GROUP BY `year` ``` Second query: ``` SELECT `make`, COUNT(*) N FROM YourTable GROUP BY `make` ``` Third query: ``` SELECT `year`, `make`, `model`, COUNT(*) N FROM YourTable GROUP BY `year`, `make`, `model` ```
This can be done in a single query. You need the count of vehicles broken down into make and year, and the query below will do that: ``` SELECT MAKE, YEAR, COUNT(*) FROM TABLE GROUP BY YEAR, MAKE ```
MYSQL Count Distinct year make model in single query?
[ "", "mysql", "sql", "" ]
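As a quick check of the GROUP BY approach in the answers above, here is a small sketch using Python's sqlite3 with the exact sample data from the question (the grouping technique is the same in MySQL and SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vehicles (id INTEGER, year INTEGER, make TEXT, model TEXT);
    INSERT INTO vehicles VALUES
        (1, 2004, 'Chevy', 'Silverado'),
        (2, 2004, 'Ford',  'F-150'),
        (3, 2003, 'Chevy', 'Silverado'),
        (4, 2002, 'Chevy', 'Silverado'),
        (5, 2002, 'Chevy', 'Silverado'),
        (6, 2002, 'Chevy', 'Silverado');
""")

# Count per year, then per (year, make, model) combination.
per_year = conn.execute(
    "SELECT year, COUNT(*) FROM vehicles GROUP BY year ORDER BY year DESC"
).fetchall()
per_combo = conn.execute(
    """SELECT year, make, model, COUNT(*)
       FROM vehicles GROUP BY year, make, model ORDER BY year DESC, make"""
).fetchall()
print(per_year)   # [(2004, 2), (2003, 1), (2002, 3)]
print(per_combo)
```

`per_combo` reproduces the desired final output of the question: one row per distinct year/make/model with its count.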
I'm trying to create a database view which has the result of an aggregate query for a column value (see [this post](https://stackoverflow.com/a/20120049/1391325) for the aggregate query used), e.g. something like ``` MESSAGEVIEW ---------------------------------------------- MESSAGEID SENDER PARTCOUNT ---------------------------------------------- 1 Tim 2 2 Bridgekeeper 0 ``` I've tried this code for selecting the view data: ``` SELECT m.MESSAGEID, m.SENDER, (SELECT COUNT(mp.MESSAGEID) FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID GROUP BY m.MESSAGEID) AS PARTCOUNT FROM MESSAGE m; ``` which returns `ORA-01427: single-row subquery returns more than one row`. I've also tried ``` SELECT m.MESSAGEID, m.SENDER, COUNT(mp.MESSAGEID) AS PARTCOUNT FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID GROUP BY m.MESSAGEID; ``` Which returns `ORA-00979: not a GROUP BY expression`. How can I get a view which properly shows this information?
This should do it. ``` SELECT m.MESSAGEID, m.SENDER, (SELECT COUNT(mp.MESSAGEID) FROM MESSAGEPART mp WHERE mp.MESSAGEID = m.MESSAGEID) AS PARTCOUNT FROM MESSAGE m; ```
You can get it into a single query without a subquery if you use count() in window function format: ``` SELECT m.MESSAGEID, m.SENDER, COUNT(mp.MESSAGEID) OVER (PARTITION BY m.MESSAGEID) AS PARTCOUNT FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID; ```
Embedding aggregate query in non-aggregate query
[ "", "sql", "oracle", "select", "count", "subquery", "" ]
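The correlated-subquery form in the accepted answer above is standard SQL, so it can be demonstrated outside Oracle. A minimal sketch with Python's sqlite3, using data shaped like the question's MESSAGEVIEW example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE message (messageid INTEGER, sender TEXT);
    CREATE TABLE messagepart (messageid INTEGER, body TEXT);
    INSERT INTO message VALUES (1, 'Tim'), (2, 'Bridgekeeper');
    INSERT INTO messagepart VALUES (1, 'a'), (1, 'b');
""")

# The scalar subquery is correlated on messageid, so it returns exactly
# one count per outer row -- including 0 for messages with no parts.
rows = conn.execute("""
    SELECT m.messageid, m.sender,
           (SELECT COUNT(*) FROM messagepart mp
             WHERE mp.messageid = m.messageid) AS partcount
    FROM message m ORDER BY m.messageid
""").fetchall()
print(rows)  # [(1, 'Tim', 2), (2, 'Bridgekeeper', 0)]
```

This matches the desired view output in the question (Tim with 2 parts, Bridgekeeper with 0) without any GROUP BY at the outer level.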
How could I limit the JOIN to only show once for every player? `LIMIT 1` ``` SELECT name FROM players p INNER JOIN player_frags pd ON pd.lasthit = p.name OR pd.mostdamage = p.name LIMIT 10 ``` And perhaps find the most common value of lasthit or mostdamage and order by that
Two ways: ``` SELECT DISTINCT name FROM players p INNER JOIN player_frags pd ON pd.lasthit = p.name OR pd.mostdamage = p.name ``` or ``` SELECT name FROM players p INNER JOIN player_frags pd ON pd.lasthit = p.name OR pd.mostdamage = p.name GROUP BY name; ```
``` SELECT DISTINCT(name) FROM players p INNER JOIN player_frags pd ON pd.lasthit = p.name OR pd.mostdamage = p.name ```
LIMIT 1 on a join
[ "", "mysql", "sql", "join", "" ]
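To see why DISTINCT (or GROUP BY) is the right tool here rather than LIMIT, a small sketch with Python's sqlite3 and invented player data shows the duplicate rows the OR-join produces and how DISTINCT collapses them to one row per player:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (name TEXT);
    CREATE TABLE player_frags (lasthit TEXT, mostdamage TEXT);
    INSERT INTO players VALUES ('alice'), ('bob'), ('carol');
    INSERT INTO player_frags VALUES
        ('alice', 'bob'), ('alice', 'alice'), ('bob', 'alice');
""")

# The OR condition matches each player once per qualifying frag row...
without = conn.execute("""
    SELECT name FROM players p
    JOIN player_frags pd ON pd.lasthit = p.name OR pd.mostdamage = p.name
""").fetchall()

# ...DISTINCT reduces that to one row per player.
deduped = conn.execute("""
    SELECT DISTINCT name FROM players p
    JOIN player_frags pd ON pd.lasthit = p.name OR pd.mostdamage = p.name
    ORDER BY name
""").fetchall()
print(len(without), deduped)
```

Note that `carol`, who appears in no frag row, is excluded by the inner join in both variants.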
I'm trying to do something that I thought it would be simple but it seems not to be. I have a project model that has many vacancies. ``` class Project < ActiveRecord::Base has_many :vacancies, :dependent => :destroy end ``` I want to get all the projects that have at least 1 vacancy. I tried something like this: ``` Project.joins(:vacancies).where('count(vacancies) > 0') ``` but it says `SQLite3::SQLException: no such column: vacancies: SELECT "projects".* FROM "projects" INNER JOIN "vacancies" ON "vacancies"."project_id" = "projects"."id" WHERE ("projects"."deleted_at" IS NULL) AND (count(vacancies) > 0)`.
`joins` uses an inner join by default so using `Project.joins(:vacancies)` will in effect only return projects that have an associated vacancy. UPDATE: As pointed out by @mackskatz in the comment, without a `group` clause, the code above will return duplicate projects for projects with more than one vacancy. To remove the duplicates, use ``` Project.joins(:vacancies).group('projects.id') ``` UPDATE: As pointed out by @Tolsee, you can also use `distinct`. ``` Project.joins(:vacancies).distinct ``` As an example: ``` [10] pry(main)> Comment.distinct.pluck :article_id => [43, 34, 45, 55, 17, 19, 1, 3, 4, 18, 44, 5, 13, 22, 16, 6, 53] [11] pry(main)> _.size => 17 [12] pry(main)> Article.joins(:comments).size => 45 [13] pry(main)> Article.joins(:comments).distinct.size => 17 [14] pry(main)> Article.joins(:comments).distinct.to_sql => "SELECT DISTINCT \"articles\".* FROM \"articles\" INNER JOIN \"comments\" ON \"comments\".\"article_id\" = \"articles\".\"id\"" ```
1) To get Projects with at least 1 vacancy: ``` Project.joins(:vacancies).group('projects.id') ``` 2) To get Projects with more than 1 vacancy: ``` Project.joins(:vacancies).group('projects.id').having('count(project_id) > 1') ``` 3) Or, if `Vacancy` model sets counter cache: ``` belongs_to :project, counter_cache: true ``` then this will work, too: ``` Project.where('vacancies_count > ?', 1) ``` Inflection rule for `vacancy` may need to be [specified manually](http://api.rubyonrails.org/classes/ActiveSupport/Inflector/Inflections.html)?
Find all records which have a count of an association greater than zero
[ "", "sql", "ruby-on-rails", "ruby-on-rails-3", "activerecord", "" ]
I have a requirement to get the id of the last updated row in a table, so that I can use this id for some other operation. I don't have a column like UpdatedOn that I can refer to. Is there any function for UPDATE like scope\_identity and @@identity (which give me the id of the last inserted row)? Please help me out with this.
You can do something like this ``` declare @mytable as TABLE ( Id int ) Update Table Set Name='Nitin' OUTPUT INSERTED.Id into @mytable where LastName='Varpe' Select Id from @mytable ```
Since you're only interested in the last ID updated, then you could simply assign it to a variable in the `SET` clause: ``` DECLARE @LastID INT = NULL UPDATE SomeTable SET SomeColumn = 'SomeValue', @LastID = ID WHERE SomeOtherColumn = 'SomeOtherValue' ```
How to get the ID of last updated Row in SQL Server 2008
[ "", "sql", "sql-server-2008", "sql-update", "" ]
I'm trying to select some rows from an Oracle database like so: ``` select * from water_level where bore_id in ('85570', '112205','6011','SP068253'); ``` This used to work fine but a recent update has meant that `bore_id` in `water_level` has had a bunch of whitespace added to the end for each row. So instead of `'6011'` it is now `'6011        '`. The number of space characters added to the end varies from 5 to 11. Is there a way to edit my query to capture the `bore_id` in my list, taking into account that trailing whitespace should be ignored? I tried: ``` select * from water_level where bore_id in ('85570%', '112205%','6011%','SP068253%'); ``` which returns more rows than I want, and ``` select * from water_level where bore_id in ('85570\s*', '112205\s*','6011\s*', 'SP068253\s*'); ``` which didn't return anything. Thanks, JP
You should `RTRIM` the `WHERE` clause ``` select * from water_level where RTRIM(bore_id) in ('85570', '112205','6011'); ``` To add to that, `RTRIM` has an overload which you can pass a second parameter of what to trim, so if the trailing characters weren't spaces, you could remove them. For example if the data looked like 85570xxx, you could use: ``` select * from water_level where RTRIM(bore_id, 'x') IN ('85570','112205', '6011'); ```
You could use the `replace` function to remove the spaces `select * from water_level where replace(bore_id, ' ', '') in ('85570', '112205', '6011', 'SP068253');` Although, a better option would be to remove the spaces from the data if they are not supposed to be there or create a view.
SQL select from list where white space has been added to end
[ "", "sql", "oracle", "select", "whitespace", "" ]
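RTRIM behaves the same way in most engines, so the accepted answer's trick can be demonstrated with a quick Python/sqlite3 sketch (the bore ids and levels below are invented, padded with trailing spaces as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE water_level (bore_id TEXT, level REAL);
    INSERT INTO water_level VALUES
        ('85570     ', 1.0), ('6011        ', 2.0), ('99999', 3.0);
""")

# Trim the stored value on the fly before testing membership.
rows = conn.execute("""
    SELECT RTRIM(bore_id), level FROM water_level
    WHERE RTRIM(bore_id) IN ('85570', '112205', '6011', 'SP068253')
    ORDER BY level
""").fetchall()
print(rows)  # [('85570', 1.0), ('6011', 2.0)]
```

One caveat that applies in Oracle as well: wrapping the column in a function generally defeats a plain index on that column, so if the table is large it is worth fixing the data (or adding a function-based index) instead.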
Consider the example table posted below. What I need to do is to update the table, specifically the NULL values on each row with the "last" non-NULL values. For example, the NULL values on rows 3 and 4 should be updated with the values of row 2 of the same column, that is ``` 2 007585102 2001 03 31 2001 04 12 2 154980 6300 154980 6300 3 007585102 2001 03 31 2001 04 19 2 154980 6300 154980 6300 4 007585102 2001 03 31 2001 04 26 2 154980 6300 154980 6300 ``` and NULL values on rows 9 to 15 updated with the values of row 8 and so on. I honestly have no idea how to do this and I will greatly appreciate any help. Thanks in advance. Sorry about the extremely poor formatting of the table but I can't post anything but plain text. EXAMPLE TABLE ``` 1 007585102 2001 03 31 2001 04 05 2 543660 22100 543660 22100 2 007585102 2001 03 31 2001 04 12 2 154980 6300 154980 6300 3 007585102 NULL 2001 04 19 NULL NULL NULL NULL NULL 4 007585102 NULL 2001 04 26 NULL NULL NULL NULL NULL 5 007585102 2001 03 31 2001 05 03 2 2726664 110840 2726664 110840 6 007585102 2001 03 31 2001 05 10 2 836400 34000 836400 34000 7 007585102 2001 03 31 2001 05 17 2 534804 21740 7634364 310340 8 007585102 2001 03 31 2001 05 24 2 4920 200 4920 200 9 007585102 NULL 2001 05 31 NULL NULL NULL NULL NULL 10 007585102 NULL 2001 06 07 NULL NULL NULL NULL NULL 11 007585102 NULL 2001 06 14 NULL NULL NULL NULL NULL 12 007585102 NULL 2001 06 21 NULL NULL NULL NULL NULL 13 007585102 NULL 2001 06 28 NULL NULL NULL NULL NULL 14 007585102 NULL 2001 07 05 NULL NULL NULL NULL NULL 15 007585102 NULL 2001 07 12 NULL NULL NULL NULL NULL 16 007585102 2001 06 30 2001 07 19 2 2693301 118300 2693301 118300 17 007585102 2001 06 30 2001 07 26 2 232220 10200 NULL NULL ```
I have explained this in detail here: <https://koukia.ca/common-sql-problems-filling-null-values-with-preceding-non-null-values-ad538c9e62a6#.k0dxirgwu> Here is the T-SQL you need: ``` SELECT * INTO #Temp FROM ImportedSales; ;With CTE As ( SELECT ProductName , Id , COUNT(ProductName) OVER(ORDER BY Id ROWS UNBOUNDED PRECEDING) As MyGroup FROM #Temp ), GetProduct AS ( SELECT [ProductName] , First_Value(ProductName) OVER(PARTITION BY MyGroup ORDER BY Id ROWS UNBOUNDED PRECEDING) As UpdatedProductName FROM CTE ) UPDATE GetProduct Set ProductName = UpdatedProductName; SELECT * FROM #Temp; ```
Not the most elegant solution, for that I'd suggest a recursive CTE. ``` drop table #temp GO select * into #temp from ( select 1 as id, '2001 03 31' as dat union all select 2, '2001 03 31' union all select 3, null union all select 4, null union all select 5, '2001 03 31' union all select 6, '2001 03 31' union all select 7, '2001 03 31' union all select 8, '2001 03 31' union all select 9, null union all select 10, null union all select 11, null union all select 12, null union all select 13, null union all select 14, null union all select 15, null union all select 16, '2001 06 30' union all select 17, '2001 06 30' ) x update t set t.dat = t2.dat from #temp t join ( select t1.id, max(t2.id) as maxid from #temp t1 join #temp t2 on t1.id>t2.id and t2.dat is not null and t1.dat is null group by t1.id ) x on t.id=x.id join #temp t2 on t2.id=x.maxid select * from #temp ```
Filling null row values with last non-null values
[ "", "sql", "sql-server", "null", "" ]
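Besides the window-function and self-join answers above, the same "carry the last non-NULL value forward" fill can be written as a single correlated-subquery UPDATE in plain standard SQL. This is a hypothetical sketch in Python's sqlite3 with a trimmed-down version of the question's data, not the answers' exact T-SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, dat TEXT);
    INSERT INTO t VALUES
        (1, '2001-03-31'), (2, '2001-03-31'), (3, NULL), (4, NULL),
        (5, '2001-06-30'), (6, NULL);
""")

# Each NULL row copies the value from the nearest earlier non-NULL row.
conn.execute("""
    UPDATE t SET dat = (
        SELECT t2.dat FROM t t2
        WHERE t2.id = (SELECT MAX(t3.id) FROM t t3
                       WHERE t3.id < t.id AND t3.dat IS NOT NULL)
    )
    WHERE dat IS NULL
""")

filled = [d for _, d in conn.execute("SELECT id, dat FROM t ORDER BY id")]
print(filled)
```

The same UPDATE (extended to every NULL-able column) would apply to the multi-column table in the question, keyed on its row-number column.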
I am trying to build a tricky SQL statement and I need some advice. I have these 2 tables: ``` subscribers | id | name | email | | 1 | John Doe| john.doe@domain.com| | 2 | Jane Doe| jane.doe@domain.com| | 3 | Mr Jones| mr.jones@domain.com| ``` and ``` links | id | campaign_id | link | id_of_user_that_clicked | | 1 | 8 | http://somesite.com/?utm_source=news1 | 1,2,3 | | 2 | 8 | http://somesite.com/?utm_source=news2 | 1,2 | | 3 | 5 | http://somesite.com/?utm_source=news3 | 2 | ``` To pull the name and email of Mr. Jones is fast. I run: ``` SELECT name, email FROM subscribers WHERE id IN ('3') ``` But I want to add to the result the URL that he clicked stored in the link column of the second table. I tried to do something similar to: ``` SELECT name, email FROM subscribers WHERE id IN ('3') LEFT JOIN SELECT link FROM links WHERE (id_of_user_that_clicked LIKE '3') ``` to no avail Notice that in the second table, I have the id stored with other id's as well. How can I match **subscribers.id** with corresponding number in **links.id\_of\_user\_that\_clicked** and have the query display the link next to the name and email. Any ideas?
thank you for the help. I managed to build a query to serve my purposes like this. I am pasting the code used: ``` SELECT s.name, s.email, l.link FROM subscribers s, links l WHERE s.id IN ('.$subscribers.') AND FIND_IN_SET(s.id, l.id_of_user_that_clicked) ``` `$subscribers` is a comma separated array (3456, 7865, 267, etc) It works like a charm.
You probably want the string to contain 3, not be like 3. Try this: ``` SELECT name, email, link FROM subscribers LEFT JOIN link on id_of_user_that_clicked LIKE '%3%' WHERE subscribers.id = '3' ```
Add column from different table to SQL query (if it finds the id in a list)
[ "", "mysql", "sql", "select", "join", "" ]
At some point I have a *numeric(28,10)* and I cast it to *money* (I know it's bad, but for legacy reasons I have to return *money*); at the same time I also have to set the sign (multiplying by +1/-1). In a first attempt I cast the +/-1 to match the *numeric* type. For the value *133.3481497944* we encounter a strange behavior *(I have simplified the actual code in order to keep only the elements needed to demonstrate the problem)*: ``` SELECT CAST(CAST(133.3481497944 AS numeric(28,10))*cast(1 AS numeric(28,10)) AS money) ``` > 133.3482 which is not correctly rounded... Removing the cast solves the problem: ``` SELECT CAST(CAST(133.3481497944 AS numeric(28,10)) * 1 AS money) ``` > 133.3481 Does someone know what is happening in SQL? How can a multiplication by 1 cast as numeric(28,10) affect the result of the rounding?
When multiplying numerics, SQL uses the following rules to determine the precision and scale of the output: ``` p = p1 + p2 + 1 s = s1 + s2 ``` which makes sense - you wouldn't want 1.5 \* 2.5 to be truncated to one digit past the decimal. Nor would you want 101 \* 201 to be limited to 3 digits of precision, giving you 20300 instead of 20301. In your case that would result in a precision of 57 and a scale of 20, which isn't possible - the maximum precision and scale is 38. If the resulting type is too big, decimal digits are sacrificed in order to preserve the integral (most significant) part of the result. From the [SQL Programmability & API Development Team Blog](http://blogs.msdn.com/b/sqlprogrammability/archive/2006/03/29/564110.aspx): > In SQL Server 2005 RTM (and previous versions), we decided preserve a minimum scale of 6 in both multiplication and division. So *your* answer depands on how *big* and *precise* you need the multiplier to be. In order to preserve 10 digits of decimal precision. If the multiplier needs a scale bigger than 9, then decimal digits may be truncated. If you use a smaller precision and scale, you should be fine: ``` SELECT CAST(CAST(133.3481497944 AS numeric(28,10))*cast(1 AS numeric(9,7)) AS money) ``` yields `133.3481`.
``` SELECT CAST(CAST(133.3481497944 AS numeric(28,10))*cast(1 AS numeric(28,10)) AS money) --Your original, CAST(1 AS numeric(28,10)) --Just the 1 casted, CAST(133.3481497944 AS numeric(28,10)) --Your expected calculation, CAST(133.3481497944 AS numeric(28,10))*cast(1 AS numeric(28,10)) -- The actual calculation SELECT CAST(133.3481497944 AS numeric(28,10))*cast(1.5 AS numeric(28,10)), CAST(133.3481497944 AS numeric(28,10))*1.5, CAST((133.3481497944*1) AS money), 133.3481497944*1 ``` Returns ``` 133.3482 1.0000000000 133.3481497944 133.348150 200.022225 200.02222469160 133.3481 133.3481497944 ``` So as mentioned above, there really isn't any true rounding, but a loss of precision during the cast. As to exactly why, I don't know. Most likely during the calculation(multiplication) while using the Numeric(28,10) it cuts off some precision. I added the second lines to show that really you may not need your numeric casting.
Strange behavior while rounding in SQL server 2008
[ "", "sql", "sql-server", "casting", "rounding", "" ]
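The precision/scale arithmetic quoted in the accepted answer above (p = p1 + p2 + 1, s = s1 + s2, capped at 38 with a minimum scale of 6 preserved) can be traced with Python's decimal module. This is a sketch of that arithmetic under the stated rules, not SQL Server itself:

```python
from decimal import Decimal, ROUND_HALF_UP

# numeric(28,10) * numeric(28,10): p = 28+28+1 = 57, s = 10+10 = 20.
# SQL Server caps p at 38 and sacrifices decimal digits to keep the
# integral part, but preserves a minimum scale of 6 -> the product is
# effectively rounded to 6 decimal places before the cast to money.
p, s = 28 + 28 + 1, 10 + 10
if p > 38:
    s = max(6, s - (p - 38))
    p = 38

x = Decimal("133.3481497944")
product = x.quantize(Decimal(1).scaleb(-s), rounding=ROUND_HALF_UP)
as_money = product.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
direct = x.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
print(s, product, as_money, direct)  # 6 133.348150 133.3482 133.3481
```

Rounding to 6 places gives 133.348150, whose cast to money (scale 4) rounds up to 133.3482, while the direct cast of the original value gives 133.3481; this reproduces both observed results.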
I search a table based on an ID column in my where clause. I have a list of IDs that may or may not be present in this table. A simple query will give me the IDs which exist in that table (if any). Is there a way to also return ID's that were not found ? ``` Table -- ID 1GH 2BN 3ER SELECT * FROM Table WHERE ID IN (big list 9FG, 1GH, 3UI etc) --If ID's in above list are not in table, then show those ids. ``` Desired output - ``` 9FG, 3UI were not found in the table ```
If I understand correctly what you need you can do it this way ``` SELECT q.id, CASE WHEN t.id IS NULL THEN 'no' ELSE 'yes' END id_exists FROM ( SELECT '9FG' id UNION ALL SELECT '1GH' UNION ALL SELECT '3UI' ) q LEFT JOIN table1 t ON q.id = t.id ``` Output: ``` | ID | ID_EXISTS | |-----|-----------| | 9FG | no | | 1GH | yes | | 3UI | no | ``` or if you just need a list of non-existent ids ``` SELECT q.id FROM ( SELECT '9FG' id UNION ALL SELECT '1GH' UNION ALL SELECT '3UI' ) q LEFT JOIN table1 t ON q.id = t.id WHERE t.id IS NULL ``` Output: ``` | ID | |-----| | 9FG | | 3UI | ``` The trick is to use an `OUTER JOIN` instead of `WHERE` condition to filter data from your table and be able to see the mismatches. Here is **[SQLFiddle](http://sqlfiddle.com/#!3/d3f9d/1)** demo
To search you can use ``` SELECT * From Mytable where id in ( select id from (values (1), (2), (3)) as SearchedIds(Id) ) ``` and the opposite to find unmatched: ``` SELECT id from (values (1), (2), (3)) as SearchedIds(Id) WHERE id not in (SELECT id From MyTable) ``` The syntax ``` Values(...) as SearchedIds(id) ``` is supported in Sql2008; for Sql2005 you have to do ``` ( SELECT 1 as Id UNION ALL SELECT 2 UNION ALL ...etc ) as SearchedIds ``` Note: you can rewrite those queries with JOINS (INNER and LEFT)
Show elements of where clause that are not present in table
[ "", "sql", "sql-server", "sql-server-2005", "" ]
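The VALUES-list anti-join from the answers above is portable, so it can be verified quickly with Python's sqlite3 using the question's exact ids (SQLite's `WITH ... AS (VALUES ...)` plays the role of the derived `VALUES` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id TEXT);
    INSERT INTO t VALUES ('1GH'), ('2BN'), ('3ER');
""")

# LEFT JOIN the search list to the table; rows with no match
# come back with a NULL on the table side.
missing = conn.execute("""
    WITH wanted(id) AS (VALUES ('9FG'), ('1GH'), ('3UI'))
    SELECT wanted.id FROM wanted
    LEFT JOIN t ON t.id = wanted.id
    WHERE t.id IS NULL
    ORDER BY wanted.id
""").fetchall()
print(missing)  # [('3UI',), ('9FG',)]
```

This yields exactly the question's desired output: the ids 9FG and 3UI that do not exist in the table.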
I've tried ``` @users = User.where(name: @request.requester or @request.regional_sales_mgr) ``` and ``` @users = User.where(name: @request.requester).where(name: @request.regional_sales_mgr).all ``` This doesn't seem to work. What I want is to find the user whose name matches @request.requester, and the user whose name matches @request.regional\_sales\_mgr, and save them both into the variable @users.
In the general case "OR" queries can be written as: ``` User.where("users.name = ? OR users.name = ?", request.requester, request.regional_sales_mgr) ``` Note: **Rails 5** will support `OR` using: ``` User.where(name: request.requester).or(User.where(name: request.regional_sales_mgr)) ``` For this specific case as state in other answers an IN query is simpler: ``` User.where(name: [request.requester, request.regional_sales_mgr]) ```
You want to use the SQL `IN` clause. Activerecord provides a shortcut to this: ``` @users = User.where(name: [@request.requester, @request.regional_sales_mgr]).all ``` Giving an array of values to `name:` will generate the following SQL statement: ``` SELECT * FROM users WHERE name IN (value1, value2, and so on...); ``` This should find all the users whose names are `@request.requester` or `@request.regional_sales_mgr`
Rails .where(.... and ....) Searching for two values of the same attribute
[ "", "sql", "ruby-on-rails", "activerecord", "" ]
I don't have a proper timestamp in the table; is it possible to delete 1-day-old logs anyway? I have a column named SESSION\_IN, which is basically a VARCHAR datatype, and the value will be like ``` 2013-10-15 02:10:27.883;1591537355 ``` Is there any way to trim the number after the ;, and is it possible to compare with the "sysdate" identifier? This SP should compare all the session IDs with the current datetime, and it should delete the row if it is older than 1 day.
You can ignore the trailing number and convert the date prefix of the string into a DATE, something like this: ``` SYSDATE - TO_DATE(SUBSTR(session_in, 1, 10), 'YYYY-MM-DD') ``` then you can compare the result against 1 to find rows older than a day.
Use the Substring function to extract the datetime portion from the record, then use convert to cast it to datetime, and then finally use datediff to check if it was inserted more than a day ago. Use all these clauses in a ``` DELETE FROM table WHERE ___ query ```
how to delete the records which is inserted 1 day ago
[ "", "sql", "oracle", "" ]
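Because the date prefix in the question is ISO-formatted (`YYYY-MM-DD HH:MM:SS.mmm`), it sorts correctly even as a string. Here is a hypothetical sketch of the delete in Python's sqlite3 rather than Oracle; the substring-before-the-semicolon idea is the same one the answers describe, with `instr`/`substr` standing in for Oracle's `INSTR`/`SUBSTR`:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (session_in TEXT)")
old = (datetime.now() - timedelta(days=3)).strftime("%Y-%m-%d %H:%M:%S.000")
new = datetime.now().strftime("%Y-%m-%d %H:%M:%S.000")
conn.executemany("INSERT INTO logs VALUES (?)",
                 [(old + ";1591537355",), (new + ";42",)])

# Cut the string at the ';' and compare the ISO date prefix with
# "now minus one day"; plain string comparison orders correctly here.
conn.execute("""
    DELETE FROM logs
    WHERE substr(session_in, 1, instr(session_in, ';') - 1)
          < datetime('now', '-1 day', 'localtime')
""")
remaining = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(remaining)  # 1
```

Only the fresh row survives; the three-day-old entry is removed.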
Given the following: ``` CREATE TABLE A (A1 INTEGER, A2 INTEGER, A3 INTEGER); INSERT INTO A(A1, A2, A3) VALUES (1, 1, 1); INSERT INTO A(A1, A2, A3) VALUES (2, 1, 1); ``` I want to select the maximum `A1` given specific `A2` and `A3` values, and have those values (`A2` and `A3`) also appear in the returned row (e.g. so that I may use them in a join since the SELECT below is meant for a sub-query). It would seem logical to be able to do the following, given that A2 and A3 are hardcoded in the `WHERE` clause: ``` SELECT MAX(A1) AS A1, A2, A3 FROM A WHERE A2=1 AND A3=1 ``` However, PostgreSQL (and I suspect other RDBMs as well) balks at that and requests an aggregate function for A2 and A3 even though their value is fixed. So instead, I either have to do a: ``` SELECT MAX(A1) AS A1, MAX(A2), MAX(A3) FROM A WHERE A2=1 AND A3=1 ``` or a: ``` SELECT MAX(A1) AS A1, 1, 1 FROM A WHERE A2=1 AND A3=1 ``` The first alternative I don't like cause I could have used `MIN` instead and it would still work, whereas the second alternative doubles the number of positional parameters to provide values for when used from a programming language interface. Ideally I would have wanted a `UNIQUE` aggregate function which would assert that all values are equal and return that single value, or even a `RANDOM` aggregate function which would return one value at random (since I know from the `WHERE` clause that they are all equal). Is there an idiomatic way to write the above in PostgreSQL?
Even simpler, you only need `ORDER BY` / `LIMIT 1` ``` SELECT a1, a2, a3 -- add more columns as you please FROM a WHERE a2 = 1 AND a3 = 1 ORDER BY 1 DESC -- 1 is just a positional reference (syntax shorthand) LIMIT 1; ``` `LIMIT 1` is Postgres specific syntax. The [SQL standard](http://en.wikipedia.org/wiki/Select_%28SQL%29#FETCH_FIRST_clause) would be: ``` ... FETCH FIRST 1 ROWS ONLY ``` My first answer with [`DISTINCT ON`](http://en.wikipedia.org/wiki/Select_%28SQL%29#FETCH_FIRST_clause) was for the more complex case where you'd want to retrieve the maximum `a1` per various combinations of `(a2,a3)` Aside: I am using [lower case identifiers for a reason](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS).
Does this work for you ? ``` select max(A1),A2,A3 from A GROUP BY A2,A3; ``` **EDIT** ``` select A1,A2,A3 from A where A1=(select max(A1) from A ) limit 1 ```
Unique aggregate function when singular value is guaranteed by the WHERE clause
[ "", "sql", "postgresql", "aggregate-functions", "sql-limit", "" ]
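The ORDER BY/LIMIT trick from the accepted answer above is not Postgres-only; here is a minimal sketch in Python's sqlite3 using the question's table A (with one extra row outside the filter to show it is ignored):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (a1 INTEGER, a2 INTEGER, a3 INTEGER);
    INSERT INTO a VALUES (1, 1, 1), (2, 1, 1), (5, 2, 1);
""")

# No aggregate needed: filter, sort descending, take the top row.
row = conn.execute("""
    SELECT a1, a2, a3 FROM a
    WHERE a2 = 1 AND a3 = 1
    ORDER BY a1 DESC
    LIMIT 1
""").fetchone()
print(row)  # (2, 1, 1)
```

The fixed a2/a3 values come along for free in the selected row, which was the original goal of the question.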
I set my column ID as IDENTITY with seed and increment = 1,1. But, it does not start counting from 1. Instead it starts at 2. When I insert the next row, it sets the ID = 7 and not 2. Why could this be happening and how do I fix it ? I scripted the table and also checked management studio. It is actually an IDENTITY(1,1).
`IDENTITY` does not guarantee a contiguous set of values with no gaps. If this is what you need, you should consider something else (e.g. rolling your own serializable `MAX+1` solution). All kinds of things can create gaps: * an INSERT that fails * a transaction that is rolled back * [this bug](http://connect.microsoft.com/SQLServer/feedback/details/739013/alwayson-failover-results-in-reseed-of-identity) * a subsequent DELETE (which can also ruin your own solution too) * etc. etc.
It's by Design. There is NO GUARANTEE of consecutive-ness for IDENTITY Column. See the response from **Microsoft** to this "Bug" Report: [Identity Column jumps by Seed value](http://connect.microsoft.com/SQLServer/feedback/details/743300/identity-column-jumps-by-seed-value). > Posted by Microsoft on 1/25/2013 at 3:38 PM > > Thanks for your feedback. This behavior is actually by design as we > try to guarantee uniqueness of the ID rather than making sure we don't > have gaps. As a result, we do skip some values just in case in certain > scenarios so that we don't have any issues around accidentally > repeating numbers.
IDENTITY not generating the seed and increment correctly
[ "", "sql", "sql-server", "" ]
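The non-contiguity described in the answers above is easy to reproduce in miniature. This is a SQLite analogue, not SQL Server: with `AUTOINCREMENT`, SQLite also guarantees monotonically increasing ids but never reuses a deleted maximum, so gaps appear just as they do with IDENTITY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM t WHERE id = 3")   # highest id removed
conn.execute("INSERT INTO t (v) VALUES ('d')")

ids = [r[0] for r in conn.execute("SELECT id FROM t ORDER BY id")]
print(ids)  # [1, 2, 4] -- id 3 is never reused; gaps are normal
```

If an application truly needs a gap-free sequence, it has to maintain one itself (e.g. a serialized MAX+1 scheme), as the accepted answer notes.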
I'm taking an online course about databases. I understand the following query ``` SELECT column1 FROM t1 WHERE EXISTS (SELECT * FROM t2); ``` which is something like `SELECT column1 FROM t1 WHERE TRUE;` but when it comes to a correlated subquery with EXISTS, I am confused. (College has (name, state) as its key.) The following query seeks the names of colleges that have other colleges in the same state. ``` SELECT name,state FROM College C1 WHERE EXISTS (SELECT * FROM College C2 WHERE C1.state = C2.state and C1.name != C2.name); ``` The subquery will return a relation, right? Since it is inside parentheses, it is computed first. And the returned relation actually exists due to my artificially designed table, so the EXISTS operator always returns TRUE and the result is the full table with the selected columns. Edit: sorry for the poor demonstration. Here is my relation ``` Standford CA Berkley CA MIT MA ``` The true result is ``` Standford CA Berkley CA ``` but I think it should be ``` Standford CA Berkley CA MIT MA ``` Apparently there's a bug in my line of reasoning. Could you tell me where it is and what needs to be corrected?
There is no need for a correlated subquery or anything like that in this case, I don't think. You can just join college to college where the states are the same and the colleges are not: ``` SELECT c1.name,c1.state FROM College C1, College C2 WHERE C1.state = C2.state and C1.name != C2.name ``` [SQL Fiddle](http://sqlfiddle.com/#!2/eb4b67/1) If you want to do some reading on correlated sub-queries, here's a good resource: [Clicky!](http://technet.microsoft.com/en-us/library/ms187638%28v=sql.105%29.aspx)
``` SELECT name,state FROM College C1 WHERE EXISTS (SELECT * FROM College C2 WHERE C1.state = C2.state and C1.name != C2.name); ``` In English this is roughly > Select the columns name and state from the college table where there is another college in the same state with a different name. So the result makes perfect sense.
Interpretation of the SQL operator EXISTS in subquery?
[ "", "sql", "subquery", "correlated-subquery", "" ]
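The key point from the answers above is that the EXISTS subquery is evaluated once per outer row, not once in total. That is directly checkable with Python's sqlite3 and the question's exact data (MIT is the only college in MA, so its correlated subquery is empty and the row is dropped):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE college (name TEXT, state TEXT);
    INSERT INTO college VALUES
        ('Standford', 'CA'), ('Berkley', 'CA'), ('MIT', 'MA');
""")

# The subquery is correlated: C1's row values feed into each evaluation.
rows = conn.execute("""
    SELECT name, state FROM college c1
    WHERE EXISTS (SELECT 1 FROM college c2
                  WHERE c1.state = c2.state AND c1.name != c2.name)
    ORDER BY name
""").fetchall()
print(rows)  # [('Berkley', 'CA'), ('Standford', 'CA')]
```

This reproduces the "true result" from the question, confirming that the reasoning "the subquery is computed first, once" is where the confusion lies.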
Okay, so I received my graded exam that I took last week, that is on SQL using Oracle Plus. Two of the questions that I answered were marked wrong, but I feel strongly that I am correct, but he is insisting that my answer is wrong... So I was hoping someone here could clarify whether or not my answer is correct! 1.) The first question stated: "Display the dog ids and names of those which were not treated in May 2003" So using previously created databases, one being called 'dog' and the other was named 'treatment'. *Note* Both tables filled with values The dog table consisted of values for the dog id's, name, etc and the treatment table consisted of treatment\_id, foreign key(dog\_id) referencing dog\_id from the table dog, treatment\_date and a few other fields... So my answer was: ``` SELECT dog.dog_id, dog.name, treatment.treatment_date FROM dog, treatment WHERE treatment.treatment_date NOT BETWEEN '01-MAY-03' and '31-MAY-03' AND dog.dog_id = treatment.dog_id; ``` But he said it was wrong... "--No, these are treated NOT IN MAY" 2.) My second question was "Find all the doctors who ever treated a dog adopted with the lowest adoption fee. Note that there may be more than one dog adopted with the lowest adoption fee. List the doctor's ids and the dog ids. Order the result by doctor ids in ascending order and for each doctor by dog ids in ascending order." So using other tables which we created before the exam, one named **'adoption'** with attributes of **'vet\_id'** a foreign key which references vet\_id from a table called vet, and **'dog\_id'** another foreign key; and also using another table named **'treatment'** with attributes **'vet\_id'** a foreign key which references vet\_id from a table called 'vet', **'dog\_id'**, and **'treatment\_fee'** \*Note - **'treatment\_fee'**'s lowest value or fee for this exam was '15'. My Answer Was: ``` SELECT treatment.vet_id, adoption.dog_id FROM adoption, treatment WHERE adoption.dog_id = treatment.dog_id AND treatment.fee = '15' ORDER BY treatment.vet_id; ``` But again he said this was wrong... So, I hope somebody here could help me and help give me an answer to this situation.
Issue #1 - your query returns "all dogs who had treatments that weren't in May", not "all dogs that did not have treatments in May". Note that if a dog had a treatment in both April and May, that dog would show up in your query. Issue #2 - the question is asking about "adoption fee", but you're looking at the lowest "treatment fee".
First answer: ``` SELECT dog.dog_id, dog.name FROM dog WHERE NOT EXISTS ( SELECT treatment.id FROM treatment WHERE tratment.dog_id = dog.id AND treatment.treatment_date BETWEEN '01-MAY-03' and '31-MAY-03' ) ``` Second one: ``` SELECT distinct treatment.vet_id, treatment.dog_id FROM adoption, treatment WHERE adoption.dog_id = treatment.dog_id AND dog.adoption_fee = (SELECT MIN(d2.adoption_fee) FROM dog d2 ) ORDER BY treatment.vet_id, treatment.dog_id; ```
SQL script, are my scripts accurate?
[ "", "mysql", "sql", "oracle", "sqlplus", "oracle-sqldeveloper", "" ]
Where can I find information about stored procedure parameters? In my situation I need to know only the input parameters of a given stored procedure. In `sys.objects` there are only common details about the procedure. In `sys.sql_modules` I can extract the whole SQL text of a procedure. In SQL Server Management Studio I am able to extract information about the parameters in a tabular view using `ALT+F1` when selecting the procedure name. I hope there is some place from which I can extract input parameter details in that way.
```
select 'Parameter_name' = name,
       'Type'   = type_name(user_type_id),
       'Length' = max_length,
       'Prec'   = case when type_name(system_type_id) = 'uniqueidentifier'
                       then precision
                       else OdbcPrec(system_type_id, max_length, precision) end,
       'Scale'  = OdbcScale(system_type_id, scale),
       'Param_order' = parameter_id,
       'Collation'   = convert(sysname,
                               case when system_type_id in (35, 99, 167, 175, 231, 239)
                                    then ServerProperty('collation') end)
from sys.parameters
where object_id = object_id('MySchema.MyProcedure')
```
```
select *
from sys.parameters
inner join sys.procedures
        on parameters.object_id = procedures.object_id
inner join sys.types
        on parameters.system_type_id = types.system_type_id
       and parameters.user_type_id = types.user_type_id
where procedures.name = 'sp_name'
```
How to get stored procedure parameters details?
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "parameters", "" ]
*EDIT* Yes, you are correct. What I am looking for in pseudo code is a list that (st\_area(geom)>0.1) OR (COUNT(\*) > 1), and in words: return a list that has only states with an area greater than 0.1, but don't exclude a state if it is the only one in the country (usually an island country, which has plenty of room for labels next to it). The excluded states are places like Slovenia, which has 100 provinces but a tiny land area (Great Britain is also an offender).

---

I have a table for the entire world listing all states and provinces (I call the table states, but it can also mean province).

StateName, ContryName, Pop, geometry

The table is on PostgreSQL 9.2 with PostGIS 2.0.

I need to remove states that are too small (by area) to label. But if it is an island (one country, one state) then I want to leave it in. My naive query is like this, but there is a syntax error:

```
SELECT s.name, s.admin, st_area(geom)
FROM vector.states s
INNER JOIN (
    SELECT ss.admin
    FROM vector.states ss
    GROUP BY ss.admin
    HAVING (COUNT(*) > 1) AND (st_area(ss.geom) > 0.01)
) a ON a.admin = s.admin
ORDER BY s.admin ASC;
```

this is the syntax error (and I expected this to happen).

```
ERROR:  column "ss.geom" must appear in the GROUP BY clause or be used in an aggregate function
LINE 7:     HAVING (COUNT(*) > 1) AND (st_area(ss.geom) > 0.01)
```
Two problems:

1. Just like the error message tells you, geom needs to be wrapped in an aggregate function if it is not listed in `GROUP BY`. You could just use `min()` ...
2. You got your logic backwards. It needs to be `COUNT(*) = 1 OR ..`

But this can be solved more elegantly with an anti-semi-join using [`NOT EXISTS`](http://www.postgresql.org/docs/current/interactive/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS):

```
SELECT s.name, s.admin, st_area(geom)
FROM   vector.states s
WHERE  st_area(s.geom) > 0.01        -- state big enough ...
OR NOT EXISTS (                      -- ... or there are no other states
   SELECT 1
   FROM   vector.states s2
   WHERE  s2.admin = s.admin
   AND    s2.pk_column <> s.pk_column  -- exclude self-join
   )
ORDER  BY s.admin;
```

Replace `pk_column` with your actual primary key column(s).
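The geometry part aside, the `OR NOT EXISTS` shape is easy to verify with a plain numeric area column; a SQLite sketch with hypothetical rows (a `pk` column stands in for the real primary key, and `area` replaces `st_area(geom)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE states (pk INTEGER PRIMARY KEY, name TEXT, admin TEXT, area REAL);
INSERT INTO states VALUES
    (1, 'BigState',   'Freedonia', 5.0),   -- big enough to label
    (2, 'TinyState',  'Freedonia', 0.001), -- too small, has siblings -> drop
    (3, 'LoneIsland', 'Atlantis',  0.001); -- too small, but sole state -> keep
""")

rows = conn.execute("""
    SELECT s.name
    FROM states s
    WHERE s.area > 0.01              -- state big enough ...
       OR NOT EXISTS (               -- ... or no other state in the country
            SELECT 1 FROM states s2
            WHERE s2.admin = s.admin AND s2.pk <> s.pk)
    ORDER BY s.name
""").fetchall()

print(rows)  # TinyState is dropped; the lone island survives
```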
Editing the answer: I didn't understand the problem the first time, sorry for that. Try this:

```
SELECT s.name, s.admin, st_area(geom)
FROM vector.states s
WHERE s.admin in (
    SELECT ss.admin
    FROM vector.states ss
    GROUP BY ss.admin
    HAVING (COUNT(*) > 1)
)
AND (st_area(s.geom) > 0.01)
ORDER BY s.admin ASC;
```

Hope it helps!
Compound OR SQL query (query #1 with WITH) and (query #2) with COUNT() exclusions
[ "", "sql", "postgresql", "postgis", "" ]
Is there a way to write a SQL query to omit columns that have all of the same values? For example, ``` row A B 1 9 0 2 7 0 3 5 0 4 2 0 ``` I'd like to return just ``` row A 1 9 2 7 3 5 4 2 ```
Although it is possible to use SQL to find out whether all rows in a column have identical values, there is no way to make a fixed SQL statement not return a column based on the content of the query.

Here is how to find out if all items in a column have identical values:

```
SELECT COUNT(DISTINCT B) = 1 FROM my_table
```

You can run a preliminary query to see if a column needs to be displayed, and then form the query dynamically, including the column only when you need it.
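A sketch of that two-step approach in Python with SQLite: a preliminary `COUNT(DISTINCT ...)` probe per column, then the final statement is built with only the columns that actually vary (the column names here are trusted literals, not user input, so string interpolation is safe):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (row INTEGER, A INTEGER, B INTEGER);
INSERT INTO my_table VALUES (1, 9, 0), (2, 7, 0), (3, 5, 0), (4, 2, 0);
""")

# Step 1: probe each candidate column - does it hold more than one value?
candidates = ["A", "B"]
varying = [col for col in candidates
           if conn.execute(f"SELECT COUNT(DISTINCT {col}) FROM my_table")
                  .fetchone()[0] > 1]

# Step 2: build the final statement with only the varying columns.
query = f"SELECT row, {', '.join(varying)} FROM my_table"
rows = conn.execute(query).fetchall()

print(varying)  # B is constant, so only A survives
print(rows)
```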
The only way to change what *columns* are returned is by executing separate queries. So you'd have to do something like:

```
IF EXISTS(SELECT null FROM myTable WHERE B <> 0)
BEGIN
    SELECT row, A, B FROM myTable
END
ELSE
BEGIN
    SELECT row, A FROM myTable
END
```

However it's *generally* bad practice to return different columns based on the data - otherwise you make the *client* determine if a particular column is in the result set first before trying to access the data. This sort of requirement is more commonly done when *displaying* the data, e.g. in a web page, in a report, etc.
SQL omit columns with all same value
[ "", "sql", "" ]
I've run into a problem. I have 3 tables; I join them but can't get the right answer. If anyone could help me, I would appreciate it. What I want is for the answer to show me unused materials.

Tables:

![enter image description here](https://i.stack.imgur.com/ot4e8.png)

With a lot of help I got all the unused materials with this:

```
SELECT DISTINCT(Materials.Material_Name) AS Unused_materials
FROM Materials
LEFT JOIN Table2 ON Table2.Material_Number = Materials.Material_Number
LEFT JOIN Table1 ON Table1.Procedure_Name = Table2.Procedure_Name
WHERE Table1.Procedure_Name IS NULL
```

**UPDATED:** If I may ask one more question in the same post: maybe anyone would know how to get the above information just for a wanted day? Like: unused materials on 2012-11-20?
Johan above almost has it (unused materials will still exist in Table2, since Table2 is just a lookup of Procedure\_Name to Material\_Number, which if it's a 1:1 relationship seems like a silly addition). You still need to join against Table1 to get the null values. So:

```
SELECT *
FROM Materials c
LEFT JOIN Table2 b ON b.Material_Number = c.Material_Number
LEFT JOIN Table1 a ON a.Procedure_Name = b.Procedure_Name
WHERE a.Procedure_Name is null;
```

All commands for repeat comparison:

```
create table Table1 (Procedure_Name char, date date);
create table Table2 (Procedure_Name char, Material_Number int);
create table Materials (Material_Name varchar(12), Material_Number int);

insert into Table1 values ('A', '2012-11-22');
insert into Table1 values ('B', '2012-11-21');
insert into Table1 values ('C', '2012-11-20');

insert into Table2 values ('A', '101');
insert into Table2 values ('B', '102');
insert into Table2 values ('C', '103');
insert into Table2 values ('D', '104');
insert into Table2 values ('E', '105');

insert into Materials values ('Iron', 101);
insert into Materials values ('Steel', 102);
insert into Materials values ('Wood', 103);
insert into Materials values ('Glass', 104);
insert into Materials values ('Sand', 105);

-- johan query
SELECT Material_Name
FROM Materials M
LEFT JOIN Table2 T ON M.Material_Number = T.Material_Number
WHERE Procedure_Name IS NULL;

-- my query
select Material_Name
from Materials c
left join Table2 b on b.Material_Number = c.Material_Number
left join Table1 a on a.Procedure_Name = b.Procedure_name
where a.Procedure_Name is null;
```
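For the date-filtered follow-up in the question (unused materials on one particular day), the same pattern works if the date predicate goes into the LEFT JOIN's ON clause instead of the WHERE clause; a SQLite sketch reusing the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Procedure_Name TEXT, date TEXT);
CREATE TABLE Table2 (Procedure_Name TEXT, Material_Number INT);
CREATE TABLE Materials (Material_Name TEXT, Material_Number INT);
INSERT INTO Table1 VALUES ('A','2012-11-22'), ('B','2012-11-21'), ('C','2012-11-20');
INSERT INTO Table2 VALUES ('A',101), ('B',102), ('C',103), ('D',104), ('E',105);
INSERT INTO Materials VALUES ('Iron',101), ('Steel',102), ('Wood',103),
                             ('Glass',104), ('Sand',105);
""")

# Only procedures run on 2012-11-20 count as "using" a material that day.
rows = conn.execute("""
    SELECT DISTINCT c.Material_Name
    FROM Materials c
    LEFT JOIN Table2 b ON b.Material_Number = c.Material_Number
    LEFT JOIN Table1 a ON a.Procedure_Name = b.Procedure_Name
                      AND a.date = '2012-11-20'
    WHERE a.Procedure_Name IS NULL
    ORDER BY c.Material_Name
""").fetchall()

print(rows)  # everything except Wood (procedure C used it that day)
```

Note that moving `a.date = '2012-11-20'` into the WHERE clause would instead turn the LEFT JOIN into an inner join and return no unused materials at all.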
Joining in Table1 and Table2 as you have it will only give you rows for used materials. What you want to do is select all materials that have NOT BEEN used. To do this, simply use the NOT IN construct with a sub-query:

```
SELECT Material_Name AS Unused_materials
FROM Materials
WHERE Materials.Material_Number NOT IN (SELECT Material_Number FROM Table2)
```

**Update:** Now that I understand the data model (only wanting materials that have NOT been used in a procedure listed in Table1), here is the correct query to use if using the NOT IN construct:

```
SELECT Material_Name AS Unused_materials
FROM Materials
WHERE Materials.Material_Number NOT IN (
    SELECT Material_Number
    FROM Table2
    INNER JOIN Table1 ON Table1.Procedure_Name = Table2.Procedure_Name
)
```
MySQL Left Join (I think)
[ "", "mysql", "sql", "database", "" ]
Trying to do a select in SQL Server 2005 and send the output to XML. Table 2 is a general use table with various types of info. Some product info is in there if it's type 2; it's a sales lead if it's type 1. We can have multiple sales leads and products for each case\_num from table 1.

**Table 1**

```
case_num, date
```

**table 2** (general use)

```
case_num, rec_type (1=sales lead; 2=product), various info based on type in generic columns = col_a, col_b,
```

I'm trying something like:

```
select case.case_num
      ,case.date
      ,product.col_a as product_name
      ,product.col_b as product_price
      ,lead.col_a as sales_lead_name
      ,lead.col_b as sales_lead_address
from table_1 case
    ,table_2 product
    ,table_2 lead
where (case.case_num = product.case_num AND product.rec_type = 2)
   OR (case.case_num = lead.case_num AND lead.rec_type = 1)
for xml auto, elements
```

This is bringing back results like

```
<case>
  <case_num>1</case_num>
  <date>1/1/2013</date>
  <product>
    <product_name>name</product_name>
    <product_price>1.00</product_price>
    <lead>
      <sales_lead_name>bob smith</sales_lead_name>
      <sales_lead_address>address 1</sales_lead_address>
    </lead>
  </product>
  <product>
    <product_name>name2</product_name>
    <product_price>2.00</product_price>
    <lead>
      <sales_lead_name>bob smith</sales_lead_name>
      <sales_lead_address>address 1</sales_lead_address>
    </lead>
  </product>
</case>
```

I don't want the name repeating for every product. With multiple products and multiple leads, how do I format the SQL so it doesn't make sort of a Cartesian product in my results?

I made another example to illustrate my problem. [SQL Fiddle example](http://sqlfiddle.com/#!3/f937f/1)

This is making a Cartesian result, matching all parts to all persons. I want to have one case, then each part, then each person, then close the case. I was trying DISTINCT and getting errors. I thought about UNION to tie the two together, but I don't think I can do that within a bigger select for my case.
What I'm getting:

```
CASE_NUM  DATE        PART_NAME  PART_PRICE  PERSON_NAME  COMPANY
1         2013-01-01  stapler    1.00        bob smith    acme supplies
1         2013-01-01  matches    2.00        bob smith    acme supplies
1         2013-01-01  stapler    1.00        john doe     john supply inc
1         2013-01-01  matches    2.00        john doe     john supply inc
```

What I want:

```
CASE_NUM  DATE        PART_NAME  PART_PRICE  PERSON_NAME  COMPANY
1         2013-01-01                         bob smith    acme supplies
1         2013-01-01                         john doe     john supply inc
1         2013-01-01  matches    2.00
1         2013-01-01  stapler    1.00
```
A friend suggested only joining once, then filtering the select based on case statements, and I think this is going to work. Thanks folks.

```
select case_num = case when child.rec_type = '1' then mast.case_num
                       when child.rec_type = '2' then mast.case_num
                       else '' end
      ,mast_date = case when child.rec_type = '1' then mast.date
                        when child.rec_type = '2' then mast.date
                        else '' end
      ,child.rec_type
      ,part_name    = case when child.rec_type = '1' then child.col_a else '' end
      ,part_price   = case when child.rec_type = '1' then child.col_b else '' end
      ,subject_name = case when child.rec_type = '2' then child.col_a else '' end
      ,subject_type = case when child.rec_type = '2' then child.col_b else '' end
from table_master mast
join table_child child on mast.case_num = child.case_num
--for xml auto, elements;
```
As @marc\_s points out, you create your Cartesian product yourself by 'joining' the tables the way you do. Always try to use `JOIN` instead. I believe the following query would fit your needs:

```
SELECT [case].case_num
      ,[case].date
      ,lead.col_a as sales_lead_name
      ,lead.col_b as sales_lead_address
      ,product.col_a as product_name
      ,product.col_b as product_price
FROM table_1 [case]
JOIN table_2 lead ON [case].case_num = lead.case_num AND lead.rec_type = 1
JOIN table_2 product ON [case].case_num = product.case_num AND product.rec_type = 2
FOR XML auto, elements;
```

You can view it on [SQLFiddle.com](http://sqlfiddle.com/#!3/5fcac/4)

The output will look like this:

```
<case>
  <case_num>1</case_num>
  <date>2013-01-01</date>
  <lead>
    <sales_lead_name>bob smith</sales_lead_name>
    <sales_lead_address>address 1</sales_lead_address>
    <product>
      <product_name>name</product_name>
      <product_price>1.00</product_price>
    </product>
    <product>
      <product_name>name2</product_name>
      <product_price>2.00</product_price>
    </product>
  </lead>
</case>
```
SQL select output to XML
[ "", "sql", "sql-server", "xml", "sql-server-2005", "" ]
I have a data set in the form:

```
id | attribute
--------------
1  | a
2  | b
2  | a
2  | a
3  | c
```

Desired output:

```
attribute | num
----------------
a         | 1
b,a       | 1
c         | 1
```

In MySQL, I would use:

```
select attribute, count(*) num
from (select id, group_concat(distinct attribute) attribute
      from dataset
      group by id) as subquery
group by attribute;
```

I am not sure this can be done in Redshift because it does not support group\_concat or any PostgreSQL grouping aggregate functions like array\_agg() or string\_agg(). See [this question](https://stackoverflow.com/questions/21084913/amazon-redshift-mechanism-for-aggregating-a-column-into-a-string). An alternate solution that would work is if there was a way for me to pick a random attribute from each group instead of group\_concat. How can this work in Redshift?
This solution, inspired by Masashi, is simpler and accomplishes selecting a random element from a group in Redshift.

```
SELECT id, first_value as attribute
FROM (SELECT id,
             FIRST_VALUE(attribute) OVER (PARTITION BY id
                                          ORDER BY random()
                                          ROWS BETWEEN unbounded preceding
                                                   AND unbounded following)
      FROM dataset)
GROUP BY id, attribute
ORDER BY id;
```
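Redshift itself can't be run here, but SQLite (3.25+) supports the same `FIRST_VALUE` window function, so the pattern can be sanity-checked; the inner alias `picked` is added because SQLite needs an explicit column alias. Since the pick is random, the checks can only assert that each id ends up with exactly one of its own attributes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dataset (id INT, attribute TEXT);
INSERT INTO dataset VALUES (1,'a'), (2,'b'), (2,'a'), (2,'a'), (3,'c');
""")

rows = conn.execute("""
    SELECT id, picked AS attribute
    FROM (SELECT id,
                 FIRST_VALUE(attribute) OVER (
                     PARTITION BY id
                     ORDER BY random()
                     ROWS BETWEEN UNBOUNDED PRECEDING
                              AND UNBOUNDED FOLLOWING) AS picked
          FROM dataset) AS sub
    GROUP BY id, picked
    ORDER BY id
""").fetchall()

print(rows)  # one (id, attribute) pair per id; id 2 gets 'a' or 'b' at random
```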
I found a way to pick a random attribute for each id, but it's too tricky. Actually I don't think it's a good way, but it works.

SQL:

```
-- (1) uniq dataset
WITH uniq_dataset as (select * from dataset group by id, attr)
SELECT uds.id, rds.attr
FROM
  -- (2) generate random rank for each id
  (select id,
          round((random() * ((select count(*) from uniq_dataset iuds
                              where iuds.id = ouds.id) - 1))::numeric, 0) + 1 as random_rk
   from (select distinct id from uniq_dataset) ouds) uds,
  -- (3) rank table
  (select rank() over(partition by id order by attr) as rk, id, attr
   from uniq_dataset) rds
WHERE uds.id = rds.id
  AND uds.random_rk = rds.rk
ORDER BY uds.id;
```

Result:

```
 id | attr
----+------
  1 | a
  2 | a
  3 | c

OR

 id | attr
----+------
  1 | a
  2 | b
  3 | c
```

Here are the tables in this SQL.

```
-- dataset (original table)
 id | attr
----+------
  1 | a
  2 | b
  2 | a
  2 | a
  3 | c

-- (1) uniq dataset
 id | attr
----+------
  1 | a
  2 | a
  2 | b
  3 | c

-- (2) generate random rank for each id
 id | random_rk
----+-----------
  1 | 1
  2 | 1   <- 1 or 2
  3 | 1

-- (3) rank table
 rk | id | attr
----+----+------
  1 |  1 | a
  1 |  2 | a
  2 |  2 | b
  1 |  3 | c
```
Pick a random attribute from group in Redshift
[ "", "sql", "amazon-redshift", "" ]
How do I exclude records with certain values in SQL (MySQL)?

```
Col1   Col2
-----  -----
A      1
A      20
B      1
C      20
C      1
C      88
D      1
D      20
D      3
D      1000
E      19
E      1
```

Return Col1 (and Col2), but only if the value in Col2 is 1 or 20, and not if there's also another value (other than 1 or 20).

Desired result:

```
Col1   Col2
-----  -----
A      1
A      20
B      1
```

But not C, D and E, because there's a value in Col2 other than 1 or 20.

I've used fictitious values for Col2 and only two values (1 and 20), but in reality there are some more. I can use IN ('1', '20') for the values 1 and 20, but how do I exclude a group if there's also another value in Col2? (There's no range!)
```
SELECT col1, col2
FROM table
WHERE col1 NOT IN (SELECT col1 FROM table WHERE col2 NOT IN (1, 20))
```
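A quick SQLite check of this double-negative pattern against the question's sample data (the table is named `t` here because `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 TEXT, col2 INT);
INSERT INTO t VALUES ('A',1),('A',20),('B',1),('C',20),('C',1),('C',88),
                     ('D',1),('D',20),('D',3),('D',1000),('E',19),('E',1);
""")

# Keep only col1 groups that have NO value outside (1, 20).
rows = conn.execute("""
    SELECT col1, col2
    FROM t
    WHERE col1 NOT IN (SELECT col1 FROM t WHERE col2 NOT IN (1, 20))
    ORDER BY col1, col2
""").fetchall()

print(rows)  # C, D and E disappear because of 88, 3/1000 and 19
```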
Use `SUM()`:

```
SELECT *
FROM t
INNER JOIN (SELECT SUM(IF(Col2 IN (1, 20), 1, -1)) AS ranges, col1
            FROM t
            GROUP BY col1
            HAVING ranges = 2) as counts ON counts.col1 = t.col1
```

*Update*: while it will work for a non-repeated list, it may result in a wrong set for a table with repeated values (i.e. `1`, `20`, `20`, `1` in the column - it will still fit the request if repeats are allowed, but you've not mentioned that). For the case with repeats there's a way too:

```
SELECT t.*
FROM t
INNER JOIN (SELECT col1, col2
            FROM t
            GROUP BY col1
            HAVING COUNT(DISTINCT col2) = 2 AND col2 IN (1, 20)) AS counts
        ON t.col1 = counts.col1
```

(and that will work in the common case too, of course)
How to exclude records with certain values in sql
[ "", "mysql", "sql", "" ]
I'm working in SQL Server 2008, and I'm using the following query to retrieve the records:

```
SELECT Var_AssoId, Var_Geo, Var_Vertical, Var_AccountID,
       Dt_VisaValidFrom, Dt_VisaValidTill, Var_Grade, Var_ProjectID,
       Bit_SupervisorResponse, a.Int_CommentID, Var_CommentsEntered, Dt_Date,
       Bit_MailUploadStatus, Var_MailUploadPath, a.Dt_UpdatedOn, Var_UpdatedBy,
       b.Var_SupervisorComments
FROM Testingpmo_Travelready_SupervisorInput a
INNER JOIN Testingpmo_Travelready_SupervisorComments b
        ON a.Int_CommentID = b.Int_CommentID
ORDER BY a.Dt_UpdatedOn
```

Following is the output of this query:

```
Var_AssoId  Int_CommentID  Dt_UpdatedOn
251922      2              9/25/13 5:22 PM
305561      2              9/25/13 5:24 PM
109483      1              9/25/13 5:24 PM
305561      4              9/25/13 6:09 PM
109483      3              10/1/13 12:44 PM
109483      3              10/1/13 12:47 PM
109483      3              10/1/13 12:48 PM
109483      3              10/1/13 12:51 PM
109483      3              10/1/13 2:23 PM
```

I want to get only the single most recently updated record for each `Var_AssoId`. For example, the output of the query should be like:

```
Var_AssoId  Int_CommentID  Dt_UpdatedOn
251922      2              9/25/13 5:22 PM
305561      4              9/25/13 6:09 PM
109483      3              10/1/13 2:23 PM
```

What do I need to add to the query to get this output?
Assuming your table structures are:

```
create table Testingpmo_Travelready_SupervisorInput(
    Var_AssoId int, Var_Geo int, Var_Vertical int, Var_AccountID int,
    Dt_VisaValidFrom datetime, Dt_VisaValidTill datetime, Var_Grade char(1),
    Var_ProjectID int, Bit_SupervisorResponse bit, Int_CommentID int,
    Var_CommentsEntered varchar(10), Dt_Date datetime, Bit_MailUploadStatus bit,
    Var_MailUploadPath varchar(10), Dt_UpdatedOn datetime, Var_UpdatedBy varchar(10));
```

and

```
create table Testingpmo_Travelready_SupervisorComments(
    Var_SupervisorComments varchar(10), Int_CommentID int);
```

you can write a query as:

```
SELECT a.Var_AssoId, Var_Geo, Var_Vertical, Var_AccountID,
       Dt_VisaValidFrom, Dt_VisaValidTill, Var_Grade, Var_ProjectID,
       Bit_SupervisorResponse, a.Int_CommentID, Var_CommentsEntered, Dt_Date,
       Bit_MailUploadStatus, Var_MailUploadPath, a.Dt_UpdatedOn, Var_UpdatedBy,
       b.Var_SupervisorComments
FROM Testingpmo_Travelready_SupervisorInput a
INNER JOIN Testingpmo_Travelready_SupervisorComments b
        ON a.Int_CommentID = b.Int_CommentID
INNER JOIN (SELECT t1.Var_AssoId, MAX(t1.Dt_UpdatedOn) AS Dt_UpdatedOn
            FROM Testingpmo_Travelready_SupervisorInput t1
            GROUP BY t1.Var_AssoId) T
        ON a.Var_AssoId = T.Var_AssoId
       AND a.Dt_UpdatedOn = T.Dt_UpdatedOn
ORDER BY a.Dt_UpdatedOn
```
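The join-on-MAX pattern in miniature, as a SQLite sketch (a hypothetical trimmed-down table with just the three columns from the sample output; datetimes are kept as ISO strings so MAX and equality comparisons work lexically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE input (Var_AssoId INT, Int_CommentID INT, Dt_UpdatedOn TEXT);
INSERT INTO input VALUES
    (251922, 2, '2013-09-25 17:22'),
    (305561, 2, '2013-09-25 17:24'),
    (109483, 1, '2013-09-25 17:24'),
    (305561, 4, '2013-09-25 18:09'),
    (109483, 3, '2013-10-01 12:44'),
    (109483, 3, '2013-10-01 14:23');
""")

# Join each row against the per-id maximum date; only the latest rows survive.
rows = conn.execute("""
    SELECT a.Var_AssoId, a.Int_CommentID, a.Dt_UpdatedOn
    FROM input a
    INNER JOIN (SELECT Var_AssoId, MAX(Dt_UpdatedOn) AS Dt_UpdatedOn
                FROM input
                GROUP BY Var_AssoId) t
            ON a.Var_AssoId = t.Var_AssoId
           AND a.Dt_UpdatedOn = t.Dt_UpdatedOn
    ORDER BY a.Dt_UpdatedOn
""").fetchall()

print(rows)  # one row per Var_AssoId, each the most recent
```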
Query:

```
SELECT a.Var_AssoId, a.Var_Geo, a.Var_Vertical, a.Var_AccountID,
       a.Dt_VisaValidFrom, a.Dt_VisaValidTill, a.Var_Grade, a.Var_ProjectID,
       a.Bit_SupervisorResponse, a.Int_CommentID, a.Var_CommentsEntered, a.Dt_Date,
       a.Bit_MailUploadStatus, a.Var_MailUploadPath, a.Dt_UpdatedOn, a.Var_UpdatedBy,
       a.Var_SupervisorComments
FROM (
    SELECT Var_AssoId, Var_Geo, Var_Vertical, Var_AccountID,
           Dt_VisaValidFrom, Dt_VisaValidTill, Var_Grade, Var_ProjectID,
           Bit_SupervisorResponse, a.Int_CommentID, Var_CommentsEntered, Dt_Date,
           Bit_MailUploadStatus, Var_MailUploadPath, a.Dt_UpdatedOn, Var_UpdatedBy,
           b.Var_SupervisorComments,
           ROW_NUMBER() OVER (PARTITION BY Var_AssoId
                              ORDER BY a.Dt_UpdatedOn DESC) rnk
    FROM Testingpmo_Travelready_SupervisorInput a
    INNER JOIN Testingpmo_Travelready_SupervisorComments b
            ON a.Int_CommentID = b.Int_CommentID
) a
WHERE a.rnk = 1
ORDER BY a.Dt_UpdatedOn
```
Query to select latest updated record
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2008-r2", "" ]
I've been trying to construct a query to grab columns from 3 tables, but I keep getting mismatched rows.

These are my tables:

Messages

- messageID
- sender (can either be a customerID or employeeID)
- receiver (can either be a customerID or employeeID)

**(Note: each message involves only 1 customer and 1 employee, i.e. customers don't interact with each other and employees don't message each other either)**

Customer

- customerID

Employee

- employeeID
- departmentID

DEPARTMENT

- departentID
- departmentName

For a particular customer with customerID = 5, I want to figure out the DepartmentName of the employee they were talking to.

My initial attempt at this was:

```
SELECT *
FROM Messages, Employee, Department
WHERE sender = '5' OR receiver = '5'
  AND (Employee.employeeID = Messages.sender OR Employee.employeeID = Messages.Receiver)
  AND Employee.departmentID = Department.DepartmentID;
```

However this returns way more rows than expected. I think it's because the sender or receiver can potentially be the employeeID. My 2nd guess is that maybe I have to join tables, but I don't have much experience with this. If anyone could show me or tell me how to perform this query I would appreciate it.
Did you try this:

```
SELECT *
FROM Messages m
INNER JOIN Employee e
        ON m.sender = e.employeeID OR m.receiver = e.employeeID
INNER JOIN Department d
        ON d.departmentID = e.departmentID
WHERE m.sender = '5' OR m.receiver = '5'
```
The fundamental problem is that you are missing parentheses around this clause:

```
sender = '5' OR receiver = '5'
```

However, I would recommend that you use ANSI-style joins to make the query more readable too. This should help:

```
SELECT *
FROM Messages
INNER JOIN Employee
        ON (Employee.employeeID = Messages.sender OR Employee.employeeID = Messages.Receiver)
INNER JOIN Department
        ON Employee.departmentID = Department.DepartmentID
WHERE (Messages.sender = '5' OR Messages.receiver = '5');
```
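The precedence point generalizes: AND binds tighter than OR, so the unparenthesized filter is really `sender = '5' OR (receiver = '5' AND ...)`. A SQLite toy table (hypothetical, standing in for the joined result) shows the difference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE msgs (sender TEXT, receiver TEXT, dept TEXT);
INSERT INTO msgs VALUES
    ('5',  'e1', 'Sales'),    -- customer 5 wrote, but dept is Sales
    ('e2', '5',  'Support');  -- customer 5 received, dept is Support
""")

# Without parentheses: parsed as  sender='5' OR (receiver='5' AND dept='Support')
no_parens = conn.execute("""
    SELECT dept FROM msgs
    WHERE sender = '5' OR receiver = '5' AND dept = 'Support'
    ORDER BY dept
""").fetchall()

# With parentheses: the dept filter applies to BOTH sides of the OR
parens = conn.execute("""
    SELECT dept FROM msgs
    WHERE (sender = '5' OR receiver = '5') AND dept = 'Support'
    ORDER BY dept
""").fetchall()

print(no_parens)  # the Sales row leaks past the dept filter
print(parens)     # only the Support row
```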
What kind of JOIN or WHERE clause do I need?
[ "", "mysql", "sql", "join", "where-clause", "" ]
I came across two queries which seem to have the same result: applying an aggregate function over a partition. I am wondering if there is any difference between these two queries:

```
SELECT empno, deptno, sal,
       MIN(sal) OVER (PARTITION BY deptno) "Lowest",
       MAX(sal) OVER (PARTITION BY deptno) "Highest"
FROM empl
```

```
SELECT empno, deptno, sal,
       MIN(sal) KEEP (DENSE_RANK FIRST ORDER BY sal) OVER (PARTITION BY deptno) "Lowest",
       MAX(sal) KEEP (DENSE_RANK LAST ORDER BY sal) OVER (PARTITION BY deptno) "Highest"
FROM empl
```

The first version is more logical, but the second one may be some kind of special case, maybe a performance optimization.
In your example, there's no difference, because your aggregate is on the same column that you are sorting on. The real point/power of "KEEP" is when you aggregate and sort on *different* columns. For example (borrowing the "test" table from the other answer)...

```
SELECT deptno,
       min(name) keep ( dense_rank first order by sal desc, name ),
       max(sal)
FROM test
GROUP BY deptno;
```

This query gets the name of the person with the highest salary in each department. Consider the alternative without a "KEEP" clause:

```
SELECT deptno, name, sal
FROM test t
WHERE not exists (
    SELECT 'person with higher salary in same department'
    FROM test t2
    WHERE t2.deptno = t.deptno
      AND (   ( t2.sal > t.sal )
           OR ( t2.sal = t.sal AND t2.name < t.name ) )
)
```

The KEEP clause is easier and more efficient (only 3 consistent gets vs 34 gets for the alternative, in this simple example).
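`KEEP` is Oracle-only, but the same "top earner per department" result can be cross-checked portably with a ROW_NUMBER window function; a SQLite sketch with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (name TEXT, sal INT, deptno INT);
INSERT INTO test VALUES ('a',1,1),('b',2,1),('c',3,1),('d',4,2),('e',5,2);
""")

# Rank rows per department by salary (highest first, name breaks ties)
# and keep only the top row - the portable analogue of
# min(name) keep (dense_rank first order by sal desc, name).
rows = conn.execute("""
    SELECT deptno, name, sal
    FROM (SELECT deptno, name, sal,
                 ROW_NUMBER() OVER (PARTITION BY deptno
                                    ORDER BY sal DESC, name) AS rn
          FROM test) AS ranked
    WHERE rn = 1
    ORDER BY deptno
""").fetchall()

print(rows)  # the top earner of each department
```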
```
MIN(sal) KEEP (DENSE_RANK FIRST ORDER BY sal) OVER (PARTITION BY deptno)
```

The statement can be considered in (roughly) right-to-left order:

* `OVER (PARTITION BY deptno)` means partition the rows into distinct groups of `deptno`; then
* `ORDER BY sal` means, for each partition, order the rows by `sal` (implicitly using `ASC`ending order); then
* `KEEP (DENSE_RANK FIRST` means give a (consecutive) ranking to the ordered rows for each partition (rows with identical values for the ordering columns will be given the same rank) and discard all rows which are not ranked first; and finally
* `MIN(sal)` for the remaining rows of each partition, return the minimum salary.

In this case the `MIN` and `DENSE_RANK FIRST` are both operating on the `sal` column so will do the same thing and the `KEEP (DENSE_RANK FIRST ORDER BY sal)` is redundant. However if you use a different column for the minimum then you can see the effects:

[SQL Fiddle](http://sqlfiddle.com/#!4/1897b/1)

**Oracle 11g R2 Schema Setup**:

```
CREATE TABLE test (name, sal, deptno) AS
SELECT 'a', 1, 1 FROM DUAL UNION ALL
SELECT 'b', 1, 1 FROM DUAL UNION ALL
SELECT 'c', 1, 1 FROM DUAL UNION ALL
SELECT 'd', 2, 1 FROM DUAL UNION ALL
SELECT 'e', 3, 1 FROM DUAL UNION ALL
SELECT 'f', 3, 1 FROM DUAL UNION ALL
SELECT 'g', 4, 2 FROM DUAL UNION ALL
SELECT 'h', 4, 2 FROM DUAL UNION ALL
SELECT 'i', 5, 2 FROM DUAL UNION ALL
SELECT 'j', 5, 2 FROM DUAL;
```

**Query 1**:

```
SELECT DISTINCT
       MIN(sal)  KEEP (DENSE_RANK FIRST ORDER BY sal) OVER (PARTITION BY deptno) AS min_sal_first_sal,
       MAX(sal)  KEEP (DENSE_RANK FIRST ORDER BY sal) OVER (PARTITION BY deptno) AS max_sal_first_sal,
       MIN(name) KEEP (DENSE_RANK FIRST ORDER BY sal) OVER (PARTITION BY deptno) AS min_name_first_sal,
       MAX(name) KEEP (DENSE_RANK FIRST ORDER BY sal) OVER (PARTITION BY deptno) AS max_name_first_sal,
       MIN(name) KEEP (DENSE_RANK LAST ORDER BY sal)  OVER (PARTITION BY deptno) AS min_name_last_sal,
       MAX(name) KEEP (DENSE_RANK LAST ORDER BY sal)  OVER (PARTITION BY deptno) AS max_name_last_sal,
       deptno
FROM test
```

**[Results](http://sqlfiddle.com/#!4/1897b/1/0)**:

```
| MIN_SAL_FIRST_SAL | MAX_SAL_FIRST_SAL | MIN_NAME_FIRST_SAL | MAX_NAME_FIRST_SAL | MIN_NAME_LAST_SAL | MAX_NAME_LAST_SAL | DEPTNO |
|-------------------|-------------------|--------------------|--------------------|-------------------|-------------------|--------|
|                 1 |                 1 | a                  | c                  | e                 | f                 |      1 |
|                 4 |                 4 | g                  | h                  | i                 | j                 |      2 |
```
PARTITION BY with and without KEEP in Oracle
[ "", "sql", "oracle", "" ]
This is my table:

```
CREATE TABLE [dbo].[marks_581](
    [Name] [varchar](30) NOT NULL,
    [Subject] [varchar](30) NOT NULL,
    [Marks] [int] NOT NULL
)

INSERT INTO marks_581
select 'Dishant','English',40 union all
select 'Dishant','Maths',45 union all
select 'Dishant','Hindi',49 union all
select 'Pranay','English',41 union all
select 'Pranay','Maths',45 union all
select 'Pranay','Hindi',50 union all
select 'Gautham','English',41 union all
select 'Gautham','Maths',45 union all
select 'Gautham','Hindi',49

select * from marks_581
```

And my output should be like the one below:

```
Name   |Subject1|Marks1|Subject2|marks2|Subject3|marks3
Dishant|English |40    |Maths   |45    |hindi   |49
Gautham|English |40    |Maths   |45    |hindi   |49
Pranay |English |40    |Maths   |45    |hindi   |49
```

This is my query, and I am not able to get the subject name in the subject columns. It says something like "couldn't convert varchar to data type int". Can anyone help me?

```
SELECT Name,
       SUM(CASE WHEN Subject= 'English' THEN Subject ELSE 0 END) AS Subject1,
       SUM(CASE WHEN Subject= 'English' THEN Marks ELSE 0 END) AS Marks1,
       SUM(CASE WHEN Subject= 'Maths' THEN Subject ELSE 0 END) AS Subject2,
       SUM(CASE WHEN Subject= 'Maths' THEN Marks ELSE 0 END) AS Marks2,
       SUM(CASE WHEN Subject= 'Hindi' THEN Subject ELSE 0 END) AS Subject3,
       SUM(CASE WHEN Subject= 'Hindi' THEN Marks ELSE 0 END) AS Marks3
FROM marks_581
GROUP BY Name
ORDER BY Name
```
Use a different aggregate function, such as `MAX()`, for non-numeric data:

```
SELECT Name,
       MAX(CASE WHEN Subject= 'English' THEN Subject END) AS Subject1,
       SUM(CASE WHEN Subject= 'English' THEN Marks ELSE 0 END) AS Marks1,
       MAX(CASE WHEN Subject= 'Maths' THEN Subject END) AS Subject2,
       SUM(CASE WHEN Subject= 'Maths' THEN Marks ELSE 0 END) AS Marks2,
       MAX(CASE WHEN Subject= 'Hindi' THEN Subject END) AS Subject3,
       SUM(CASE WHEN Subject= 'Hindi' THEN Marks ELSE 0 END) AS Marks3
FROM marks_581
GROUP BY Name
ORDER BY Name
```

Although, given the nature of this query, we know we can simplify it further:

```
SELECT Name,
       'English' AS Subject1,
       SUM(CASE WHEN Subject= 'English' THEN Marks ELSE 0 END) AS Marks1,
       'Maths' AS Subject2,
       SUM(CASE WHEN Subject= 'Maths' THEN Marks ELSE 0 END) AS Marks2,
       'Hindi' AS Subject3,
       SUM(CASE WHEN Subject= 'Hindi' THEN Marks ELSE 0 END) AS Marks3
FROM marks_581
GROUP BY Name
ORDER BY Name
```
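As a sanity check of the MAX-for-text approach, a SQLite sketch with the question's data (trimmed to two subjects to keep it short); the conversion error disappears because no string ever reaches SUM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE marks_581 (Name TEXT, Subject TEXT, Marks INT);
INSERT INTO marks_581 VALUES
    ('Dishant','English',40), ('Dishant','Maths',45), ('Dishant','Hindi',49),
    ('Pranay','English',41),  ('Pranay','Maths',45),  ('Pranay','Hindi',50),
    ('Gautham','English',41), ('Gautham','Maths',45), ('Gautham','Hindi',49);
""")

# MAX over a CASE with no ELSE ignores the NULLs, so each pivot cell
# picks up exactly the matching subject's name/marks.
rows = conn.execute("""
    SELECT Name,
           MAX(CASE WHEN Subject = 'English' THEN Subject END) AS Subject1,
           SUM(CASE WHEN Subject = 'English' THEN Marks ELSE 0 END) AS Marks1,
           MAX(CASE WHEN Subject = 'Maths' THEN Subject END) AS Subject2,
           SUM(CASE WHEN Subject = 'Maths' THEN Marks ELSE 0 END) AS Marks2
    FROM marks_581
    GROUP BY Name
    ORDER BY Name
""").fetchall()

print(rows)  # one pivoted row per student
```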
```
SELECT Name,
       MAX(CASE WHEN Subject= 'English' THEN Subject ELSE NULL END) AS Subject1,
       SUM(CASE WHEN Subject= 'English' THEN Marks ELSE 0 END) AS Marks1,
       MAX(CASE WHEN Subject= 'Maths' THEN Subject ELSE NULL END) AS Subject2,
       SUM(CASE WHEN Subject= 'Maths' THEN Marks ELSE 0 END) AS Marks2,
       MAX(CASE WHEN Subject= 'Hindi' THEN Subject ELSE NULL END) AS Subject3,
       SUM(CASE WHEN Subject= 'Hindi' THEN Marks ELSE 0 END) AS Marks3
FROM marks_581
GROUP BY Name
ORDER BY Name
```

(Note: the `ELSE` branch must yield NULL, not `Subject`; otherwise `MAX` would simply return the alphabetically last subject for every column.)
Small change in SQL Query
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have the following two sets of data and I want to combine them into one whole set (summing one of the columns). In this example, only 1 item is shown.

1st set (master table):

```
start_date | item
------------------
2013-10-01 | 1
2013-10-15 | 1
2013-10-25 | 1
```

2nd set (detail table):

```
working_date | item_no | qty
----------------------------
2013-10-03   | 1       | 2
2013-10-05   | 1       | 6
2013-10-18   | 1       | 3
2013-10-23   | 1       | 4
2013-10-27   | 1       | 6
2013-10-28   | 1       | 3
```

Then, I want to group the data in the detail table together with the master table according to the starting date and sum the qty as follows.

Final table (result):

```
starting_date | item_no | total_qty
-----------------------------------
2013-10-01    | 1       | 8
2013-10-15    | 1       | 7
2013-10-25    | 1       | 9
```

My difficulty is that I do not know how to match the working\_date into the starting\_date range before summing the qty. In other words, I need to sum the qty from 2013-10-01 to 2013-10-14, from 2013-10-15 to 2013-10-24, and from 2013-10-25 to 2013-10-31.
For each record in the detail table, you need to find the nearest previous date from the master table. Once you find that, just aggregate on that date.

```
select starting_date, sum(qty)
from (
    select working_date, item_no, qty,
           (select max(master_table.starting_date)
            from master_table
            where master_table.starting_date <= details_table.working_date
              and master_table.item_no = details_table.item_no
           ) as starting_date
    from details_table
) as modified_table
group by starting_date
order by starting_date;
```

[Demo](http://sqlfiddle.com/#!4/48986/3).
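The nearest-previous-date binning can be checked in SQLite with the question's numbers (ISO date strings compare correctly as text); the expected totals are 8, 7 and 9:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE master_table (starting_date TEXT, item_no INT);
CREATE TABLE details_table (working_date TEXT, item_no INT, qty INT);
INSERT INTO master_table VALUES
    ('2013-10-01',1), ('2013-10-15',1), ('2013-10-25',1);
INSERT INTO details_table VALUES
    ('2013-10-03',1,2), ('2013-10-05',1,6), ('2013-10-18',1,3),
    ('2013-10-23',1,4), ('2013-10-27',1,6), ('2013-10-28',1,3);
""")

# Tag each detail row with the latest master date at or before it,
# then aggregate per bin.
rows = conn.execute("""
    SELECT starting_date, SUM(qty)
    FROM (SELECT qty,
                 (SELECT MAX(m.starting_date)
                  FROM master_table m
                  WHERE m.starting_date <= d.working_date
                    AND m.item_no = d.item_no) AS starting_date
          FROM details_table d) AS binned
    GROUP BY starting_date
    ORDER BY starting_date
""").fetchall()

print(rows)  # totals per starting_date bin
```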
This will work for you:

```
SELECT start_date, item_no, sum(qty) as qty
FROM #mastertable as aa
INNER JOIN #detailtable as bb ON aa.item = bb.item_no
WHERE (start_date <= working_date
       AND working_date < (SELECT TOP 1 start_date
                           FROM #mastertable as innerT
                           WHERE innerT.start_date > aa.start_date
                           ORDER BY innerT.start_date))
   OR (working_date >= (SELECT TOP 1 start_date
                        FROM #mastertable as innerT
                        ORDER BY innerT.start_date DESC)
       AND start_date = (SELECT TOP 1 start_date
                         FROM #mastertable as innerT
                         ORDER BY innerT.start_date DESC))
GROUP BY start_date, item_no
ORDER BY start_date
```
Grouping 2 sets of data into 1 set using the same key
[ "", "sql", "oracle", "group-by", "" ]
Assuming I have the following data and I want to collect the most recent date's info, what would be the best way to write SQL using the max function or any other function? FYI - I only have read-only access to the server and cannot create temp tables or anything of the sort. Thanks.

```
NAME  ID      CODE    DATE
A     Z1      780.52  11/14/11
A     Z1      780.54  10/31/11
A     Z1      780.54  10/24/11
AB    Z22     327.23  12/09/11
AB    Z22     327.23  11/02/11
AB    Z22     327.23  09/13/11
B     Z55     327.23  01/06/11
C     Z77     327.23  01/04/11
DD    Z888    327.23  10/31/11
DD    Z888    327.23  10/24/11
DD    Z888    327.23  10/06/11
DD    Z888    327.23  08/08/11
DD    Z888    780.54  07/28/11
DD    Z888    327.23  07/19/11
EE    Z2323   327.23  03/17/11
EE    Z2323   327.23  02/24/11
EE    Z2323   780.54  02/13/11
FF    Z99     327.23  07/07/11
FF    Z99     780.54  06/28/11
II    Z963    327.23  09/19/11
II    Z963    327.23  08/30/11
II    Z963    327.23  06/29/11
II    Z963    780.54  06/29/11
II    Z963    780.54  06/14/11
L     Z99999  327.23  09/16/11
NN    Z9870   327.23  11/23/11
NN    Z9870   327.23  10/06/11
NN    Z9870   327.23  06/07/11
NN    Z9870   780.54  01/18/11
NN    Z9870   780.54  01/11/11
```
``` SELECT tbl.Name, tbl.Id, tbl.Code, tbl.date FROM tbl JOIN ( SELECT Name, Id, Code, MAX(date) AS Max_Date FROM tbl GROUP BY Name, Id, Code) tbl_max ON tbl.Name=tbl_max.Name AND tbl.Id=tbl_max.Id AND tbl.Code=tbl_max.Code AND tbl.Date=tbl_max.Max_Date ```
[SQL Fiddle](http://sqlfiddle.com/#!4/e34f2/3) **Query 1**: ``` SELECT MIN( NAME ) KEEP ( DENSE_RANK LAST ORDER BY "DATE" ASC ) AS NAME, MIN( ID ) KEEP ( DENSE_RANK LAST ORDER BY "DATE" ASC ) AS ID, MIN( CODE ) KEEP ( DENSE_RANK LAST ORDER BY "DATE" ASC ) AS CODE, MIN("DATE") KEEP ( DENSE_RANK LAST ORDER BY "DATE" ASC ) AS "DATE" FROM tbl ``` **[Results](http://sqlfiddle.com/#!4/e34f2/3/0)**: ``` | NAME | ID | CODE | DATE | |------|------|--------|---------------------------------| | AB | Z22 | 327.23 | December, 09 2011 00:00:00+0000 | ```
How to use max function on oracle SQL based on below data?
[ "", "sql", "oracle", "" ]
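The accepted answer's self-join is the classic greatest-per-group pattern. A runnable sketch of the same idea, using SQLite via Python purely for illustration (the question is Oracle; the table name `tbl`, the simplified columns, and the ISO date strings are assumptions for the demo, and grouping here is per (name, id) while the accepted answer also includes Code in the group):

```python
import sqlite3

# In-memory sample resembling the question's data: keep, per (name, id),
# only the row with the most recent date. ISO dates sort correctly as text,
# which is why the sample uses yyyy-mm-dd instead of mm/dd/yy.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (name TEXT, id TEXT, code REAL, dt TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?)", [
    ("A",  "Z1",  780.52, "2011-11-14"),
    ("A",  "Z1",  780.54, "2011-10-31"),
    ("AB", "Z22", 327.23, "2011-12-09"),
    ("AB", "Z22", 327.23, "2011-11-02"),
])

# Join the table back to the per-group maximum date.
rows = con.execute("""
    SELECT t.name, t.id, t.code, t.dt
    FROM tbl t
    JOIN (SELECT name, id, MAX(dt) AS max_dt
          FROM tbl GROUP BY name, id) m
      ON t.name = m.name AND t.id = m.id AND t.dt = m.max_dt
    ORDER BY t.name
""").fetchall()
print(rows)
```

No temp table is created, matching the asker's read-only constraint: the per-group maximum lives only in the derived table of the join.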
According to some weird requirement, i need to select the record where all the output values in both the columns should be unique. Input looks like this: ``` col1 col2 1 x 1 y 2 x 2 y 3 x 3 y 3 z ``` Expected Output is: ``` col1 col2 1 x 2 y 3 z ``` or ``` col1 col2 1 y 2 x 3 z ``` I tried applying the distinct on 2 fields but that returns all the records as overall they are distinct on both the fields. What we want to do is that if any value is present in the col1, then it cannot be repeated in the col2. Please let me know if this is even possible and if yes, how to go about it.
Great problem! Armunin has picked up on the deeper structural issue here, this is a recursive enumerable problem description and can only be resolved with a recursive solution - base relational operators (join/union/etc) are not going to get you there. As Armunin cited, one approach is to bring out the PL/SQL, and though I haven't checked it in detail, I'd assume the PL/SQL code will work just fine. However, Oracle is kind enough to support recursive SQL, through which we can build the solution in just SQL: -- Note - this SQL will generate *every* solution - you will need to filter for SOLUTION\_NUMBER=1 at the end ``` with t as ( select 1 col1, 'x' col2 from dual union all select 1 col1, 'y' col2 from dual union all select 2 col1, 'x' col2 from dual union all select 2 col1, 'y' col2 from dual union all select 3 col1, 'x' col2 from dual union all select 3 col1, 'y' col2 from dual union all select 3 col1, 'z' col2 from dual ), t0 as (select t.*, row_number() over (order by col1) id, dense_rank() over (order by col2) c2_rnk from t), -- recursive step... t1 (c2_rnk,ids, str) as (-- base row select c2_rnk, '('||id||')' ids, '('||col1||')' str from t0 where c2_rnk=1 union all -- induction select t0.c2_rnk, ids||'('||t0.id||')' ids, str||','||'('||t0.col1||')' from t1, t0 where t0.c2_rnk = t1.c2_rnk+1 and instr(t1.str,'('||t0.col1||')') =0 ), t2 as (select t1.*, rownum solution_number from t1 where c2_rnk = (select max(c2_rnk) from t1) ) select solution_number, col1, col2 from t0, t2 where instr(t2.ids,'('||t0.id||')') <> 0 order by 1,2,3 SOLUTION_NUMBER COL1 COL2 1 1 x 1 2 y 1 3 z 2 1 y 2 2 x 2 3 z ```
You can use a full outer join to merge two numbered lists together: ``` SELECT col1, col2 FROM ( SELECT col1, ROW_NUMBER() OVER ( ORDER BY col1 ) col1_num FROM your_table GROUP BY col1 ) FULL JOIN ( SELECT col2, ROW_NUMBER() OVER ( ORDER BY col2 ) col2_num FROM your_table GROUP BY col2 ) ON col1_num = col2_num ``` Change ORDER BY if you require a different order and use ORDER BY NULL if you're happy to let Oracle decide.
Apply the distinct on 2 fields and also fetch the unique data for each columns
[ "", "sql", "oracle", "" ]
I am trying to run this query and every time I try to fix it I get all sorts of different errors, I believe this to be the correct syntax but it keeps telling me I have an invalid date identifier. Ive looked online but can't quite find what I'm trying to do here. ``` SELECT CUST_FNAME, CUST_LNAME, CUST_STREET, CUST_CITY, CUST_STATE, CUST_ZIP FROM LGCUSTOMER, LGPRODUCT, LGBRAND WHERE BRAND_NAME = 'FORESTERS BEST' AND INV_DATE BETWEEN '15-JUL-11' AND '31-JUL-11' ORDER BY CUST_STATE, CUST_LNAME, CUST_FNAME; ```
It seems that the data type of the column `INV_DATE` is `DATE`. If so, you should provide a `DATE` value to the `BETWEEN` condition. You can check the `NLS_DATE_FORMAT` parameter of your database. It specifies the default date format for the `TO_CHAR` and `TO_DATE` functions. If the `NLS_DATE_FORMAT` is `DD-MON-YY` or `DD-MON-RR`, your query should run ok. Otherwise, you should change the date strings in your query to follow the `NLS_DATE_FORMAT`, or you can use the `TO_DATE` function like the following. ``` ... AND INV_DATE BETWEEN TO_DATE('15-JUL-11', 'DD-MON-RR') AND TO_DATE('31-JUL-11', 'DD-MON-RR') ... ``` To read more about `NLS_DATE_FORMAT`, check the [oracle documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28320/initparams137.htm#REFRN10119).
In my version of Oracle I use `'15 Jul 11'`, but this is setting-dependent. To be certain you should use something like `TO_DATE('20110715','YYYYMMDD')`, as it explicitly states the format.
Proper format for date in Oracle SQL
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I am having a hard time getting my oracle developer query to output correctly. When I have the avg function in it, it gives me not a single group group error. When I take it out it works fine. I have tried using group by instead of order by but then it tells me that its not a group by expression. ``` SELECT LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME, AVG(LGPRODUCT.PROD_PRICE)AS AVGER FROM LGPRODUCT, LGBRAND WHERE LGPRODUCT.BRAND_ID = LGBRAND.BRAND_ID ORDER BY BRAND_NAME; ```
When you include an aggregate function (like avg, sum) in your query, you must group by all columns you aren't aggregating. ``` SELECT LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME, AVG(LGPRODUCT.PROD_PRICE)AS AVGER FROM LGPRODUCT, LGBRAND WHERE LGPRODUCT.BRAND_ID = LGBRAND.BRAND_ID GROUP BY LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME ORDER BY BRAND_NAME ```
``` SELECT LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME, AVG(LGPRODUCT.PROD_PRICE)AS AVGER FROM LGPRODUCT, LGBRAND WHERE LGPRODUCT.BRAND_ID = LGBRAND.BRAND_ID GROUP BY LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME ORDER BY BRAND_NAME; ``` **OR** ``` SELECT LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME, AVG(LGPRODUCT.PROD_PRICE)AS AVGER FROM LGPRODUCT INNER JOIN LGBRAND ON LGPRODUCT.BRAND_ID = LGBRAND.BRAND_ID GROUP BY LGBRAND.BRAND_ID, LGBRAND.BRAND_NAME ORDER BY BRAND_NAME; ``` **Note** Whenever an aggregate function(`SUM, COUNT, AVG, MIN, MAX,` ..) is used in `SELECT` all the other column in that SELECT that are not contained in any aggregate function must come in `GROUP BY` clause
SQL not a single group group function error
[ "", "sql", "database", "" ]
I have added non-clustered indexes to some tables. Do I need to restart or recompile all of the stored procedures to get the benefit of the new indexes? And how would I do that?
No, you don't need to. When the stored procedure tries to fetch data from your tables, the query optimizer will see that there is an index on some of them and will build the corresponding execution plan, which is then passed to the execution engine to be executed.
Yes. You will need to re-compile your stored procedures to take advantage of new indexes. There is a full explanation [here](https://stackoverflow.com/questions/4509579/do-we-need-to-recompile-all-stored-procedures-when-indexes-rebuild-happen)
Re compile stored procedure
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
View **MyView**: ``` ............. . Id | Value. ............. . 1 | A . . 2 | B . . 3 | C . . 4 | D . ............. ``` My query is as follows: ***"Select From dbo.MyView Where Id > 1"*** Real results: ``` ............. . Id | Value. ............. . 2 | B . . 3 | C . . 4 | D . ............. ``` --- My desired result: ***"B, C, D"*** --- I need one string (one row, one column) that merges all values of the Value column. How can I do that? Any help will be greatly appreciated.
How about a scalar-valued function: ``` Create FUNCTION [dbo].[GetResults] ( @ID int ) RETURNS varchar(200) AS BEGIN DECLARE @Result varchar(200) SELECT @Result = COALESCE(@Result + ', ','') + Value From ( SELECT Value FROM MyTable WHERE (ID > @ID) ) UniqueValues RETURN @Result END ```
Here you go `SELECT ','+Value from MyTable for xml path('')` OR ``` declare @test varchar(100) select @test = coalesce(@test+',','')+Value from MyTable select @test ```
Merging multiple rows into one row
[ "", "sql", "sql-server", "sql-server-2012", "" ]
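Both answers above use SQL Server idioms (`FOR XML PATH`, variable concatenation). Many engines expose the same row-merge as a single aggregate: `STRING_AGG` in SQL Server 2017+, `GROUP_CONCAT` in MySQL and SQLite. A sketch with SQLite (note that without an explicit ORDER BY the concatenation order is not guaranteed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyView (Id INTEGER, Value TEXT)")
con.executemany("INSERT INTO MyView VALUES (?, ?)",
                [(1, "A"), (2, "B"), (3, "C"), (4, "D")])

# One row, one column: every Value matching the filter, joined with ", ".
merged = con.execute(
    "SELECT GROUP_CONCAT(Value, ', ') FROM MyView WHERE Id > 1"
).fetchone()[0]
print(merged)
```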
I have a table 'items' (more than 1,000,000 rows), 12 columns: ``` id | name | group_id | cost | column5 | column6 | column7 | column8 | column9 | column10 | column11 | column12 1 | A1 | 100 | 10,00 | c_15 | c_16 | c_17 | c_18 | c_19 | c_110 | c_111 | c_112 2 | A2 | 300 | 17,00 | c_25 | c_26 | c_27 | c_28 | c_29 | c_210 | c_211 | c_212 3 | A3 | 200 | 16,24 | c_35 | c_36 | c_37 | c_38 | c_39 | c_310 | c_311 | c_312 4 | A4 | 100 | 8,01 | c_45 | c_46 | c_47 | c_48 | c_49 | c_410 | c_411 | c_412 5 | A5 | 100 | 19,62 | c_55 | c_56 | c_57 | c_58 | c_59 | c_510 | c_511 | c_512 6 | A6 | 400 | 10,00 | c_65 | c_66 | c_67 | c_68 | c_69 | c_610 | c_611 | c_612 7 | A7 | 200 | 16,22 | c_75 | c_76 | c_77 | c_78 | c_79 | c_710 | c_711 | c_712 8 | A8 | 300 | 7,00 | c_85 | c_86 | c_87 | c_88 | c_89 | c_810 | c_811 | c_812 ``` How can I get rows with unique group\_id and lowest cost for that group\_id and sort for any column (name, column5, column6 etc)? Eg sorted for column5: ``` id | name | group_id | cost | column5 | column6 | column7 | column8 | column9 | column10 | column11 | column12 4 | A4 | 100 | 8,01 | c_45 | c_46 | c_47 | c_48 | c_49 | c_410 | c_411 | c_412 6 | A6 | 400 | 10,00 | c_65 | c_66 | c_67 | c_68 | c_69 | c_610 | c_611 | c_612 7 | A7 | 200 | 16,22 | c_75 | c_76 | c_77 | c_78 | c_79 | c_710 | c_711 | c_712 8 | A8 | 300 | 7,00 | c_85 | c_86 | c_87 | c_88 | c_89 | c_810 | c_811 | c_812 ``` Eg sorted for cost: ``` id | name | group_id | cost | column5 | column6 | column7 | column8 | column9 | column10 | column11 | column12 8 | A8 | 300 | 7,00 | c_85 | c_86 | c_87 | c_88 | c_89 | c_810 | c_811 | c_812 4 | A4 | 100 | 8,01 | c_45 | c_46 | c_47 | c_48 | c_49 | c_410 | c_411 | c_412 6 | A6 | 400 | 10,00 | c_65 | c_66 | c_67 | c_68 | c_69 | c_610 | c_611 | c_612 7 | A7 | 200 | 16,22 | c_75 | c_76 | c_77 | c_78 | c_79 | c_710 | c_711 | c_712 ```
PostgreSQL has a special syntax for this - [`distinct on`](http://www.postgresql.org/docs/current/static/sql-select.html): ``` with cte as ( select distinct on (group_id) * from Table1 order by group_id, cost ) select * from cte order by column5 ``` I don't recommend using `*` in production; here I'm using it only to simplify the code.
You can use [ROW\_NUMBER()](http://www.postgresql.org/docs/8.4/static/functions-window.html) function: ``` select * from ( select *, ROW_NUMBER() OVER (PARTITION BY group_id order by cost) as rn from items ) as i where i.rn=1 order by column5 ``` `SQLFiddle demo`
How to get sorted by distinct or group?
[ "", "sql", "postgresql", "group-by", "distinct", "" ]
Other answers on the internet are not very clear, so I am asking this question again. Again, I have a table tb1 ``` ID 114035 114035 ``` How can I delete one row and keep the other? The result should be ``` ID 114035 ```
If you don't care about which one is removed: ``` DELETE FROM tb1 WHERE ID = 114035 LIMIT 1; ``` Otherwise find out what differentiates the one you want to keep and use that in the `WHERE` clause.
Try ``` ALTER IGNORE TABLE tb1 ADD UNIQUE INDEX idx_name (ID); ``` --- Add a `UNIQUE` index to the column using `ALTER` and also use the `IGNORE` keyword. --- This will delete duplicate entries and also prevent duplicate entries in the future. --- <http://dev.mysql.com/doc/refman/5.1/en/alter-table.html> --- Update after OP's comment To drop the `UNIQUE` constraint: ``` DROP INDEX idx_name ON tb1 ``` or ``` ALTER TABLE tb1 DROP INDEX idx_name ```
How to delete 1 row out of 2 exact rows in MYSQL?
[ "", "mysql", "sql", "" ]
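The accepted `DELETE ... LIMIT 1` is MySQL-specific. The same "remove exactly one of two identical rows" works in any engine that exposes an internal row identifier; a SQLite sketch using its hidden `rowid`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb1 (id INTEGER)")
con.executemany("INSERT INTO tb1 VALUES (?)", [(114035,), (114035,)])

# SQLite has no DELETE ... LIMIT by default, but every row carries a hidden
# rowid, so exactly one of the two identical rows can be targeted.
con.execute(
    "DELETE FROM tb1 "
    "WHERE rowid = (SELECT MIN(rowid) FROM tb1 WHERE id = 114035)")
remaining = con.execute("SELECT id FROM tb1").fetchall()
print(remaining)  # [(114035,)]
```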
I know this question may have been asked many times, but I could not find a one-line SQL statement. I remember I did this before but now I cannot remember how I did it. I want to drop all tables whose names start with "EXT\_". Is it possible to make it happen with one SQL statement?
You could use a short anonymous block to do this. ``` BEGIN FOR c IN ( SELECT table_name FROM user_tables WHERE table_name LIKE 'EXT_%' ) LOOP EXECUTE IMMEDIATE 'DROP TABLE ' || c.table_name; END LOOP; END; ```
It's not possible with only one statement. Usually, I write SQL that generates the DROP statements for all the matching tables and then execute the results: ``` select 'drop table ' || table_name || ';' from user_tables where table_name like 'EXT_%'; ```
DROP all tables starting with "EXT_" in Oracle SQL
[ "", "sql", "oracle", "" ]
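The PL/SQL loop above translates directly to any database with a queryable catalog. A SQLite sketch of the same read-the-catalog-then-drop idea. Note, incidentally, that `_` is itself a single-character `LIKE` wildcard, so a strict `EXT_` prefix match needs an escape (the plain `LIKE 'EXT_%'` in the answers would also match names like `EXTRA`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
for name in ("EXT_A", "EXT_B", "KEEP_ME"):
    con.execute(f"CREATE TABLE {name} (x INTEGER)")

# Read the catalog, build the DROP list, execute it one statement at a
# time, the same shape as the PL/SQL loop. ESCAPE makes the underscore
# literal instead of a wildcard.
to_drop = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND name LIKE 'EXT\\_%' ESCAPE '\\'")]
for name in to_drop:
    con.execute(f'DROP TABLE "{name}"')

left = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(left)  # ['KEEP_ME']
```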
How can I get a list of column names and datatypes of a table in PostgreSQL using a query?
``` SELECT a.attname as "Column", pg_catalog.format_type(a.atttypid, a.atttypmod) as "Datatype" FROM pg_catalog.pg_attribute a WHERE a.attnum > 0 AND NOT a.attisdropped AND a.attrelid = ( SELECT c.oid FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE c.relname ~ '^(hello world)$' AND pg_catalog.pg_table_is_visible(c.oid) ); ``` ![Change the hello world with your table name](https://i.stack.imgur.com/zIL93.png) More info on it : <http://www.postgresql.org/docs/9.3/static/catalog-pg-attribute.html>
``` SELECT column_name, data_type FROM information_schema.columns WHERE table_name = 'table_name'; ``` With the above query you can retrieve the columns and their data types.
How to get a list column names and datatypes of a table in PostgreSQL?
[ "", "sql", "postgresql", "sqldatatypes", "" ]
I have this table: ``` CREATE TABLE #tbl ( id int identity(1,1), obj_type int , obj_id nvarchar(50) ) ``` and data like: **153:0|114:0|147:0|148:0|152:0|155:0** I want to insert the value before the **":"** into obj\_id and the value after the **":"** into obj\_type. The result must look like: ``` id obj_type obj_id 1 0 153 2 0 114 3 0 147 4 0 148 5 0 152 6 0 155 ``` How do I do this in a stored procedure (not a function)?
``` declare @S varchar(100) = '153:0|114:0|147:0|148:0|152:0|155:0' declare @xml xml select @xml = '<item><value>'+replace(replace(@s, ':','</value><value>'), '|','</value></item><item><value>')+'</value></item>' select N.value('value[1]', 'int') as obj_id, N.value('value[2]', 'int') as obj_type from @xml.nodes('item') as T(N) ``` [SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/25654)
Another solution: ``` Create FUNCTION [dbo].[SplitString] ( @List NVARCHAR(MAX), @Delim VARCHAR(255) ) RETURNS TABLE AS RETURN ( SELECT [Value] FROM ( SELECT [Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number], CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number]))) FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name) FROM sys.all_objects) AS x WHERE Number <= LEN(@List) AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim)) = @Delim ) AS y ); ``` taken from: [T-SQL split string](https://stackoverflow.com/questions/10914576/tsql-split-string) And then select the values: ``` Declare @Text varchar (100) = '153:0|114:0|147:0|148:0|152:0|155:0', @Delim varchar(50) = ':0|' select case when CHARINDEX(':0', Value) > 0 then Left(Value, Len(Value)-2) else Value End AS Result from dbo.SplitString(@Text, @Delim) ```
Split data from 2 symbols
[ "", "sql", "sql-server", "" ]
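Both answers parse the string inside T-SQL (XML shredding, or a numbers-table splitter). For comparison, the same `id:type|id:type` parse outside the database is a few lines; the sample string is the one from the question:

```python
# Parse the question's "id:type|id:type|..." string into (id, obj_type,
# obj_id) rows, mirroring the identity column with enumerate().
raw = "153:0|114:0|147:0|148:0|152:0|155:0"

rows = []
for n, pair in enumerate(raw.split("|"), start=1):
    obj_id, obj_type = pair.split(":")        # "153", "0"
    rows.append((n, int(obj_type), int(obj_id)))

print(rows)  # [(1, 0, 153), (2, 0, 114), ..., (6, 0, 155)]
```

Rows parsed this way can then be bulk-inserted, which sidesteps the splitter function entirely when the split can happen in the application layer.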
How can I delete all rows in SQL Server that were created more than 10 minutes ago? Thanks.
Assuming that you have a column name Date\_column which is storing the timestamp. You may try like this, where mi is the abbreviation for minute: ``` DELETE FROM Table_name WHERE Date_column < DATEADD(mi,-10,GETDATE()) ```
SQL Server has no way to tell at what time the row was created. You will need to have an extra column in your table to log the time. If you do that, your delete statement becomes fairly straight forward.
SQL Server Delete all rows more than 10 minutes old
[ "", "sql", "sql-server", "" ]
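A runnable sketch of the accepted answer's shape, delete everything with a timestamp before "now minus ten minutes", using SQLite with a fixed clock so the result is deterministic (the column name `created_at` is an assumption; as the second answer notes, you need such a column in the first place):

```python
import sqlite3
from datetime import datetime, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, created_at TEXT)")
now = datetime(2013, 11, 26, 12, 0, 0)                   # fixed clock for the demo
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, (now - timedelta(minutes=5)).isoformat(" ")),    # 5 minutes old
    (2, (now - timedelta(minutes=15)).isoformat(" ")),   # 15 minutes old
])

# Same shape as WHERE Date_column < DATEADD(mi, -10, GETDATE()):
# ISO timestamp strings compare correctly as text.
cutoff = (now - timedelta(minutes=10)).isoformat(" ")
con.execute("DELETE FROM t WHERE created_at < ?", (cutoff,))
kept = [r[0] for r in con.execute("SELECT id FROM t")]
print(kept)  # [1]
```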
In short, I am wondering which of the following is the better practice: * to encapsulate my code in a `TRY/CATCH` block and display the error message * to write my own checks and display custom error messages As I have read, the `TRY/CATCH` block does not handle all types of errors. In my situation this is not an issue. I am building dynamic SQL that executes a specific stored procedure. I can make my own checks, like: * does the procedure that is going to be executed exist * are all parameters supplied * can all values be converted properly or I can just encapsulate the statement like this: ``` BEGIN TRY EXEC sp_executesql @DynamicSQLStatement SET @Status = 'OK' END TRY BEGIN CATCH SET @Status = ERROR_MESSAGE() END CATCH ``` Is there any difference if I make the checks on my own (for example, a performance difference), or should I leave this work to the server? The reason I am asking is that in some languages (like JavaScript) the use of `try/catch` blocks is known as bad practice.
Typically it is going to be better to check for these things yourself than letting SQL Server catch an exception. Exception handling is not cheap, as I demonstrate in these posts: * [Checking for potential constraint violations before entering SQL Server TRY and CATCH logic](http://www.mssqltips.com/sqlservertip/2632/checking-for-potential-constraint-violations-before-entering-sql-server-try-and-catch-logic/?utm_source=AaronBertrand) * [Performance impact of different error handling techniques](http://www.sqlperformance.com/2012/08/t-sql-queries/error-handling) Depending on the schema, volume, concurrency, and the frequency of failures you expect, your tipping point may be different, so at very high scale this may not always be the absolute most efficient way. But in addition to the fact that in most cases you're better off, writing your own error handling and prevention - while it takes time - allows you to fully understand all of your error conditions. That all said, you should still have `TRY / CATCH` as a failsafe, because there can always be exceptions you didn't predict, and having it blow up gracefully is a lot better than throwing shrapnel everywhere.
A try/catch should only be used for critical errors. If there are possible errors that you can prevent by doing a simple if statement, you should do it. Don't rely on try/catch to make sure, for example, that a user entered a date in the correct format. Check that yourself. Personally, if I get a critical exception like that, I'd prefer to see the call stack and where it happened. So, I only do a try/catch in the topmost layer of my project.
TRY/CATCH block vs SQL checks
[ "", "sql", "sql-server", "sql-server-2012", "try-catch", "" ]
I am having a hard time understanding this query. I know it gives me the names of all suppliers who supply every part that exists, but only because I found the answer online! ``` select sname from s where not exists (select * from p where not exists (select * from spj spjx where s.sno = spjx.sno and p.pno = spjx.pno ) ); ```
It helps to reformat it: ``` select sname from s -- show all supplier names where not exists -- that there is not (select * from p -- a part where not exists -- that is not (select * from spj spjx -- supplied where s.sno = spjx.sno -- by them and p.pno = spjx.pno)); ``` Basically: select all sname from s, where no p exists where no spj such that spj matches s and p. Think of each layer as a filter. And the result looks like a relational division, as point out by Martin in a comment.
You can think of it like sets filtering sets. There are three sets here: ``` select * from spj spjx where s.sno = spjx.sno and p.pno = spjx.pno select * from p where not exists ({previous set}) select sname from s where not exists ({previous set}) ``` So, everywhere you see `{previous set}`, the outer set is being filtered by the result of that set. Further, for completeness, when you see this: ``` from spj spjx ``` that's equivalent to: ``` from spj AS spjx ``` thus making `spjx` the `alias` in this example.
What does this SQL-query mean?
[ "", "sql", "database", "" ]
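The double-`NOT EXISTS` pattern being discussed is relational division. A tiny runnable version (SQLite; the supplier/part data is made up) that you can poke at to convince yourself of the "no part exists that this supplier doesn't supply" reading:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE s  (sno INTEGER, sname TEXT);
    CREATE TABLE p  (pno INTEGER);
    CREATE TABLE spj(sno INTEGER, pno INTEGER);
    INSERT INTO s   VALUES (1, 'Smith'), (2, 'Jones');
    INSERT INTO p   VALUES (1), (2);
    INSERT INTO spj VALUES (1, 1), (1, 2),  -- Smith supplies both parts
                           (2, 1);          -- Jones misses part 2
""")

# "No part exists that this supplier does not supply."
names = [r[0] for r in con.execute("""
    SELECT sname FROM s
    WHERE NOT EXISTS (SELECT * FROM p
                      WHERE NOT EXISTS (SELECT * FROM spj
                                        WHERE spj.sno = s.sno
                                          AND spj.pno = p.pno))
""")]
print(names)  # ['Smith']
```

Deleting Jones's missing row from `p`, or adding (2, 2) to `spj`, changes the result, which is a quick way to build intuition for the nesting.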
I have records like these: RCV0001 RCV0002 RTN0003 RTN0004 SLE0005 RCV0006 I want to query for records that start with 'RCV' only and display only those records. This is what I've tried so far: ``` select substring(documentnumber, 1) LIKE '%RCV%' from transactionheader ``` But I'm not getting my desired result. Any ideas? I'd gladly appreciate your help. Thanks.
You will need to add a filter in the WHERE clause: ``` select documentnumber from transactionheader where documentnumber LIKE 'RCV%' ```
The expression in the select list of your query returns a boolean, so the query will only return 0, 1 or NULL for every row in the table. ``` SELECT SUBSTRING(documentnumber, 1) LIKE '%RCV%' FROM transactionheader ``` For every row in the table, `SUBSTRING(documentnumber, 1)` (which is the entire string, since it extracts from position 1 to the end) is tested to see whether it contains the string 'RCV', so the query returns a column of 1s, 0s and NULLs rather than the filtered rows you want. --- There is more than one query that will return the documentnumber values that start with 'RCV'. Here is one example: ``` SELECT h.documentnumber FROM transactionheader h WHERE h.documentnumber LIKE 'RCV%' ``` The `WHERE` clause specifies the conditional tests that will be performed on each row; only rows that "satisfy" the predicate will be returned.
mysql substring get only characters
[ "", "mysql", "sql", "" ]
Given three tables like this: ``` fruit: taste: fruit_taste: id name id name id fruit_id taste_id -- ---- -- ---- -- -------- -------- 1 apple 1 sweet 1 1 1 2 lime 2 tart 2 1 2 3 banana 3 2 2 4 lemon 4 4 2 ``` How can I `SELECT` only the `fruit` which do have not been identified by the join table as having a sweet `taste`? My desired results should also include 'banana', since there is not a `fruit_taste` identifying it as sweet. I tried (along with a **ton** of other things): ``` SELECT * FROM fruit LEFT OUTER JOIN fruit_taste ON fruit_taste.fruit_id = fruit.id WHERE (fruit_taste.taste_id != 1 OR fruit_taste.id IS null) ``` which *almost* works, but still gives me 'apple' in my result, since my apple is a Granny Smith, and is both sweet and tart. How can I exclude 'apple' in my query? Is my only option a subquery with `NOT IN`?
You can use a LEFT JOIN and avoid the subquery if you add the excluded taste to the JOIN filter and then exclude matching rows. This should generally be better for performance than using NOT IN on a subquery, but YMMV. ``` SELECT * FROM fruit LEFT JOIN fruit_taste ON fruit_taste.fruit_id = fruit.id AND fruit_taste.taste_id = 1 WHERE fruit_taste.id IS NULL ```
``` select * from fruit where id not in (select ft.fruit_id from fruit_taste ft inner join taste t on ft.taste_id = t.id where t.name = 'sweet') ```
SELECT statement to exclude results through join table relationship
[ "", "sql", "select", "join", "" ]
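The accepted answer's key move, putting the `taste_id` filter in the `ON` clause rather than the `WHERE` clause, is easy to verify with the question's own data (SQLite sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fruit (id INTEGER, name TEXT);
    CREATE TABLE taste (id INTEGER, name TEXT);
    CREATE TABLE fruit_taste (id INTEGER, fruit_id INTEGER, taste_id INTEGER);
    INSERT INTO fruit VALUES (1,'apple'),(2,'lime'),(3,'banana'),(4,'lemon');
    INSERT INTO taste VALUES (1,'sweet'),(2,'tart');
    INSERT INTO fruit_taste VALUES (1,1,1),(2,1,2),(3,2,2),(4,4,2);
""")

# The filter lives in the JOIN condition, not the WHERE clause: the join
# only "finds" sweet rows, and IS NULL keeps the fruits it did not find.
# Apple is sweet-and-tart and is correctly excluded; banana has no taste
# rows at all and is correctly kept.
not_sweet = [r[0] for r in con.execute("""
    SELECT f.name
    FROM fruit f
    LEFT JOIN fruit_taste ft
      ON ft.fruit_id = f.id AND ft.taste_id = 1
    WHERE ft.id IS NULL
    ORDER BY f.id
""")]
print(not_sweet)  # ['lime', 'banana', 'lemon']
```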
**CUSTOMERS TABLE** ``` CUSTOMER_ID FIRST_NAME LAST_NAME DOB PHONE 1 John Brown 1/1/1965 800-555-1211 2 Cynthia Orange 2/5/1968 800-555-1212 3 Steve White 3/16/1971 800-555-1213 4 Gail Black 800-555-1214 5 Doreen Blue 5/20/1970 6 Fred Brown 1/1/1970 800-555-1215 ``` If a run a query ``` select * from customers where customer_id not in (2,3,5,NULL); ``` I get an output no rows returned.....Please help me to sort this problem out..
You got bitten by SQL 3-valued logic. For the rows with a `customer_id` of 2, 3, or 5, the `WHERE` clause evaluates to false, as you expected. For the other rows, it evaluates to `UNKNOWN` (or `NULL`; I don't know if Oracle distinguishes them), **not** to true. This may be more clear if the `IN` expression were expanded to `(customer_id != 2) AND (customer_id != 3) AND (customer_id != 5) AND (customer_id != NULL)`. For a `customer_id` of 1, 4, or 6, the first three subexpressions evaluated to true, as you'd expected. But the last one evaluates to unknown, because the `NULL` (a marker for an unknown value) *might* "really" be 1, 4, or 6. Thus, the entire expression has an unknown truth value. The `SELECT` statement will only return rows for which the `WHERE` clause is definitely true, not unknown. You can get the results you wanted by making the query. ``` select * from customers where customer_id not in (2, 3, 5) and customer_id is not null; ``` However, it seems that your `customer_id` is an autoincrement column and can't really be null anyway. If so, just write: ``` select * from customers where customer_id not in (2, 3, 5); ```
In this particular instance, you are looking for ``` select * from customers where customer_id not in (2,3,5); ``` The null would be omitted in this case. Why? As explained [here](https://stackoverflow.com/questions/129077/not-in-constraint-and-null-values), a NOT IN statement does, in this case, the following: ``` select * where CustomerID <> 2 and CustomerID <> 3 and CustomerID <> 5 and CustomerID <> NULL ``` With the default ANSI_NULLS setting on, CustomerID <> NULL results in UNKNOWN, and when SQL has an UNKNOWN it will return no rows. When the setting is off, the comparison would return true. You have two options at this point: 1. Change the statement to not have a null in it 2. Change your database engine to have ANSI_NULLS off I think 1 would be the much easier choice in this case...
NOT IN returns false if a value in the list is null
[ "", "sql", "oracle", "oracle11g", "null", "" ]
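The three-valued-logic behaviour described above is standard SQL, not an Oracle quirk, so it can be reproduced anywhere. A SQLite sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (customer_id INTEGER)")
con.executemany("INSERT INTO customers VALUES (?)",
                [(1,), (2,), (3,), (4,), (5,), (6,)])

# With a NULL in the list, every NOT IN comparison collapses to UNKNOWN,
# so no row satisfies the WHERE clause.
with_null = con.execute(
    "SELECT customer_id FROM customers "
    "WHERE customer_id NOT IN (2, 3, 5, NULL)").fetchall()

# Drop the NULL and the filter behaves as intended.
without_null = [r[0] for r in con.execute(
    "SELECT customer_id FROM customers "
    "WHERE customer_id NOT IN (2, 3, 5) ORDER BY customer_id")]
print(with_null, without_null)  # [] [1, 4, 6]
```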
I am importing SQL into my Access database and am working on parsing the data into the correct tables and fields. I have run into an issue: my import creates column names and enters the values into those columns, but the database uses those column names as row values, with the values in a separate column. Current table: ``` SC | DO | temp | pH | etc val|val | val |val | val ``` Table I am attempting to parse into: ``` Characteristic_Name | Result_Value SC | val DO | val temp | val pH | val etc. | val ``` I have done a union query to get the results column to populate nicely, but I cannot get the column names to parse. Is there a way to do this?
Since you are using MS Access, there is no UNPIVOT function so you can use a UNION ALL query: ``` select 'SC' as Characteristic_Name, SC as Val from yourtable union all select 'DO' as Characteristic_Name, DO as Val from yourtable union all select 'temp' as Characteristic_Name, temp as Val from yourtable union all select 'pH' as Characteristic_Name, pH as Val from yourtable union all select 'etc' as Characteristic_Name, etc as Val from yourtable; ``` As a side note, when you are doing a UNION ALL or UNPIVOT, the datatypes must be the same so you might have to convert the data in the `val` column so it is the same.
You can create a VBA function similar to this one: ``` Set db = CurrentDb() Set rs1 = db.OpenRecordset("CurrentTable") Dim fld As DAO.Field Dim SQL as string For Each fld In rs1.Fields SQL = SQL & fld.Name & "," Next ... ``` With this you can extract field names and add the code needed to compose a SQL string for insert data in your normalized table.
converting column names to use as row fields in Access
[ "", "sql", "xml", "database", "ms-access-2010", "" ]
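The UNION ALL unpivot from the accepted answer is portable since Access is not the only engine without UNPIVOT. A runnable miniature in SQLite with a single made-up sample row (`DO` is quoted because it can collide with keywords):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE readings (SC REAL, "DO" REAL, temp REAL, pH REAL)')
con.execute("INSERT INTO readings VALUES (0.5, 8.2, 21.0, 7.1)")

# One SELECT per source column, exactly like the UNION ALL answer: the
# column name becomes a string literal, the value moves to a shared column.
pairs = con.execute("""
    SELECT 'SC' AS Characteristic_Name, SC AS Result_Value FROM readings
    UNION ALL SELECT 'DO',   "DO"  FROM readings
    UNION ALL SELECT 'temp', temp  FROM readings
    UNION ALL SELECT 'pH',   pH    FROM readings
""").fetchall()
print(dict(pairs))
```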
I have two (very large) tables of identical structure, holding two types of locations : ## LocA * Id - INT * X - FLOAT (latitude) * Y - FLOAT (longitude) and ## LocB * Id - INT * X - FLOAT (latitude) * Y - FLOAT (longitude) Each of them hold several million rows. I need to select all locations in LocA and for each location, the closest location in LocB. What would be the most efficient query to do this? EDIT1 : The distance algorithm would be a dumb one : SQRT(POWER(LocB.X - LocA.X, 2) + POWER(LocB.Y - LocA.Y, 2)) EDIT2 : An implementation that I've done but I'm really not sure if it's optimal (I highly doubt it), would be : ``` SELECT A.Id AS AId, ( SELECT TOP 1 B.Id FROM B ORDER BY SQRT(POWER(B.X - A.X, 2) + POWER(B.Y - A.Y, 2)) ASC ) AS BId FROM A ``` EDIT3 : It's common to have "duplicates" in table LocB but I would want any of the matching "closest" to be returned for a location in LocA, not all.
How about : ``` SELECT id as Aid,x,y,m % 100 as bId FROM ( SELECT A.id,A.x,A.y,MIN(CAST(((A.x-B.x)*(A.x-B.x)+(A.y-B.y)*(A.y-B.y)) AS BIGINT)*100+B.id) as m FROM A CROSS JOIN B GROUP BY A.id,A.x,A.y) j; ```
Have you thought to take into consideration [geography::Point](http://technet.microsoft.com/en-us/library/bb964711%28v=sql.105%29.aspx), [STDistance](http://technet.microsoft.com/en-us/library/bb933808%28v=sql.105%29.aspx) method, and create a [spatial index](http://technet.microsoft.com/en-us/library/bb933796%28v=sql.105%29.aspx) on those points columns? If your database structure is fixed, you can add a new persisted computed column.
Most efficient way of selecting closest point
[ "", "sql", "sql-server", "performance", "t-sql", "geospatial", "" ]
I am trying to group results into 3-month periods starting from the current month, as shown below: ``` row1 15 -- This should contain November, September and October row2 25 -- This should contain August, July and June row3 5 -- This should contain May, April and March row4 2 -- This should contain February and January ``` I have no idea how to accomplish this. Any help please? So far I can group by month: ``` SELECT MONTH(date), MONTHNAME(date) as month, COUNT(*) FROM table_name WHERE MONTH(date) < NOW() GROUP BY MONTH(date) ORDER BY MONTH(date) DESC ``` Thanks
You can use [PERIOD\_DIFF](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_period-diff) and some maths to find the records. ``` -- This line creates YYYYMM representation of today; you can use PHP instead SET @T1 = DATE_FORMAT(CURRENT_DATE, '%Y%m'); SELECT MIN(`date`) AS `Range Start`, MAX(`date`) AS `Range End`, COUNT(*) AS `Count` FROM `table` GROUP BY FLOOR(PERIOD_DIFF(@T1, DATE_FORMAT(`date`, '%Y%m')) / 3) ORDER BY 1 DESC ``` Sample output: ``` Range Start Range End Count ----------- ---------- ----- 2013-09-01 2013-11-26 87 2013-06-01 2013-08-31 92 2013-03-01 2013-05-31 92 2012-12-01 2013-02-28 90 2012-09-01 2012-11-30 91 ``` `PERIOD_DIFF` returns the number of months between periods P1 and P2 (both arguments are strings in YYYYMM format). In the above query we calculate the month difference for each row (e.g. NOV-2013 is 0, OCT-2013 is 1, SEP-2013 is 2, AUG-2013 is 3 and so on). The difference divided by 3 plus `FLOOR` yields the quarter number (NOV-2013 is 0.00 -> 0, OCT-2013 is 0.33 -> 0, SEP-2013 is 0.66 -> 0, AUG-2013 is 1.00 -> 1, and so on).
Something like this should work... ``` SELECT FLOOR((MONTH(date))/3)+1 quarter , COUNT(*) total FROM my_dates GROUP BY FLOOR((MONTH(date))/3); ```
Is that possible to group result on 3 months period basis?
[ "", "mysql", "sql", "date", "" ]
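The arithmetic behind the accepted answer's `PERIOD_DIFF(...) / 3` bucketing is easier to see outside SQL. A Python sketch with a fixed anchor month (November 2013, matching the sample output) and made-up dates:

```python
from datetime import date

# PERIOD_DIFF(YYYYMM, YYYYMM) is just a month count; emulate it to show
# how rows fall into rolling three-month buckets counted back from the
# anchor month.
def month_diff(anchor, d):
    return (anchor.year * 12 + anchor.month) - (d.year * 12 + d.month)

anchor = date(2013, 11, 1)
dates = [date(2013, 11, 5), date(2013, 9, 1),
         date(2013, 8, 30), date(2013, 6, 2)]

buckets = {}
for d in dates:
    # Nov/Oct/Sep -> bucket 0, Aug/Jul/Jun -> bucket 1, and so on.
    buckets.setdefault(month_diff(anchor, d) // 3, []).append(d)

print(sorted((k, len(v)) for k, v in buckets.items()))  # [(0, 2), (1, 2)]
```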
I am wanting to find the MAX value after subtracting a column from one another column. What I have currently does not seem to work syntactically. ``` SELECT [EmpName] as 'Employee Name', MAX([EndYear]-[BeginYear]) as 'Total Years' FROM [dbo].[EMPLOYEE]; ```
You could use something along these lines... ``` ;WITH emps AS ( SELECT [EmpName] AS 'EmployeeName', [EndYear] - [BeginYear] AS 'TotalYears' FROM [dbo].[Employee] ) SELECT * FROM emps WHERE TotalYears = (SELECT MAX(TotalYears) FROM emps) ```
Use the [RANK](http://technet.microsoft.com/en-us/library/ms176102.aspx) statement. ``` SELECT *, RANK() OVER (PARTITION BY [Employee Name] ORDER BY [Total Years] DESC) AS Ranking FROM ( SELECT EmpName AS 'Employee Name', EndYear - BeginYear AS 'Total Years' FROM Employee ) TotalYears WHERE Ranking = 1 ``` Keep in mind, I haven't actually tested the above SQL yet, I just wrote it from memory. I'll go spin up a database now to check syntax.
SQL syntax with MAX
[ "", "sql", "sql-server-2012", "" ]
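The CTE pattern in the accepted answer can be exercised end-to-end with Python's built-in `sqlite3` (standing in for SQL Server here; the employee names and years are invented sample data):

```python
import sqlite3

# Toy data standing in for [dbo].[Employee]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmpName TEXT, BeginYear INT, EndYear INT)")
conn.executemany(
    "INSERT INTO Employee VALUES (?, ?, ?)",
    [("Alice", 2001, 2010), ("Bob", 2005, 2008), ("Carol", 1999, 2008)],
)

# Same shape as the answer's CTE: compute the difference once, then filter on its MAX
rows = conn.execute(
    """
    WITH emps AS (
        SELECT EmpName, EndYear - BeginYear AS TotalYears FROM Employee
    )
    SELECT EmpName, TotalYears FROM emps
    WHERE TotalYears = (SELECT MAX(TotalYears) FROM emps)
    ORDER BY EmpName
    """
).fetchall()
print(rows)  # [('Alice', 9), ('Carol', 9)]
```

Note that, unlike a bare `MAX`, this pattern returns every employee tied for the longest tenure.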
I have a requirement to convert positive values to negative and negative to positive, and if it's 0 then leave it as it is. I'm able to do it in SQL, just need to know if there is any better/alternate way to do it? ``` create table test_tab (a number); insert into test_tab values (10); insert into test_tab values (-10); insert into test_tab values (0); insert into test_tab values (10.15); insert into test_tab values (-10.15); select a , decode(sign(a),-1,abs(a),1,-abs(a),0) "changed value" from test_tab; ``` Database: Oracle 11g
What about multiplying by -1? ``` select a , -1 * a "changed value" from test_tab; ```
how about just putting a negative sign ``` SELECT - -15 as inverse; -->15 SELECT -15 as inverse; -->-15 ```
Positive value to negative and negative to positive in oracle
[ "", "sql", "oracle", "" ]
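A quick way to sanity-check the `-1 * a` approach from the accepted answer, shown here with Python's `sqlite3` rather than Oracle 11g (the rows mirror the question's inserts):

```python
import sqlite3

# The question's sample values, loaded into an in-memory SQLite table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_tab (a REAL)")
conn.executemany("INSERT INTO test_tab VALUES (?)",
                 [(10,), (-10,), (0,), (10.15,), (-10.15,)])

# Multiplying by -1 flips the sign; 0 is unaffected
flipped = [row[0] for row in
           conn.execute('SELECT -1 * a AS "changed value" FROM test_tab')]
print(flipped)
```

The result pairs each input with its sign-flipped value, and no `DECODE`/`CASE` branching is needed.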
Yesterday I had an issue: [High availability Search implementation in PHP+MySQL](https://stackoverflow.com/questions/20078540/high-availability-search-implementation-in-phpmysql) After implementing melc's solution, I needed to change my query and I have this issue. So I have: ``` product: id int(10) unsigned not null primary key auto_increment name varchar(200), price float(5,2), type int(10) attribute: id int(10) name varchar(100) /*e.g. red, small, large, green, metal, plastic etc...*/ product_attribute: attribute_id int(10) product_id int(10) ``` So I have all my custom attributes like color, size, type of metal/plastic etc. in the "attribute" table. So far I have over 500 attributes across about 50 types (size, color, shape, etc.). So, if I want to get all the red products I do: ``` select * from product_attribute where attribute_id = 4 (for red) ``` Now, if I want all the products that match this: red and blue, small, plastic, I tried: ``` select * from product_attribute where attribute_id = 4 and attribute_id = 5 and attribute_id = 10 and attribute_id = 38 ``` but this returns nothing, so I tried: ``` select * from product_attribute where attribute_id = 4 OR attribute_id = 5 OR attribute_id = 10 OR attribute_id = 38 ``` but then I get products that do not correspond to what I want! What can I do? Are my DB tables wrong?
You can use UNION. If they choose 2 properties: ``` SELECT * FROM product_attribute where attribute_id = 4 UNION SELECT * FROM product_attribute where attribute_id = 8 ``` Or 3: ``` SELECT * FROM product_attribute where attribute_id = 4 UNION SELECT * FROM product_attribute where attribute_id = 8 UNION SELECT * FROM product_attribute where attribute_id = 37 ``` You get the idea. This will get all the results; now you have to group them by product id and keep only products that matched all three attributes (without the `HAVING` clause you would get products matching *any* of them): ``` SELECT * FROM ( SELECT * FROM product_attribute where attribute_id = 4 UNION SELECT * FROM product_attribute where attribute_id = 8 UNION SELECT * FROM product_attribute where attribute_id = 37 ) AS allItems INNER JOIN product ON (allItems.product_id = product.id) GROUP BY allItems.product_id HAVING COUNT(DISTINCT allItems.attribute_id) = 3 /* ORDER BY price */ ``` This will return the list of products you need.
You have to use [JOINS](http://dev.mysql.com/doc/refman/5.0/en/join.html), if you want to find products that are both Red AND Plastic, because I am guessing your Product\_Attribute Table saves multiple attributes as separate rows... For Example if you need to get products that are Red and Plastic ``` SELECT a.product_id, a.attribute_id, b.attribute_id FROM product_attribute a JOIN product_attribute b JOIN a.product_id = b.product_id AND a.attribute_id <> b.attribute_id WHERE a.attribute_id = 4 AND b.attribute_id = 10 ``` This would get you products with 2 attributes, similarly if you need products with more than 2 attributes you will have to use more JOINs...
Create the query for multiple product property
[ "", "mysql", "sql", "database-design", "" ]
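The group-and-count idea behind the accepted answer — keep only products that carry every requested attribute — can be demonstrated with Python's built-in `sqlite3`; the product rows and attribute ids below are invented for illustration:

```python
import sqlite3

# Sample data: product 1 has attributes 4, 10 and 38; the others have only some
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product_attribute (product_id INT, attribute_id INT)")
conn.executemany(
    "INSERT INTO product_attribute VALUES (?, ?)",
    [(1, 4), (1, 10), (1, 38), (2, 4), (3, 10), (3, 38)],
)

wanted = [4, 10, 38]
placeholders = ",".join("?" * len(wanted))
rows = conn.execute(
    f"""
    SELECT product_id
    FROM product_attribute
    WHERE attribute_id IN ({placeholders})
    GROUP BY product_id
    HAVING COUNT(DISTINCT attribute_id) = ?
    """,
    [*wanted, len(wanted)],
).fetchall()
print(rows)  # [(1,)] - only the product carrying every requested attribute
```

Comparing the per-product match count against the number of requested attributes is what turns the OR-style `IN` filter into AND semantics.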