Given an array like this: ``` my_array = [2,3,5,23,4] ``` and a table like this: ``` column1 | column2 ---------+---------- 1 | 2 | 3 | 4 | 5 | ``` How can I insert the array values into the table? Roughly, I want to do something like this with SQL: ``` for item in my_array: UPDATE my_table SET column2 = item ``` The updated table should look like this: ``` column1 | column2 ---------+---------- 1 | 2 2 | 3 3 | 5 4 | 23 5 | 4 ``` UPDATE: I am using Python psycopg2, but I am wondering if there is a way with pure SQL.
You need to somehow generate an array "index" for each row in the table. *If* the `column1` value **always** matches the array index, you can do it like this. ``` update test set column2 = (array[2,3,5,23,4])[column1]; ``` However if the value in `column1` does not reflect the array index, you need to generate the array index based on the sort order in the table. If that is the case you can do something like this: ``` with numbered_data as ( select ctid, row_number() over (order by column1) as rn --<< this generates the array index values from test ) update test set column2 = (array[2,3,5,23,4])[nd.rn] from numbered_data nd where nd.ctid = test.ctid; ``` If your table has a proper primary key, then you can use that instead of the `ctid` column.
In Postgres 9.4 or later, use `WITH ORDINALITY` for this. Faster and cleaner than anything else. ``` UPDATE test t SET column2 = a.column2 FROM unnest('{2,3,5,23,4}'::int[]) WITH ORDINALITY a(column2, column1) WHERE t.column1 = a.column1; ``` * [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450) Assuming that `column1` represents the position of `column2` in the given array, this only updates rows that are supposed to be updated and does not touch other rows (like the simple query in @a\_horse's answer would). The ordinal position of an element is also the default array subscript in a 1-dimensional array, but Postgres allows arbitrary array indices: * [Normalize array subscripts for 1-dimensional array so they start with 1](https://stackoverflow.com/questions/12011569/normalize-array-subscripts-for-1-dimensional-array-so-they-start-with-1) This works regardless of actual array subscripts.
How to insert array items into PostgreSQL table
[ "sql", "arrays", "postgresql" ]
I'm struggling to understand the use of `GROUP BY` in this query and am looking for clarification: ``` Flights(flno: integer, from: string, to: string, distance: integer, departs: time, arrives: time, price: real) Aircraft(aid: integer, aname: string, cruisingrange: integer) Certified(eid: integer, aid: integer) Employees(eid: integer, ename: string, salary: integer) ``` The question is: For all aircraft with cruising range over 1000 miles, find the name of the aircraft and the average salary of all pilots certified for this aircraft. ``` SELECT Temp.name, Temp.AvgSalary FROM ( SELECT A.aid, A.aname AS name, AVG (E.salary) AS AvgSalary FROM Aircraft A, Certified C, Employees E WHERE A.aid = C.aid AND C.eid = E.eid AND A.cruisingrange > 1000 GROUP BY A.aid, A.aname ) AS Temp ``` Why is the `GROUP BY` necessary here? Wouldn't the following query return the aircraft and the corresponding salary, or would it return the average salary of all employees not specific to each aircraft? ``` SELECT A.aname, AVG(E.salary) FROM Aircraft A, Certified C, Employees E WHERE A.aid = C.aid AND C.eid = E.eid AND A.cruisingrange > 1000 ``` Does using `GROUP BY` change the format of the table so that using `GROUP BY A.aid` would specify that we are only grouping the aircraft table and leaving the certified and employee tables untouched?
The GROUP BY is required to perform aggregation (in this case, taking the average) properly. If you don't group by anything, MySQL will perform this aggregation over your entire table. In other words, if you used your last query, it would return the average salary across all aircraft with a cruising range over 1000, with no distinction of which aircraft is which. Try it, and you will see this behavior. However, if you use the `GROUP BY` clause here, you will see the average for each *individual* aircraft with a cruising range over 1000, which is what you want. Without it, you're taking the average over *all* aircraft. Try these queries on some sample data, and the difference in behavior will become much clearer. --- **EDIT** Regarding your last few statements: yes, we are not doing anything with the certified or employee table. To step back, the problem states *for each aircraft*. Many times, if a problem statement spells out for which group of items you need results, it is a good start to place that in your group by clause.
Anytime you select aggregate functions like AVG, SUM, MAX, MIN, etc... alongside other columns, you must group by all columns which are not aggregate functions or constants. The only exception to this I can think of is when you use windowing functions (not available in MySQL). In this example, I'm unclear as to why the `a.aid` is not selected from `Temp`. If there are two Aircraft with the same name but different ids, you could see results like... ``` aname avg ------ ------- 747 100,000 747 110,000 DC10 90,000 ``` ...where the two records are for different aircraft with the same name (747) The group by here says to average the salaries per aircraft, giving you the average salaries per aircraft...as in only include salaries for the aircraft you're finding the average for.
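The grouped-versus-ungrouped behavior both answers describe can be checked on a few invented rows. This sketch uses Python's built-in sqlite3 module as a stand-in for MySQL; the table name and salaries are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pilot_salaries (aname TEXT, salary INTEGER);
INSERT INTO pilot_salaries VALUES
  ('747', 100000), ('747', 120000), ('DC10', 90000);
""")

# Without GROUP BY the aggregate collapses the whole table into one row
ungrouped = conn.execute(
    "SELECT AVG(salary) FROM pilot_salaries").fetchall()

# With GROUP BY there is one average per aircraft name
grouped = conn.execute(
    "SELECT aname, AVG(salary) FROM pilot_salaries "
    "GROUP BY aname ORDER BY aname").fetchall()
```

The ungrouped query returns a single overall average; the grouped one returns one row per aircraft name, which is what the original question needs.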
Basic usage of GROUP BY in SQL
[ "mysql", "sql", "group-by" ]
Is it possible for me to join the two tables below? My aim is to create a query in `ASP.NET` to list all the messages in *Messages* and *Messages2* in one set of data. **Messages1:** ``` Messages1_ID, Thread_ID, DateCreated, Title, Body ``` **Messages2**: ``` Thread_ID, Messages2_ID, DateSent, DateMsg2Downloaded, Msg2Subject, Body ``` I have tried this query: ``` SELECT Messages1.body, Messages2.Body from Messages1, Messages2 where Messages1 = '9' ``` I have 1 row in *Messages1* and 9 rows in *Messages2* (with a thread id of 9) But I get all the unique rows in *Messages2*, and for each row returned *Messages1* body is repeated over and over again I think my tables are designed incorrectly but worth an ask first.
What you are looking for is a union. This will combine the results of the two queries. ``` SELECT Messages1.body from Messages1 where Messages1.Thread_ID = '9' union SELECT Messages2.Body from Messages2 where Messages2.Thread_ID = '9' ```
``` SELECT Messages1.body, Messages2.Body from Messages1 inner join Messages2 ON Messages1.Thread_ID = Messages2.Thread_ID where Messages1.Thread_ID = '9' ```
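The difference between the two answers, the join's row repetition versus the union's stacking, shows up on a toy version of the two tables (a sketch with Python's sqlite3; the message bodies are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Messages1 (Thread_ID INTEGER, Body TEXT);
CREATE TABLE Messages2 (Thread_ID INTEGER, Body TEXT);
INSERT INTO Messages1 VALUES (9, 'm1-a');
INSERT INTO Messages2 VALUES (9, 'm2-a'), (9, 'm2-b'), (9, 'm2-c');
""")

# JOIN: the single Messages1 row repeats once per matching Messages2 row,
# which is exactly the repetition the question complains about
joined = conn.execute("""
    SELECT Messages1.Body, Messages2.Body
    FROM Messages1
    JOIN Messages2 ON Messages1.Thread_ID = Messages2.Thread_ID
    WHERE Messages1.Thread_ID = 9
""").fetchall()

# UNION ALL: rows from both tables stacked into one column
stacked = conn.execute("""
    SELECT Body FROM Messages1 WHERE Thread_ID = 9
    UNION ALL
    SELECT Body FROM Messages2 WHERE Thread_ID = 9
""").fetchall()
```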
SQL Join on two tables for messages
[ "sql", "sql-server", "vb.net", "join" ]
I need to calculate the difference between two dates in Oracle SQL and show it in the format 'x years y months z days'. I see two ways to achieve this: The first way is to get the difference in days and then convert it to 'x years y months z days'. The second way is to extract the years, months, and days with the extract function. In both cases I see problems handling leap years and the difference between 30-day and 31-day months. Is there another, easier way to implement this?
As another option you could use [the `months_between()` function](https://docs.oracle.com/cd/E11882_01/server.112/e41084/functions102.htm), and manipulate the value that gives you into the three parts. For example, with some sample dates: ``` select months_between(date '2015-03-19', date '2012-01-06') as month_diff from dual; MONTH_DIFF ---------- 38.4193548 ``` The number of whole months can be split into three multiples of twelve (i.e. years) with two left over; and the fractional part is the number of days in the partial month, based on a 31-day month. So you can extract the parts with: ``` with t (start_date, end_date) as ( select date '2012-01-06', date '2015-03-19' from dual ) select start_date, end_date, trunc(month_diff/12) as year_diff, trunc(mod(month_diff, 12)) as month_diff, 31*(month_diff - trunc(month_diff)) as day_diff from ( select start_date, end_date, months_between(end_date, start_date) as month_diff from t ); START_DATE END_DATE YEAR_DIFF MONTH_DIFF DAY_DIFF ---------- ---------- ---------- ---------- ---------- 2012-01-06 2015-03-19 3 2 13 ``` And then you can use string concatenation to format it however you want, e.g. ``` select trunc(month_diff/12) || ' years ' || trunc(mod(month_diff, 12)) || ' months ' || 31*(month_diff - trunc(month_diff)) || ' days' as diff from ( ... DIFF ------------------------ 3 years 2 months 13 days ```
I used this formula but found cases where a gap of one day between two dates produces a difference of two days: * Case 1 : 08/04/2019 to 30/04/2021 -> 2 years and 22 days * Case 2 : 08/04/2019 to 01/05/2021 (one day more) -> 2 years and 24 days (2 days more) I have corrected the formula: ``` with t (start_date, end_date) as ( select date '2012-01-06', date '2015-03-19' from dual ) select start_date, end_date, trunc(month_diff/12) as year_diff, trunc(mod(month_diff, 12)) as month_diff, end_date - add_months(start_date, month_diff) as day_diff from ( select start_date, end_date, trunc(months_between(end_date, start_date)) as month_diff from t ); ```
How to get difference between two dates in Oracle and show it as ' x years y month z days'
[ "sql", "oracle", "date" ]
Consider the following table: ``` ----------------------------------------- ID TEACHER SUBJECT GRADE ----------------------------------------- 1 Jack Physics 8 2 Paul Chemistry 10 3 Susan English 6 4 Jack Maths 8 5 Jack Physics 10 6 Paul Chemistry 10 ``` --- I want to identify the rows with the same values in the TEACHER and SUBJECT columns, irrespective of the values in the ID and GRADE columns. The following rows will thus be identified: ``` ------------------------------------- ID TEACHER SUBJECT GRADE ------------------------------------- 1 Jack Physics 8 2 Paul Chemistry 10 5 Jack Physics 10 6 Paul Chemistry 10 ------------------------------------- ``` How can I achieve such a result in Oracle SQL?
``` SELECT * FROM table1 t1 WHERE EXISTS ( SELECT null FROM Table1 t2 WHERE t1."TEACHER" = t2."TEACHER" AND t1."SUBJECT" = t2."SUBJECT" AND t1."ID" <> t2."ID" ) ORDER BY "ID" ``` demo: <http://sqlfiddle.com/#!4/3d2bd4/3>
You can use the below select: ``` SELECT a.* FROM (SELECT COUNT(1) cnt, teacher, subject FROM your_table GROUP BY teacher,subject HAVING COUNT(1) > 1)rec, Your_table a WHERE a.teacher=rec.teacher AND a.subject=rec.subject ```
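Both answers can be sanity-checked on the question's sample data. This sketch runs the `EXISTS` variant through Python's sqlite3 (SQLite as a stand-in for Oracle, so Oracle-specific quoting is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grades (id INTEGER, teacher TEXT, subject TEXT, grade INTEGER);
INSERT INTO grades VALUES
  (1, 'Jack',  'Physics',    8), (2, 'Paul', 'Chemistry', 10),
  (3, 'Susan', 'English',    6), (4, 'Jack', 'Maths',      8),
  (5, 'Jack',  'Physics',   10), (6, 'Paul', 'Chemistry', 10);
""")

# A row qualifies if some OTHER row (different id) shares its teacher+subject
dupes = conn.execute("""
    SELECT * FROM grades t1
    WHERE EXISTS (SELECT 1 FROM grades t2
                  WHERE t1.teacher = t2.teacher
                    AND t1.subject = t2.subject
                    AND t1.id <> t2.id)
    ORDER BY t1.id
""").fetchall()
```

Only ids 1, 2, 5, and 6 survive, matching the expected output in the question.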
Finding Rows based on a set of same columns in ORACLE
[ "sql", "oracle" ]
My code is as below, somehow there is always an error near `@Name` ``` DECLARE @Name nvarchar(MAX) = '(mm.dll, ben and jerry.exe)' DECLARE @sql nvarchar(MAX)= 'SELECT OrderName, customer.version, count(DISTINCT company.CID) as Counts FROM [CompanyData] company INNER JOIN [vendor] mav on company.CID = mav.CID LEFT OUTER JOIN [Customer] customer on company.VendorId = customer.VendorId AND company.DId = customer.DId WHERE OrderName in' + @Name+ ' GROUP BY customer.version, OrderName' EXEC sp_executesql @sql ```
Wrap each value in your declaration of `@Name` in single quotes, escaped by doubling them inside the string literal: ``` DECLARE @Name nvarchar(MAX)='(''mm.dll'', ''ben and jerry.exe'')' DECLARE @sql nvarchar(MAX)= 'SELECT OrderName, customer.version, count(DISTINCT company.CID) as Counts FROM [CompanyData] company INNER JOIN [vendor] mav on company.CID = mav.CID LEFT OUTER JOIN [Customer] customer on company.VendorId = customer.VendorId AND company.DId = customer.DId WHERE OrderName in ' + @Name+ ' GROUP BY customer.version, OrderName' EXEC sp_executesql @sql ```
Add double quotes around each value in the `@Name` variable, like: ``` DECLARE @Name nvarchar(MAX) = '("mm.dll", "ben and jerry.exe")' ```
Execute a sql string in sql server
[ "sql", "sql-server", "t-sql" ]
I want to format a datetime column like so: "March 2004." Currently, I tried ``` DECLARE @Date VARCHAR(20) = '2004-03-05 01:00' SELECT CONVERT(VARCHAR(20),CAST(@Date AS DATETIME),13) AS DateFormats ``` but not getting the right result.
Something like: ``` DECLARE @Date VARCHAR(20) = '2004-03-05 01:00' SELECT DATENAME(MONTH, @Date) + ' ' + DATENAME(YEAR, @Date) ``` More on `DATENAME` can be found here <https://msdn.microsoft.com/en-gb/library/ms174395.aspx> Essentially it gets the month part and the year part of the date and concatenates them.
You should really apply formatting when you present your data, not in the query, but to answer the question, if you're using SQL Server 2012 or above you can use [FORMAT](https://msdn.microsoft.com/en-GB/library/hh213505.aspx) ``` DECLARE @Date datetime = '2004-03-05 01:00' SELECT FORMAT( @Date, 'MMMM yyyy' ) AS DateFormats ```
Datetime Format in Month and Year
[ "sql", "t-sql" ]
I have a large number of card tokens (16 digits) uploaded from an xml file to SQL Server. The problem is I see them as expressions, sample below: ``` 3.3733E+15 3.3737E+15 3.3737E+15 3.3737E+15 3.37391E+15 3.37391E+15 3.37398E+15 3.37453E+15 3.37468E+15 3.37468E+15 3.3747E+15 3.37486E+15 3.37486E+15 3.37567E+15 3.3759E+15 3.3759E+15 ``` Any suggestion on how to change them to 16-digit numbers? I have tried to change the data type, but got the error `Conversion failed when converting the varchar value '3.37201E+15' to data type int`. Thanks for the help! Edit: @X.L.Ant see my code below. I create this table from another one, which is just inserted directly from the xml file. Could the fact that some rows in the TOKEN column are empty cause an error? ``` CREATE TABLE MULTICURRENCY_CHECK ( TOKEN varchar(255) ) /*Merges all card tokens into 1 column, as in xml they are spread across different columns*/ INSERT INTO MULTICURRENCY_CHECK ( TOKEN ) SELECT no FROM gpstransactionsnew2 UNION ALL SELECT no19 FROM gpstransactionsnew2 UNION ALL SELECT no68 FROM gpstransactionsnew2 UNION ALL SELECT no93 FROM gpstransactionsnew2 UNION ALL SELECT no107 FROM gpstransactionsnew2 UNION ALL SELECT no121 FROM gpstransactionsnew2 SELECT REPLACE(TOKEN, 'OW1', ' ') FROM MULTICURRENCY_CHECK /*Converts exponential expression to number*/ SELECT CONVERT(numeric(16,0), CAST(TOKEN AS FLOAT)) FROM MULTICURRENCY_CHECK ```
Try to cast your string to float before converting it : ``` SELECT CONVERT(numeric(16,0), CAST(TOKEN AS FLOAT)) FROM MULTICURRENCY_CHECK ``` * See [this fiddle](http://sqlfiddle.com/#!6/a484f/1). I don't know what's the format of those numbers in your XML source, but with the data you provide, you'll end up with 33733 for instance followed by a bunch of zeroes. If you have a bigger precision in your XML, maybe you should tweak your importing settings to keep this precision instead of trying to deal with that in the DB. **EDIT:** Try testing your strings with `ISNUMERIC` to avoid the casting errors you're getting. Adding a raw output of your column will allow you to check which value fails to convert (i.e. converts to 0). ``` SELECT TOKEN, CONVERT(NUMERIC(16, 0), CAST(CASE WHEN ISNUMERIC(TOKEN) = 1 THEN TOKEN ELSE 0 END AS FLOAT)) FROM MULTICURRENCY_CHECK ```
**For SQL Server 2012+, use [TRY\_CONVERT()](https://learn.microsoft.com/en-us/sql/t-sql/functions/try-convert-transact-sql).** The use of ISNUMERIC() in xlecoustillier's edited answer does not protect against conversion failures. Given the following scenario: ``` CREATE TABLE test(a varchar(100)); insert into test values ('3.3733E+15'), ('3.3737E+15'), ('3.37391E+30'), --fails conversion. included to demonstrate the nature of TRY_CONVERT(). ('3.37398E+15'), ('3.37453E+15'), ('3.37468E+15'), ('3.3747E+15'), ('3.37486E+15'), ('3.37567E+15'), ('3.3759E+15'); SELECT TRY_CONVERT(numeric(16,0), CAST(a AS FLOAT)) FROM test ``` Results in only valid converted values: ``` --------------------------------------- 3373300000000000 NULL 3373910000000000 3373980000000000 3374530000000000 3374680000000000 3374700000000000 3374860000000000 3375670000000000 3375900000000000 ``` However: ``` SELECT a, CONVERT(NUMERIC(16, 0), CAST(CASE WHEN ISNUMERIC(a) = 1 THEN a ELSE 0 END AS FLOAT)) FROM test ``` Fails with: > Conversion failed when converting the varchar value '3.3733E+15' to > data type int. The issue is that all values in the 'a' column return 1 when passed to the ISNUMERIC() function. ``` SELECT CASE WHEN ISNUMERIC(a) = 1 THEN 'Yes' ELSE 'No' END as IsValueNumeric FROM test ``` [Try it on SQLFiddle](http://sqlfiddle.com/#!18/564a13/3) and/or compare with [xlecoustillier's sqlfiddle](http://sqlfiddle.com/#!6/a484f/1)
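The float-cast trick in both answers works the same way outside the database; here is a small Python sketch (token values taken from the question, plus an empty string to mimic the blank rows mentioned in the edit). Note that a double holds integers exactly only up to 2^53 (about 9.0e15), so 16-digit tokens below that bound survive the round trip, while larger values would lose precision:

```python
tokens = ["3.3733E+15", "3.37391E+15", ""]

def to_token(s):
    # Blank rows would make float() raise, so guard the conversion
    # much like the ISNUMERIC()/TRY_CONVERT() checks above do
    try:
        return int(float(s))
    except ValueError:
        return None

converted = [to_token(s) for s in tokens]
```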
Convert exponential to number in sql
[ "sql", "sql-server" ]
I am trying to write a query that will return 1 row for each 'Applicant' with a column that will display the employerId for the applicant if the number of employerIds is 1, otherwise it would return a string on that column saying "Multiple Applications". Please see below, I am working with MSSQL. Could someone guide me towards the right direction? ``` Table: Applicant ID | FirstName | LastName ---------------------------- 01 | John | Smith 02 | Mark | Doe 03 | George | Washington Table: Employer ID | ApplicantId | EmployerId ---------------------------- 01 | 01 | 100 02 | 01 | 103 03 | 02 | 101 04 | 03 | 106 Desired Output: FirstName | LastName | Employer --------------------------------- John | Smith | Multiple Applications Mark | Doe | 101 George | Washington | 106 ``` Thanks!
You can use a simple grouping and make use of MIN to get the value when there's only one item: ``` SELECT Applicant.FirstName, Applicant.Surname, CASE WHEN COUNT(Employer.ID) = 1 THEN CAST(MIN(Employer.Id) AS varchar(10)) ELSE 'Multiple Applications' END AS Employer FROM Applicant INNER JOIN Employer ON Applicant.Id = Employer.Id GROUP BY Applicant.Id, Applicant.FirstName, Applicant.Surname ```
``` select min(FirstName) as FirstName, min(LastName) as LastName, case when count(*) > 1 then 'Multiple Applications' else cast(min(EmployerId) as varchar(11)) end as Employer from Applicant as a inner join Employer as e on e.ApplicantId = a.Id group by a.Id ``` It's certainly possible that you could have multiple applicants with the same name, especially just last name. So you wouldn't want to group on those columns without grouping on the applicant id. Many people like to add extra group columns to avoid the need for a dummy aggregate expression on column from the one side of a 1:M join. (And other platforms allow you to refer to a constant or "functionally dependent" column without the aggregate at all.) I personally think that that approach masks the intent of the query. Those extra grouping could potentially change the query plan and/or performance. This case is interesting because we have to use a dummy aggregate inside the case expression anyway. So it makes that much more sense to mirror it in the first and last name columns as well.
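Either grouped query can be verified on the question's sample rows. Here is the second answer's shape run through Python's sqlite3 (sqlite standing in for MSSQL, with `varchar` replaced by sqlite's TEXT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Applicant (Id INTEGER, FirstName TEXT, LastName TEXT);
CREATE TABLE Employer (Id INTEGER, ApplicantId INTEGER, EmployerId INTEGER);
INSERT INTO Applicant VALUES
  (1, 'John', 'Smith'), (2, 'Mark', 'Doe'), (3, 'George', 'Washington');
INSERT INTO Employer VALUES (1, 1, 100), (2, 1, 103), (3, 2, 101), (4, 3, 106);
""")

# One output row per applicant; the CASE swaps in a label when count(*) > 1
rows = conn.execute("""
    SELECT min(a.FirstName), min(a.LastName),
           CASE WHEN count(*) > 1 THEN 'Multiple Applications'
                ELSE CAST(min(e.EmployerId) AS TEXT) END
    FROM Applicant a JOIN Employer e ON e.ApplicantId = a.Id
    GROUP BY a.Id
    ORDER BY a.Id
""").fetchall()
```

The result matches the desired output in the question, including the 'Multiple Applications' label for John Smith.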
SQL Case with conditional display
[ "sql", "sql-server", "t-sql" ]
I have this table: club(clubname, membername) ``` photography | Jim photography | Eve photography | Alex woodworking | Jim woodworking | Alex cooking | Alex ``` How do I find the names of people who are in at least the same clubs as Jim? In this example, I want to return Alex. --- I know how to find the names of people in ANY of the same clubs as Jim: ``` SELECT DISTINCT C1.membername FROM club C1, club C2 WHERE C1.clubname = C2.clubname AND C2.membername = 'Jim' AND C1.membername <> 'Jim' ``` But how do I specify that I only want the people who are in ALL of the same clubs as Jim?
You can do this in multiple ways; one way is to use `WHERE NOT EXISTS`, like below: ``` SELECT DISTINCT C1.membername FROM club C1 WHERE NOT EXISTS ( SELECT C2.membername FROM club C2 LEFT OUTER JOIN club C3 ON C2.clubname=C3.clubname AND C3.membername=C1.membername WHERE C2.membername = 'Jim' AND C3.membername IS NULL ) AND C1.membername != 'Jim' ``` I tried it with T-SQL; [here is the DEMO](http://sqlfiddle.com/#!6/1a0f4/1).
I was looking for a solution similar to a set difference operator which is basically the link PM 77-1 posted. This is what I got in the end: ``` SELECT membername FROM club WHERE clubname NOT IN ( SELECT C.clubname FROM club C WHERE C.clubname NOT IN ( SELECT clubname FROM club WHERE membername = 'Jim' ) ) AND membername <> 'Jim' GROUP BY membername HAVING COUNT(membername) = ( SELECT COUNT(*) FROM club WHERE membername = 'Jim' ) ``` Thanks for all your help.
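This is the classic relational-division problem, and the double `NOT EXISTS` pattern reads as "there is no club of Jim's that this member is missing from". A sketch of that formulation on the question's data, using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE club (clubname TEXT, membername TEXT);
INSERT INTO club VALUES
  ('photography', 'Jim'), ('photography', 'Eve'), ('photography', 'Alex'),
  ('woodworking', 'Jim'), ('woodworking', 'Alex'), ('cooking', 'Alex');
""")

# Keep a member if there is no club of Jim's they are missing from
members = conn.execute("""
    SELECT DISTINCT c1.membername
    FROM club c1
    WHERE c1.membername <> 'Jim'
      AND NOT EXISTS (
          SELECT 1 FROM club jim
          WHERE jim.membername = 'Jim'
            AND NOT EXISTS (
                SELECT 1 FROM club c2
                WHERE c2.membername = c1.membername
                  AND c2.clubname = jim.clubname))
""").fetchall()
```

Eve is excluded because she is missing from woodworking; Alex, who covers both of Jim's clubs, is returned.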
SQL: Find the names of the people in at least the same clubs as one specific person
[ "sql" ]
Given the following schema: ``` CREATE TABLE identifiers ( id TEXT PRIMARY KEY ); CREATE TABLE days ( day DATE PRIMARY KEY ); CREATE TABLE data ( id TEXT REFERENCES identifiers , day DATE REFERENCES days , values NUMERIC[] ); CREATE INDEX ON data (id, day); ``` What is the best way to count all distinct days between two timestamps? I've tried the following two methods: ``` EXPLAIN ANALYZE SELECT COUNT(DISTINCT day) FROM data WHERE day BETWEEN '2010-01-01' AND '2011-01-01'; QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=200331.32..200331.33 rows=1 width=4) (actual time=1647.574..1647.575 rows=1 loops=1) -> Index Only Scan using data_day_sid_idx on data (cost=0.56..196942.12 rows=1355678 width=4) (actual time=0.348..1180.566 rows=1362532 loops=1) Index Cond: ((day >= '2010-01-01'::date) AND (day <= '2011-01-01'::date)) Heap Fetches: 0 Total runtime: 1647.865 ms (5 rows) EXPLAIN ANALYZE SELECT COUNT(DISTINCT day) FROM days WHERE day BETWEEN '2010-01-01' AND '2011-01-01'; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=18.95..18.96 rows=1 width=4) (actual time=0.481..0.481 rows=1 loops=1) -> Index Only Scan using days_pkey on days (cost=0.28..18.32 rows=252 width=4) (actual time=0.093..0.275 rows=252 loops=1) Index Cond: ((day >= '2010-01-01'::date) AND (day <= '2011-01-01'::date)) Heap Fetches: 252 Total runtime: 0.582 ms (5 rows) ``` The `COUNT(DISTINCT day)` against `days` runs well, but it requires me to keep a secondary table (`days`) to keep the performance reasonable. In a general sense, I'd like to test if a recursive cte will allow me to achieve similar performance **without** maintaining a secondary table. 
My query looks like this, but doesn't run yet: ``` EXPLAIN ANALYZE WITH RECURSIVE cte AS ( (SELECT day FROM data ORDER BY 1 LIMIT 1) UNION ALL ( -- parentheses required SELECT d.day FROM cte c JOIN data d ON d.day > c.day ORDER BY 1 LIMIT 1 ) ) SELECT day FROM cte WHERE day BETWEEN '2010-01-01' AND '2011-01-01'; ``` **Updates** Thanks to everyone for the ideas. Looks like maintaining a trigger-based table of distinct days is the best way to go, both storage and performance-wise. Thanks to @Erwin's update, the recursive CTE is back in the running. Very useful. ``` WITH RECURSIVE cte AS ( ( -- parentheses required because of LIMIT SELECT day FROM data WHERE day >= '2010-01-01'::date -- exclude irrelevant rows early ORDER BY 1 LIMIT 1 ) UNION ALL SELECT (SELECT day FROM data WHERE day > c.day AND day < '2011-01-01'::date -- see comments below ORDER BY 1 LIMIT 1) FROM cte c WHERE day IS NOT NULL -- necessary because corr. subq. always returns row ) SELECT count(*) AS ct FROM cte WHERE day IS NOT NULL; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=53.35..53.36 rows=1 width=0) (actual time=18.217..18.217 rows=1 loops=1) CTE cte -> Recursive Union (cost=0.43..51.08 rows=101 width=4) (actual time=0.194..17.594 rows=253 loops=1) -> Limit (cost=0.43..0.46 rows=1 width=4) (actual time=0.191..0.192 rows=1 loops=1) -> Index Only Scan using data_day_idx on data data_1 (cost=0.43..235042.00 rows=8255861 width=4) (actual time=0.189..0.189 rows=1 loops=1) Index Cond: (day >= '2010-01-01'::date) Heap Fetches: 0 -> WorkTable Scan on cte c (cost=0.00..4.86 rows=10 width=4) (actual time=0.066..0.066 rows=1 loops=253) Filter: (day IS NOT NULL) Rows Removed by Filter: 0 SubPlan 1 -> Limit (cost=0.43..0.47 rows=1 width=4) (actual time=0.062..0.063 rows=1 loops=252) -> Index Only Scan using data_day_idx on data (cost=0.43..1625.59 rows=52458 
width=4) (actual time=0.060..0.060 rows=1 loops=252) Index Cond: ((day > c.day) AND (day < '2011-01-01'::date)) Heap Fetches: 0 -> CTE Scan on cte (cost=0.00..2.02 rows=100 width=0) (actual time=0.199..18.066 rows=252 loops=1) Filter: (day IS NOT NULL) Rows Removed by Filter: 1 Total runtime: 19.355 ms (19 rows) ``` And the also discussed `EXISTS` query ``` EXPLAIN ANALYZE SELECT count(*) AS ct FROM generate_series('2010-01-01'::date, '2010-12-31'::date, '1d'::interval) d(day) WHERE EXISTS (SELECT 1 FROM data WHERE day = d.day::date); QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=674.32..674.33 rows=1 width=0) (actual time=95.049..95.049 rows=1 loops=1) -> Nested Loop Semi Join (cost=0.45..673.07 rows=500 width=0) (actual time=12.438..94.749 rows=252 loops=1) -> Function Scan on generate_series d (cost=0.01..10.01 rows=1000 width=8) (actual time=9.248..9.669 rows=365 loops=1) -> Index Only Scan using data_day_idx on data (cost=0.44..189.62 rows=6023 width=4) (actual time=0.227..0.227 rows=1 loops=365) Index Cond: (day = (d.day)::date) Heap Fetches: 0 Total runtime: 95.620 ms (7 rows) ```
Several notes: ## Simple query on table `day` ``` SELECT COUNT(DISTINCT day) FROM days WHERE day BETWEEN '2010-01-01' AND '2011-01-01'; ``` While `day` is defined as PK, `DISTINCT` is just expensive noise. ## Recursive CTE with correlated suquery This is the alternative if there is **no `day` table** with unique entries. The technique pays if there are multiple to many rows per day, so that the equivalent of a loose index scan is actually faster than a simple `DISTINCT` on the base table: ``` WITH RECURSIVE cte AS ( ( -- parentheses required because of LIMIT SELECT day FROM data WHERE day >= '2010-01-01' -- exclude irrelevant rows early ORDER BY 1 LIMIT 1 ) UNION ALL SELECT (SELECT day FROM data WHERE day > c.day AND day < '2011-01-01' -- see below ORDER BY 1 LIMIT 1) FROM cte c WHERE day IS NOT NULL -- necessary because corr. subq. always returns row ) SELECT count(*) AS ct FROM cte WHERE day IS NOT NULL; ``` ### Index Only makes sense in combination with a matching index on `data`: ``` CREATE INDEX data_day_idx ON data (day); ``` `day` must be the leading column. The index you have in the question on `(id, day)` can be used too, but is far less efficient: * [Working of indexes in PostgreSQL](https://dba.stackexchange.com/a/7484/3684) * [Is a composite index also good for queries on the first field?](https://dba.stackexchange.com/a/27493/3684) ### Notes It is much cheaper to exclude irrelevant rows early. I integrated your predicate into the query. Detailed explanation: * **[Optimize GROUP BY query to retrieve latest row per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)** The case at hand is even simpler - the simplest possible actually. Your original time frame was `day BETWEEN '2010-01-01' AND '2011-01-01'`. But `BETWEEN .. AND ..` *includes* upper and lower bound, so you'd get all of 2010 plus 2011-01-01. You probably want to *exclude* the upper bound. 
Use `d.day < '2011-01-01'` (not `<=`). See: * [How to add a day/night indicator to a timestamp column?](https://dba.stackexchange.com/a/267877/3684) ### `EXISTS` for this special case Since you are testing for a range of enumerable days (as opposed to a range with an infinite number of possible values), you can test this alternative with an [`EXISTS`](https://www.postgresql.org/docs/current/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS) semi-join: ``` SELECT count(*) AS ct FROM generate_series(timestamp '2010-01-01' , timestamp '2010-12-31' , interval '1 day') AS d(day) WHERE EXISTS (SELECT FROM data WHERE day = d.day::date); ``` Why is this form of `generate_series()` optimal? * [Generating time series between two dates in PostgreSQL](https://stackoverflow.com/questions/14113469/generating-time-series-between-two-dates-in-postgresql/46499873#46499873) The same simple index is essential again. *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=ba380a63600b343009f47a80afaa881d)* demonstrating both with big test table. Old [sqlfiddle](http://sqlfiddle.com/#!17/58f88/1)
I'm not really sure why the index on data(day) is slower, that would seem the simplest option. But if that's too slow, you could try creating a materialised view of your days. Basically just: ``` create materialized view days as select day from data group by day; ``` I don't believe postgres updates materialised views automatically, but at least then all the maintenance you need to do is periodically refresh it. Or perhaps create a trigger on data which refreshes the view. Bear in mind of course that refreshing this view might take some time depending on the size of the data table, you might only want to do it hourly or nightly if you can get away with it. Alternatively if this table gets a lot of updates and you need the distinct day count to be consistent at all times, you could consider going back to your original separate days table, but reduce the maintenance overhead by creating a trigger on the data table to update it.
Counting distinct rows using recursive cte over non-distinct index
[ "sql", "postgresql", "count", "postgresql-9.3", "unique-index" ]
I have a `SELECT` that returns: ``` | NAZEV 1 test 2 test 3 test111 4 test111 5 test111 6 test12 ``` I want to add `SUM()` (or `COUNT()`) so that the result is: ``` | NAZEV | Pocet 1 test 2 2 test111 3 3 test12 1 ``` My current script: ``` select sb.nazev from Aplikacni_log lg join zadavaci_postup zp on zp.id = lg.id_zp join subjekt sb on sb.id = zp.id_zadavatel where lg.create_cas > to_date('08.11.2014', 'DD.MM.YYYY') order by sb.nazev asc ```
In order to use functions like SUM() and COUNT() for a specific field you need to use a GROUP BY on that field. Some samples could be found at `http://www.w3schools.com/sql/sql_func_count.asp` your code should become: ``` select sb.nazev, count(sb.nazev) from Aplikacni_log lg join zadavaci_postup zp on zp.id = lg.id_zp join subjekt sb on sb.id = zp.id_zadavatel where lg.create_cas > to_date('08.11.2014', 'DD.MM.YYYY') group by sb.nazev order by sb.nazev asc ```
In order to use `sum` or `count` *and a column to group on*, you need the `group by` clause: ``` select sb.nazev, count(*) from Aplikacni_log lg ... group by sb.nazev order by sb.nazev asc ``` If you don't need a column (or columns) to group on, you can omit the `group by`. The results will be the total `count` or `sum`: ``` select count(*) from Aplikacni_log lg ... ```
add sum() or count () to query
[ "sql", "oracle" ]
Hope someone can help. I have been trying a few queries but I am not getting the desired result. I need to identify the most "claimed" users within my table without discarding the other columns from the final report. A user can have more than one record in the table; the data in the other columns will differ, so only the user will match. The query below only gives me the count per user without the details. ``` SELECT User, count (*) total_record FROM mytable GROUP BY User ORDER BY count(*) desc ``` Table: ``` mytable Column 1 = User Column 2 = Ref Number Column 3 = Date ``` The first column is the unique identifier, while the data in the other columns can differ, so the result should keep every row per user and be sorted from the most claimed user down to the least claimed. ``` User|Ref Num|Date 1|a|20150317 1|b|20150317 2|c|20150317 3|d|20150317 4|e|20150317 1|f|20150317 4|e|20150317 ``` The below data is how the values should be returned. ``` User|Ref Num|Date|Count 1|a|20150317|3 1|b|20150317|3 1|f|20150317|3 2|c|20150317|1 3|d|20150317|1 4|e|20150317|2 4|e|20150317|2 ``` Hope it makes sense. Thank you
As you're using MSSQL you can use the `OVER()` clause like so: ``` SELECT [user], mt.ref_num, mt.[date], COUNT(mt.[user]) OVER(PARTITION BY mt.[user]) FROM myTable mt ``` More about the `OVER` clause can be found here: <https://msdn.microsoft.com/en-us/library/ms189461.aspx> As per your comment you can use the wildcard `*` like so: ``` SELECT mt.*, COUNT(mt.[user]) OVER(PARTITION BY mt.[user]) FROM myTable mt ``` This would get you every column as well as the result of the count.
If you want to order by the number of records for each user, then use window functions instead of aggregation: ``` SELECT t.* FROM (SELECT t.*, count(*) OVER (partition by user) as cnt FROM mytable t ) t ORDER BY cnt DESC, user; ``` Note that I added `user` to the `order by` so users with the same count will appear together in the list.
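Both answers lean on `COUNT(*) OVER (PARTITION BY ...)`. Where window functions are unavailable, the same per-user total can be attached to every row by joining back to a grouped subquery. A sketch using Python's stdlib `sqlite3` with the question's sample rows (this is an equivalent rewrite, not the answers' exact syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (user INTEGER, ref_num TEXT, date TEXT);
INSERT INTO myTable VALUES
  (1, 'a', '20150317'), (1, 'b', '20150317'), (2, 'c', '20150317'),
  (3, 'd', '20150317'), (4, 'e', '20150317'), (1, 'f', '20150317'),
  (4, 'e', '20150317');
""")

# Attach each user's total row count to every one of that user's rows
# by joining the grouped counts back onto the base table.
rows = conn.execute("""
    SELECT mt.user, mt.ref_num, mt.date, c.cnt
    FROM myTable mt
    JOIN (SELECT user, COUNT(*) AS cnt FROM myTable GROUP BY user) c
      ON c.user = mt.user
    ORDER BY c.cnt DESC, mt.user, mt.ref_num
""").fetchall()

for r in rows:
    print(r)
```

A `COUNT(*) OVER (PARTITION BY user)` in an engine that supports it returns exactly the same rows; the join form just materializes the counts first.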
Highest Record for a set user
[ "", "sql", "sql-server", "t-sql", "" ]
I have a string `"abc"`, and I want to search, in a SQL table, for the values that contain either a, b or c. This is what I have right now: ``` Select * from TABLE where NAME like "abc" ; ``` This didn't work, but I know if I try something like ``` where Name like "a" or Name like "b" or .... ``` it will work. Is there an easier way to do this? Since I don't want to separate my string into characters.
You can use a regular expression for this. Have a look at the following: ``` Select * from TABLE where NAME REGEXP "[abc]"; ```
``` select * from ab where name REGEXP '[abc]' ```
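As a portability note: `REGEXP` is built into MySQL, but some engines only parse the operator and expect the host application to supply the implementation. A sketch with Python's stdlib `sqlite3`, where the `REGEXP` operator dispatches to a user-registered `regexp(pattern, value)` function (table name invented for the demo):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite rewrites `x REGEXP y` as a call to regexp(y, x),
# so the registered function receives (pattern, value).
conn.create_function(
    "regexp", 2, lambda pattern, value: re.search(pattern, value) is not None
)

conn.executescript("""
CREATE TABLE names (name TEXT);
INSERT INTO names VALUES ('abc'), ('bcd'), ('xyz'), ('cat');
""")

matches = [row[0] for row in conn.execute(
    "SELECT name FROM names WHERE name REGEXP '[abc]' ORDER BY name"
)]
print(matches)
```

The character class `[abc]` matches any value containing at least one of the three letters, which is what the question asks for.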
SQL search for string containing characters
[ "", "sql", "" ]
Hey I'm having a bit of trouble coming up with the SQL for the action I would like to perform. I have a table listings which has a start\_price field and an active field. I only want to pull up listings that are active(1). I want to order these listings by the current price. I do not have a current price field in either my listings or bids table. So if a listing\_id is in the bids table I want to assign the largest value in the amount field that corresponds to the matching listing\_id from the bids table, to the current price, otherwise the start\_price from listings should be assigned to the current price. This is what I have come up with so far. I have been mucking around with different things and cannot come up with the correct syntax. ``` SELECT DISTINCT * FROM listings l LEFT JOIN bids b ON l.id = b.listing_id WHERE l.active = 1 ORDER BY l.start_price ```
You'll need a GROUP BY and an aggregate function, in this case MAX(): ``` SELECT l.*, /* columnlist here */ COALESCE(MAX(b.current_price), l.start_price) price FROM listings l LEFT JOIN bids b ON l.id = b.listing_id WHERE l.active = 1 GROUP BY l.listing_id ORDER BY price ``` When there are no bids on a listing, the MAX function returns NULL, so the second value in the COALESCE will be selected.
There are multiple ways that you can do the calculation. I think the key idea for you is that you can use a column alias in the `order by` clause. The following does the calculation using a correlated subquery, then using `coalesce()` if there is no match: ``` select l.*, coalesce( (select max(b.price) from bids b where b.listing_id = l.listing_id ), l.start_price ) as current_price from listings l where l.active = 1 order by current_price; ``` Note: I am a little concerned about using the maximum price from the bids, rather than the most recent price. Is it possible that bids could be withdrawn, but the row remains in the table?
MySQL - Trying to sort by two different columns from different tables
[ "", "mysql", "sql", "" ]
With a `SAS` dataset like ``` Ob x year pid grp 1 3.88 2001 1 a 2 2.88 2002 1 a 3 0.13 2004 1 a 4 3.70 2005 1 a 5 1.30 2007 1 a 6 0.95 2001 2 b 7 1.79 2002 2 b 8 1.59 2004 2 b 9 1.29 2005 2 b 10 0.96 2007 2 b ``` I would like to get ``` Ob x year pid grp grp X_F1 XL1 1 3.88 2001 1 a a 2.88 . 2 2.88 2002 1 a a . 3.88 3 0.13 2004 1 a a 3.7 . 4 3.7 2005 1 a a . 0.13 5 1.3 2007 1 a a . . 6 0.95 2001 2 b b 1.79 . 7 1.79 2002 2 b b . 0.95 8 1.59 2004 2 b b 1.29 . 9 1.29 2005 2 b b . 1.59 10 0.96 2007 2 b b . . ``` where for observations with the same `pid` and each year `t`, * `x_F1` is the value of `x` in year `t+1` and * `x_L1` is the value of x in year `t-1` In my data set, not all `pid`s have observations in successive years. My attempt using the `expand proc` ``` proc expand data=have out=want method=none; by pid; id year; convert x = x_F1 / transformout=(lead 1); convert x = x_F2 / transformout=(lead 2); convert x = x_F3 / transformout=(lead 3); convert x = x_L1 / transformout=(lag 1); convert x = x_L2 / transformout=(lag 2); convert x = x_L3 / transformout=(lag 3); run; ``` did not account for the fact that years are not consecutive.
You could stick with `proc expand` to insert the missing years into your data (utilising the `extrapolate` statement). I've set the `from` value to `day` as this is a sequential integer check for days which will work with your data as YEAR is stored as an integer rather than a date. Like the other answers, it requires 2 passes of the data, but I don't think there's an alternative to this. ``` data have; input x year pid grp $; datalines; 3.88 2001 1 a 2.88 2002 1 a 0.13 2004 1 a 3.70 2005 1 a 1.30 2007 1 a 0.95 2001 2 b 1.79 2002 2 b 1.59 2004 2 b 1.29 2005 2 b 0.96 2007 2 b ; run; proc expand data = have out = have1 method=none extrapolate from=day to=day; by pid; id year; run; proc expand data=have1 out=want method=none; by pid; id year; convert x = x_F1 / transformout=(lead 1); convert x = x_F2 / transformout=(lead 2); convert x = x_F3 / transformout=(lead 3); convert x = x_L1 / transformout=(lag 1); convert x = x_L2 / transformout=(lag 2); convert x = x_L3 / transformout=(lag 3); run; ``` or this can be done in one go, subject to whether the value of x is important in the final dataset (see comment below). ``` proc expand data=have1 out=want1 method=none extrapolate from=day to=day; by pid; id year; convert x = x_F1 / transformout=(lead 1); convert x = x_F2 / transformout=(lead 2); convert x = x_F3 / transformout=(lead 3); convert x = x_L1 / transformout=(lag 1); convert x = x_L2 / transformout=(lag 2); convert x = x_L3 / transformout=(lag 3); run; ```
Here is a simple approach using `proc sql`. It joins the data with itself twice; once for the forward and once for the backward lag, then takes the required values where they exist. ``` proc sql; create table want as select a.*, b.x as x_f1, c.x as x_l1 from have as a left join have as b on a.pid = b.pid and a.year = b.year - 1 left join have as c on a.pid = c.pid and a.year = c.year + 1 order by a.pid, a.year; quit; ``` Caveats: * It will not scale well to larger numbers of lags. * This is probably not the quickest approach. * It requires that there be only one observation for each `pid` `year` pair, and would need modifying if this is not the case.
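The `proc sql` self-join above is plain SQL, so the idea is easy to verify in any engine. A small sketch with Python's stdlib `sqlite3`, reproducing the first employee's rows from the question (column names follow the SAS example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE have (x REAL, year INTEGER, pid INTEGER);
INSERT INTO have VALUES
  (3.88, 2001, 1), (2.88, 2002, 1), (0.13, 2004, 1),
  (3.70, 2005, 1), (1.30, 2007, 1);
""")

# Join the table to itself on year +/- 1 so that a missing year
# yields NULL instead of the value from a non-adjacent year.
rows = conn.execute("""
    SELECT a.pid, a.year, a.x, b.x AS x_f1, c.x AS x_l1
    FROM have AS a
    LEFT JOIN have AS b ON a.pid = b.pid AND a.year = b.year - 1
    LEFT JOIN have AS c ON a.pid = c.pid AND a.year = c.year + 1
    ORDER BY a.pid, a.year
""").fetchall()

for r in rows:
    print(r)
```

The 2004 gap shows up exactly as in the question's desired output: 2002 gets no lead and 2004 gets no lag.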
Generating Leads and lags for non-consecutive time periods in SAS
[ "", "sql", "sas", "" ]
I am trying to find the max date across multiple columns (in multiple tables). I have an SQL query that I think is almost workable, but am having a hard time debugging it. The problem is that it never returns anything. Any ideas on what I'm doing incorrectly? I don't get any errors, just an empty row "maxdate." Here's my query: ``` SELECT ( SELECT MAX(dates) FROM ( SELECT dates = t1.UpdateDate UNION ALL SELECT dates = t2.UpdateDate UNION ALL SELECT dates = t3.UpdateDate UNION ALL SELECT dates = t4.UpdateDate ) as dateAcrossColumns ) as maxdate FROM table1 as t1 join table2 as t2 on t1.ID = t2.ID join table3 as t3 on t1.ID = t3.ID join table4 as t4 on t1.ID = t4.ID join table5 as t5 on t1.Status = t5.Status WHERE t1.id = @param and t5.status <> 3 ```
In your first join you join to table 2 using table 1's ID and table 3's ID. I presume you mean table 1's ID and table 2's ID? ``` join table2 as t2 on t1.ID = t3.ID ``` Should be: ``` join table2 as t2 on t1.ID = t2.ID ``` Now that this has been changed, I tried your query and it works fine. Are you sure that your joins are bringing back any rows? To test it, replace your `SELECT` statement with `SELECT *`; if you get no rows back then they are being filtered out somewhere in your joins / `WHERE`.
If it's possible for any of the tables to not have any rows for the given @param, the (inner) joins will filter out all of the other tables. Try unioning the dates and get the max of the unions ``` WITH allDates AS ( SELECT UpdateDate FROM Table1 WHERE ID = @param UNION SELECT UpdateDate FROM Table2 WHERE ID = @param UNION SELECT UpdateDate FROM Table3 WHERE ID = @param UNION SELECT UpdateDate FROM Table4 WHERE ID = @param UNION SELECT t5.UpdateDate FROM Table5 AS t5 JOIN Table1 AS t1 ON t5.Status = t1.Status WHERE t1.ID = @param ) SELECT MAX( UpdateDate ) AS MaxDate FROM allDates ```
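The union-then-aggregate shape in the answer above is easy to verify in miniature. A hedged sketch with Python's stdlib `sqlite3` - the table names follow the question, the dates are invented and stored as ISO text so `MAX` works lexically, and `table3` is left empty to show that a table with no matching rows no longer wipes out the result the way an inner join would:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (ID INTEGER, UpdateDate TEXT);
CREATE TABLE table2 (ID INTEGER, UpdateDate TEXT);
CREATE TABLE table3 (ID INTEGER, UpdateDate TEXT);
INSERT INTO table1 VALUES (7, '2015-01-10'), (8, '2015-06-01');
INSERT INTO table2 VALUES (7, '2015-03-02');
-- table3 is deliberately left empty.
""")

# Union the candidate dates first, then aggregate: an empty table simply
# contributes no rows instead of filtering everything out via inner joins.
(max_date,) = conn.execute("""
    SELECT MAX(UpdateDate) FROM (
        SELECT UpdateDate FROM table1 WHERE ID = ?
        UNION
        SELECT UpdateDate FROM table2 WHERE ID = ?
        UNION
        SELECT UpdateDate FROM table3 WHERE ID = ?
    )
""", (7, 7, 7)).fetchone()
print(max_date)
```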
SQL: Calculating the Max date across multiple tables
[ "", "sql", "sql-server", "database", "select", "join", "" ]
I want to remove the last character in a row, but only if that character is "-". For example my row looks like this: ``` hello-world today-is-monday- ``` It should be like this: ``` hello-world today-is-monday ``` How can I do that in SQL?
You can use `trim()` function as ``` mysql> select trim( trailing '-' from 'hello-'); +-----------------------------------+ | trim( trailing '-' from 'hello-') | +-----------------------------------+ | hello | +-----------------------------------+ 1 row in set (0.00 sec) mysql> select trim( trailing '-' from 'hello'); +----------------------------------+ | trim( trailing '-' from 'hello') | +----------------------------------+ | hello | +----------------------------------+ 1 row in set (0.00 sec) ``` <http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_trim> So in the query replace the hard coded `hello-` with your column name.
If you mean UPDATE the table then this ``` UPDATE table SET field = SUBSTRING(field, 1, CHAR_LENGTH(field) - 1) WHERE field LIKE '%-' ``` If you mean you want to edit the output but not the data in the table then this ``` SELECT CASE WHEN field LIKE '%-' THEN SUBSTRING(field, 1, CHAR_LENGTH(field) - 1) ELSE field END AS field ```
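Both approaches are easy to sanity-check outside MySQL. A sketch of the conditional-strip logic using Python's stdlib `sqlite3`, which spells `CHAR_LENGTH` as `length` and `SUBSTRING` as `substr` (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (field TEXT);
INSERT INTO t VALUES ('hello-world'), ('today-is-monday-');
""")

# Strip the final character only when it is '-', mirroring the
# SUBSTRING/CASE approach from the answer above.
rows = [row[0] for row in conn.execute("""
    SELECT CASE WHEN field LIKE '%-'
                THEN substr(field, 1, length(field) - 1)
                ELSE field
           END
    FROM t
    ORDER BY field
""")]
print(rows)
```

Unlike `TRIM(TRAILING '-' ...)`, which removes every trailing dash, this removes at most one.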
SQL Remove Last Character if it is "-"
[ "", "mysql", "sql", "" ]
I have the following REGEX: ^[-A-Za-z0-9/.]+$ This currently checks whether the value entered into a textbox matches this. If not, it throws an error. I need to check whether anything has already gone into the database that doesn't match this. I have tried: ``` SELECT * FROM *table* WHERE ([url] NOT LIKE '^[-A-Za-z0-9/.]+$') SELECT * FROM *table* WHERE PATINDEX ('^[-A-Za-z0-9/.]+$', [url]) ``` **UPDATE** So after a bit of research I've realised I don't think I can use REGEXP. I thought I could do something like this? It's not giving me the expected results, but it runs, unlike anything else. Can anyone spot anything wrong with it? ``` SELECT *, CASE WHEN [url] LIKE '^[-A-Za-z0-9/.]+$' THEN 'Match' ELSE 'No Match' END Validates FROM *table* ```
This is what I have used in the end: ``` SELECT *, CASE WHEN [url] NOT LIKE '%[^-A-Za-z0-9/.]%' THEN 'Valid' ELSE 'Not valid' END [Validate] FROM *table* ORDER BY [Validate] ```
> **Disclaimer:** The original question was about MySQL. The SQL Server answer is below. # MySQL In MySQL, the regex syntax is the following: ``` SELECT * FROM YourTable WHERE (`url` NOT REGEXP '^[-A-Za-z0-9/.]+$') ``` Use the `REGEXP` clause instead of `LIKE`. The latter is for pattern matching using `%` and `_` wildcards. --- # SQL Server Since you made a typo, and you're using SQL Server (not MySQL), you'll have to create a user-defined CLR function to expose regex functionality. Take a look at [this article](http://web.archive.org/web/20160307210334/http://blogs.msdn.com/b/sqlclr/archive/2005/06/29/regex.aspx) for more details.
MSSQL Regular expression
[ "", "sql", "sql-server", "regex", "" ]
I have a small mysql table (MySQL version 5.6.23) : ``` +-----------+-----------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------+-----------------+------+-----+---------+----------------+ | id | int(6) unsigned | NO | PRI | NULL | auto_increment | | dividends | float(8,6) | YES | | NULL | | +-----------+-----------------+------+-----+---------+----------------+ ``` My where clause follows the [row-subqueries](http://dev.mysql.com/doc/refman/5.6/en/row-subqueries.html) syntax. If I do: ``` SELECT id, dividends FROM test where (id,dividends) >= (660,0.5); ``` or ``` SELECT id, dividends FROM test where (id,dividends) >= (660,CAST(0.5 AS DECIMAL(8,6))); ``` I get this result: ``` +-----+-----------+ | id | dividends | +-----+-----------+ | 660 | 0.500000 | | 661 | 0.470000 | | 662 | 0.470000 | | 663 | 0.470000 | | 664 | 2.580000 | | 665 | 2.581000 | ... ``` It seems to me that dividends >= 0.5 is not taken into consideration. Why?
You're using row constructors. MySQL treats them exactly like *rows of a table*. Thus `WHERE (id,dividends) >= (660,0.5)` effectively does the same as: 1. `ORDER BY id,dividends`; 2. Find the point at which `(660,0.5)` would sit within that ordering; 3. Filter for only those records that are equal to or greater than that point in the ordering. Consequently, it is the same as `WHERE (id=660 AND dividends>=0.5) OR id>660`. It appears as though the logic you really wish to express is `WHERE id>=660 AND dividends>=0.5`.
Looking at your where clause, everything regarding id is evaluated first, and then everything else is evaluated but must not conflict with the first id results. It's also possible that you do not have any id greater than 660 which has a dividend greater than 0.5. ``` (id,dividends) >= (660,CAST(0.5 AS DECIMAL(8,6))); ``` The SQL query will always start by evaluating id first for all fields >= 660, then evaluate dividends. You can try running the query below and check the results: ``` where ((id) >= (660)) AND ((dividends) >= (0.5)); ```
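The lexicographic behaviour described in the accepted answer is easy to reproduce. A sketch with Python's stdlib `sqlite3` (SQLite has supported row values since version 3.15; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (id INTEGER, dividends REAL);
INSERT INTO test VALUES (659, 0.9), (660, 0.47), (660, 0.5), (661, 0.1);
""")

# Row values compare lexicographically, like rows in an ORDER BY ...
row_value = conn.execute("""
    SELECT id, dividends FROM test
    WHERE (id, dividends) >= (660, 0.5)
    ORDER BY id, dividends
""").fetchall()

# ... which matches this OR-expansion, not an AND over both columns.
expanded = conn.execute("""
    SELECT id, dividends FROM test
    WHERE (id = 660 AND dividends >= 0.5) OR id > 660
    ORDER BY id, dividends
""").fetchall()

print(row_value)
```

Note that `(661, 0.1)` is kept even though its dividend is below 0.5, exactly the surprise from the question.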
MySQL row subquery comparison issue
[ "", "mysql", "sql", "subquery", "" ]
## Employee Table ``` Id Name ------------- 1 Joy 2 Moni 3 Evan 4 farhad ``` ## Absent table ``` Date(y/m/d) Id ----------------- 2015/1/1 1 2015/1/3 1 2015/1/4 1 2015/1/5 1 2015/1/1 2 2015/1/4 2 2015/1/5 2 2015/1/5 3 ``` I need data from the above two tables as below ``` Name Date Joy 2015/1/5, 2015/1/4, 2015/1/3 Moni 2015/1/5, 2015/1/4 Evan 2015/1/5 ``` Point 1: I will not take date `2015/1/1` because date `2015/1/2` is missing for employee id '1' (Joy) in the Absent table. Point 2: I will not take date '2015/1/1' because dates '2015/1/3' and '2015/1/2' are missing for employee id '2' (Moni) in the Absent table. I have tried the problem like this below; this worked fine for my problem, but it takes too much execution time for big data. How can I do this another way so that I get minimum execution time? ## CODE ``` select a.Id,a.name , [dbo].[hello] ('2015/1/1','2015/1/5',a.Id) From Employee a ALTER FUNCTION [dbo].[hello](@start datetime,@End datetime,@Id int) returns varchar(1111) AS begin declare @TempDate DateTime, @CombainedDate varchar(1111) = ''; while(@End>=@start) begin select @TempDate = (select distinct Date from Absent d where Date=@End and EmployeeId=@Id) if @TempDate = @End begin set @End = DATEADD(day, -1, @End) set @CombainedDate += ',' + cast(@TempDate as varchar(1111)) end else begin break end end return @CombainedDate end ```
Code (modifying Giorgi Nakeuri's solution): ``` DECLARE @sd DATETIME, @ed DATETIME SET @sd = '20150101' SET @ed = '20150106' DECLARE @e TABLE ( ID INT , Name NVARCHAR(MAX) ) INSERT INTO @e SELECT 1, 'Joy' UNION SELECT 2, 'Moni' UNION SELECT 3, 'Evan' UNION SELECT 4, 'Farhad' DECLARE @a TABLE ( ID INT, d DATETIME ) INSERT INTO @a (ID, D) SELECT 1, '20150101' union SELECT 1, '20150103' union SELECT 1, '20150104' union SELECT 1, '20150105' union SELECT 2, '20150101' union SELECT 2, '20150104' union SELECT 2, '20150105' union SELECT 3, '20150105'; DECLARE @T TABLE ( ID INT, d DATETIME ) INSERT INTO @T(ID,d) SELECT X.ID, X.D FROM (SELECT ID,d FROM @a WHERE d BETWEEN @sd AND @ed) X INNER JOIN (SELECT ID, d FROM @a WHERE d = @ed) Y ON X.ID=Y.ID; WITH cte AS ( SELECT ID , sd = MIN(d) , ed = MAX(d) , ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY MAX(d) - MIN(d) DESC, MAX(d) DESC ) AS rn FROM ( SELECT ID , CAST(d AS INT) AS d , rn = CAST(d AS INT) - ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY d ) FROM @T WHERE d >= @sd AND d <= @ed ) a GROUP BY ID , rn ) SELECT e.Name , ( SELECT STUFF((SELECT ',' + CONVERT(NVARCHAR(8), d, 112) FROM @T a WHERE a.ID = c.ID AND a.d >= c.sd AND a.d <= c.ed ORDER BY d desc FOR XML PATH('') , TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '') ) AS Date FROM cte c JOIN @e e ON e.ID = c.ID WHERE rn = 1 ```
Here is a demo. It uses a gaps-and-islands solution and then the XML PATH technique for concatenating rows into one string: ``` DECLARE @sd DATE = '20150101' , @ed DATE = '20150105' DECLARE @e TABLE ( ID INT , Name NVARCHAR(MAX) ) DECLARE @a TABLE ( ID INT, d DATETIME ) INSERT INTO @e VALUES ( 1, 'Joy' ), ( 2, 'Moni' ), ( 3, 'Evan' ), ( 4, 'Farhad' ) INSERT INTO @a VALUES ( 1, '20150101' ), ( 1, '20150103' ), ( 1, '20150104' ), ( 1, '20150105' ), ( 2, '20150101' ), ( 2, '20150104' ), ( 2, '20150105' ), ( 3, '20150105' ); WITH cte AS ( SELECT ID , sd = MIN(d) , ed = MAX(d) , ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY MAX(d) - MIN(d) DESC, MAX(d) DESC ) AS rn FROM ( SELECT ID , CAST(d AS INT) AS d , rn = CAST(d AS INT) - ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY d ) FROM @a WHERE d >= @sd AND d <= @ed ) a GROUP BY ID , rn ) SELECT e.Name , ( SELECT STUFF((SELECT ',' + CONVERT(NVARCHAR(8), d, 112) FROM @a a WHERE a.ID = c.ID AND a.d >= c.sd AND a.d <= c.ed ORDER BY d desc FOR XML PATH('') , TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '') ) AS Date FROM cte c JOIN @e e ON e.ID = c.ID WHERE rn = 1 ``` Output: ``` Name Date Joy 20150105,20150104,20150103 Moni 20150105,20150104 Evan 20150105 ```
How to select multiple rows in one column with a given condition
[ "", "sql", "sql-server", "" ]
I'm pretty new to SQL and have this problem: I have a filled table with a date column and other uninteresting columns. `date | name | name2` `2015-03-20 | peter | pan` `2015-03-20 | john | wick` `2015-03-18 | harry | potter` What I'm doing right now is counting everything for a date: ``` select date, count(*) from testtable where date >= current date - 10 days group by date ``` What I want to do now is count the resulting lines and only return them if there are less than 10 resulting lines. What I tried so far is surrounding the whole query with a temp table and then counting everything, which gives me the number of resulting lines (yeah) ``` with temp_count (date, counter) as ( select date, count(*) from testtable where date >= current date - 10 days group by date ) select count(*) from temp_count ``` What is still missing is the check if the number is smaller than 10. I was searching in this forum and came across some "having" constructs to use, but those forced me to use a "group by", which I can't. I was thinking about something like this: ``` with temp_count (date, counter) as ( select date, count(*) from testtable where date >= current date - 10 days group by date ) select * from temp_count having count(*) < 10 ``` Maybe I'm too tired to think of an easy solution, but I can't solve this so far. Edit: A picture for clarification since my English is horrible <http://imgur.com/1O6zwoh> I want to see the 2-column results ONLY IF there are less than 10 rows overall
I think you just need to move your `having` clause to the inner query so that it is paired with the `GROUP BY`: ``` with temp_count (date, counter) as ( select date, count(*) from testtable where date >= current date - 10 days group by date having count(*) < 10 ) select * from temp_count ``` If what you want is to know whether the total # of records (after grouping) are returned, then you *could* do this: ``` with temp_count (date, counter) as ( select date, counter=count(*) from testtable where date >= current date - 10 days group by date ) select date, counter from ( select date, counter, rseq=row_number() over (order by date) from temp_count ) x group by date, counter having max(rseq) >= 10 ``` This will return 0 rows if there are less than 10 total, and will deliver ALL the results if there are 10 or more (you can just get the first 10 rows if needed with this also).
In your temp\_count table, you can filter results with the WHERE clause: ``` with temp_count (date, counter) as ( select date, count(distinct date) from testtable where date >= current date - 10 days group by date ) select * from temp_count where counter < 10 ```
Counting an already counted column in SQL (db2)
[ "", "sql", "count", "db2", "" ]
I'm trying to get the most expensive and cheapest items from two different tables. The output should be one row with the values for MostExpensiveItem, MostExpensivePrice, CheapestItem, CheapestPrice I was able to get the price of the most expensive and cheapest items in the two tables with following query: ``` SELECT MAX(ExtrasPrice) as MostExpensivePrice, MIN(ExtrasPrice) as CheapestPrice FROM ( SELECT ExtrasPrice FROM Extras UNION ALL SELECT ItemPrice FROM Items ) foo ``` How can I add the names of the items (ItemName, ExtrasName) to my output? Again, there should only be one row as the output.
Try this: ``` SELECT TOP 1 FIRST_VALUE(Price) OVER (ORDER BY Price) AS MinPrice, FIRST_VALUE(Name) OVER (ORDER BY Price) AS MinName, LAST_VALUE(Price) OVER (ORDER BY Price DESC) AS MaxPrice, LAST_VALUE(Name) OVER (ORDER BY Price DESC) AS MaxName FROM ( SELECT ExtrasName AS Name, ExtrasPrice AS Price FROM Extras UNION ALL SELECT ItemName As Name, ItemPrice AS Price FROM Items) u ``` [**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/8e37c/1)
`TOP 1` with an `order by` clause should work for you. Try this: ``` SELECT * FROM (SELECT TOP 1 ExtrasPrice,ExtrasName FROM Extras ORDER BY ExtrasPrice Asc), (SELECT TOP 1 ItemPrice,ItemName FROM Items ORDER BY ItemPrice Desc) ``` **Note:** The comma can be replaced with `CROSS JOIN`
Get most expensive and cheapest items from two tables
[ "", "sql", "sql-server", "" ]
Please suppose you have a multi-column index in an Oracle table: ``` CREATE INDEX MY_USER.MY_INDEX ON MY_USER.MY_TABLE (YEAR, SEMESTER); ``` The column YEAR is NUMBER(4). The column SEMESTER is NUMBER(1). What is the most efficient way to determine the most recent couple (YEAR, SEMESTER)? I think something like this: ``` select max(YEAR) into MY_YEAR from MY_TABLE; select max(SEMESTER) from MY_TABLE where YEAR = MY_YEAR; ``` Any other clues? Thank you very much for considering my request. EDIT: The table has over 50 million records. SECOND EDIT: I am using this Oracle version: ``` BANNER Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE 10.2.0.5.0 Production TNS for IBM/AIX RISC System/6000: Version 10.2.0.5.0 - Productio NLSRTL Version 10.2.0.5.0 - Production ```
In case `semester` can be only `1` or `2`, this could be efficient: ``` with t as (select max(year), 1 as semester from MY_TABLE where semester = 1 union all select max(year), 2 as semester from MY_TABLE where semester = 2) select MAX(TO_CHAR(YEAR) || TO_CHAR(SEMESTER)) from t; ``` Of course, this is not a generic solution but it should work in your case. **Update:** Another idea is this one: ``` SELECT year, semester FROM MY_TABLE ORDER BY year, semester FETCH FIRST 1 ROWS ONLY; ``` The [row\_limiting\_clause](http://docs.oracle.com/database/121/SQLRF/statements_10002.htm#BABHFGAA) was introduced in Oracle 12.1
Question: What is the most efficient way to determine the most recent couple (YEAR, SEMESTER)? If this is a one-off query that will not need to be optimized, how about: ``` Select MAX(TO_CHAR(YEAR) || TO_CHAR(SEMESTER)) FROM MY_TABLE; ``` Not an elegant solution, but it works.
Oracle multi-column index: determine the most recent couple (year, semester)
[ "", "sql", "oracle", "max", "" ]
What is the best way to keep updates (diffs) of some entity in the database? Here, at StackOverflow, we can edit questions and answers. And then we can look at any revision of the question or answer we want. For example: [revisions](https://stackoverflow.com/posts/60174/revisions) of some random question. Maybe someone knows how it is realized on StackOverflow? To be clear, in my case I have some entity (`article`) with some fields (`name`, `description`, `content`). Many users can edit the same article. I want to keep a history of the article updates (something like version control) and I want to keep only diffs, not the whole content of the updated article. By the way, I use PostgreSQL, but can migrate to any other database. **UPD** Open bounty, so here are some requirements. You don't need to fully satisfy them. But if you do it will be much better. Nevertheless any answer is much appreciated. So I want to have the ability: 1. to keep only diffs so as not to waste space for no purpose. 2. to fetch any revision (version) of some article. But fetching of the last revision of the article must be really quick. Fetching speed of other revisions is not so important. 3. to fetch any diff (and a list of diffs) of some article. An article can have changes in the fields `header`, `description` or `content` (like StackOverflow has changes in header and content), so it must be taken into account.
In the past, I have used [diff-match-patch](https://code.google.com/p/google-diff-match-patch/) with excellent (and fast) results. It is available for several languages (my experience with it was in C#). I did not use it for exactly the process you are describing (we were interested in merging), but it seems to me you could: 1. Save the initial version of an article's text/header/whatever. 2. When a change is made, use diff-match-patch to compute a patch between the newly edited version and what is already in the database. To get the latest version in the database, simply apply any patches that have already been generated to the original article in order. 3. Save the newly generated patch. If you wanted to speed things up even more, you could cache the latest version of the article in its own row/table/however-you-organize-things so that getting the latest version is a simple SELECT. This way, you'd have the initial version, the list of patches, and the current version, giving you some flexibility and speed. Since you have a set of patches in sequence, fetching any version of the article would simply be a matter of applying patches up to the one desired. You can [take a look at the patch demo](https://neil.fraser.name/software/diff_match_patch/svn/trunk/demos/demo_patch.html) to see what its patches look like and get an idea of how big they are. Like I said, I have not used it for exactly this scenario, but diff-match-patch has been designed for doing more or less exactly what you are talking about. This library is on my short list of software I can use when I have no restrictions on libraries developed out-of-house. 
## Update: Some example pseudocode As an example, you could set up your tables like so (this assumes a few other tables, like Authors): ``` Articles -------- id authorId title content timestamp ArticlePatches -------------- id articleId patchText timestamp CurrentArticleContents ---------------------- id articleId content ``` Then some basic CRUD could look like: Insert new article: ``` INSERT INTO Articles (authorId, title, content, timestamp) VALUES(@authorId, @title, @content, GETDATE()) INSERT INTO CurrentArticleContents(articleId, content) VALUES(SCOPE_IDENTITY(),@content) GO ``` Get all articles with latest content for each: ``` SELECT a.id, a.authorId, a.title, cac.content, a.timestamp AS originalPubDate FROM Articles a INNER JOIN CurrentArticleContents cac ON a.id = cac.articleId ``` Update an article's content: ``` //this would have to be done programatically currentContent = (SELECT content FROM CurrentArticleContents WHERE articleId = @articleId) //using the diff-match-patch API patches = patch_make(currentContent, newContent); patchText = patch_toText(patches); //setting @patchText = patchText and @newContent = newContent: (INSERT INTO ArticlePatches(articleId, patchText, timestamp) VALUES(@articleId, @patchText, GETDATE()) INSERT INTO CurrentArticleContents(articleId, content, timestamp) VALUES(@articleId, @newContent, GETDATE()) GO) ``` Get the article at a particular point in time: ``` //again, programatically originalContent = (SELECT content FROM Articles WHERE articleId = @articleId) patchTexts = (SELECT patchText FROM ArticlePatches WHERE articleId = @articleId AND timestamp <= @selectedDate ORDER BY timestamp ASCENDING) content = originalContent foreach(patchText in patchTexts) { //more diff-match-patch API patches = patch_fromText(patchText) content = patch_apply(patches, content)[0] } ```
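diff-match-patch is a third-party library; to get a feel for the original-plus-patches idea with nothing but the standard library, Python's `difflib` can round-trip versions. This is a toy sketch of the storage scheme, not of diff-match-patch itself - note that `ndiff` deltas also carry unchanged lines, so a real system would store a compact patch format instead:

```python
import difflib

# Invented article revisions standing in for the database rows.
versions = [
    "Articles can be edited.\nHistory is kept.\n",
    "Articles can be edited by anyone.\nHistory is kept.\n",
    "Articles can be edited by anyone.\nOnly diffs are stored.\n",
]

# Persist the first version verbatim plus one delta per subsequent edit.
original = versions[0]
deltas = [
    list(difflib.ndiff(a.splitlines(keepends=True), b.splitlines(keepends=True)))
    for a, b in zip(versions, versions[1:])
]

def fetch_revision(n: int) -> str:
    """Revision 0 is the stored original; revision n is the 'after' side of delta n-1."""
    if n == 0:
        return original
    return "".join(difflib.restore(deltas[n - 1], 2))

print(fetch_revision(2))
```

Caching the latest revision in its own row, as suggested above, keeps the common read path a plain `SELECT` while the deltas cover the history.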
I had a similar issue at my workplace. I implemented an AFTER UPDATE trigger to record all the needed data to another table (where, of course, you can save only the changed fields), so the current version lives in the real table while the log lives in another table.
How to keep updates (diffs) of some entity in the database
[ "", "mysql", "sql", "database", "postgresql", "" ]
I am using a MySQL CASE for some calculation, and my requirement is that if the output is negative then I will show 0 in place of the negative value; otherwise the positive value. Let me post the query: ``` select concat(jo.title,' (', CCP.name, ')'), PL.analyst, PL.consultant, PL.csm, (CASE WHEN PL.productType like 'Staffing' THEN (SELECT ((DATEDIFF(PL.dateEnd, PL.dateClientEffective)) -((WEEK(PL.dateEnd) - WEEK(PL.dateClientEffective)) * 2) - (case when weekday(PL.dateEnd) = 6 then 1 else 0 end) - (case when weekday(PL.dateClientEffective) = 5 then 1 else 0 end)) as DifD) * 8 * (PL.clientBillRate-PL.payRate) ELSE (PL.salary*PL.fee)END) Value ``` Now if the value is negative, it should show 0; otherwise it will show the original value.
Please check this; I haven't tested it yet. Note: this might not be the best solution. ``` select concat(jo.title,' (', CCP.name, ')'), PL.analyst, PL.consultant, PL.csm, if( (CASE WHEN PL.productType like 'Staffing' THEN (SELECT ((DATEDIFF(PL.dateEnd, PL.dateClientEffective)) -((WEEK(PL.dateEnd) - WEEK(PL.dateClientEffective)) * 2) - (case when weekday(PL.dateEnd) = 6 then 1 else 0 end) - (case when weekday(PL.dateClientEffective) = 5 then 1 else 0 end)) as DifD) * 8 * (PL.clientBillRate-PL.payRate) ELSE (PL.salary*PL.fee)END) < 0,0, (CASE WHEN PL.productType like 'Staffing' THEN (SELECT ((DATEDIFF(PL.dateEnd, PL.dateClientEffective)) -((WEEK(PL.dateEnd) - WEEK(PL.dateClientEffective)) * 2) - (case when weekday(PL.dateEnd) = 6 then 1 else 0 end) - (case when weekday(PL.dateClientEffective) = 5 then 1 else 0 end)) as DifD) * 8 * (PL.clientBillRate-PL.payRate) ELSE (PL.salary*PL.fee)END) ) Value ```
The simplest way is simply to use `greatest()`: ``` select greatest(<expression>, 0) as col ``` It is unclear which column/expression you want to do this for in your query. But, you can just plug it in. The advantage of this method over other methods is: * The expression only needs to appear once in the query. Not having to duplicate code reduces the likelihood of errors. * There is no need for a subquery. This is an issue in MySQL only because MySQL materializes subqueries.
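MySQL's `greatest(<expression>, 0)` has a direct analogue in other engines; SQLite, for instance, uses its two-argument scalar `max()`. A sketch with Python's stdlib `sqlite3` (the table and columns are invented, loosely echoing the question's billing arithmetic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE placements (clientBillRate INTEGER, payRate INTEGER);
INSERT INTO placements VALUES (95, 80), (60, 75);
""")

# SQLite's two-argument scalar max() plays the role of MySQL's
# greatest(): the expression appears once, and negatives clamp to 0.
rows = [row[0] for row in conn.execute(
    "SELECT max(8 * (clientBillRate - payRate), 0) FROM placements"
)]
print(rows)
```

The second row computes to -120 and is clamped to 0, while the positive first row passes through unchanged.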
How to check negative values in mysql and decide the output after that
[ "", "mysql", "sql", "" ]
I have a table which appears like this (I've shortened it for example purposes): ``` no no19 no68 3387034694344500 3387452540705400 3388486878919450 3371522572594880 3372232397709690 3373608476884750 3382142940562320 3382142940562320 3383084144363070 ``` So no, no19 and no68 are 3 different columns, but the data in column 'no19' starts in the next row after the data in column 'no' ends. As I have a lot more data between these columns, I would like to create a readable table. I have merged these columns into one, using this code: ``` CREATE TABLE MULTICURRENCY_CHECK ( TOKEN varchar(255) ) INSERT INTO MULTICURRENCY_CHECK ( TOKEN ) SELECT no FROM book1 UNION ALL SELECT no19 FROM book1 UNION ALL SELECT no68 FROM book1 ``` The problem is, the result I got looks like this: ``` TOKEN 3387034694344500 3387452540705400 3388486878919450 3371522572594880 3372232397709690 3373608476884750 3382142940562320 3382142940562320 3383084144363070 ``` So there are blank rows in column TOKEN. I've tried to delete them with a simple delete command, but it's not working (I tried the two below): ``` delete from multicurrency_check where TOKEN = ' ' delete from multicurrency_check where TOKEN is NULL ``` Perhaps there is a different way I should deal with this table, maybe a quicker one? The original table looks like this (just sample data): ``` no a b no19 c d no68 3387034694344500 data1 data4 3387452540705400 data2 data5 3388486878919450 data3 data6 3371522572594880 data7 data10 3372232397709690 data8 data11 3373608476884750 data9 data12 3382142940562320 3382142940562320 3383084144363070 ``` So what I would like to have in the end is a table like this: ``` | TOKEN | a | b | c | d ``` where TOKEN is a merge of no, no19 and no68, followed by a, b, c, d columns with data matching the appropriate id from the TOKEN column (a, b, c, d can be null).
You could delete them in your UNION query like: ``` SELECT no FROM table WHERE no IS NOT NULL UNION SELECT no19 FROM table WHERE no19 IS NOT NULL UNION SELECT no68 FROM table where no68 IS NOT NULL ``` You could also use COALESCE() instead of union since a column only contains data when the others are null: ``` SELECT COALESCE(no, no19, no68) FROM table ``` Instead of putting these values in their own table, you could start with the above queries and build off of them. Say you want to also bring in A,B or C,D into the results: ``` SELECT COALESCE(no, no19, no68) as newno, COALESCE(a,c) as ac, COALESCE(b,d) as bd FROM table; ``` As to why your DELETE didn't work, perhaps those NULLs aren't NULL. Perhaps they hold a TAB character or 50 spaces? In which case @sidux's comment on your Q would do the trick: trimming the field and looking where its value is `=''`.
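The `COALESCE()` route can be verified with a toy table. This sketch uses SQLite via Python (the column names follow the question; the values are invented):

```python
import sqlite3

# A miniature book1 where each row has exactly one of the three columns set
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book1 (no TEXT, no19 TEXT, no68 TEXT)")
conn.executemany("INSERT INTO book1 VALUES (?, ?, ?)", [
    ("111", None, None),
    (None, "222", None),
    (None, None, "333"),
])

# COALESCE picks the first non-NULL column per row, so no blank rows appear
tokens = [r[0] for r in conn.execute(
    "SELECT COALESCE(no, no19, no68) FROM book1 ORDER BY rowid")]
print(tokens)  # ['111', '222', '333']
```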
Maybe something like this: ``` select isnull(no,'')+isnull(no19,'')+isnull(no68,''), a,b,c,d from book1 ``` That should concatenate all token in a row (and only one from no, no19 and no68 will have a value).
Union columns and remove blanks
[ "", "sql", "sql-server", "syntax", "" ]
I'm confused. How could you explain this difference in variable concatenation with ORDER BY? ``` declare @tbl table (id int); insert into @tbl values (1), (2), (3); declare @msg1 varchar(100) = '', @msg2 varchar(100) = '', @msg3 varchar(100) = '', @msg4 varchar(100) = ''; select @msg1 = @msg1 + cast(id as varchar) from @tbl order by id; select @msg2 = @msg2 + cast(id as varchar) from @tbl order by id+id; select @msg3 = @msg3 + cast(id as varchar) from @tbl order by id+id desc; select TOP(100) @msg4 = @msg4 + cast(id as varchar) from @tbl order by id+id; select @msg1 as msg1, @msg2 as msg2, @msg3 as msg3, @msg4 as msg4; ``` Results ``` msg1 msg2 msg3 msg4 ---- ---- ---- ---- 123 3 1 123 ```
As many have confirmed, this is not the right way to concatenate all the rows in a column into a variable - even though in some cases it does "work". If you want to see some alternatives, please check out [this blog](http://www.sqlmatters.com/Articles/Converting%20row%20values%20in%20a%20table%20to%20a%20single%20concatenated%20string.aspx). [According to MSDN](https://msdn.microsoft.com/en-us/library/ms187330.aspx) (applies to SQL Server 2008 through 2014 and Azure SQL Database), the `SELECT` should not be used to assign local variables. In the remarks, it describes how, when you do use the `SELECT`, it attempts to behave. The interesting points to note: * While typically it should only be used to return a single value to a variable, when the expression is the name of the column, it can return multiple values. * When the expression does return multiple values, the variable is assigned the last value that is returned. * If no value is returned, the variable retains its original value (not directly relevant here, but worth noting). The first two points here are key - concatenation happens to work because `SELECT @msg1 = @msg1 + cast(id as varchar)` is essentially `SELECT @msg1 += cast(id as varchar)`, and as the syntax notes, `+=` is an accepted compound assignment operator on this expression. Please note that this operation should not be expected to remain supported on `VARCHAR` for string concatenation - just because it happens to work in some situations doesn't mean it is OK for production code. The bottom line as to the underlying reason is whether the `Compute Scalar` that runs on the select expression uses the original id column or an expression of the id column.
You probably can't find any docs on why the optimizer might choose the specific plans for each query, but each example highlights different use cases that allow the msg value to be evaluated from the column (and therefore multiple rows being returned and concatenated) or expression (and therefore only the last column). 1. @msg1 is '123' because the `Compute Scalar` (the row-by-row evaluation of the variable assignment) occurs after the `Sort`. This allows the scalar computation to return multiple values on the id column concatenating them through the `+=` compound operator. I doubt there is specific documentation why, but it appears the optimizer chose to do the sort before the scalar computation because the order by was a column and not an expression. 2. @msg2 is '3' because the `Compute Scalar` is done before the sort, which leaves the @msg2 in each row just being the ('' + id) - so never concatenated, just the value of the id. Again, probably not any documentation why the optimizer chose this, but it appears that since the order by was an expression, perhaps it needed to do the (id+id) in the order by as part of the scalar computation before it could sort. At this point, your original column is no longer referencing the source column, but it has been replaced by an expression. Therefore, as MSDN stated, your first column points to an expression, not a column, so the behavior assigns the last value of the result set to the variable in the SELECT. Since you sorted ASC, you get '3' here. 3. @msg3 is '1' for the same reason as example 2, except you ordered DESC. Again, this becomes an expression in the evaluation - not the original column, so therefore the assignment gets the last value of the DESC order, so you get '1'. 4. @msg4 is '123' again because the `TOP` operation forces an initial scalar evaluation of the `ORDER BY` so that it can determine your top 100 records. 
This is different than examples 2 and 3 in which the scalar computation contained both the order by and select computations which caused each example to be an expression and not refer back to the original column. Example 4 has the TOP separating the ORDER BY and SELECT computations, so after the SORT (TOP N SORT) is applied, it then does the scalar computation for the SELECT columns in which at this point you are still referencing the original column (not an expression of the column), and therefore it returns multiple rows allowing the concatenation to occur. Sources: * MSDN: <https://msdn.microsoft.com/en-us/library/ms187330.aspx>
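Because the concatenation behaviour above depends on the query plan, the dependable pattern is to let the database order the rows and concatenate on the client (or use a dedicated aggregate such as `STRING_AGG` in SQL Server 2017+). A small illustrative sketch with SQLite from Python, rebuilding the three-row `@tbl` example:

```python
import sqlite3

# The @tbl example rebuilt in SQLite; rows inserted out of order on purpose
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?)", [(3,), (1,), (2,)])

# ORDER BY is guaranteed on the returned result set, so joining client-side
# always yields '123' -- no dependence on the plan's Compute Scalar placement
msg = "".join(str(r[0]) for r in conn.execute("SELECT id FROM tbl ORDER BY id"))
print(msg)  # 123
```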
SQL Server will calculate the results, then sort them, then return them. In the case of assigning a variable, only the first result will be used to populate your variable. You are receiving the first value from the sorted result sets, which can move around the order SQL Server will scan the records as well as the position within the results. TOP will always produce special query plans as it immediately forces SQL Server to stick to the natural ordering of the results instead of producing query plans that would statistically reduce the number of records it must read. To explain the differences, you'll have to refer to how SQL Server decided to implicitly sort the values to optimize the query. **Query 1** > Insert -> Table Insert -> Constant Scan **Query 2** > SELECT -> Compute Scalar -> Sort -> Table Scan **Query 3, and 4** > SELECT -> Sort -> Compute Scalar -> Table Scan **Query 5 and 6 (using TOP)** > SELECT -> Compute Scalar -> Sort (Top N) -> Compute Scalar -> Table > Scan **I added Query 6:** ``` select top (100) @msg5 = @msg5 + cast(id as varchar) from @tbl order by id+id desc ```
Using SQL concatenation with ORDER BY
[ "", "sql", "sql-server", "t-sql", "sql-server-2014", "" ]
Here is what I am trying to do: ``` IF len(Variable) > 1 BEGIN SELECT * INTO #TEMPTAB FROM multiple joins END ELSE BEGIN SELECT * INTO #TEMPTAB FROM different multiple joins END SELECT * FROM #TEMPTAB more large number of multiple joins & where & groupby ERROR: There is already an object #TEMPTAB defined -- Because of select * into in IF and ELSE both ``` I don't want to create the temp table beforehand because it has a lot of columns to define. Is there a way around it?
This was a fun problem for me, that is... Well, I figured out four ways to do it. One is with a view, one with a temp table, one with a physical table, and one with a stored procedure and global temp table. Let me know if you have any questions. ## View ``` DECLARE @Variable VARCHAR(10) = 'aa'; IF LEN(@Variable) > 1 BEGIN EXEC('CREATE VIEW yourView AS SELECT ''Greater than 1'' col') END ELSE BEGIN EXEC('CREATE VIEW yourView AS SELECT ''Less than 1'' col') END SELECT * FROM yourView; DROP VIEW yourView; ``` ## Temp Table ``` DECLARE @Variable VARCHAR(100) = 'aa', --Default value is 0 @percent INT = 0; --If the length > 1, then change percent to 100 as to return the whole table IF LEN(@Variable) > 1 SET @percent = 100; --If the length <=1, then @percent stays 0 and you return 0 percent of the table SELECT TOP(@percent) PERCENT 'Greater than 1' col INTO #TEMPTAB --If you didn't populate the table with rows, then use this query to populate it IF(@percent = 0) BEGIN INSERT INTO #TEMPTAB SELECT 'Less than 1' col END /*your 1k lines of code here*/ SELECT * FROM #TEMPTAB --Cleanup DROP TABLE #tempTab ``` ## Physical Table ``` DECLARE @Variable VARCHAR(10) = 'A'; IF len(@Variable) > 1 BEGIN SELECT 'Greater than 1' col INTO TEMPTAB END ELSE BEGIN SELECT 'Less than 1' col INTO TEMPTAB2 END IF OBJECT_ID('TEMPTAB2') IS NOT NULL --SP_Rename doesn't work on temp tables so that's why it's an actual table EXEC SP_RENAME 'TEMPTAB2','TEMPTAB','Object' SELECT * FROM TEMPTAB DROP TABLE TEMPTAB; ``` ## Stored Procedure with Global Temp Table ``` IF OBJECT_ID('yourProcedure') IS NOT NULL DROP PROCEDURE yourProcedure; GO CREATE PROCEDURE yourProcedure AS IF OBJECT_ID('tempdb..##TEMPTAB') IS NOT NULL DROP TABLE ##tempTab; SELECT 'Greater than 1' col INTO ##TEMPTAB GO DECLARE @Variable VARCHAR(10) = 'aaa'; IF LEN(@Variable) > 1 BEGIN EXEC yourProcedure; END ELSE BEGIN SELECT 'Less than 1' col INTO ##TEMPTAB END SELECT * FROM ##TEMPTAB IF OBJECT_ID('tempdb..##TEMPTAB') IS NOT NULL DROP TABLE ##TEMPTab; ```
Didn't you consider dynamic query with global temporary tables? This works for me: ``` DECLARE @sql NVARCHAR(MAX) = CASE WHEN 1 = 2 THEN 'SELECT * INTO ##TEMPTAB FROM dbo.SomeTable1' ELSE 'SELECT * INTO ##TEMPTAB FROM dbo.SomeTable2' END EXEC (@sql) SELECT * FROM ##TEMPTAB DROP TABLE ##TEMPTAB ```
T-SQL If Else condition on the same Temp Table
[ "", "sql", "sql-server", "t-sql", "" ]
I have to create a SQL query to list all the Nurses in the ‘Sparrow’ Wing ordered by last name, first name. However, I need to pull Nurse\_name and Nurse\_surname from the Nurse table, which is linked to another table called Sister by the foreign key Sister\_ID, this table is then linked to another table called Wing which has the foreign key Sister\_ID. The nurse is managed by the sister and the sister manages the wing. Can anyone help me with an SQL query for this? As it is, I am only able to get the data from nurse and sister tables.
Since you seem to be aware that you should use the `inner join` to connect tables (but apparently not that the connection needs to be via the related columns) you should apply that knowledge to connect all the tables you need to answer the query. If you start at the end result and work your way backwards you first choose the columns you need: ``` select Nurse.Nurse_name, Nurse.Nurse_surname ``` and then as they belong to the Nurse table you use that as source in the from clause ``` from Nurse ``` to get the Wing you need to connect the Sister table, but to connect that and Nurse you first need the SisterNurse table joined on the shared attribute ``` join SisterNurse on Nurse.Nurse_ID = SisterNurse.Nurse_ID ``` now you can join Sister on the attribute shared with SisterNurse ``` join Sister on Sister.Sister_ID = SisterNurse.Sister_Id ``` and finally you can join Wing ``` join Wing on Wing.sister_ID = Sister.Sister_ID ``` limit the Wings to the one named 'Sparrow' ``` where Wing.Wing_Name = 'Sparrow' ``` and order the data ``` order by Nurse.Nurse_surname, Nurse.Nurse_name ``` Put it all together and you get: ``` select Nurse.Nurse_name, Nurse.Nurse_surname from Nurse join SisterNurse on Nurse.Nurse_ID = SisterNurse.Nurse_ID join Sister on Sister.Sister_ID = SisterNurse.Sister_Id join Wing on Wing.sister_ID = Sister.Sister_ID where Wing.Wing_Name = 'Sparrow' order by Nurse.Nurse_surname, Nurse.Nurse_name ```
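The assembled query can be exercised end-to-end. This sketch rebuilds the four tables in SQLite from Python with invented sample rows (the `SisterNurse` link table and every ID value are assumptions for the demo):

```python
import sqlite3

# Four toy tables; all names and IDs are invented for the demonstration
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Wing        (Wing_ID INTEGER, Wing_Name TEXT, Sister_ID INTEGER);
CREATE TABLE Sister      (Sister_ID INTEGER);
CREATE TABLE SisterNurse (Sister_ID INTEGER, Nurse_ID INTEGER);
CREATE TABLE Nurse       (Nurse_ID INTEGER, Nurse_name TEXT, Nurse_surname TEXT);

INSERT INTO Wing VALUES (1, 'Sparrow', 10), (2, 'Falcon', 20);
INSERT INTO Sister VALUES (10), (20);
INSERT INTO SisterNurse VALUES (10, 100), (10, 101), (20, 102);
INSERT INTO Nurse VALUES
    (100, 'Ann', 'Brown'), (101, 'Zoe', 'Adams'), (102, 'Tom', 'Clark');
""")

# The chained joins from the answer, filtered to the Sparrow wing
rows = conn.execute("""
    SELECT Nurse.Nurse_name, Nurse.Nurse_surname
    FROM Nurse
    JOIN SisterNurse ON Nurse.Nurse_ID = SisterNurse.Nurse_ID
    JOIN Sister      ON Sister.Sister_ID = SisterNurse.Sister_ID
    JOIN Wing        ON Wing.Sister_ID = Sister.Sister_ID
    WHERE Wing.Wing_Name = 'Sparrow'
    ORDER BY Nurse.Nurse_surname, Nurse.Nurse_name
""").fetchall()
print(rows)  # [('Zoe', 'Adams'), ('Ann', 'Brown')]
```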
You don't give much information about the schema involved but maybe this will help: ``` select n.Nurse_name , n.Nurse_surname , w.Wing_name , managing_nurse.Nurse_name , managing_nurse.Nurse_surname from Nurse n join Sister s on n.Sister_ID=s.Sister_ID join Wing w on s.Sister_ID=w.Sister_ID join Nurse managing_nurse on w.Nurse_Manager_ID=managing_nurse.Sister_ID ```
SQL query to pull data from three different tables
[ "", "sql", "oracle", "" ]
I'm trying to return information on cars in my table 'Cars' which have MOT's due within the next 2 weeks. MOTDUE has been defined as `to_date('2015-03-25', 'yyyy-mm-dd');` ``` SELECT * FROM Cars WHERE MOTDUE = CURRENT_DATE + INTERVAL '2' WEEK; ``` just gives me "missing or invalid datetime field" error. Thanks.
Depending on what you mean by "within the next two weeks" and assuming that the datatype of the `MOTDUE` column is `DATE`, then the following ought to give you what you need (or at least, allow you to amend appropriately): ``` select * from cars where motdue < trunc(sysdate) + 14 and motdue >= trunc(sysdate); ```
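The same half-open date window can be demonstrated with SQLite's `date()` function from Python — a fixed date stands in for `sysdate` so the result is reproducible, and the sample rows are invented:

```python
import sqlite3

# Invented rows; a fixed "today" keeps the demo reproducible
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (reg TEXT, motdue TEXT)")
conn.executemany("INSERT INTO cars VALUES (?, ?)", [
    ("due-soon", "2015-03-25"),
    ("far-away", "2015-06-01"),
    ("already-past", "2015-03-01"),
])

today = "2015-03-20"  # a real query would use date('now') instead
# Half-open window [today, today + 14 days), same shape as the Oracle answer
due = [r[0] for r in conn.execute("""
    SELECT reg FROM cars
    WHERE motdue >= date(?) AND motdue < date(?, '+14 day')
""", (today, today))]
print(due)  # ['due-soon']
```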
There are two types of `INTERVAL`s in Oracle - `YEAR TO MONTH` and `DAY TO SECOND`. So `YEAR`, `MONTH`, `DAY`, `HOUR`, `MINUTE` and `SECOND` are all valid increments. `WEEK`, unfortunately, is not (nor is `FORTNIGHT`, sadly). The ANSI standard way of doing this would be something like the following: ``` SELECT * FROM cars WHERE motdue >= CURRENT_DATE AND motdue < CURRENT_DATE + INTERVAL '14' DAY; ``` The problem with the above is that in Oracle, the ANSI-standard `CURRENT_DATE` doesn't adhere to the ANSI standard - it has a time portion just like `SYSDATE`. That's fine if you don't want to truncate dates at midnight.
Oracle SQL Retrieve dates within 2 weeks
[ "", "sql", "oracle", "date", "" ]
I have SQL Server not Express and when db grows to 10240 I get error: > Could not allocate space for object in database because the 'PRIMARY' > filegroup is full. Create disk space by deleting unneeded files, > dropping objects in the filegroup, adding additional files to the > filegroup, or setting autogrowth on for existing files in the > filegroup. I tried to change Initial size from 10240 to more but then got error: > CREATE DATABASE or ALTER DATABASE failed because the resulting > cumulative database size would exceed your licensed limit of 10240 MB > per database. (Microsoft SQL Server, Error: 1827) But this is really not Express but full SQL Server, so how it is possible that it has this limitation?
The instance name for SQL Server Express is by default `SQLEXPRESS` - but it can be anything you choose during installation. If you install SQL Server Express as the **default** (un-named) instance, then you get `MSSQLSERVER` as the pseudo instance name for SQL Server Express. Hence, you really cannot rely on the *instance name* to judge whether your SQL Server is the Express edition or not. You need to use ``` SELECT @@Version ``` to get that information.
I had the same error in my Express Edition. As the official documentation says, to fix it without shrinking the DB I upgraded my version from Express to Developer edition. Go to SQL Server Installation Center -> Maintenance -> Edition upgrade.
CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 10240 MB per database
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a model called Category that can have many subcategories (which are also Category records). I want to collect all Categories in the database that have no subcategories with something like: ``` Category.where(subcategories: []) ``` But every intuitive [to me] variation of this that I've tried either throws an error or returns an empty set. It's generating some SQL that I don't quite understand: ``` Category Load (0.5ms) SELECT "categories".* FROM "categories" WHERE 1=0 => #<ActiveRecord::Relation []> ``` Can anyone tell me the proper way to do this? EDIT: This is the way the subcategories relationship is defined ``` class Category < ActiveRecord::Base has_many :subcategories, class_name: 'Category', foreign_key: 'parent_id' belongs_to :parent, class_name: 'Category' end ```
OK, solved. I just needed to tweak Sharvy's suggestion (one big thing I forgot is to add a "joins"). The following worked: ``` Category.includes(:subcategories).where(subcategories: nil) ``` **UPDATE 2:** OK, this time I *really* got it working (pretty sure, at least...). I had to use some ugly SQL rather than just some pretty ActiveRecord syntax, but the following seems to do what I want: ``` Category.joins("LEFT JOIN categories AS subcategories ON subcategories.parent_id = categories.id").where("subcategories IS NULL") ``` **UPDATE 1:** Sorry, nevermind, this doesn't work. As I commented below, I jumped the gun because I saw that this query was returning the proper category count for both `where(... nil)` and `where.not(... nil)`, but it wasn't actually returning the proper categories. It is returning all the categories that do not have a parent category rather than all the categories that do not have any subcategories. Here is some sample output from this query: ``` Category.includes(:subcategories).where(subcategories: nil).last Category Load (0.7ms) SELECT "categories".* FROM "categories" WHERE "categories"."parent_id" IS NULL ORDER BY "categories"."id" DESC LIMIT 1 Category Load (0.5ms) SELECT "categories".* FROM "categories" WHERE "categories"."parent_id" IN (3158) => #<Category id: 3158, name: "A parent", parent_id: nil, created_at: "2015-03-23 19:18:40", updated_at: "2015-03-23 19:18:40", operator_id: nil> ```
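The LEFT JOIN ... IS NULL pattern from the final query is easy to sanity-check against a toy `categories` table. A sketch in SQLite via Python (sample rows are invented):

```python
import sqlite3

# A parent with one child, plus a childless root (rows invented for the demo)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories (id INTEGER, name TEXT, parent_id INTEGER)")
conn.executemany("INSERT INTO categories VALUES (?, ?, ?)", [
    (1, "parent", None),
    (2, "child-leaf", 1),
    (3, "lone-leaf", None),
])

# Self LEFT JOIN: a category with no joined child row has no subcategories
leaves = [r[0] for r in conn.execute("""
    SELECT c.name
    FROM categories c
    LEFT JOIN categories sub ON sub.parent_id = c.id
    WHERE sub.id IS NULL
    ORDER BY c.id
""")]
print(leaves)  # 'parent' is excluded because it has a subcategory
```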
Try this: ``` Category.joins(:subcategories).where(subcategories: {parent_id: nil}) ``` or ``` Category.includes(:subcategories).where('subcategories.parent_id IS NULL') ```
How can I check for an empty association with a where statement in rails?
[ "", "sql", "ruby-on-rails", "where-clause", "" ]
The following tables I have are: ``` CUSTOMERS (CustomerID, firstname, surname etc) ROOMS (RoomID, RoomType, Description) BOOKING (BookingID, CustomerID, Arrival Date, Departure Date, number of people, total cost) ROOMS_BOOKED (BookingID, RoomID) ``` The user enters two dates, arrival and departure, then checks for room availability. There is a room booked at the moment which I have inserted into the database myself. So the BOOKING table now includes BookingID = 1, CustomerID = 1, Arrival Date = 24/03/2015, Departure Date = 26/03/2015, number of people = 1, total cost = 40.00. Then in the ROOMS_BOOKED table I have BookingID = 1 and RoomID = 2, which is a standard room. I am using this sql code - ``` SELECT RoomType FROM ROOMS WHERE RoomID NOT IN (SELECT RoomID FROM ROOMS_BOOKED) ``` This query is returning rooms that have not been booked, but is not what I'm looking for. For example the user selects the same dates that are booked in the database. I therefore want to be able to display all room types on the next page and display an x or something for the particular room type that is booked on those selected dates, something similar to www.galgorm.com. I need help with this; every time I post a question about this I never get an answer or some form of help. I have got so far with this hotel system with the design, and I now want to be able to finish it off. Please help me; I'm sure sometimes you have been in the same situation, needing help.
You have the following cases ``` The user's selected period: --------[---------]------- Booking no1 [-----]------------------- Booking no2 --------------------[----] Booking no3 -----[----]--------------- Booking no4 -----------[---]---------- Booking no5 ------[-------]----------- Booking no6 --------------[--------]-- Booking no7 -----[----------------]--- ``` You will have to find which periods cross over. Obviously cases 1 and 2 are free. Cases 3,5,6 are easy to catch as you can search if either the start date of the booking or the end date of the booking is within the user's selection. Cases 4 and 7 you would need to find if either of the user's selection dates would be between the bookings. So the following finds free rooms: ``` DECLARE @ArrivalDate AS DATETIME DECLARE @DepartureDate AS DATETIME SELECT RoomType FROM ROOMS WHERE RoomID NOT IN ( SELECT RoomID FROM BOOKING B JOIN ROOMS_BOOKED RB ON B.BookingID = RB.BookingID WHERE (ArrivalDate <= @ArrivalDate AND DepartureDate >= @ArrivalDate) -- cases 3,5,7 OR (ArrivalDate < @DepartureDate AND DepartureDate >= @DepartureDate ) --cases 6,7 OR (@ArrivalDate <= ArrivalDate AND @DepartureDate >= ArrivalDate) --case 4 ) ```
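For what it's worth, the OR'd conditions can be collapsed into the classic two-comparison interval-overlap test: two periods overlap exactly when `start1 <= end2 AND end1 >= start2`. The sketch below encodes the seven diagrammed cases as day numbers and checks them with SQLite from Python (all values are invented):

```python
import sqlite3

# The seven bookings from the diagram, encoded as day numbers; the user's
# selected period is days 10..19 (all values invented)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE booking (no INTEGER, arrival INTEGER, departure INTEGER)")
conn.executemany("INSERT INTO booking VALUES (?, ?, ?)", [
    (1, 1, 6), (2, 21, 25), (3, 6, 11), (4, 12, 16),
    (5, 7, 15), (6, 15, 24), (7, 6, 23),
])

# Overlap iff the booking starts before the user leaves AND ends after they arrive
clashing = [r[0] for r in conn.execute(
    "SELECT no FROM booking WHERE arrival <= ? AND departure >= ? ORDER BY no",
    (19, 10),
)]
print(clashing)  # cases 1 and 2 are the only non-clashes
```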
This query lists all rooms and for each room shows whether it is available within the [Arrival, Departure] dates: ``` SELECT RoomType, case when NOT EXISTS (SELECT RoomID FROM ROOMS_BOOKED rb JOIN BOOKING b on b.BookingID = rb.BookingID WHERE rb.RoomID = r.RoomID and ArrivalDate < 'param Departure Date here' and DepartureDate > 'param Arrival Date here') then 1 else 0 end IsAvailable FROM ROOMS r ```
SQL query to search for room availability
[ "", "sql", "sql-server", "" ]
I'm looking for a way to select the top 3 rows from 4 vendors from a table of products, following this criteria: 1. Must select 4 vendors. 2. Must select top 3 products for each vendor ordered by product rating. I tried doing something like: ``` select top 12 * product, vendor from products order by productrating ``` but obviously that doesn't give me 3 products for each vendor. The product table has: ``` productid (int), productname (nvarchar(500)), productrating (float), vendor (id), price (float). ``` These are the relevant columns.
You can use the ANSI standard `row_number()` function to get 3 products for each vendor: ``` select p.* from (select p.*, row_number() over (partition by vendor order by rating desc) as seqnum from products p ) p where p.seqnum <= 3 ``` If you want 4 vendors: ``` select top 12 p.* from (select p.*, row_number() over (partition by vendor order by rating desc) as seqnum from products p ) p where p.seqnum <= 3 order by vendor; ```
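The derived-table-plus-`row_number()` shape can be tried out directly. This sketch uses SQLite (window functions require SQLite 3.25+) from Python, with invented products:

```python
import sqlite3  # ROW_NUMBER() requires SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, vendor INTEGER, rating REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", [
    ("a1", 1, 9.0), ("a2", 1, 8.0), ("a3", 1, 7.0), ("a4", 1, 1.0),
    ("b1", 2, 5.0), ("b2", 2, 4.0),
])

# seqnum restarts at 1 for each vendor, so seqnum <= 3 keeps 3 per vendor
rows = conn.execute("""
    SELECT name, vendor FROM (
        SELECT name, vendor,
               ROW_NUMBER() OVER (PARTITION BY vendor ORDER BY rating DESC) AS seqnum
        FROM products
    )
    WHERE seqnum <= 3
    ORDER BY vendor, name
""").fetchall()
print(rows)  # a4 (vendor 1's 4th-best product) is excluded
```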
This will give you top 3 Products per vendor. You didn't specify how you're selecting the 4 vendors. That logic could easily be included using the WHERE clause or using a different ORDER BY depending on how you select the 4 vendors. ``` SELECT TOP 12 vnd.Vendor, apl.ProductName FROM Vendors vnd CROSS APPLY ( SELECT TOP 3 prd.ProductName FROM Products prd WHERE prd.Vendor = vnd.VendorID ORDER BY prd.ProductRating DESC ) apl ORDER BY vnd.VendorName ```
SQL select x number of rows from table based on column value
[ "", "sql", "sql-server", "" ]
I have a query I don't really understand: ``` SELECT DISTINCT TBL.* FROM ( SELECT attribute1, attribute2, etc... FROM table) TBL ``` I'm guessing TBL is giving the result set a name, but why is it necessary in the select statement and what does the '.' do?
The dot `.` indicates that you are referring to a column (or all columns) of the table TBL (in your case the subquery). The outer query just applies `DISTINCT` to every column of the inner query. Note that in your case it doesn't make much sense to have a nested query; you can rewrite the query like this: ``` SELECT DISTINCT att1, att2, ... FROM table ``` Also note that if you are not using aggregates, your query is functionally equivalent to a GROUP BY: ``` SELECT att1, att2, ... FROM table GROUP BY att1, att2, ... ```
In your example, there is no difference between `TBL.*` and `*`, but suppose your query was something like ``` Select * from Customer Inner Join Country on Country.ID = Customer.CountryID where Country.Code = 'UK' ``` This will return every column in **both** Customer and Country tables whereas ``` Select Customer.* from Customer Inner Join Country on Country.ID = Customer.CountryID where Country.Code = 'UK' ``` will only return the columns in the customer table
Explain SQL Syntax: Select tbl.*
[ "", "sql", "syntax", "subquery", "" ]
I need a help in escaping single quotes in postgresql insert. I am using a ETL tool for data extraction from source and loading in a postgresql database.The Select statement includes some variables. For example My select query is ``` SELECT ${UNIVERSITY},date from education. ``` The variable `${UNIVERSITY}` has the value `Duke's university`. This statement gives me an error ``` ERROR: syntax error at or near "s" ``` and not getting loaded into the postgres table. Could anyone help me in esacping the single quote and how should I use in variables?
You can do `SELECT REPLACE(${UNIVERSITY},'''', ''''''),date from education.` But probably you just need to do `SELECT '${UNIVERSITY}',date from education` as your query looks like: ``` SELECT Duke's university,date from education ``` That is definitely invalid SQL syntax.
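A third option worth noting: if the tool allows it, skip hand-escaping entirely and bind the value as a query parameter so the driver handles the quote. A hedged sketch of the idea with Python's sqlite3 (psycopg2 works the same way with `%s` placeholders):

```python
import sqlite3

# The quote stays inside the Python value and never touches the SQL text
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE education (university TEXT)")

university = "Duke's university"
conn.execute("INSERT INTO education (university) VALUES (?)", (university,))

stored = conn.execute("SELECT university FROM education").fetchone()[0]
print(stored)  # Duke's university
```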
Since you have mentioned it does not refer a column in the table education. You can achieve the same expected output through this transformation. ![enter image description here](https://i.stack.imgur.com/yGsir.png) Here in - `Table input` step you can write the query `SELECT date from education` - In `split fields` step you can add a new column to your result as `University` - From `Get Variables` step you can assign the parameter values to the new column University -Obtain the result in `Select Values` step
Postgres SQL Single quote Escape
[ "", "sql", "postgresql", "pentaho", "" ]
Simple problem. I have a simple SQL as thus... ``` SELECT a.Col1, a.Col2, XXX FROM table1 AS a LEFT JOIN table2 as b ON b.Key1 = a.Key1 ``` What can I put in the 'XXX' above to say something like "does table B exists?". ie: `EXISTS(b) AS YesTable2` I am hoping there is a simpler solution than just using CASE...END statements. Thanks
You could use [ISNULL](https://msdn.microsoft.com/en-us/library/ms184325.aspx)(b.Key1, 'XXX') Or [COALESCE](https://msdn.microsoft.com/en-us/library/ms190349.aspx) for checking against multiple values for the first non null value.
Pick any column from `b` that is not allowed to be NULL. If there is a NULL there, the record does not exist. If there is a value there, the record does exist. If every column in `b` is allowed to be NULL (rare: you should always have something that's not nullable in the primary key) you'll have to build an expression that mimics the JOIN conditions.
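A quick demonstration of the "pick a non-nullable column and test it for NULL" approach, using SQLite from Python with invented tables matching the question's shape:

```python
import sqlite3

# Toy tables matching the question's shape; names and rows are invented
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (Key1 INTEGER, Col1 TEXT);
CREATE TABLE table2 (Key1 INTEGER PRIMARY KEY, Col2 TEXT);
INSERT INTO table1 VALUES (1, 'has match'), (2, 'no match');
INSERT INTO table2 VALUES (1, 'b row');
""")

# table2.Key1 can never be NULL, so NULL here proves the LEFT JOIN found no row
rows = conn.execute("""
    SELECT a.Col1,
           CASE WHEN b.Key1 IS NULL THEN 0 ELSE 1 END AS YesTable2
    FROM table1 a
    LEFT JOIN table2 b ON b.Key1 = a.Key1
    ORDER BY a.Key1
""").fetchall()
print(rows)  # [('has match', 1), ('no match', 0)]
```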
SQL: How to know if a LEFT JOIN returned a row?
[ "", "sql", "join", "" ]
Working in a school district I have a database that contains information about students' Education Plans. I'm using Management Studio to access a SQL Server 2012 database. I need to grab the information from their 2nd most recent plan. Here are the columns. * Database Name is `PlansAnoka` * Table name is `dbo.plans` Columns: * `PlanID` (primary key) * `StudentID` (these would all be unique for each student) * `PlanDate` (this is the column that I want to use as the date to pull the 2nd most recent record. * `Meeting Date` (this is just another data point that I need) I know how to create a query to grab the most recent one, but not the 2nd most recent one. AND I need 2nd most recent record for EACH student. Any help would be appreciated!
Use the `ROW_NUMBER` function and partition by `StudentID`: ``` WITH A AS ( SELECT StudentID , PlanDate , MeetingDate, ROW_NUMBER() OVER(PARTITION BY StudentID ORDER BY PlanDate DESC) rownum FROM dbo.plans ) SELECT * FROM a WHERE rownum=2 ```
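The `rownum = 2` filter can also be phrased without window functions, which makes it easy to test on any engine: a row is the 2nd most recent exactly when exactly one later plan exists for the same student. A sketch with SQLite from Python (dates invented):

```python
import sqlite3

# Two students with several plans each; dates are invented
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plans (StudentID INTEGER, PlanDate TEXT)")
conn.executemany("INSERT INTO plans VALUES (?, ?)", [
    (1, "2015-01-01"), (1, "2015-02-01"), (1, "2015-03-01"),
    (2, "2014-05-01"), (2, "2014-06-01"),
])

# "Exactly one later plan for the same student" is equivalent to rownum = 2
rows = conn.execute("""
    SELECT StudentID, PlanDate
    FROM plans p
    WHERE (SELECT COUNT(*) FROM plans p2
           WHERE p2.StudentID = p.StudentID AND p2.PlanDate > p.PlanDate) = 1
    ORDER BY StudentID
""").fetchall()
print(rows)  # [(1, '2015-02-01'), (2, '2014-05-01')]
```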
use `row_number` if you want to get the `n'th` record: ``` select * from (select studentid , plandate , row_number() over(partition by studentid order by plandate desc) rn from dbo.plans) t where t.rn=2 -- or n ```
How to grab data from the 2nd most recent record for each student?
[ "", "sql", "sql-server-2012", "" ]
I want to group ids in my table in three groups and every day to select one of that group and repeat the cycle every 4th day. I don't know if I explained correctly but I will try to do it on an example. Let's say we have ids `1,2,3,4,5,6,7,8,9,10...n` The first day I want to select `1,4,7,10...n` the second day `2,5,8,11...n` and the third day `3,6,9,12...n`. This was very easy when I was doing pretty much the same with odd and even ids. On odd days I was selecting odd ids and on even days even ids. But now I need to do it with 3 day period. Thank you in advance.
Use the MOD(x,y) function, which returns the remainder of x divided by y, and the function TO_DAYS(some_date_here), which returns the number of days since year 0. By using the function NOW() inside TO_DAYS we get a new number every day. Put that in modulus also, and this will generate a cycle every 3 days that will select ids like 1,4,7,10... the next day 2,5,8,11... the next day 3,6,9,12 and the next day back from the top again. So this query will work without modification every day: ``` SELECT * FROM my_table WHERE MOD(my_id,3)=MOD(TO_DAYS(NOW()),3) ``` This grouping can change easily; just change the "3" to whatever you want (be careful: you'll need to change the "3" in **BOTH** `MOD()`s).
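The rotation is easy to verify outside the database too. This pure-Python sketch substitutes `date.toordinal()` for `TO_DAYS(NOW())` — both grow by one per day, so the modulus trick behaves identically:

```python
from datetime import date

# Twelve ids; date.toordinal() grows by one per day, like MySQL's TO_DAYS()
ids = list(range(1, 13))

def ids_for(day):
    # Same remainder on both sides selects a rotating third of the ids
    return [i for i in ids if i % 3 == day.toordinal() % 3]

d1 = ids_for(date(2015, 3, 23))
d2 = ids_for(date(2015, 3, 24))
d3 = ids_for(date(2015, 3, 25))
d4 = ids_for(date(2015, 3, 26))  # the cycle starts over

# Three consecutive days partition all ids, and day 4 repeats day 1
print(d1, d2, d3, d4)
```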
``` SELECT * FROM TABLE WHERE id%3 = 1 -- for day 1 id%3 = 2 -- for day 2 id%3 = 0 -- for day 3 ``` Or you can use: ``` SELECT * FROM TABLE WHERE CASE WHEN id % 3 = 0 THEN 3 ELSE id % 3 END = YourDayNumber ``` Sample: ``` SELECT id, 'Day ' + CAST(CASE WHEN id % 3 = 0 THEN 3 ELSE id % 3 END AS VARCHAR(100)) DayNbr FROM ( SELECT 1 id UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 ) a ```
Selecting from database with a sequence of three
[ "", "mysql", "sql", "math", "" ]
I'm trying to use CASE in a SELECT statement that will change two values of the possible 12 in a column. ``` CASE WHEN grade = 0 THEN 'R2' WHEN grade = -1 THEN 'R1' ELSE --ignore-- END AS "Grade level" ``` Does anyone have any idea what to replace the --ignore-- with, so that only those two possible values will be altered? I don't really want to keep writing out separate WHENs for each value in the column.
Since grade is a number, you need to convert it to a character so it'll fit with `R1` and `R2`. ``` CASE WHEN grade = 0 THEN 'R2' WHEN grade = -1 THEN 'R1' ELSE to_char(grade) END AS "Grade level" ```
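The `ELSE`-with-cast fallback can be checked quickly; here `CAST(grade AS TEXT)` stands in for Oracle's `to_char()` in a SQLite sketch driven from Python (sample grades invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (grade INTEGER)")
conn.executemany("INSERT INTO grades VALUES (?)", [(-1,), (0,), (5,)])

# CAST(... AS TEXT) plays the role of Oracle's to_char(): every branch
# of the CASE now yields the same character type
labels = [r[0] for r in conn.execute("""
    SELECT CASE WHEN grade = 0  THEN 'R2'
                WHEN grade = -1 THEN 'R1'
                ELSE CAST(grade AS TEXT)
           END
    FROM grades
    ORDER BY grade
""")]
print(labels)  # ['R1', 'R2', '5']
```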
Just remove the `ELSE` part: ``` CASE WHEN grade = 0 THEN 'R2' WHEN grade = -1 THEN 'R1' END AS "Grade level" ```
CASE ... ELSE *ignore* in Oracle SQL
[ "", "sql", "oracle", "" ]
I am using a Select Query to insert data into a Temp Table. In the Select Query I am doing an order by on two columns, something like this: ``` insert into #temp Select accnt_no,acct_name, start_date,end_date From table Order by start_date DESC,end_date DESC Select * from #temp ``` Here, when there is an entry present in the start_date field and a Null entry in the end_date field, during the order by operation Sybase is filling it with a default date (jan 1 1900). I don't want that to happen. If the end_date field is Null, the data should be written just as Null. Any suggestion on how to keep it as Null even while fetching the data from the table?
The 1/1/1900 usually comes from trying to cast an empty string into a datetime. Is your 'date' source column an actual datetime datatype or a string-ish varchar or char?
Sounds like the table definition requires that end\_date not be null, and has default values inserted automatically to prevent them. Are you sure there are even nulls when you do a straight select on the table without the confusion of ordering and inserting?
Order by on Two Datetime Fields .If one field is Null How to avoid Sybase from populating it with default datetime
[ "", "sql", "datetime", "sybase", "sap-ase", "" ]
I am attempting to modify the datatype of a specific column in a specific table in a SQL Server 2012 database. in the beginning of the script, the user will be setting the new desired length of the column datatype. However when I attempt to alter the column in the table to set the new `varchar` length I am getting an error. Here is the code snippet and the resulting error: ``` Declare @newPrecis int Set @newPrecis = 23 -- New length of the index Alter Table [dbo].[VTAB0047] Alter Column VAL varchar(@newPrecis) ``` Error: > Msg 102, Level 15, State 1, Line 5 > Incorrect syntax near '@newPrecis'. Currently the length of the `VAL` column is `varchar(20)` and I'd like it to be set to a length of 23 or whatever length is inputted in the `set` statement. I'm ignoring any type of error checking at the moment because I'm simply trying to get the basic functionality down.
If you want to be able to change the length of a `varchar` column dynamically, use dynamic SQL, because `varchar(n)` does not allow parameters for `n`: ``` declare @sql varchar(500); declare @newPrecis int; set @newPrecis = 23; -- or any other value set @sql = 'Alter Table [dbo].[VTAB0047] Alter Column VAL varchar(' + cast(@newPrecis as varchar(10)) + ')'; exec(@sql); ``` Or you could directly use: ``` Alter Table [dbo].[VTAB0047] Alter Column VAL varchar(23) ``` **Note:** if your new value for `n` is less than the old value, you may get a `String or binary data would be truncated` error if the column holds values longer than the new length.
``` Declare @newPrecis int Declare @query varchar(max) Set @newPrecis = 23; -- New length of the index set @query ='Alter Table [dbo].[VTAB0047] Alter COLUMN VAL varchar(' + Cast(@newPrecis as varchar) +')' exec (@query) ```
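A point worth making explicit for both answers above: SQL placeholders only bind *values*, never identifiers or type specifications, which is why the length has to be spliced into the statement text. A minimal sketch of the same idea in Python with SQLite (an assumed stand-in — SQLite cannot `ALTER COLUMN`, so the sketch adds a new column with a runtime-chosen length instead; the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE VTAB0047 (VAL TEXT)")

new_precision = 23  # decided at runtime, like @newPrecis in the answers

# A ? placeholder cannot stand in for the type's length in DDL, so the
# statement is assembled as a string (int() guards against injection here).
ddl = "ALTER TABLE VTAB0047 ADD COLUMN VAL2 VARCHAR({})".format(int(new_precision))
cur.execute(ddl)

columns = [row[1] for row in cur.execute("PRAGMA table_info(VTAB0047)")]
print(columns)
```

The same reasoning explains why `varchar(@newPrecis)` fails in T-SQL: the parser expects a literal length there, not a variable.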
Define varchar with variable length
[ "sql", "sql-server", "t-sql", "varchar", "alter-table" ]
I'm trying to create a trigger that checks whether a phone number is in the (###) ###-#### format. If it is, nothing will happen; if it is not, it will be fixed; however, if there are more than 10 digits in the number then it will be set to NULL. Unfortunately I keep getting the Invalid NEW or OLD specification error in this trigger and I don't know why. ``` CREATE OR REPLACE TRIGGER phone_correction BEFORE INSERT OR UPDATE OF vendor_phone ON vendors FOR EACH ROW WHEN (NEW.vendor_phone != REGEXP_LIKE(vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$')) BEGIN IF :NEW.vendor_phone != REGEXP_LIKE(vendor_phone, '^\D*(?:\d\D*){10}$') THEN :NEW.vendor_phone := null; DBMS_OUTPUT.PUT_LINE( 'The phone number is bad so setting to null.'); ELSE :NEW.vendor_phone := REGEXP_LIKE(vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$'); END IF; END; ```
There are several errors here; as others have said you need to explicitly use `:new.` and `:old.` to reference columns in your trigger, so `REGEXP_LIKE(vendor_phone` becomes `REGEXP_LIKE(:new.vendor_phone`. However, there are some more fundamental errors. 1. As with the [`LIKE` operator](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/fundamentals.htm#LNPLS264), [`REGEXP_LIKE()`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/conditions007.htm#SQLRF00501) returns a Boolean. Thus, your statement: ``` IF :NEW.vendor_phone != REGEXP_LIKE(vendor_phone, '^\D*(?:\d\D*){10}$') ``` is actually `IF <string> != <Boolean>`, which'll never work. 2. Using `DBMS_OUTPUT` in a trigger isn't of any help to you unless you're going to be there to look at whatever logs you're keeping for *every* row that's been inserted, and then do something to correct whatever issues there are. 3. Silently removing data is bad practice, if you're going to change something then it's better to raise an error and let the calling code/user decide what to do instead. If you don't want to let the calling code/user do anything and definitely want to NULL the column if it doesn't conform to a pattern then don't try and insert the data at all. 4. The `ELSE` condition in your `IF` statement is unnecessary, as `:new.vendor_phone` is already in the correct format. Personally, I'd completely remove the trigger and add a constraint to check that the format in the column is the one in which you want: ``` SQL> alter table vendors 2 add constraint chk_vendors_phone 3 check (regexp_like(vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$')); ``` Then, when trying to insert data it'll be successful if the format is correct and unsuccessful if the format is incorrect: ``` SQL> insert into vendors (vendor_phone) 2 values ('(123) 123-1234'); 1 row created. 
SQL> insert into vendors (vendor_phone) 2 values ('(123) 123-124'); insert into vendors (vendor_phone) * ERROR at line 1: ORA-02290: check constraint (CHK_VENDORS_PHONE) violated SQL> ``` You can then decide what to do with the phones that have errored. As I've stated above, if you *definitely* want to NULL the incorrectly formatted phones then *only* insert data which matches this pattern. If anyone touches the code the check constraint will ensure that the data is still in the correct format. --- If you absolutely must use a trigger, then it can be simplified to something like the following: ``` create or replace trigger phone_correction before insert or update of vendor_phone on vendors for each row when (not regexp_like(new.vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$')) begin :new.vendor_phone := null; end; ``` This checks to see (using Boolean logic) whether the result of the `REGEXP_LIKE()` function is false. If it is, then it NULLs the phone. Here's an example of it working: ``` SQL> create table vendors (id number, vendor_phone varchar2(100)); Table created. SQL> create trigger phone_correction 2 before insert or update of vendor_phone 3 on vendors 4 for each row 5 when (not regexp_like(new.vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$')) 6 begin 7 :new.vendor_phone := null; 8 end; 9 / Trigger created. SQL> insert into vendors 2 values (1, '(123) 123-1234'); 1 row created. SQL> insert into vendors 2 values (2, '(123) 123-124'); 1 row created. SQL> select * from vendors; ID VENDOR_PHONE ---------- -------------------- 1 (123) 123-1234 2 SQL> ``` --- > ... instead of setting a phone number to null :new.vendor\_phone := null; how would you make so it can automatically modify the phone number into the correct format? (###) ###-#### This is actually the example in the documentation for [`REGEXP_REPLACE()`](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions137.htm#SQLRF06302). 
To make this more extensible, I'd remove all non-numeric characters from the string and then attempt the transformation. In order to remove the non-numeric characters: ``` regexp_replace(vendor_phone, '[^[:digit:]]') ``` This means replace everything that's not in the character class `[:digit:]` with nothing. Then, to transform you can use sub-expressions as described in the documentation: ``` regexp_replace(regexp_replace(vendor_phone, '[^[:digit:]]') , '^([[:digit:]]{3})([[:digit:]]{3})([[:digit:]]{4})$' , '(\1) \2-\3') ``` This looks for 3 (`{3}`) digits twice and then 4 digits, splitting them into sub-expressions and then putting them in the correct format. There are many ways to do this, and this may not be the quickest, but it makes your intention most clear. I would not do this in a trigger, do this when you insert into the table instead. Better, and if this is a client-side application, you should be ensuring that your numbers are in the correct format before you hit the database at all.
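The strip-then-reformat approach above translates directly to other regex engines. A hedged sketch in Python's `re` module (the function name and the None-on-failure behaviour are illustrative choices, mirroring the trigger's NULL-on-bad-input logic):

```python
import re

def normalize_phone(raw):
    """Reformat a phone number as (###) ###-####, or None if not 10 digits."""
    digits = re.sub(r"\D", "", raw)  # drop everything that is not a digit
    if len(digits) != 10:
        return None                  # mirrors ":new.vendor_phone := null"
    return re.sub(r"^(\d{3})(\d{3})(\d{4})$", r"(\1) \2-\3", digits)

print(normalize_phone("123.123.1234"))   # (123) 123-1234
print(normalize_phone("(123) 123-124"))  # None (only 9 digits)
```

As in the answer, doing this validation before the data reaches the database is usually preferable to a trigger.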
You have to specify the :NEW whenever you are using the column names. try this: ``` CREATE OR REPLACE TRIGGER phone_correction BEFORE INSERT OR UPDATE OF vendor_phone ON vendors FOR EACH ROW WHEN (NEW.vendor_phone != REGEXP_LIKE(NEW.vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$')) BEGIN IF :NEW.vendor_phone != REGEXP_LIKE(:NEW.vendor_phone, '^\D*(?:\d\D*){10}$') THEN :NEW.vendor_phone := null; DBMS_OUTPUT.PUT_LINE( 'The phone number is bad so setting to null.'); ELSE :NEW.vendor_phone := REGEXP_LIKE(:NEW.vendor_phone, '^\(\d{3}\) \d{3}-\d{4}$'); END IF; END; ```
Invalid NEW or OLD specification error
[ "sql", "oracle", "triggers" ]
I am getting a problem with using 'Next Sequence Values' inside a procedure. I have an 'external' function to return the next value of a sequence. Inside my procedure, I want to assign the value of that sequence to a variable and use that variable inside my cursor (since I am using a UNION statement in my cursor). But it is not working. ``` CREATE OR REPLACE Procedure HR.insert_TBL_APP is --declare variables for insert v_TA_SNO VARCHAR2(10); v_TA_SEQNO VARCHAR2(6); v_TA_DESC VARCHAR2(10); --declare variable to store the sequence number var_TaSeqno varchar2(6); -- Validation v_check VARCHAR2 (10 Byte); err_code varchar2(50); err_msg varchar2(100); v_table_name varchar2(50):='TBL_APP'; error_found exception; cursor c1 is select distinct TA_SNO, TA_SEQNO, TA_DESC from ( SELECT hdr.FIRST_NO TA_SNO, var_TaSeqno TA_SEQNO, -- using variable to assign the sequence no hdr.descrip TA_DESC FROM FORMS_HDR hdr WHERE hdr.seco_name = 'TST121' union SELECT hdr.FIRST_NO TA_SNO, var_TaSeqno TA_SEQNO, -- using variable to assign the sequence no hdr.descrip TA_DESC FROM FORMS_HDR hdr WHERE hdr.seco_name = 'TST122'); begin if c1%isopen then close c1; end if; v_check:=null; FOR i IN c1 LOOP --assign variables for insert v_TA_SNO := i.TA_SNO; v_TA_SEQNO := i.TA_SEQNO; v_TA_DESC := i.TA_DESC; begin -- calling the Function taSeqNoFunc and assign the --sequence No into the variable var_TaSeqno var_TaSeqno := HR.taSeqNoFunc(); select TA_SNO into v_check from TBL_APP a where TA_SNO = i.TA_SNO and TA_SEQNO =i.TA_SEQNO; exception when no_data_found then --insert into target table INSERT INTO TBL_APP (TA_SNO, TA_SEQNO, TA_DESC ) values (v_TA_SNO, v_TA_SEQNO, v_TA_DESC ); when others then raise error_found; end ; end loop; exception when error_found then rollback; err_code := SQLCODE; err_msg := SUBSTR(SQLERRM, 1, 200); insert into TA_ERROR_LOG values (v_check,v_table_name,'An error was encountered '||err_code||':'||err_msg,sysdate); commit; raise_application_error(err_code,err_msg); end; / ```
The error finally goes into raise\_application\_error near the end: "ORA-21000: error number argument to raise\_application\_error of 1 is out of range" Please help me. Thank you.
The `raise_application_error()` procedure is an Oracle PL/SQL built-in. It's provided so that we can associate a custom message with a user-defined exception. User-defined exceptions must have numbers in the range -20999 to -20000. [Find out more](http://docs.oracle.com/database/121/LNPLS/errors.htm#LNPLS99960). You are getting this error because of this code in your inner EXCEPTION block: ``` when others then raise error_found; ``` You are raising a user-defined exception, but one you haven't associated with an error number via the EXCEPTION\_INIT pragma. [Find out more](http://docs.oracle.com/database/121/LNPLS/exceptioninit_pragma.htm#i33787). So, Oracle defaults to `SQLCODE = 1`, `SQLERRM = 'User-Defined Exception'`. Clearly `1` is outside the permitted range of `raise_application_error()`. Hence the error when you come to the outer EXCEPTION block. The way to avoid this is to remove the ERROR\_FOUND exception and rely on Oracle's default exception handling. In the innermost block you want to re-raise any exception except NO\_DATA\_FOUND. The simplest way to do that is to remove the WHEN OTHERS clause. Then, in the outer block, you will have meaningful values for SQLCODE and SQLERRM which you can log. Then just use RAISE to propagate them up the stack: ``` exception when others then rollback; err_code := SQLCODE; err_msg := SUBSTR(SQLERRM, 1, 200); insert into TA_ERROR_LOG values (v_check,v_table_name,'An error was encountered '||err_code||':'||err_msg,sysdate); commit; raise; end; ``` Not only will you not get an error from `raise_application_error()`, your log will contain a useful error number and message. --- Incidentally, it is bad practice to use ROLLBACK and COMMIT like that in an EXCEPTION block. A better approach is to write a logging procedure which is covered by the AUTONOMOUS\_TRANSACTION pragma. That way the logging won't interfere with the wider transaction. [Find out more](http://docs.oracle.com/database/121/LNPLS/static.htm#LNPLS610).
The error number you pass to the raise\_application\_error procedure must be a negative integer in the range -20000..-20999. For user-defined exceptions, SQLCODE always returns +1, so you're passing +1 as the error number to the raise\_application\_error procedure, and that's out of range.
Assign function to a variable and use it within procedure
[ "sql", "oracle", "stored-procedures", "plsql" ]
This query returns the count for duplicate columns col B, col C, col D ``` SELECT `colB`, `colC`, `colD`, COUNT(1) as CNT FROM `table` GROUP BY `colB`, `colC`, `colD` HAVING COUNT(1) > 1 ``` How do I delete the duplicate records while keeping one ? ``` DELETE FROM `table` WHERE ( // Keep one record and delete rows with duplicate columns b,c,d ) ``` `colA` is the Primary Key - AI
If you have a primary key, you can do: ``` delete t from table t left join (select colb, colc, cold, min(cola) as cola from table t group by colb, colc, cold ) tokeep on t.cola = tokeep.cola where tokeep.cola is null; ``` However, lots of deletes on a table can be inefficient. So, it is often faster to move the data to a temporary table and re-insert it: ``` create temporary table tokeep as select min(cola) as cola, colb, colc, cold from table t group by colb, colc, cold; truncate table `table`; insert into `table`(cola, colb, colc, cold) select cola, colb, colc, cold from tokeep; ```
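The keep-the-minimum-key idea is easy to sanity-check outside MySQL. A sketch in Python with SQLite (SQLite has no multi-table `DELETE ... JOIN`, so the equivalent `NOT IN` form is used; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (colA INTEGER PRIMARY KEY, colB INT, colC INT, colD INT)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                [(1, 10, 20, 30), (2, 10, 20, 30), (3, 10, 20, 30), (4, 99, 98, 97)])

# Keep the smallest primary key per (colB, colC, colD) group, delete the rest.
cur.execute("""
    DELETE FROM t
    WHERE colA NOT IN (SELECT MIN(colA) FROM t GROUP BY colB, colC, colD)
""")

remaining = cur.execute("SELECT colA FROM t ORDER BY colA").fetchall()
print(remaining)   # rows 2 and 3 were duplicates of row 1
```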
try this ``` DELETE a FROM table a LEFT JOIN ( SELECT MIN(colA) colA , colB , colC ,colD FROM Table GROUP BY colB , colC,colD ) b ON a.colA = b.colA and a.colB = b.colB and a.colC = b.colC and a.colD = b.colD WHERE b.colA IS NULL ```
Mysql - Delete rows with duplicate columns
[ "sql" ]
I have two tables and I want to get the data from both tables. CustomerDetails ``` Id Customer1 Customer2 1 1 2 2 2 1 3 1 3 CustomerName Id Name 1 a 2 b 3 c ``` output should be ``` Id Customer1 Customer2 1 a b 2 b a 3 a c ``` I tried with an inner join but it only worked for one column and not for both. How do I get this data from a SQL query? Please help me find this. Thanks
use **2** `join`s ``` select t1.id,t2.name customer1 ,t3.name customer2 from customerdetail t1 join customername t2 on t1.customer1=t2.id join customername t3 on t1.customer2=t3.id ```
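The double-join pattern can be run end-to-end in Python with SQLite (used here only as a convenient test harness), with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE CustomerDetails (Id INT, Customer1 INT, Customer2 INT)")
cur.execute("CREATE TABLE CustomerName (Id INT, Name TEXT)")
cur.executemany("INSERT INTO CustomerDetails VALUES (?, ?, ?)",
                [(1, 1, 2), (2, 2, 1), (3, 1, 3)])
cur.executemany("INSERT INTO CustomerName VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

# Join the name table twice: one alias per foreign-key column.
rows = cur.execute("""
    SELECT d.Id, n1.Name, n2.Name
    FROM CustomerDetails d
    JOIN CustomerName n1 ON d.Customer1 = n1.Id
    JOIN CustomerName n2 ON d.Customer2 = n2.Id
    ORDER BY d.Id
""").fetchall()
print(rows)   # matches the question's expected output
```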
``` SELECT Id, CN1.Name AS Name1, CN2.Name AS Name2 FROM CustomerDetails CD JOIN CustomerName AS CN1 ON CD.Customer1 = CN1.ID JOIN CustomerName AS CN2 ON CD.Customer2 = CN2.ID ``` I would do it with a `LEFT JOIN` as it would be safer. I do not know if you always have values for both `Customer1` and `Customer2`
How to create a correct join between two tables?
[ "sql", "view", "sql-view" ]
In SQL Server Management Studio I can take database backups and save them as `.bak` files, which can also be restored in SQL Server Management Studio. Is there any difference between doing this and setting up a script which backs up the `.MDF` and `.LDF` files - and if there was ever an issue, I could just re-attach the `.MDF` and `.LDF` files? Thanks
It depends on your restore needs and backup space. It's certainly possible to just reattach MDF and LDF files, but there are some limitations: * You have to detach the database first, make the backup (both files at the same time) and then reattach. This means downtime. * You cannot make incremental backups. * You cannot make differential backups. * You cannot do point-in-time restoration. * You basically have to make a full backup each time you copy the MDF and LDF files, which can really eat up space (thus, it can be better to do incremental or differential backups). * SQL Server has built-in mechanisms that can run without invoking external scripts to do regular backups. MDF and LDF backups require external scripts that have permission to access the data directory, the backup location and the server to attach/detach the database. Basically, I'd say that unless you have a really good reason to not use the built-in backup functionality, I'd avoid doing manual backups of the MDF and LDF files.
Database backups are much more powerful than backing up the files. First of all, if you backup the files while the database is in use, because it is changing constantly you may get something that works, or more likely you will get a corrupted file. A proper database backup coordinates ongoing changes with the backup process so that the backup file is perfectly consistent. A file backup may give you a file that has half of the changes in a transaction and not the other half, or,worse, half the changes in a particular page and not the other half. Secondly, proper database backups let you recover the database to ANY point in time beginning at the oldest full backup, not just the point in time that the backup was made. (You will need the chain of all log backups made since the full backup to do this). EDIT: note that as pointed out in the comments, the built-in functions don't necessarily provide point-in-time recovery--only if you use the type of backups that provide that functionality (though there are other reasons to use that type of backup even if you don't need point-in-time recovery).
MDF and LDF SQL Server backups
[ "sql", "sql-server", "database-backups" ]
``` select application from l3_usage where function in ('dbms', 'web', 'app') group by application, function order by application ``` ![enter image description here](https://i.stack.imgur.com/70kFF.png) This is the result image when I execute the above query. I now want to count the number of times each application occurs. So for example: Budget : 3 CSR : 3 FMS : 3 Facilities : 1 Inventory : 3 etc... I have tried ``` select application, count(application) from l3_usage where function in ('dbms', 'web', 'app') group by application order by application ``` but I get random count numbers that do not relate to the count I was looking for. Could anyone help? Thanks! EDIT: My ultimate purpose of doing this is to get only applications that have count values less than 3, which in this case should return just "Facilities" and "SCM".
Try this query: ``` SELECT x.application, COUNT(*) As Frequency FROM ( SELECT application FROM l3_usage WHERE function IN ('dbms', 'web', 'app') GROUP BY application, function ORDER BY application ) AS x GROUP BY x.application HAVING COUNT(*) < 3 ORDER BY COUNT(*) ``` I query the result set you posted in your answer and obtain the `COUNT` for each group. Notice that this `COUNT` is the number of unique `function` values for each `application` group. You should get this result: ``` +-------------+-----------+ | application | Frequency | +-------------+-----------+ | Facilities | 1 | | SCM | 1 | +-------------+-----------+ ``` **Update:** A much easier way to do your query is: ``` SELECT application, COUNT(DISTINCT function) As Frequency FROM l3_usage WHERE function IN ('dbms', 'web', 'app') GROUP BY application HAVING COUNT(DISTINCT function) < 3 ORDER BY application ``` You had the right idea in the beginning, but you needed to `SELECT` the number of distinct `function`s for each application group.
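The simplified `COUNT(DISTINCT ...)` form can be verified in Python with SQLite (sample rows invented to reproduce the `< 3` filter; the repeated SCM row shows why DISTINCT matters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE l3_usage (application TEXT, function TEXT)")
cur.executemany("INSERT INTO l3_usage VALUES (?, ?)", [
    ("Budget", "dbms"), ("Budget", "web"), ("Budget", "app"),
    ("Facilities", "web"),
    ("SCM", "dbms"), ("SCM", "dbms"),  # duplicates count once with DISTINCT
])

rows = cur.execute("""
    SELECT application, COUNT(DISTINCT function)
    FROM l3_usage
    WHERE function IN ('dbms', 'web', 'app')
    GROUP BY application
    HAVING COUNT(DISTINCT function) < 3
    ORDER BY application
""").fetchall()
print(rows)   # Budget has all three functions, so it is filtered out
```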
``` SELECT application,COUNT(CASE WHEN application IS NOT NULL THEN 1 END) FROM table group by application ``` I found reference from [here](https://stackoverflow.com/a/8635631/2630817) ## Edit --- ``` select * from( SELECT typing,COUNT(CASE WHEN typing IS NOT NULL THEN 1 END) as counts FROM ss group by typing ) abc where counts<3 ```
How to count distinct results from an already executed SQL query?
[ "mysql", "sql", "oracle", "distinct" ]
I don't know what to call this topic, so I thought I would just explain what I need. I have the following table: ``` Id IdPerson date successfully 1 1 01.01.2012 FALSE 2 1 01.01.2014 TRUE 3 2 01.01.2014 FALSE ``` Now I want all IdPerson where the newest entry is FALSE. That would be just IdPerson 2, because IdPerson 1 is TRUE in 2014. I really have no clue how to do that. Can somebody help me? Greets
There is an interesting approach to this that uses conditional aggregation: ``` select idperson from table t group by idperson having max(date) = max(case when successfully = 'false' then date end); ``` Note: this solution (as well as the others) assume that `date` is stored in a native date/time format. If not, you should fix the database, but this query would need to use `convert()`.
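The conditional-aggregation trick is portable across engines. A sketch in Python with SQLite, using the question's data; dates are stored as ISO-8601 strings so plain string comparison matches date order (an assumption of the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE entries (Id INT, IdPerson INT, date TEXT, successfully TEXT)")
cur.executemany("INSERT INTO entries VALUES (?, ?, ?, ?)", [
    (1, 1, "2012-01-01", "FALSE"),
    (2, 1, "2014-01-01", "TRUE"),
    (3, 2, "2014-01-01", "FALSE"),
])

# A person qualifies when their overall newest date equals the newest
# date among their FALSE rows only.
rows = cur.execute("""
    SELECT IdPerson
    FROM entries
    GROUP BY IdPerson
    HAVING MAX(date) = MAX(CASE WHEN successfully = 'FALSE' THEN date END)
""").fetchall()
print(rows)   # person 1's newest row is TRUE, so only person 2 remains
```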
Use `ROW_NUMBER() OVER (PARTITION BY ...)` in a subquery to locate the newest entry per person. Then, in the outer query, select only those having `successfully = 'FALSE'`: ``` SELECT Id, IdPerson, [date] FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY IdPerson ORDER BY [date] DESC) AS rn, * FROM MyTable ) t WHERE t.rn = 1 AND t.successfully = 'FALSE' ```
SQL compare two max Dates from the same table
[ "sql", "sql-server", "max" ]
I have a table `Retail` ``` CustomerID Itemset 1 31 1 30 1 78 2 31 2 91 3 30 3 31 ``` I want to find the count of `CustomerID` having both `30` & `31` in `Itemset`. I.e. in this case `1` & `3`, so the count is `2`. I tried formulating the query using the `&` operator but the query isn't returning the right answer. My query: `Select count(customerID) from Retail where Itemset=30 & 31`
``` select count(*) from ( select CustomerId from Retail where Itemset in (30,31) group by CustomerId having count(distinct Itemset) = 2 -- this guarantees that CustomerId have at least one 30 and at least one 31 in column Itemset ) T ```
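A quick check of the `HAVING COUNT(DISTINCT ...)` approach, run in Python with SQLite against the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Retail (CustomerID INT, Itemset INT)")
cur.executemany("INSERT INTO Retail VALUES (?, ?)", [
    (1, 31), (1, 30), (1, 78),
    (2, 31), (2, 91),
    (3, 30), (3, 31),
])

# Count customers that have at least one 30 AND at least one 31.
count = cur.execute("""
    SELECT COUNT(*) FROM (
        SELECT CustomerID
        FROM Retail
        WHERE Itemset IN (30, 31)
        GROUP BY CustomerID
        HAVING COUNT(DISTINCT Itemset) = 2
    ) AS qualifying
""").fetchone()[0]
print(count)   # customers 1 and 3
```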
``` Select count(distinct(CustomerID)) from ( select r30.CustomerID from Retail r30 inner join Retail r31 on r31.CustomerID = r30.CustomerID where r30.Itemset = 30 and r31.Itemset = 31 ) T ```
How to formulate this SQL Query
[ "sql", "sql-server" ]
My original table is this; TableName = NewRetail ``` CustomerID 1 2 3 4 5 6 7.....30 1 30 31 Null Null Null Null Null 2 24 78 35 72 Null Null Null ``` I want to store this table in 'Retail': `CustomerId Itemset 1 30 1 31 2 24 2 78 2 35 2 72` There are no duplicates in any row in the original (source) table. Thanks. I tried using loops but couldn't make it work. I've been stuck on it for three days.
You can do this using `UNION ALL`: ``` SELECT * FROM ( SELECT CustomerId, [1] AS ItemSet FROM NewRetail UNION ALL SELECT CustomerId, [2] FROM NewRetail UNION ALL SELECT CustomerId, [3] FROM NewRetail UNION ALL SELECT CustomerId, [4] FROM NewRetail UNION ALL SELECT CustomerId, [5] FROM NewRetail UNION ALL SELECT CustomerId, [6] FROM NewRetail UNION ALL SELECT CustomerId, [7] FROM NewRetail UNION ALL ... SELECT CustomerId, [30] FROM NewRetail )t WHERE ItemSet IS NOT NULL ORDER BY CustomerId ``` --- Using dynamic SQL (note that `@sql` must be declared and initialized first): ``` DECLARE @sql NVARCHAR(MAX) = ''; ;WITH Tally(N) AS( SELECT TOP 30 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM sys.columns ) SELECT @sql = @sql + CASE WHEN @sql = '' THEN 'SELECT CustomerId, [' + CONVERT(VARCHAR(2), N) + '] AS ItemSet FROM NewRetail' ELSE ' UNION ALL' + CHAR(10) +' SELECT CustomerId, [' + CONVERT(VARCHAR(2), N) + '] FROM NewRetail' END FROM Tally SELECT @sql = 'SELECT * FROM ( ' + @sql + CHAR(10) + ')t WHERE ItemSet IS NOT NULL ORDER BY CustomerId' PRINT @sql EXEC (@sql) ```
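The UNION ALL unpivot can be prototyped in Python with SQLite; since SQLite has no `UNPIVOT` or `CROSS APPLY`, the host language builds the UNION string, playing the same role as the dynamic SQL above (only four item columns here, for brevity; the column names `c1`..`c4` are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE NewRetail (CustomerId INT, c1 INT, c2 INT, c3 INT, c4 INT)")
cur.executemany("INSERT INTO NewRetail VALUES (?, ?, ?, ?, ?)",
                [(1, 30, 31, None, None), (2, 24, 78, 35, 72)])

# One SELECT per source column, glued together with UNION ALL.
union = " UNION ALL ".join(
    "SELECT CustomerId, c{} AS Itemset FROM NewRetail".format(i)
    for i in range(1, 5)
)
rows = cur.execute(
    "SELECT * FROM ({}) AS u WHERE Itemset IS NOT NULL "
    "ORDER BY CustomerId, Itemset".format(union)
).fetchall()
print(rows)
```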
You can use `table valued constructor` with `Cross apply` to `unpivot` the data ``` SELECT CustomerID, Itemset FROM Yourtable CROSS apply (VALUES ([1]),([2]),([3]),([4]),([5]),([6]),([7]),...) cs (Itemset) WHERE Itemset IS NOT NULL ``` **Dynamic Version** ``` DECLARE @itemset VARCHAR(max)='', @sql NVARCHAR(max); WITH cte AS (SELECT TOP 30 Row_number()OVER(ORDER BY (SELECT NULL)) RN FROM sys.columns) SELECT @itemset += '(' + Quotename(RN) + '),' FROM cte SET @itemset = LEFT(@itemset, Len(@itemset) - 1) SET @sql = 'SELECT CustomerID, Itemset FROM Yourtable CROSS apply (VALUES ' + @itemset + ') cs (Itemset) WHERE Itemset IS NOT NULL ' EXEC Sp_executesql @sql ```
How to merge column this specific way in SQL
[ "sql", "sql-server" ]
I am trying to build a SQL query where I group by 1 column, but then also include the values of other columns from an arbitrary record in each group. So, something like ``` SELECT BoxNo FROM MuffinData WHERE FrostingTimeApplied >= CONVERT(date, GETDATE()) GROUP BY BoxNo ``` but including some value from columns `MuffinType`, `FrostingType` in the result (I know that there will be only 1 value of `MuffinType` and `FrostingType` per box.)
You have to use an aggregate function for each column selected that is not present in the `GROUP BY` clause: ``` SELECT BoxNo, MAX(MuffinType) AS MuffinType, MAX(FrostingType) AS FrostingType FROM MuffinData WHERE FrostingTimeApplied >= CONVERT(date, GETDATE()) GROUP BY BoxNo ``` If there is only 1 value of `MuffinType` and `FrostingType` per box, then these unique values per box no are going to be selected in the above query.
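When the extra columns are constant within each group, wrapping them in `MAX()` (or `MIN()`) simply passes that single value through while keeping the query a valid GROUP BY. A sketch in Python with SQLite (the sample data is invented and the date filter is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MuffinData (BoxNo INT, MuffinType TEXT, FrostingType TEXT)")
cur.executemany("INSERT INTO MuffinData VALUES (?, ?, ?)", [
    (1, "blueberry", "vanilla"),
    (1, "blueberry", "vanilla"),   # same box, so same single type values
    (2, "bran", "none"),
])

rows = cur.execute("""
    SELECT BoxNo, MAX(MuffinType), MAX(FrostingType)
    FROM MuffinData
    GROUP BY BoxNo
    ORDER BY BoxNo
""").fetchall()
print(rows)   # one row per box, with its single type values
```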
> I know that there will be only 1 value of MuffinType and FrostingType > per box If that's indeed the case, a simple DISTINCT should do the trick, like so: ``` SELECT DISTINCT BoxNo, MuffinType, FrostingType FROM MuffinData WHERE FrostingTimeApplied >= CONVERT(date, GETDATE()); ``` If that's not the case, you're dealing with a problem known generally as the *Top N per group* problem. You can find coverage of the problem and suggested solutions [here](https://www.microsoftpressstore.com/articles/article.aspx?p=2314819&seqNum=4). Cheers, Itzik
SQL group by 1 column but include TOP 1 of other columns
[ "sql", "sql-server", "t-sql", "group-by" ]
I have the following data table. ``` Record Date Price A 3/1/2015 5 A 3/2/2015 6 A 3/3/2015 7 A 3/4/2015 10 B 2/1/2015 4 B 2/2/2015 6 B 2/3/2015 15 B 2/4/2015 2 ``` How can I output a table that shows, for each record, the price on its first date and the price on its last date? The output columns would be Record, FirstPrice, LastPrice. I am looking for a one-step solution that is easy to implement in order to create a custom view. The desired output would be: ``` Record FirstPrice LastPrice A 5 10 B 4 2 ```
Perhaps something like this is what you are looking for? ``` select R.Record, FD.Price as MinPrice, LD.Price as MaxPrice from Records R join ( select Price, R1.Record from Records R1 where Date = (select MIN(DATE) from Records R2 where R2.Record = R1.Record) ) FD on FD.Record = R.Record join ( select Price, R1.Record from Records R1 where Date = (select MAX(DATE) from Records R2 where R2.Record = R1.Record) ) LD on LD.Record = R.Record group by R.Record ``` <http://sqlfiddle.com/#!9/d047b/26>
Get the min and max aggregate dates grouped by the record field and join back to the root data. If you can have multiple records for the same record field on the same date, you will have to use min, max or avg to get just one value for that date. SQLFiddle: <http://sqlfiddle.com/#!9/1158b/3> ``` SELECT anchorData.Record , firstRecord.Price , lastRecord.Price FROM ( SELECT Record , MIN(Date) AS FirstDate , MAX(Date) AS LastDate FROM Table1 GROUP BY Record ) AS anchorData JOIN Table1 AS firstRecord ON firstRecord.Record = anchorData.Record AND firstRecord.Date = anchorData.FirstDate JOIN Table1 AS lastRecord ON lastRecord.Record = anchorData.Record AND lastRecord.Date = anchorData.LastDate ```
How to obtain the first and last record? One-step solution?
[ "mysql", "sql", "where-clause" ]
I have a problem with one query. This works: ``` select coalesce(((SUM(ati.anDebit) - SUM(ati.ancredit))),0) + (select coalesce((SUM(anDebit)),0) from tHE_AcctTransItem where acLinkDoc = '15-390-000523' and SUBSTRING(acDoc,4,3) in ('391', '3B0') and SUBSTRING(acKey, 3, 3) = '420' and acSubject = 'Company' and acAcct = '1200') from tHE_AcctTransItem ati inner join tHE_AcctTrans at on ati.acKey = at.acKey where at.acDocType in('4100', '4620', '4630', '4700', '4730') and acLinkDoc = '15-390-000523' and acAcct = '1200' and acSubject = 'Company' ``` For my purpose I need to add a CASE, so I wrote this: ``` select case when acDocType <> '4730' then (coalesce(((SUM(ati.anDebit) - SUM(ati.ancredit))),0) + (select coalesce((SUM(anDebit)),0) from tHE_AcctTransItem where acLinkDoc = '15-390-000523' and SUBSTRING(acDoc,4,3) in ('391', '3B0') and SUBSTRING(acKey, 3, 3) = '420' and acSubject = 'Company' and acAcct = '1200')) else coalesce(((SUM(ati.anDebit) - SUM(ati.ancredit))),0) * -1 end from tHE_AcctTransItem ati inner join tHE_AcctTrans at on ati.acKey = at.acKey where at.acDocType in('4100', '4620', '4630', '4700', '4730') and acLinkDoc = '15-390-000523' and acAcct = '1200' and acSubject = 'Company' group by acDocType ``` But I don't receive anything. ``` select ati.* from tHE_AcctTransItem ati inner join tHE_AcctTrans at on ati.acKey = at.acKey where at.acDocType in('4100', '4620', '4630', '4700', '4730') and acLinkDoc = '15-390-000523' and acAcct = '1200' and acSubject = 'Company' ``` This does not return any rows, and I don't know how to get a result like the first query's but with the CASE. From this query, only this select has some data: ``` (select coalesce((SUM(anDebit)),0) from tHE_AcctTransItem where acLinkDoc = '15-390-000523' and SUBSTRING(acDoc,4,3) in ('391', '3B0') and SUBSTRING(acKey, 3, 3) = '420' and acSubject = 'Company' and acAcct = '1200') ``` And when I put in the CASE it is not summing.
First, your SQL has a lot of extra parentheses, which along with the rather haphazard formatting makes it very hard to read and debug. Aggregate functions in the select clause of a query that does not have a group by clause will return a null record. ``` declare @t table ( k nvarchar(50), val int ) select sum(val), avg(val), min(val), max(val) from @t ``` The above returns one row with all nulls. That allows you to wrap your aggregates in the coalesce function and causes the subqueries in the select clause to function. However, add a group by clause to the query and you no longer get the null row. ``` declare @t table ( k nvarchar(50), val int ) select sum(val), avg(val), min(val), max(val) from @t group by k ``` The above returns no records at all, so subqueries in the select clause do not run. You might try something like the following. If the first query returns no records, the second query gets a chance. (Note though that both queries run regardless, so if the queries are data intensive you might want to find another way.) ``` select coalesce(( select case when acDocType <> '4730' then SUM(ati.anDebit) - SUM(ati.ancredit) + ( select coalesce(SUM(anDebit),0) from tHE_AcctTransItem where acLinkDoc = '15-390-000523' and SUBSTRING(acDoc,4,3) in ('391', '3B0') and SUBSTRING(acKey, 3, 3) = '420' and acSubject = 'Company' and acAcct = '1200' ) else SUM(ati.anDebit) - SUM(ati.ancredit) * -1 end from tHE_AcctTransItem ati inner join tHE_AcctTrans at on ati.acKey = at.acKey where at.acDocType in('4100', '4620', '4630', '4700', '4730') and acLinkDoc = '15-390-000523' and acAcct = '1200' and acSubject = 'Company' group by acDocType ),( select coalesce(SUM(anDebit),0) from tHE_AcctTransItem where acLinkDoc = '15-390-000523' and SUBSTRING(acDoc,4,3) in ('391', '3B0') and SUBSTRING(acKey, 3, 3) = '420' and acSubject = 'Company' and acAcct = '1200' )) ```
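The central observation in this answer — aggregates over zero rows yield one all-NULL row, while adding GROUP BY yields zero rows — is easy to demonstrate in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (k TEXT, val INT)")  # deliberately left empty

# Without GROUP BY: one row of NULL aggregates, even though t is empty.
no_group = cur.execute("SELECT SUM(val), MIN(val), MAX(val) FROM t").fetchall()

# With GROUP BY: no groups exist, so no rows come back at all.
grouped = cur.execute("SELECT SUM(val) FROM t GROUP BY k").fetchall()

print(no_group)   # one all-NULL row
print(grouped)    # empty result set
```

This is exactly why the scalar subqueries in the SELECT clause stop running once `GROUP BY acDocType` filters everything out.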
Logically, if your first query returns results and the third query does not, then the difference must be causing the issue. Notice that your subquery in the first query is not correlated to the main FROM and WHERE clauses. Try this query: ``` SELECT at.acDocType , ati.acDoc , ati.acKey , SUM(COALESCE(ati.anDebit, 0)) , SUM(COALESCE(ATI.anCredit, 0)) FROM tHE_AcctTransItem ati LEFT OUTER JOIN tHE_AcctTrans at ON ati.acKey = at.acKey WHERE ( at.acDocType IN ( '4100', '4620', '4630', '4700', '4730' ) OR ( SUBSTRING(ati.acDoc, 4, 3) IN ( '391', '3B0' ) AND SUBSTRING(ati.acKey, 3, 3) = '420' ) ) AND ati.acLinkDoc = '15-390-000523' AND ati.acAcct = '1200' AND ati.acSubject = 'Company' GROUP BY ALL at.acDocType , ati.acDoc , ati.acKey ``` I expect that you will see the null results that are created by one of your joins in the main query.
T-SQL - empty CASE + select
[ "sql", "t-sql" ]
I have a table test which has ticket and vehicle. ``` ticket vehicle 1000 101 1001 102 1002 102 1003 103 1004 104 1005 102 1006 102 ``` My requirement: the input will be a ticket. If the ticket's vehicle is repeating, then I have to get just the previous ticket. **For example:** ticket 1006 has vehicle 102, which is repeating. I mean tickets 1006, 1005, 1002, 1001 have vehicle 102. So if the input is 1006 then the output will be 1005. Similarly, if the input is 1005, the output will be 1002. And if the input is 1002 then the output will be 1001. I tried this ``` SELECT ticket FROM (SELECT ROW_NUMBER() OVER (ORDER BY ticket desc) AS RowNumber, * FROM test WHERE vehicle = (SELECT vehicle FROM test WHERE ticket = 1005)) AS getsecondsLast WHERE RowNumber = 2 ``` but it only works properly if the input is 1006. Please help [fiddle here](http://sqlfiddle.com/#!6/ca9cd/2)
[SQL Fiddle](http://sqlfiddle.com/#!6/ca9cd/19) **MS SQL Server 2014 Schema Setup**: ``` create table test( ticket int,vehicle int); insert into test values(1000,101); insert into test values(1001,102); insert into test values(1002,102); insert into test values(1003,103); insert into test values(1004,104); insert into test values(1005,102); insert into test values(1006,102); ``` **Query 1**: ``` declare @ticket int = 1005; select top(1) T1.ticket from dbo.test as T1 where T1.ticket < @ticket and T1.vehicle in ( select T2.vehicle from dbo.test as T2 where T2.ticket = @ticket ) order by T1.ticket desc; ``` **[Results](http://sqlfiddle.com/#!6/ca9cd/19/0)**: ``` | ticket | |--------| | 1002 | ```
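The same "previous ticket for the same vehicle" logic, sketched in SQLite from Python for quick verification (`LIMIT 1` stands in for T-SQL's `TOP(1)`):

```python
import sqlite3

# Load the question's sample data and look up the previous ticket
# that carries the same vehicle as the given ticket.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (ticket INTEGER, vehicle INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [(1000, 101), (1001, 102), (1002, 102), (1003, 103),
                  (1004, 104), (1005, 102), (1006, 102)])

def previous_ticket(ticket):
    row = conn.execute("""
        SELECT t1.ticket
        FROM test t1
        WHERE t1.ticket < ?
          AND t1.vehicle = (SELECT vehicle FROM test WHERE ticket = ?)
        ORDER BY t1.ticket DESC
        LIMIT 1""", (ticket, ticket)).fetchone()
    return row[0] if row else None
```

`previous_ticket(1001)` returns `None`, since vehicle 102 has no earlier ticket than 1001.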
<http://sqlfiddle.com/#!6/ca9cd/22> ``` DECLARE @ticket int = 1006 select MAX(prev.ticket) from test curr inner join test prev on curr.vehicle = prev.vehicle AND curr.ticket > prev.ticket where curr.ticket = @ticket ```
How to get just upper value based on column
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a table in SQL like the following: ``` -------------------------- |ID | number| numberDate | |---|-------|------------- | 1 | 120 | 2011-01-22 | |---|-------|------------| | 1 | 124 | 2011-01-27 | |---|-------|------------| | 2 | 136 | 2011-01-20 | |---|-------|------------| | 2 | 135 | 2011-01-30 | |---|-------|------------| | 3 | 150 | 2011-01-15 | |---|-------|------------| | 3 | 155 | 2011-01-19 | |---|-------|------------| | 3 | 180 | 2011-01-23 | -------------------------- ``` I would like to select the IDs that have an increasing number. In the example above, I would select ID 1 and ID 3 because: for ID 1 we have 120<124 and for ID 3 we have 150<155<180. The output should be: ``` ----- |ID | |---| | 1 | |---| | 3 | ----- ``` I cannot figure it out. Thanks. **EDIT:** I added the third column and I put some sample output.
We'll do this in two steps. ``` --Step 1: Find records the violate the rule With BadIDs AS ( --IDs where there is another record with a matching ID and lower number, but greater date select t1.id from [table] t1 inner join [table] t2 on t2.id = t1.id where t1.number > t2.number and t1.numberDate < t2.numberDate ) -- Step 2: All IDs not part of the first step: select distinct ID from [table] WHERE ID NOT IN (select ID from BadIDs) ``` Unfortunately, MySql doesn't support CTEs (Common Table Expressions). Here's a version that will work with MySql: ``` select distinct ID from [table] WHERE ID NOT IN ( select t1.id from [table] t1 inner join [table] t2 on t2.id = t1.id where t1.number > t2.number and t1.numberDate < t2.numberDate ) ```
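A quick check of the MySQL-compatible version against the question's sample data, sketched here with SQLite from Python (the table is named `t` for brevity; ISO-8601 date strings compare correctly as text):

```python
import sqlite3

# "Good" IDs are those with no pair of rows where the number goes up
# while the date goes down -- exactly the answer's NOT IN self-join.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, number INTEGER, numberDate TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 120, '2011-01-22'), (1, 124, '2011-01-27'),
                  (2, 136, '2011-01-20'), (2, 135, '2011-01-30'),
                  (3, 150, '2011-01-15'), (3, 155, '2011-01-19'),
                  (3, 180, '2011-01-23')])

increasing_ids = [r[0] for r in conn.execute("""
    SELECT DISTINCT id FROM t
    WHERE id NOT IN (
        SELECT t1.id
        FROM t t1
        JOIN t t2 ON t2.id = t1.id
        WHERE t1.number > t2.number
          AND t1.numberDate < t2.numberDate)
    ORDER BY id""")]
```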
You can use a `where exists` subquery. This query should give you all ids that have an increasing number: ``` select distinct t.id from table_name t where exists ( select t2.number from table_name t2 where t2.id=t.id and t2.number > t.number ) ```
How to SELECT (SQL) items only if they are in increasing order?
[ "", "mysql", "sql", "" ]
I created new tables `Faculty` and `Specialty` as described in the link below, but I do not see them under databases in the Object Explorer. Also, if I click on 'edit', I do not see 'Intellisense' anywhere. What is going on? <http://www.cs.trinity.edu/~thicks/Tutorials/MSSQL-Server-Management-Studio-DB-Construction/MSSQL-Server-Management-Studio-DB-Construction.html>
Try refreshing the database in Object Explorer. ![image](https://i.stack.imgur.com/vte34.gif)
Simply go to your database and run this query: ``` USE DBName --Change the database name in which you have created your table GO IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'TheSchema' AND TABLE_NAME = 'Faculty')) BEGIN print 'table exists' END ``` Check whether your table exists in the database. If the table was created, it will be listed here; otherwise there might have been some problem during creation. And if the above query does show the table you are looking for, I would recommend restarting your Management Studio and checking again.
Saved table not showing up in Object Explorer in SQL Management Studio
[ "", "sql", "ssms", "" ]
I have some issues when I want to select some rows after joining two tables. I have 3 tables: a Library can contain multiple Books, and Books belong to one or more Libraries and have a BookTypeId for each language the book is available in. ``` Library Id|Name Books Id|IsAvailable|LibraryId|BookTypeId BookType Id|DisplayName|Language ``` After the join it creates this table ``` Id|Name |IsAvailable|Language 0 |library1| 0|Book1_EN 0 |library1| 1|Book1_FR 0 |library1| 0|Book1_ES 1 |library2| 1|Book1_EN 1 |library2| 1|Book1_FR 1 |library2| 1|Book1_ES 1 |library2| 1|Book2_EN ``` ``` SELECT l.Id, l.Name, BookType.TypeName, Books.IsAvailable FROM [Library] l RIGHT JOIN Books ON l.Id = Books.LibraryId LEFT JOIN BookType ON Books.BookTypeId = BookType.Id WHERE (BookType.Language='Bookname1_FR' AND Books.IsAvailable='1') AND (BookType.Language='Bookname2_EN' AND Books.IsAvailable='1') ORDER BY l.Id ASC ``` But my request returns null; I think it's because my join duplicates some rows of the Library table in the joined result. Do you know how I should modify my WHERE clause to get the libraries which sell both of those 2 books at the same time? Maybe by using 2 SELECTs and INTERSECTing the results?
Your WHERE condition cannot be satisfied. Modify that. Maybe you mean OR? ``` WHERE (BookType.Language='Bookname1_FR' AND Books.IsAvailable='1') OR (BookType.Language='Bookname2_EN' AND Books.IsAvailable='1') ``` UPDATED: you should use GROUP BY for your requirement ``` SELECT l.Id, l.Name, BookType.Language FROM [Library] l RIGHT JOIN Books ON l.Id = Books.LibraryId LEFT JOIN BookType ON Books.BookTypeId = BookType.Id WHERE Books.IsAvailable='1' GROUP BY l.Name, BookType.Language HAVING COUNT(*)>1 ORDER BY l.Id ASC ```
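The underlying requirement ("libraries that have both books available") is the classic relational-division shape. A hedged sketch of the usual `GROUP BY` / `HAVING COUNT(DISTINCT ...)` pattern, using a simplified single table and made-up data rather than the asker's three-table schema:

```python
import sqlite3

# Which libraries have *both* required books available?  Count distinct
# matching books per library and require the count to equal the number
# of books asked for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (library TEXT, book TEXT, available INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, ?, ?)",
                 [("library1", "Book1_FR", 1), ("library1", "Book1_ES", 0),
                  ("library2", "Book1_FR", 1), ("library2", "Book2_EN", 1),
                  ("library3", "Book2_EN", 1)])

wanted = ("Book1_FR", "Book2_EN")
both = [r[0] for r in conn.execute("""
    SELECT library
    FROM stock
    WHERE book IN (?, ?) AND available = 1
    GROUP BY library
    HAVING COUNT(DISTINCT book) = 2
    ORDER BY library""", wanted)]
```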
``` SELECT l.Id, l.Name, BookType.TypeName, Books.IsAvailable FROM [Library] l RIGHT JOIN Books ON l.Id = Books.LibraryId LEFT JOIN BookType ON Books.BookTypeId = BookType.Id WHERE (BookType.Language='Bookname1_FR' AND Books.IsAvailable='1') AND (BookType.Language='Bookname2_EN' AND Books.IsAvailable='1') ORDER BY l.Id ASC ``` BookType.Language can not be 'Bookname1\_FR' and 'Bookname2\_EN' at the same time. May be you mean this: ``` SELECT l.Id, l.Name, BookType.TypeName, Books.IsAvailable FROM [Library] l RIGHT JOIN Books ON l.Id = Books.LibraryId LEFT JOIN BookType ON Books.BookTypeId = BookType.Id WHERE BookType.Language IN('Bookname1_FR', 'Bookname2_EN') AND Books.IsAvailable='1' ORDER BY l.Id ASC ```
SQL select issues after join
[ "", "sql", "sql-server", "t-sql", "join", "" ]
I am trying to get around an apparently simple problem in MySQL: I have a table composed by 3 fields and I would like to form a query that returns a field selected as DISTINCT and two more columns which return a COUNT of the remaining two fields when a specific condition is met. To illustrate (columns named *first*, *second* and *third* left to right): ``` | VzokAAD | aaa | 0 | | VziAAAT | bbb | 0 | | VziAAAT | ccc | 1 | | VziAAAT | ddd | 0 | | W0cZAAT | eee | 1 | | VziNAAT | fff | 1 | | VzpqAAD | ggg | 1 | | VzpqAAD | hhh | 1 | ``` My current query is structured as follows: ``` SELECT DISTINCT(first) AS field_one, COUNT(second) FROM table WHERE third = 0 GROUP BY field_one; ``` So the above query will return: ``` | VzokAAD | 1 | | VziAAAT | 2 | ``` I can change the query to something like this: ``` SELECT DISTINCT(first) AS field_one, COUNT(second) FROM table WHERE third = 1 GROUP BY field_one; ``` And it will return: ``` | VzpqAAD | 2 | | VziNAAT | 1 | | W0cZAAT | 1 | | VziAAAT | 1 | ``` How do I get both queries combined together so that, I get the first column GROUPED and DISTINCT and two additional columns with the COUNT of third =1 and third =0 respectively. Basically something like this (wrong of course): ``` SELECT DISTINCT(first) AS field_one, COUNT(second WHERE third = 1) AS alpha, COUNT(second WHERE third = 0) AS beta FROM table GROUP BY field_one; ``` I tried to use CASE and IF control flow functions however I only managed to get the results defined in a single column. Thanks to anyone willing to lend a hand!
You can simply use this query : ``` select first , SUM(IF(third=0,1,0)) as Col_zero, SUM(IF(third=1,1,0)) as Col_one from table group by first; ```
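MySQL's `IF(cond, a, b)` is not portable; the equivalent `CASE` form gives the same per-status columns. A sketch against the question's sample rows, using SQLite from Python:

```python
import sqlite3

# One output row per "first" value, with separate counts of third = 0
# and third = 1 -- conditional aggregation via SUM(CASE ...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (first TEXT, second TEXT, third INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("VzokAAD", "aaa", 0), ("VziAAAT", "bbb", 0),
                  ("VziAAAT", "ccc", 1), ("VziAAAT", "ddd", 0),
                  ("W0cZAAT", "eee", 1), ("VziNAAT", "fff", 1),
                  ("VzpqAAD", "ggg", 1), ("VzpqAAD", "hhh", 1)])

rows = conn.execute("""
    SELECT first,
           SUM(CASE WHEN third = 0 THEN 1 ELSE 0 END) AS col_zero,
           SUM(CASE WHEN third = 1 THEN 1 ELSE 0 END) AS col_one
    FROM t
    GROUP BY first
    ORDER BY first""").fetchall()
```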
You can use `UNION` in following: ``` SELECT first AS field_one, COUNT(second) FROM table WHERE third = 0 GROUP BY field_one; UNION SELECT first AS field_one, COUNT(second) FROM table WHERE third = 1 GROUP BY field_one; ``` --- Also how about `IN`? ``` SELECT DISTINCT(first) AS field_one, COUNT(second) FROM table WHERE third IN(0, 1) GROUP BY field_one; ```
MySQL - Conditional SELECT to output separate columns
[ "", "mysql", "sql", "database", "conditional-statements", "" ]
I have 2 tables, and I want to query the 'rejected' status only: I need users that have **only the rejected status** in the submissions table, instead of users with both **approved & rejected** or only **approved** rows. ``` Users Table ----------- id | name ----------- 1 | John 2 | Doe 3 | Testing 4 | Sample Submission Table ------------------------------- id | user_id | title | status ------------------------------- 1 | 1 | title1 | approved 2 | 1 | title2 | rejected 3 | 2 | title3 | approved 4 | 2 | title4 | approved 5 | 3 | title5 | rejected 6 | 3 | title6 | rejected 7 | 3 | title7 | rejected 8 | 4 | title8 | approved 9 | 4 | title9 | approved 10| 4 | title10| rejected 11| 4 | title11| rejected ``` Below is the result I want to achieve: ![enter image description here](https://i.stack.imgur.com/b3VpM.jpg) I outer joined and filtered by 'rejected' only, but the result still includes users that also have 'approved' rows: ![enter image description here](https://i.stack.imgur.com/o4RvF.jpg) With the above query, I got this result: ![enter image description here](https://i.stack.imgur.com/l0xbQ.jpg) What I want is to query only the submissions of users whose status is 'rejected' only, fully ignoring users with 'approved' or 'approved and rejected' results.
I found a solution already which is using `WHERE NOT EXISTS` to filter the `approved` result in submission ``` SELECT u.id AS user_id,s.*, u.name FROM submissions s LEFT OUTER JOIN users u ON s.user_id = u.id WHERE NOT EXISTS ( SELECT USER_ID FROM submissions tmp WHERE tmp.User_ID = s.User_ID AND tmp.status = 'approved' ) AND STATUS = 'rejected' ```
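The `NOT EXISTS` solution can be verified against the question's sample data; a sketch in SQLite from Python (only user 3, whose submissions are all rejected, should come back):

```python
import sqlite3

# NOT EXISTS rules out every user that has at least one 'approved'
# submission, leaving only the "rejected-only" users.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE submissions (id INTEGER, user_id INTEGER, title TEXT, status TEXT);
""")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "John"), (2, "Doe"), (3, "Testing"), (4, "Sample")])
conn.executemany("INSERT INTO submissions VALUES (?, ?, ?, ?)",
                 [(1, 1, "title1", "approved"), (2, 1, "title2", "rejected"),
                  (3, 2, "title3", "approved"), (4, 2, "title4", "approved"),
                  (5, 3, "title5", "rejected"), (6, 3, "title6", "rejected"),
                  (7, 3, "title7", "rejected"), (8, 4, "title8", "approved"),
                  (9, 4, "title9", "approved"), (10, 4, "title10", "rejected"),
                  (11, 4, "title11", "rejected")])

rejected_only = conn.execute("""
    SELECT DISTINCT u.name
    FROM submissions s
    JOIN users u ON s.user_id = u.id
    WHERE s.status = 'rejected'
      AND NOT EXISTS (SELECT 1 FROM submissions x
                      WHERE x.user_id = s.user_id
                        AND x.status = 'approved')""").fetchall()
```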
Why did you use left outer join ? I believe simple join will get you the result .. ``` SELECT u.id AS user_id,s.*, u.name FROM submissions s JOIN users u ON s.user_id = u.id AND s.status = 'rejected' ``` OR ``` SELECT u.id AS user_id,s.*, u.name FROM submissions s JOIN users u ON s.user_id = u.id WHERE s.status = 'rejected' ```
Select certain value only
[ "", "mysql", "sql", "sqlyog", "" ]
I have a table which holds a set of records. Now through SQL I am getting a Count for each different status. Example from the table below: ``` Status | ID ------------------------- Open | 1 Open | 2 Open | 3 Open | 4 In Progress | 5 In Progress | 6 In Progress | 7 Closed | 8 Closed | 9 Closed | 10 Closed | 11 ``` So Far I managed to do the following ``` Status | ID ------------------------- Open | 4 In Progress | 3 Closed | 4 ``` However I want the result to return ``` Status | ID ------------------------- Other | 7 Closed | 4 ``` The following is my SQL so far: ``` SELECT myStatus,Count(job) As CountJobs FROM JobsTable GROUP BY myStatus ORDER BY CountJobs DESC ``` How can I achieve this?
``` SELECT CASE WHEN Status = 'Closed' THEN Status ELSE 'OTHER' END AS Status, Count(job) As CountJobs FROM JobsTable GROUP BY CASE WHEN Status = 'Closed' THEN Status ELSE 'OTHER' END ORDER BY CountJobs DESC ```
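A quick run of this `CASE`-in-`GROUP BY` pattern against the sample rows (sketched in SQLite from Python; the ids here are generated and not meaningful):

```python
import sqlite3

# 'Closed' keeps its own bucket; every other status folds into 'Other'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (status TEXT, id INTEGER)")
rows = [("Open", i) for i in range(1, 5)] \
     + [("In Progress", i) for i in range(5, 8)] \
     + [("Closed", i) for i in range(8, 12)]
conn.executemany("INSERT INTO jobs VALUES (?, ?)", rows)

counts = conn.execute("""
    SELECT CASE WHEN status = 'Closed' THEN status ELSE 'Other' END AS status,
           COUNT(*) AS n
    FROM jobs
    GROUP BY CASE WHEN status = 'Closed' THEN status ELSE 'Other' END
    ORDER BY n DESC""").fetchall()
```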
Use a `CASE` expression: ``` SELECT CASE WHEN myStatus = 'Closed' THEN 'Closed' ELSE 'Other' END myStatus, COUNT(*) CountJobs FROM JobsTable GROUP BY CASE WHEN myStatus = 'Closed' THEN 'Closed' ELSE 'Other' END; ```
SQL Query to get count of one status + all other status as one
[ "", "sql", "sql-server", "database", "t-sql", "" ]
I have a query: ``` SELECT s.period, s.year, s.amount FROM salaries s ``` I would like to select from the `salaries` table only the rows that have period and year equal to: ``` SELECT p.period, p.years FROM periods p ``` I know that the simplest way would be to use a join, but the problem is that in the application I can only add a clause after `WHERE`. So the solution should be: ``` SELECT s.period, s.year, s.amount FROM salaries s WHERE ... ``` Is that possible? **EDIT:** The result should be the same as: ``` SELECT s.period, s.year, s.amount FROM salaries s JOIN periods p ON s.period = p.period AND s.year = p.year ```
You can use more than one column for an `IN` condition: ``` SELECT s.period, s.year, s.amount FROM salaries s where (s.year, s.period) in (select year, period from periods) ``` But Gordon's `not exists` solution is probably faster.
Does this work with your environment? ``` WHERE EXISTS (SELECT 1 FROM periods p WHERE p.period = s.period AND p.year = s.year ) ```
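The correlated `EXISTS` form is the most portable of the options here, since some engines lack multi-column `IN`. A sketch with made-up sample data in SQLite from Python:

```python
import sqlite3

# Keep a salary row only when a matching (period, year) pair exists in periods.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE salaries (period INTEGER, year INTEGER, amount INTEGER);
    CREATE TABLE periods (period INTEGER, years INTEGER);
""")
conn.executemany("INSERT INTO salaries VALUES (?, ?, ?)",
                 [(1, 2014, 100), (2, 2014, 110), (1, 2015, 120)])
conn.executemany("INSERT INTO periods VALUES (?, ?)", [(1, 2014), (1, 2015)])

matched = conn.execute("""
    SELECT s.period, s.year, s.amount
    FROM salaries s
    WHERE EXISTS (SELECT 1 FROM periods p
                  WHERE p.period = s.period AND p.years = s.year)
    ORDER BY s.year""").fetchall()
```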
Two columns in subquery in where clause
[ "", "sql", "postgresql", "" ]
Following are the tables from my database: ![enter image description here](https://i.stack.imgur.com/GdHm3.png) I need the following columns from the above tables: **tblVendor:** `Name` (AS `VendorName`) **tblVendorItem:** `Name` (AS `VendorItemName`) **tblEvent:** `Name` (AS `EventName`) and `EventDate` **tblEventService:** `ServiceDate` and `Status` I have written the following query to retrieve the above data: ``` SELECT tblEvent.NAME AS 'EventName', tblEvent.EventDate AS 'EventDate', tblVendorItem.NAME AS 'VendorItem', tblVendor.NAME AS 'VendorName', tblEventService.ServiceDate AS 'Date', tblEventService.Status FROM tblEvent INNER JOIN tblEventService ON tblEvent.EventID = tblEventService.EventID INNER JOIN tblVendor ON tblEventService.VendorID = tblVendor.VendorID INNER JOIN tblEventItem ON tblEventService.EventServiceID = tblEventItem.EventServiceID WHERE tblEvent.CustomerID = @C_ID; ``` But I don't know why the SQL engine returns the following error: > Msg 1013, Level 16, State 1, Line 1 The objects "tblEventService" and > "tblEventService" in the FROM clause have the same exposed names. Use > correlation names to distinguish them. What am I doing wrong here? Or is the query itself wrong?
You forgot to join the table "tblVendorItem" ``` SELECT tblEvent.NAME AS 'EventName', tblEvent.EventDate AS 'EventDate', tblVendorItem.NAME AS 'VendorItem', tblVendor.NAME AS 'VendorName', tblEventService.ServiceDate AS 'Date', tblEventService.Status FROM tblEvent INNER JOIN tblEventService ON tblEvent.EventID = tblEventService.EventID INNER JOIN tblVendor ON tblEventService.VendorID = tblVendor.VendorID INNER JOIN tblEventItem ON tblEventService.EventServiceID = tblEventItem.EventServiceID INNER JOIN tblVendorItem ON tblEventService.VendorID = tblVendorItem.VendorID WHERE tblEvent.CustomerID = @C_ID; ```
As suggested, use aliases when joining the same table, like this: ``` SELECT es1.EventID FROM tblEventServices es1 JOIN tblEventServices es2 ON es1.EventID = es2.EventID --or however you wish to join them ```
Multiple inner joins returning Error
[ "", "sql", "sql-server", "" ]
I'm trying to select all the users in a table that have the same `Email` but have a different `Name`. So far I have managed to get all the rows that have duplicate `Email` but I'm stuck on the next step. ``` SELECT * FROM users WHERE Email IN (SELECT Email FROM users GROUP BY Email HAVING COUNT(*) > 1) ``` Thanks in advance
I think you just want `count(distinct name)` in the subquery: ``` SELECT * FROM users WHERE Email IN (SELECT Email FROM users GROUP BY Email HAVING COUNT(distinct Name) > 1 ) ; ``` I prefer `having min(name) <> max(name)` for the `having` clause. It is slightly more efficient. However, the most efficient method is probably to use window functions: ``` select u.* from (select u.*, min(name) over (partition by email) as minname, max(name) over (partition by email) as maxname from users u ) u where minname <> maxname; ```
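The `COUNT(DISTINCT Name)` version, checked on made-up rows (one email shared by two different names, one email repeated with the same name, one unique email):

```python
import sqlite3

# Only emails whose distinct-name count exceeds 1 should surface;
# a duplicated email with a single name must not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ann", "a@x"), ("Bob", "a@x"),
                  ("Cyd", "b@x"), ("Cyd", "b@x"), ("Dee", "c@x")])

dupes = conn.execute("""
    SELECT name, email FROM users
    WHERE email IN (SELECT email FROM users
                    GROUP BY email
                    HAVING COUNT(DISTINCT name) > 1)
    ORDER BY name""").fetchall()
```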
Try this ``` SELECT * FROM users U1 INNER JOIN users U2 on U1.Email=U2.Email AND U1.Name <> U2.Name ```
SQL - Select all rows that have the same Email but different Name
[ "", "sql", "select", "count", "group-by", "duplicates", "" ]
Consider the following numbers. ``` 7870.2 8220.0 ``` I need to remove decimal points if the value ends with `.0`. If it ends with `.2` then it should keep the value as it is. I have used `ceiling` but it removes all the values after decimal. How can I write a select query in which I can add some condition for this?
use `right` within a `case` statement: ``` DECLARE @val decimal(5,1) SET @val = 7870.0 Select Case When right(@val,1)<> '0' then cast(@val as varchar) else cast(cast(@val as int) as varchar) End ``` **output:** 7870 **EDIT:** I could write: ``` Case When right(@val,1)<> '0' then @val else cast(@val as int) -- or floor(@val) End ``` **but** because the return type of a case statement is the [highest precedence](https://msdn.microsoft.com/en-us/library/ms190309.aspx) type from the set of given types, the output of this second version is 7870.0, not 7870. That's why I convert it to varchar (for example) in the when clauses; it can also be converted outside of the case statement, i.e. cast((case when...then...else... end) as datatype)
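The same "drop a trailing .0" idea can be phrased portably by comparing the value with its integer truncation instead of inspecting the rightmost character. A sketch in SQLite from Python, not T-SQL:

```python
import sqlite3

# If the value equals its integer cast, render it as an integer;
# otherwise keep the decimal representation.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    SELECT CASE WHEN v = CAST(v AS INTEGER)
                THEN CAST(CAST(v AS INTEGER) AS TEXT)
                ELSE CAST(v AS TEXT)
           END
    FROM (SELECT 7870.2 AS v UNION ALL SELECT 8220.0)
    ORDER BY v""").fetchall()
formatted = [r[0] for r in rows]
```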
Generally speaking you should *not* do this in your dB. This is an app or reporting side operation. The dB is made to store and query information. It is not made to format/string manipulate information.
Remove only zero after decimal sql server 2012
[ "", "sql", "sql-server", "sql-server-2012", "rounding", "" ]
I'm building a system where I have to find the combined price of a computer system by using the database data. The first screenshot is a build from the system table. **Systems Table** ![enter image description here](https://i.stack.imgur.com/Cl4nF.png) **Parts Table** ![enter image description here](https://i.stack.imgur.com/vI7XK.png) The different kinds are: motherboard, case, ram, cpu, graphic. What I need is some way of turning the columns into rows and thereby summing the prices of each system. Here is the table and content. ``` CREATE TABLE Component ( nome VARCHAR(30), kind VARCHAR(10), /*cpu, ram, mainboard, cases*/ price INT, PRIMARY KEY(nome) ); CREATE TABLE Computer_system ( nome VARCHAR(30), ram VARCHAR(20), cpu VARCHAR(20), mainboard VARCHAR(20), cases VARCHAR(20), gfx VARCHAR(20), PRIMARY KEY(nome) ); INSERT INTO Computer_system VALUES('SERVER1','D31','XEON1','LGA2011_D3_E_OGFX','CASE_A',null); INSERT INTO Computer_system VALUES('SERVER2','D43','XEON3','LGA2011_D4_E_OGFX','CASE_A',null); INSERT INTO Computer_system VALUES('CONSUMER1','D43','I71','LGA1150_D4_ATX_OGFX','CASE_B',null); INSERT INTO Computer_system VALUES('GAMING1', 'D51', 'FX','AM3+_D5_ATX','BLACK_PEARL', 'NVIDIA_TITAN_BLACK_X'); INSERT INTO Computer_system VALUES('BUDGETO', 'D31', 'XEON1','LGA2011_D3_ATX','CASE_B', null); ```
There's a neat trick for unpivot in Postgres [using UNNEST( ARRAY( ...) )](https://stackoverflow.com/questions/1128737/unpivot-and-postgresql) This efficiently (in one pass of the table) unpivots those multiple columns of table computer\_system into multiple rows of (in this case) 3 columns: "nome", "colkind" and "colnome". An example of the unpivoted data: ``` | nome | colkind | colnome | |-----------|-----------|----------------------| | BUDGETO | ram | D31 | | BUDGETO | gfx | (null) | | BUDGETO | cases | CASE_B | | BUDGETO | mainboard | LGA2011_D3_ATX | | BUDGETO | cpu | XEON1 | ``` Once that data is available in that format it is simple to join to the Components table, like this: ``` SELECT * FROM ( /* this "unpivots" the source data */ SELECT nome , unnest(array[ram, cpu, mainboard,cases,gfx]) AS colnome , unnest(array['ram', 'cpu', 'mainboard','cases','gfx']) AS colkind FROM Computer_system ) unpiv INNER JOIN Components c ON unpiv.colnome = c.nome AND unpiv.colkind = c.kind ; ``` From here it is simple to arrive at this result: ``` | nome | sum_price | |-----------|-----------| | BUDGETO | 291 | | GAMING1 | 515 | | CONSUMER1 | 292 | | SERVER1 | 285 | | SERVER2 | 289 | ``` using: ``` SELECT unpiv.nome, sum(c.price) sum_price FROM ( /* this "unpivots" the source data */ SELECT nome , unnest(array[ram, cpu, mainboard,cases,gfx]) AS colnome , unnest(array['ram', 'cpu', 'mainboard','cases','gfx']) AS colkind FROM Computer_system ) unpiv INNER JOIN Components c ON unpiv.colnome = c.nome AND unpiv.colkind = c.kind GROUP BY unpiv.nome ; ``` `See this SQLfiddle demo` & please take note of the execution plan ``` QUERY PLAN HashAggregate (cost=487.00..488.00 rows=100 width=82) -> Hash Join (cost=23.50..486.50 rows=100 width=82) Hash Cond: ((((unnest(ARRAY[computer_system.ram, computer_system.cpu, computer_system.mainboard, computer_system.cases, computer_system.gfx])))::text = (c.nome)::text) AND ((unnest('{ram,cpu,mainboard,cases,gfx}'::text[])) = (c.kind)::text)) -> 
Seq Scan on computer_system (cost=0.00..112.00 rows=20000 width=368) -> Hash (cost=15.40..15.40 rows=540 width=120) -> Seq Scan on components c (cost=0.00..15.40 rows=540 width=120) ```
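`unnest(array[...])` is Postgres-specific; the portable equivalent of the unpivot step is one `UNION ALL` branch per column. A reduced sketch in SQLite from Python, with made-up component prices (the question never shows `INSERT`s for the Component table) and only two of the five part columns:

```python
import sqlite3

# Unpivot the per-system part columns into rows, join each part to its
# price by (name, kind), then sum per system.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE component (nome TEXT, kind TEXT, price INTEGER);
    CREATE TABLE computer_system (nome TEXT, ram TEXT, cpu TEXT);
""")
conn.executemany("INSERT INTO component VALUES (?, ?, ?)",
                 [("D31", "ram", 40), ("D43", "ram", 60),
                  ("XEON1", "cpu", 200), ("I71", "cpu", 250)])
conn.executemany("INSERT INTO computer_system VALUES (?, ?, ?)",
                 [("SERVER1", "D31", "XEON1"), ("CONSUMER1", "D43", "I71")])

totals = conn.execute("""
    SELECT unpiv.sys, SUM(c.price)
    FROM (SELECT nome AS sys, ram AS part, 'ram' AS kind FROM computer_system
          UNION ALL
          SELECT nome, cpu, 'cpu' FROM computer_system) unpiv
    JOIN component c ON c.nome = unpiv.part AND c.kind = unpiv.kind
    GROUP BY unpiv.sys
    ORDER BY unpiv.sys""").fetchall()
```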
I think you need to break down your table design into 3 tables: Component, Computer\_System and Computer\_Component. Below is the field list: Computer\_System -> computer\_id and name Component -> nome\_component, kind, price Computer\_Component -> computer\_id, nome\_component. With those tables, you can sum the total price for each computer\_id by joining Computer\_System a JOIN Computer\_Component b ON a.computer\_id = b.computer\_id JOIN Component c ON b.nome\_component = c.nome\_component
Transform database columns to rows and sum
[ "", "sql", "postgresql", "" ]
**Table Definition:** ``` CREATE TABLE [dbo].[tbl]( [Id1] [int] NOT NULL, [Id2] [int] NOT NULL, [Id3] [int] NOT NULL, [IsActive] [bit] NOT NULL, [CreatedTs] [datetime] NOT NULL, CONSTRAINT [PK_tbl] PRIMARY KEY CLUSTERED ( [Id1] ASC, [Id2] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ) GO ALTER TABLE [dbo].[tbl] ADD CONSTRAINT [DF_tbl_IsActive] DEFAULT ((1)) FOR [IsActive] GO ALTER TABLE [dbo].[tbl] ADD CONSTRAINT [DF_tbl_CreatedTs] DEFAULT (getdate()) FOR [CreatedTs] GO ``` In above table the I've composite primary key using "**Id1**" and "**Id2**" combination. Now I want to include "**Id3**" in composite primary key, for that I am doing following: ``` ALTER TABLE tbl DROP CONSTRAINT PK_tbl ALTER TABLE [dbo].[tbl] ADD CONSTRAINT [PK_tbl] PRIMARY KEY CLUSTERED ( [Id1] ASC, [Id2] ASC, [Id3] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) GO ``` The above query runs perfectly fine on my local sql server db, but when I run it on Azure db I get error: > Tables without a clustered index are not supported in this version of > SQL Server. Please create a clustered index and try again. How should I modify the composite primary key on azure sql?
The problem is that dropping the PK constraint also drops the underlying clustered index, and heaps are not permitted in Azure. The best you can do is to create a new table with desired structure, copy the data over, drop the old table, rename the new one and recreate the FKs if any.
Azure SQL Database's latest update (V12) allows you to have tables without clustered indexes (i.e. as heaps). If you upgrade your server to the latest version, you'll be able to run your queries to modify the PK successfully. Other features enabled by V12: <http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/> How to upgrade: <http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-upgrade/>
Unable to alter the Composite Primary Key in Azure Sql
[ "", "sql", "sql-server", "azure", "azure-sql-database", "composite-key", "" ]
I have a table. ``` select * from Group ``` ``` Owner Member1 Member1_Value Member2 Member2_Value Kelly Nicol 2 Julie 4 Nicol Julie 4 Kelly 3 Julie Kelly 5 Nicol 3 ``` Now I need to get Avege value from our Members for Owners. ``` Owner Member1 Member1_Value Member2 Member2_Value Avg Kelly Nicol 2 Julie 5 3.5 Nicol Julie 4 Kelly 2 2.5 Julie Kelly 5 Nicol 3 4.5 ``` Eg: Kelly's Avg 3.5 comes from Nicol gave 2 and Julia gave 5 and Avg is 3.5. as same as Nicol got Avg 2.5 because Kelly gave 2 and Julie gave 3. So how can I get that Avg colum from one sql statment Appreciate your answers Thanks
I would say that what you want is this but you should consider normalising a little bit that table in order to be able to do this kinds of queries easier: ``` SELECT Name, AVG(Value) AS Value FROM ( SELECT Member1 AS Name, Member1_Value As Value FROM Group UNION ALL SELECT Member2 AS Name, Member2_Value As Value FROM Group ) Normalised GROUP BY Name ``` A better table structure for this will by Member(MemberId, Name) Group(GroupId) GroupMember(MemberId, GroupId) Qualifications(ProvidingMemberId, ReceivingMemberId, Value) Or something similar
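The `UNION ALL` normalization run over the question's first table exactly as given (note the question's two sample tables disagree on a couple of values; the first table is used here):

```python
import sqlite3

# Stack (member1, member1_value) and (member2, member2_value) into one
# name/value relation, then average per name.  Table named grp because
# GROUP is a reserved word.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE grp (owner TEXT, member1 TEXT, member1_value REAL,
                                  member2 TEXT, member2_value REAL)""")
conn.executemany("INSERT INTO grp VALUES (?, ?, ?, ?, ?)",
                 [("Kelly", "Nicol", 2, "Julie", 4),
                  ("Nicol", "Julie", 4, "Kelly", 3),
                  ("Julie", "Kelly", 5, "Nicol", 3)])

averages = conn.execute("""
    SELECT name, AVG(value)
    FROM (SELECT member1 AS name, member1_value AS value FROM grp
          UNION ALL
          SELECT member2, member2_value FROM grp)
    GROUP BY name
    ORDER BY name""").fetchall()
```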
If I understood it correctly then it should work : ``` select *, cast((Member1_Value + Member2_Value) as Decimal(10,2))/2 [Avg] from Group ```
SQL - get row for same table
[ "", "sql", "sql-server", "database", "stored-procedures", "sql-server-2008-r2", "" ]
I set **two user-defined variables** as shown below but after some time, I forgot the names: ``` SET @a = 2, @b = 3; ``` So, does MySQL have the command that displays **all user-defined variables**?
Starting with MySQL 5.7, the performance schema exposes user variables. See table `performance_schema.user_variables_by_thread` <https://dev.mysql.com/doc/refman/5.7/en/performance-schema-user-variable-tables.html>
If you have MariaDB (a binary "drop-in" equivalent of MySQL) there is a plugin available, provided by MariaDB itself. MariaDB 10.2 (equivalent to MySQL 5.7) and above has a plugin that creates a ["USER\_VARIABLES" table](https://mariadb.com/kb/en/library/information-schema-user_variables-table/). Here is how to [install the plugin](https://mariadb.com/kb/en/library/user-variables-plugin/). Here is an example of its use: ``` SELECT * FROM information_schema.USER_VARIABLES ORDER BY VARIABLE_NAME; +---------------+----------------+---------------+--------------------+ | VARIABLE_NAME | VARIABLE_VALUE | VARIABLE_TYPE | CHARACTER_SET_NAME | +---------------+----------------+---------------+--------------------+ | var | 0 | INT | utf8 | | var2 | abc | VARCHAR | utf8 | +---------------+----------------+---------------+--------------------+ ``` MariaDB installs the plugin by default after version MariaDB 10.2.6. The link above shows how to install it for prior versions. Double check what version of "mysql" you're running, because sometimes people will refer to a MariaDB as MySQL, due to its use as a "binary drop in replacement" for MySQL. So it's possible that you are running a MariaDB database. I am not aware of MySQL providing anything similar. **How to check which [version of mysql](https://www.liquidweb.com/kb/how-to-check-the-mysql-version/) you're running** (the prompt is in bold) From the command line: **$** `mysql -v` From the mysql command client: **mysql>** `SHOW VARIABLES LIKE "%version%";` It is also shown when you first log into the mysql command client, which you can do via: **$** `mysql -u your_mysql_username --password=your_mysql_password`
How to show all user-defined variables (MySQL)
[ "", "mysql", "sql", "database", "command-line", "mysql-variables", "" ]
I have a table with a DOB column ('2012-05-29 00:00:00.000') and a few other fields. I need to select the rows whose DOB is between 6 months and 6 years in the past. I tried the SQL below, but it is not giving me the right data. Any help will be appreciated. ``` select * from dbo.xyz where ( FLOOR(DATEDIFF(MONTH, birth_date , GETDATE()) % 12) >=6 AND FLOOR(DATEDIFF(DAY, birth_date , GETDATE()) / 365.25) <= 6 ) ```
Try this ``` select * from dbo.xyz where DATEDIFF(MONTH, birth_date , GETDATE()) between 6 and 72 ```
When using dates, the advice is to use functions only on the non-column values. In other words, modify `getdate()`, not `birth_date`: ``` select * from dbo.xyz where birth_date between dateadd(year, -6, getdate()) and dateadd(month, -6, getdate()) ``` This has two advantages. First, it makes the `where` clause "sargable", which means an index can be used on the comparison. More importantly, the alternative of using `datediff()` doesn't quite work as expected. `datediff()` counts the number of *calendar boundaries* between two values. So, 2014-12-31 and 2015-01-01 are one day apart, one month apart, and even one year apart.
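The same sargable range in SQLite's date syntax, with a fixed reference date instead of `GETDATE()` so the result is reproducible (sample rows are made up):

```python
import sqlite3

# Keep the column bare and move the arithmetic to the boundaries:
# birth_date BETWEEN (ref - 6 years) AND (ref - 6 months).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xyz (name TEXT, birth_date TEXT)")
conn.executemany("INSERT INTO xyz VALUES (?, ?)",
                 [("too_young", "2014-12-01"),   # under 6 months old
                  ("in_range",  "2012-05-29"),   # roughly 2.5 years old
                  ("too_old",   "2008-01-01")])  # over 6 years old

in_range = [r[0] for r in conn.execute("""
    SELECT name FROM xyz
    WHERE birth_date BETWEEN date(?, '-6 years') AND date(?, '-6 months')""",
    ("2015-01-15", "2015-01-15"))]
```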
select month and year from Date of birth IN sql
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "ssms", "" ]
I'm trying to run an alter table query to add columns to an existing table: ``` ALTER TABLE dbo.Names Add solCorporate bit null ``` When I execute it just sticks in the executing query cycle, I've left it for 20 mins and nothing. Any ideas? I'm presuming if there was a code error it would flag before running. Thanks.
I ran into the same problem today. I tried to stop the database connection by right-clicking it and choosing Stop (in the Object Explorer in MS SQL Server Management Studio). When that didn't help I launched Sql Server Configuration Manager and restarted the SQL Server from the SQL Server Services tab. After that, Alter table ran in a second.
Closing SQL server management studio worked for me
SQL alter table query stuck in execute
[ "", "sql", "sql-server-2008", "alter-table", "" ]
We imported a lot of data from another table. Now I'm trying to correct some of it. ``` UPDATE [x10ddata].[dbo].[ResourceTest] SET [Country] = (CASE WHEN [Country] IN ('Aezerbaijan', 'AZERBIJAN') THEN 'Azerbaijan' WHEN [Country] = 'Belgique' THEN 'Belgium' WHEN [Country] = 'China (RPC)' THEN 'China' WHEN [Country] = 'Columbia' THEN 'Colombia' WHEN [Country] = 'Croatia (Local Name: Hrvatska)' THEN 'Croatia' .....//... WHEN [Country] IN ('U.S.', 'U.S.A', 'U.S.A.', 'US', 'USA', 'USA - Maryland', 'USAQ') THEN 'United States' END) GO ``` I didn't use **ELSE** because many rows already have a valid country. My question is whether I need a **WHERE** clause to filter the rows that will be affected. The reason I'm asking is that I selected into a test table and tried the script. According to the output, all the rows were affected, but when I check closely, not all the rows were actually changed. It's confusing. Thanks for helping
The `case` statement will return `null` if none of the `when` clauses are met. You can verify this with this simple sql: ``` declare @i int set @i = 2 select case when @i = 1 then 'A' end AS Column1 ``` This will return `null` since `@i` is not `1`. To fix this in your case, you can either add the `where` clause like you said, or the simpler option might be to add `ELSE [Country]` after all of your `WHEN` clauses. This would mean "If I don't need to change the country field, then just use the same value that was there before."
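A small demonstration of both points: without `ELSE`, the `UPDATE` silently writes NULL into every row that matches no `WHEN` branch, while `ELSE country` preserves them. Sketch in SQLite from Python:

```python
import sqlite3

# Two rows; only 'Belgique' matches a WHEN branch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (country TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("Belgique",), ("Germany",)])

# CASE with no ELSE: the non-matching row becomes NULL.
conn.execute("""UPDATE t SET country =
                  CASE WHEN country = 'Belgique' THEN 'Belgium' END""")
no_else = [r[0] for r in conn.execute(
    "SELECT country FROM t ORDER BY country IS NULL")]

# Reset, then CASE with ELSE country: non-matching rows keep their value.
conn.execute("DELETE FROM t")
conn.executemany("INSERT INTO t VALUES (?)", [("Belgique",), ("Germany",)])
conn.execute("""UPDATE t SET country =
                  CASE WHEN country = 'Belgique' THEN 'Belgium'
                       ELSE country END""")
with_else = sorted(r[0] for r in conn.execute("SELECT country FROM t"))
```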
You won't need a WHERE clause, but the ELSE clause is needed. Change your statement to: ``` UPDATE [x10ddata].[dbo].[ResourceTest] SET [Country] = (CASE WHEN [Country] IN ('Aezerbaijan', 'AZERBIJAN') THEN 'Azerbaijan' WHEN [Country] = 'Belgique' THEN 'Belgium' WHEN [Country] = 'China (RPC)' THEN 'China' WHEN [Country] = 'Columbia' THEN 'Colombia' WHEN [Country] = 'Croatia (Local Name: Hrvatska)' THEN 'Croatia' .....//... WHEN [Country] IN ('U.S.', 'U.S.A', 'U.S.A.', 'US', 'USA', 'USA - Maryland', 'USAQ') THEN 'United States' ELSE [Country] END) ```
Do I need a where clause in a conditional UPDATE?
[ "", "sql", "sql-server", "t-sql", "" ]
I'm using PostgreSQL 9.3. I want to duplicate some of the db records. Since I'm using an auto-increment pk id for the table, I want to get back the id mappings from the generated ids of duplicated records to the original ones. For example, say I have a table `posts` with 2 records in it: ``` [{'id': 1, 'title': 'first'} , {'id': 2. 'title': 'second'}] ``` With SQL: ``` INSERT INTO posts (title) SELECT title FROM posts RETURNING id, ?? ``` I expect to see mappings like: ``` [{'id': 3, 'from_id': 1} , {'id': 4, 'from_id': 2}] ``` Any idea on how to fill in the question marks above to make it work? Thanks a lot!
This would be simpler for `UPDATE`, where additional rows joined into the update are visible to the `RETURNING` clause: * [Return pre-UPDATE column values using SQL only](https://stackoverflow.com/questions/7923237/return-pre-update-column-values-using-sql-only-postgresql-version/7927957#7927957) The same is currently ***not*** possible for `INSERT`. [The manual:](http://www.postgresql.org/docs/current/sql-insert.html) > The expression can use any column names of the table named by *table\_name* *table\_name* being the target of the `INSERT` command. You can use [(data-modifying) CTEs](http://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING) to get this to work. Assuming `title` to be *unique per query*, else you need to do more: ``` WITH sel AS ( SELECT id, title FROM posts WHERE id IN (1,2) -- select rows to copy ) , ins AS ( INSERT INTO posts (title) SELECT title FROM sel RETURNING id, title ) SELECT ins.id, sel.id AS from_id FROM ins JOIN sel USING (title); ``` If `title` is not unique per query (but at least `id` is unique per table): ``` WITH sel AS ( SELECT id, title, row_number() OVER (ORDER BY id) AS rn FROM posts WHERE id IN (1,2) -- select rows to copy ORDER BY id ) , ins AS ( INSERT INTO posts (title) SELECT title FROM sel ORDER BY id -- ORDER redundant to be sure RETURNING id ) SELECT i.id, s.id AS from_id FROM (SELECT id, row_number() OVER (ORDER BY id) AS rn FROM ins) i JOIN sel s USING (rn); ``` This second query relies on the undocumented implementation detail that rows are inserted in the order provided. It works in all current versions of Postgres and is probably not going to break. *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=fdb2e25362fc8bcd6e7f188793e38e75)* Old [sqlfiddle](http://sqlfiddle.com/#!17/2deb9/1)
If the `id` column of `posts` is of type `serial`, it is generated with `nextval('posts_id_seq'::regclass)`, so you can call this function manually for every new row ``` with sel as ( SELECT id, title, nextval('posts_id_seq'::regclass) new_id FROM posts WHERE id IN (1,2) ), ins as ( INSERT INTO posts (id, title) SELECT new_id, title FROM sel ) SELECT id, new_id FROM sel ``` It'll work with any data, including non-unique `title`
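When you control the client code, there is also a pragmatic way to sidestep the problem: copy the rows one at a time and record each generated key as you go (one INSERT per row, so only suitable for modest row counts). A sketch with Python's sqlite3, where `lastrowid` plays the role the `RETURNING id` clause plays in Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [("first",), ("second",)])

# Copy each selected row individually, recording old_id -> new_id
mapping = []
rows = conn.execute(
    "SELECT id, title FROM posts WHERE id IN (1, 2) ORDER BY id").fetchall()
for old_id, title in rows:
    cur = conn.execute("INSERT INTO posts (title) VALUES (?)", (title,))
    mapping.append({"id": cur.lastrowid, "from_id": old_id})
```

This produces exactly the mapping shape the question asks for, at the cost of a round trip per copied row.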
INSERT INTO ... FROM SELECT ... RETURNING id mappings
[ "sql", "postgresql", "sql-insert", "sql-returning" ]
Which would be the best way to get the number of people hired on each day of the week for 7 years from a table People that has their entry\_date with a day-month-year as 01-Jun-91. For example: ``` 2000 2001 2002 etc.. SUN 2 0 1 MON 0 0 2 ``` Do I have to create a counter for each day of each year? Like Sun2000, Sun2001 etc?
You need to join each day of the week with your entry\_date and [pivot](http://oracle-base.com/articles/11g/pivot-and-unpivot-operators-11gr1.php) the results. [SQL Fiddle](http://sqlfiddle.com/#!4/a72667/24) **Query**: ``` with x(days) as ( select 'sunday' from dual union all select 'monday' from dual union all select 'tuesday' from dual union all select 'wednesday' from dual union all select 'thursday' from dual union all select 'friday' from dual union all select 'saturday' from dual ) select * from ( select x.days, extract(year from emp.entry_date) entry_year from x left outer join emp on x.days = to_char(emp.entry_date,'fmday') ) pivot(count(entry_year) for entry_year in ( 2007, 2008, 2009, 2010, 2011, 2012 ) ) order by case days when 'sunday' then 1 when 'monday' then 2 when 'tuesday' then 3 when 'wednesday' then 4 when 'thursday' then 5 when 'friday' then 6 when 'saturday' then 7 end ``` **[Results](http://sqlfiddle.com/#!4/a72667/24/1)**: ``` | DAYS | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | |-----------|------|------|------|------|------|------| | sunday | 0 | 0 | 0 | 0 | 0 | 0 | | monday | 0 | 0 | 0 | 2 | 0 | 0 | | tuesday | 0 | 0 | 0 | 0 | 1 | 0 | | wednesday | 0 | 0 | 0 | 1 | 2 | 1 | | thursday | 0 | 0 | 0 | 0 | 0 | 3 | | friday | 0 | 0 | 0 | 0 | 0 | 0 | | saturday | 0 | 0 | 0 | 0 | 0 | 0 | ```
Depending on the version of Oracle you're using (10g doesn't have the `PIVOT` function, for example), you might try something like the following conditional aggregation: ``` SELECT day_abbrev , SUM(CASE WHEN year_num = 2000 THEN person_cnt ELSE 0 END) AS "2000" , SUM(CASE WHEN year_num = 2001 THEN person_cnt ELSE 0 END) AS "2001" , SUM(CASE WHEN year_num = 2002 THEN person_cnt ELSE 0 END) AS "2002" , SUM(CASE WHEN year_num = 2003 THEN person_cnt ELSE 0 END) AS "2003" , SUM(CASE WHEN year_num = 2004 THEN person_cnt ELSE 0 END) AS "2004" , SUM(CASE WHEN year_num = 2005 THEN person_cnt ELSE 0 END) AS "2005" , SUM(CASE WHEN year_num = 2006 THEN person_cnt ELSE 0 END) AS "2006" FROM ( SELECT TO_CHAR(entry_date, 'DY') AS day_abbrev , EXTRACT(YEAR FROM entry_date) AS year_num , COUNT(*) AS person_cnt FROM people GROUP BY TO_CHAR(entry_date, 'DY'), EXTRACT(YEAR FROM entry_date) ) GROUP BY day_abbrev ORDER BY TO_CHAR(NEXT_DAY(SYSDATE, day_abbrev), 'D'); ```
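The conditional-aggregation idea from the second answer is portable to practically any engine. Here is a compact sketch using Python's sqlite3 (dates invented; `strftime('%w', ...)` returns 0–6 with 0 = Sunday, standing in for Oracle's `TO_CHAR(date, 'DY')`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (entry_date TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("2000-01-02",),   # a Sunday
                  ("2000-01-03",),   # a Monday
                  ("2001-01-07",)])  # a Sunday

# One output row per weekday, one CASE column per year
rows = conn.execute("""
    SELECT strftime('%w', entry_date) AS dow,
           SUM(CASE WHEN strftime('%Y', entry_date) = '2000'
                    THEN 1 ELSE 0 END) AS y2000,
           SUM(CASE WHEN strftime('%Y', entry_date) = '2001'
                    THEN 1 ELSE 0 END) AS y2001
    FROM people
    GROUP BY dow
    ORDER BY dow
""").fetchall()
```

Each `SUM(CASE ...)` column is one year of the crosstab, exactly as in the conditional-aggregation answer above.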
Get collection of integers from counter
[ "sql", "oracle" ]
Database: ``` Department Position Points A Manager 50 A Supervisor 10 A Supervisor 10 A Staff 2 A Staff 2 B Manager 40 B SuperVisor 8 B Staff 2 B Staff 2 B Staff 2 ``` Desired query result: ``` Dept Manager Count Supervisor Count Staff Count Staff Total Pts A 1 2 2 4 B 1 1 3 4 ``` Is the desired query result possible without using nested selects with counts? We have a stored procedure similar to this using nested counts, and we'd like to make it simpler and perform better/faster
Use conditional aggregation to count only the specific data ``` Select Department, count(case when Position = 'Manager' then 1 END) as Manager, count(case when Position = 'Supervisor' then 1 END) as Supervisor, count(case when Position = 'Staff' then 1 END) as Staff From yourtable Group by Department ``` If you are using `Sql Server`, you can also use `PIVOT` ``` SELECT Department, Manager, Supervisor, Staff FROM Yourtable PIVOT (Count(Position) FOR Position IN (Manager,Supervisor,Staff))pv ```
Use conditional `SUM`: ``` SELECT Department, SUM(CASE WHEN Position = 'Manager' THEN 1 END) as Manager, SUM(CASE WHEN Position = 'Supervisor' THEN 1 END) as Supervisor, SUM(CASE WHEN Position = 'Staff' THEN 1 END) as Staff FROM yourtable GROUP BY Department ```
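The two answers differ only in idiom: `COUNT` ignores NULLs, so `COUNT(CASE WHEN ... THEN 1 END)` and `SUM(CASE WHEN ... THEN 1 ELSE 0 END)` produce the same counts. A runnable sketch with Python's sqlite3 on the question's sample data (note the staff points for department B actually sum to 6 from the sample rows, not the 4 shown in the desired output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (dept TEXT, position TEXT, points INT)")
conn.executemany("INSERT INTO staff VALUES (?, ?, ?)", [
    ("A", "Manager", 50), ("A", "Supervisor", 10), ("A", "Supervisor", 10),
    ("A", "Staff", 2), ("A", "Staff", 2),
    ("B", "Manager", 40), ("B", "Supervisor", 8),
    ("B", "Staff", 2), ("B", "Staff", 2), ("B", "Staff", 2),
])

# COUNT skips the NULLs produced by non-matching CASE branches
rows = conn.execute("""
    SELECT dept,
           COUNT(CASE WHEN position = 'Manager'    THEN 1 END) AS managers,
           COUNT(CASE WHEN position = 'Supervisor' THEN 1 END) AS supervisors,
           COUNT(CASE WHEN position = 'Staff'      THEN 1 END) AS staff,
           SUM(CASE WHEN position = 'Staff' THEN points ELSE 0 END) AS staff_pts
    FROM staff
    GROUP BY dept
    ORDER BY dept
""").fetchall()
```

One pass over the table, no nested selects, which is the point of both answers.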
Count without nested select: is this possible?
[ "sql", "count", "sql-optimization" ]
I have a dataset like this: ``` +---------+--------+--------+ | Student | Score1 | Score2 | +---------+--------+--------+ | 1 | 23 | 40 | | 2 | 12 | 10 | | 3 | 54 | 90 | +---------+--------+--------+ ``` I want to calculate the mean of the 2 scores on each row. In theory this is pretty simple - just do `(score1 + score2) / 2`. However, I run into problems if one of the values is NULL, having to make extensive use of `NVL`. The `AVG()` function would take care of all of this for me, but it is designed for averaging multiple rows. Is there any way of using AVG on values in the same row? # Update This is what I have at the moment, which deals with all the possibilities (to my knowledge). However, I think there must be a cleaner way? ``` SELECT T1.STUDENT, T1.SCORE1, T1.SCORE2, (NVL(T1.SCORE1,0) + NVL(T1.SCORE2,0))/DECODE((NVL2(t1.SCORE1,1,0) + NVL2(t1.SCORE2,1,0)),0,NULL,(NVL2(t1.SCORE1,1,0) + NVL2(t1.SCORE2,1,0))) AS AVG_SCORE FROM STUDENTS T1; ```
Your method is a good one. I would write it as: ``` select student, ((coalesce(score1, 0) + coalesce(score2, 0)) / nullif(nvl2(score1, 1, 0) + nvl2(score2, 1, 0), 0) ) as score_avg ``` The advantage of this method (or the method in your question) is that it is easy to add more scores. That said, having columns with basically the same name suggests that you should be using a junction table instead of multiple columns.
what about this: ``` select student, case when score1 is null and score2 is not null then score2 when score1 is not null and score2 is null then score1 when score1 is null and score2 is null then 0 else (score1 + score2)/2 end from your_table ```
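The expression from the accepted answer translates almost verbatim to other engines. A sketch in Python's sqlite3, where a boolean like `score1 IS NOT NULL` evaluates to 0 or 1 just like Oracle's `NVL2(score1, 1, 0)` (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (student INT, score1 INT, score2 INT)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [(1, 23, 40), (2, 12, None), (3, None, None)])

# Sum the non-null scores, divide by how many were non-null;
# NULLIF avoids division by zero when both scores are NULL.
rows = conn.execute("""
    SELECT student,
           (COALESCE(score1, 0) + COALESCE(score2, 0)) * 1.0
           / NULLIF((score1 IS NOT NULL) + (score2 IS NOT NULL), 0) AS avg_score
    FROM students
    ORDER BY student
""").fetchall()
```

The row with one NULL averages only the present score, and the all-NULL row comes back NULL instead of raising an error.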
Can you run Oracle's AVG() function on values in the same row?
[ "sql", "oracle", "average" ]
I am in an argument with a friend of mine on how to store a small array (<10) of references in SQL. Let's say there is a class **player** which can hold **one** **item** in its inventory. Describing this as an SQL table would be fairly straightforward: ``` CREATE TABLE items( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, type VARCHAR(32), weight FLOAT ); CREATE TABLE players( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(128) not null, item INT, FOREIGN KEY(item) REFERENCES items(id) ); ``` The question is now: If the player could hold more than one item, but a small fixed amount of them, would it be *better* to store them in an additional table and then `JOIN` over them, like this: ## Additional Table ``` CREATE TABLE players( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(128) not null ); CREATE TABLE inventory( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, item INT NOT NULL, player INT NOT NULL, FOREIGN KEY(item) REFERENCES items(id), FOREIGN KEY(player) REFERENCES players(id) ); ``` Or would it be *better* to just add additional columns? If the amount of items was dynamic and unlimited, this would of course not be an option: ## Multiple Columns ``` CREATE TABLE players( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(128) not null, item1 INT, item2 INT, item3 INT, item4 INT, FOREIGN KEY(item1) REFERENCES items(id), FOREIGN KEY(item2) REFERENCES items(id), FOREIGN KEY(item3) REFERENCES items(id), FOREIGN KEY(item4) REFERENCES items(id) ); ``` This would have the advantage of not having to join over a table that would grow very big very fast, but it might fragment your table quickly if not every player is carrying all four items all the time. * Which way should be preferred? * Is the second way, using multiple columns, violating the first normal form? * What are the advantages and disadvantages of both ways? * What will deliver the best performance?
This might make a good interview question. 1. Preference is opinion. It depends. However I avoid the "multiple columns" technique (technique #2) if I have more than 2 items per player, for many many reasons. For one, if you design and code n=10 items per player, how many items will the project manager want tomorrow? n+1 of course. 2. I believe the "Multiple Columns" technique is 1NF because the data is atomic (although it requires null values). "Many writers misunderstand the concept of a repeating group and use it to claim that a certain table is in violation of 1NF." <https://www.simple-talk.com/sql/learn-sql-server/facts-and-fallacies-about-first-normal-form/> Just because it's 1NF doesn't mean it's a good solution. Normalization per se is not as important as application usability, maintainability, and performance. De-normalization is common practice for performance. 3. see below 4. What problem are you solving? You give a technique, but until you give a problem to solve, you can't measure performance. It may be more performant for writes but not reads. Write some example SQL for the questions your application needs to answer. For your technique #2, almost all questions that I can think of require subselects (or case statements). These are hard to maintain, I think (hence not 'preferable'). Let's number your two techniques #1 and #2. Here are (too many) example SQL solutions for each: How many items does each player have? #1. `select count(item) from inventory where player = 1` #2. Really depends on your db; in MySQL, for example, you might use `IFNULL(item1,0)` and sum them, or CASE statements. Not going to attempt to write this code. Which players have item id = 9? 1. `select players.id from players inner join inventory on players.id = inventory.player where inventory.item = 9` 2. `select id from players where item1=9 or item2=9 or item3=9 ....` Which players have item id X and Y? 1. 
`select player from inventory where item in (X, Y) group by player having count(distinct item) = 2;` 2. `select id from players where (item1 = X or item2 = X ....) and (item1 = Y or item2 = Y ...)` Since items have weights, which players have items with weight > 10 ? 1. `select distinct players.* from players inner join inventory on players.id = inventory.player inner join items on inventory.item = items.id where items.weight > 10` 2. `select distinct id from players where players.item1 in (select id from items where items.weight > 10) or players.item2 in (select id from items where items.weight > 10) or ...` Notice I'm not finishing the SQL for technique #2. Would you? There are many other examples of painful SQL. Which players have the highest total weight? Delete all items with a certain id. I'm not going to answer these; for each case, in my opinion, the SQL for technique #2 is harder to maintain (for me == not preferable). There may be techniques to make these subselects simpler (parameterized views, SQL templates in your application code) but it depends on your platform. Optimizing with indexes becomes problematic too, because it seems to me you'll need an index on every item column in your players table. If I'm correct that technique #2 requires sub-selects, I hear joins are more efficient ([Join vs. sub-query](https://stackoverflow.com/questions/2577174/join-vs-sub-query)). Using technique #1 (ADDITIONAL TABLE), just use a trigger or application code to enforce the rule limiting 10 items per player. That kind of rule is easier to change than all of the SELECTs. I should stop now, but here is something else you two can argue about. 
If your items don't have properties (or the properties are rarely referenced), consider technique #3: SINGLE COLUMN DELIMITED LIST `CREATE TABLE players( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(128) not null, items VARCHAR(2048) -- or whatever size you need, or TEXT );` `INSERT INTO PLAYERS (name, items) values ('player 1', 'itemX, itemY, itemZ');` Not normalized, but who cares if it's fast!
Make another table. Yes, making multiple columns violates 1NF. Why should you obey this rule? Consider: (1) Is the limit of 10 absolute? It sounds like this is some sort of game (from the word "player") so maybe it is. But in most applications, such limits tend to be of the "I can't imagine anyone ever having more than ..." variety. I worked on a system years ago for insurance where we had to record the employee's children who were covered by the policy. The original designer decided to create multiple fields, child1, child2, ... child8. He apparently said to himself, "No one would ever have more than 8 children. That will be plenty." Then we got an employee with 9 children and the system blew up. (2) Let's say you want to test if a player is carrying a certain specified item. With two tables, you write something like ``` select count(*) from player_item where player_id=@pid and item_id=@iid ``` If count>0 then the player has the item. With one table, you'd have to write ``` select count(*) from player where player_id=@pid and (item1=@iid or item2=@iid or item3=@iid or item4=@iid or item6=@iid or item7=@iid or item8=@iid or item9=@iid or item10=@iid) ``` Even for a simple "is it equal" test this is a lot of extra code. And did you notice that I skipped item5? That's an easy mistake to make when typing these repetitive tests over and over. Trust me: I did it once when there were just 3 repeats. Then the program worked correctly if the desired value was in slot 1 or slot 3, but it failed when the value was in slot 2. In most of our tests we only put one item in, so it appeared to work. We didn't catch that one until we went to production. (3) Suppose you decide that 10 is not the right limit, and you want to change it to 12. With two tables, the only place that would be changed would be the code where you create new ones, to impose the limit of 12 instead of 10. 
If you did it right, that 10 is a symbolic variable somewhere and not hard-coded, so you change one assignment statement. With one table, you have to change every query that reads that table. (4) Speaking of searching the table for a given item: with two tables, you can create an index on item\_id. With one table, you need an index on item1, another index on item2, another index on item3, etc. There are 10 indexes for the system to maintain instead of 1. (5) Joins will be a particular nightmare. It seems likely that you might want to display a list of all the items a player has with some values from the item record, like a name. With two tables, that's ``` select item.name from player_item join item on item.item_id=player_item.item_id where player_id=@pid ``` With one table, it's ``` select item1.name, item2.name, item3.name, item4.name, item5.name, item6.name, item7.name, item8.name, item9.name, item10.name from player left join item item1 on item1.item_id=player.item1 left join item item2 on item2.item_id=player.item2 left join item item3 on item3.item_id=player.item3 ... ``` etc for 10 joins. And if the join is more complex than a simple one value with an id match, you have to repeat all the columns and all the conditions 10 times. Whew! And if later you decide you need to change the conditions, you have to make sure you make the same change ten times. (6) How do you handle adds and deletes? Does the order matter? Like if you use one table, and there are 4 items filled in, what happens if #3 is deleted? Can we just put a null in slot 3? Or do we need to move the value from slot 4 down to slot 3 and then null out slot 4? When we add new items can they always go on the end or do we have to put them in the middle sometimes? When we show lists of items to the users, do they have to appear in some order? With two tables, we could add a simple "order by name" or "order by turn\_acquired" to the query. 
With one table, you'd have to build an array in memory and sort them. Okay, not a huge deal to do once, but a pain if it comes up many times in the program. Etc.
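The "which players have item X" argument made in both answers is easy to see in running code. A sketch with Python's sqlite3 using the junction-table design (names and ids invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE items (id INTEGER PRIMARY KEY, type TEXT);
    CREATE TABLE inventory (player INT, item INT);
""")
conn.executemany("INSERT INTO players VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(9, "sword"), (10, "shield")])
conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                 [(1, 9), (1, 10), (2, 10)])

# "Which players hold item 9?" is one short join,
# no matter how many slots a player may have.
holders = [r[0] for r in conn.execute("""
    SELECT p.name FROM players p
    JOIN inventory i ON i.player = p.id
    WHERE i.item = 9
""")]
```

With the multiple-columns design the same question needs one `OR item_n = 9` predicate per slot, which is the maintenance burden both answers warn about.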
Store small array as multiple columns in SQL
[ "mysql", "sql", "database-design" ]
I am trying to write a simple database application in Go which accesses multiple data servers: some MySQL, some MSSQL, and some SQLite3. I am using the "database/sql" package to access them. ``` db, err := sql.Open(driver, dataSourceName) result, err := db.Exec( "INSERT INTO users (name, age) VALUES (?, ?)", "gopher", 27, ) ``` I need to log the SQL queries to the individual servers for debugging and auditing. How can I achieve that?
Assuming that you don't want to use the servers' logging facilities, the obvious solution would be to simply log all queries as they are made. ``` db, err := sql.Open(driver, dataSourceName) log.Println(dataSourceName, "INSERT INTO users (name, age) VALUES (?, ?)", "gopher", 27) result, err := db.Exec( "INSERT INTO users (name, age) VALUES (?, ?)", "gopher", 27, ) ``` This is the basic solution for your problem. You can refine it in multiple ways: * Create a `log.Logger` exclusively for your queries, so you can direct it to a particular output destination * Wrap the said `log.Logger` and the `sql.DB` objects in a special struct that will log queries as they are done Here is a rough example of the said struct: ``` type DB struct { db *sql.DB dsn string log *log.Logger } func NewDB(driver, dsn string, log *log.Logger) (*DB, error) { db, err := sql.Open(driver, dsn) if err != nil { return nil, err } return &DB { db: db, dsn: dsn, log: log, }, nil } func (d *DB) Exec(query string, args ...interface{}) (sql.Result, error) { d.log.Println(d.dsn, query, args) return d.db.Exec(query, args...) } ``` And how you would use it: ``` l := log.New(os.Stdout, "[sql]", log.LstdFlags) db, _ := NewDB(driver, dataSourceName, l) result, _ := db.Exec( "INSERT INTO users (name, age) VALUES (?, ?)", "gopher", 27, ) ``` Obviously, you can refine this design again, by adding error reporting, duration of the queries, etc.
There is now a package for this, so you don't have to write the wrapper yourself: [sqldblogger](https://pkg.go.dev/github.com/simukti/sqldb-logger) adds query logging without requiring changes to existing `sql.DB` code.
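For comparison, some database drivers expose a logging hook so no wrapper is needed at all. Python's sqlite3, for instance, has `set_trace_callback`, which receives each executed statement as a string; Go's database/sql has no equivalent hook, which is why the wrapper approach above is the usual answer there:

```python
import sqlite3

logged = []
conn = sqlite3.connect(":memory:")
conn.set_trace_callback(logged.append)  # called once per executed statement

conn.execute("CREATE TABLE users (name TEXT, age INT)")
conn.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("gopher", 27))
```

After the two statements run, `logged` contains both statement texts, ready to be routed to any logger.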
How to log queries to database drivers?
[ "sql", "database", "logging", "go" ]
I have the following table: ``` oGroup oDate oValue -------------------------------- A 2014-01-01 20 A 2014-01-02 30 B 2014-01-01 5 B 2014-01-02 15 C 2014-01-01 40 C 2014-01-02 60 ``` I want to have the following result: ``` oGroup 2014-01-01 2014-01-02 ---------------------------------- A 20 30 B 5 15 C 40 60 ``` How can I achieve this in SQL Server 2008? Thank you.
Read about [PIVOT](http://blog.sqlauthority.com/2008/06/07/sql-server-pivot-and-unpivot-table-examples/) ``` select * from piv pivot ( min(oValue) for oDate in([2014-01-01],[2014-01-02]) )as pivv; ``` If the dates are not known in advance, use dynamic SQL ``` declare @query nvarchar(max) declare @cols nvarchar(max) select @cols=stuff((select distinct ','+QUOTENAME(oDate) from piv for xml path(''),TYPE).value('.','nvarchar(max)'),1,1,'') select @query='select * from piv pivot ( min(oValue) for oDate in(' + @cols + ') )as pivv;' exec (@query) ``` [Fiddle Demo](http://www.sqlfiddle.com/#!6/5263c/1)
You could use [**dynamic crosstab**](http://www.sqlservercentral.com/articles/Crosstab/65048/): ``` DECLARE @sql1 VARCHAR(4000) = '', @sql2 VARCHAR(4000) = '', @sql3 VARCHAR(4000) = '' SELECT @sql1 = 'SELECT oGroup' + CHAR(10) SELECT @sql2 = @sql2 + ' ,MAX(CASE WHEN oDate = CAST(''' + CONVERT(VARCHAR(10), oDate, 112) + ''' AS DATE) THEN oValue END) AS [' + CONVERT(VARCHAR(10), oDate, 120) +']' + CHAR(10) FROM( SELECT DISTINCT oDate FROM SampleData )t ORDER BY oDate SELECT @sql3 = 'FROM SampleData GROUP BY oGroup ORDER BY oGroup' PRINT(@sql1 + @sql2 +@sql3) EXEC (@sql1 + @sql2 +@sql3) ``` [**SQL Fiddle**](http://sqlfiddle.com/#!6/a51f5/1/0) --- This is what the `PRINT` outputs: ``` SELECT oGroup ,MAX(CASE WHEN oDate = CAST('20140101' AS DATE) THEN oValue END) AS [2014-01-01] ,MAX(CASE WHEN oDate = CAST('20140102' AS DATE) THEN oValue END) AS [2014-01-02] FROM SampleData GROUP BY oGroup ORDER BY oGroup ```
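The dynamic-crosstab answer assembles the column list in T-SQL itself; when a client language is available, the same string assembly can be done there. A sketch with Python's sqlite3 (the naive quoting is acceptable here only because the date values come from our own table; it is not injection-safe in general):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (ogroup TEXT, odate TEXT, ovalue REAL)")
conn.executemany("INSERT INTO tab VALUES (?, ?, ?)", [
    ("A", "2014-01-01", 20), ("A", "2014-01-02", 30),
    ("B", "2014-01-01", 5),  ("B", "2014-01-02", 15),
])

# Build one MAX(CASE ...) column per distinct date, then run the generated SQL
dates = [r[0] for r in
         conn.execute("SELECT DISTINCT odate FROM tab ORDER BY odate")]
cols = ", ".join(
    f"MAX(CASE WHEN odate = '{d}' THEN ovalue END) AS \"{d}\"" for d in dates)
sql = f"SELECT ogroup, {cols} FROM tab GROUP BY ogroup ORDER BY ogroup"
rows = conn.execute(sql).fetchall()
```

This mirrors the `@sql2` loop of the T-SQL answer: one conditional-aggregate column per distinct date discovered at runtime.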
Transpose row to column in SQL Server 2008
[ "sql", "sql-server", "sql-server-2008" ]
I have an order table which has the customer id and order amount. I want to join these orders, but joined orders cannot exceed a certain amount. An example below: Let's say the maximum amount is 33 pallets and I have a table like this: ``` Order ID Client ID Amount 1 100001 10 2 100001 22 3 100001 13 4 100001 33 5 100001 1 6 100001 5 7 100001 6 ``` The result should be: ``` Order ID Client ID Amount Joined ID Joined Amount 1 100001 10 100001A 32 2 100001 22 100001A 32 3 100001 13 100001B 13 4 100001 33 100001C 33 5 100001 1 100001D 12 6 100001 5 100001D 12 7 100001 6 100001D 12 ``` Here, if we could also come up with a way to add orders numbered 5, 6, 7 to joined order 100001B it would be great. But even this solution will be enough. I have a few ideas on how to solve this, but I couldn't really come up with a working solution. I'll be handling around 2000 Order IDs like this, so I also don't want this to be a slow operation. I'm using SQL Server 2014
You can find the proposed solution (SQL definition) built with a recursive CTE here: <http://sqlfiddle.com/#!6/285c16/45> Basically the CTE iterates over the ordered list (by clientID, orderID) and checks whether the running amount stays within 33. I have added a second clientID to the mock data to test that the per-client grouping criteria are evaluated correctly. Here is the query to obtain the results: ``` -- prepare numbering for iteration with orders_nr as ( select row_number() over(order by clientID, id) as [nr], o.* from orders o ) , -- prepare sum totals re as ( select id, amount, amount as amount_total ,o.[nr] as nr, clientID from orders_nr o where o.[nr]=1 UNION ALL select o.id, o.amount, CASE WHEN o.clientID <> r.clientID then o.amount ELSE o.amount+ r.amount_total END, o.[nr] as nr, o.clientID from orders_nr o join re r on (o.[nr]=r.[nr]+1) ) , -- iterate total - evaluate current criteria (<=33) re2 as ( select re.id, re.amount, re.amount_total, re.[nr] as [group], re.[nr], re.clientID from re where re.[nr]=1 UNION ALL select r.id, r.amount, CASE WHEN r.amount+re2.amount_total >33 OR r.clientID<>re2.clientID then r.amount ELSE re2.amount_total+r.amount END as amount_total, CASE WHEN r.amount+re2.amount_total >33 OR r.clientID<>re2.clientID THEN r.[nr] ELSE re2.[group] END as [group], r.[nr], r.clientID from re r join re2 on (r.[nr]=re2.[nr]+1 ) ) , group_total AS ( select [group], clientID, max(amount_total) as total FROM re2 group by [group], clientID ), result as ( select r.id, r.clientID, r.amount, cast(r.clientid as varchar(20)) +'-'+char(64+cast( dense_rank() over( partition by r.clientID order by r.[clientID], r.[group]) as varchar(3))) as joinedID , gt.total as joinedAmount from re2 as r join group_total gt on (r.clientID=gt.clientID AND r.[group]=gt.[group]) ) select * from result ```
I tried to solve it with simple selects and without an explicit cursor, but that turned out to be hard. I solved it and got exactly what you wanted with: a temp table; a cursor; a counter for checking the sum of sequential amounts; the `CHAR()` function to generate letters. I calculated the values, inserted them into the temp table, and finally updated the temp table. The following is what I tried, and the [**DEMO IS HERE**](http://sqlfiddle.com/#!6/b96ce/2). ``` create table #tbl_name (OrderID int, ClientID int, Amount int, joinedId varchar(15) , joinedAmount int) insert #tbl_name(OrderID,ClientID,Amount) select OrderID,ClientID,Amount from tbl_name declare cr cursor for select orderId, clientId, amount from tbl_name order by OrderId declare @summedAmount int, @orderId int, @clientId int, @amount int, @counter int set @summedAmount=0 set @counter=65 open cr fetch from cr into @orderId,@clientId,@amount while (@@fetch_status=0) begin if (@amount + @summedAmount <= 33) begin set @summedAmount=@summedAmount+@amount update #tbl_name set joinedId=cast(@ClientId as varchar(10))+char(@counter), joinedAmount=@summedAmount where orderId=@orderId end else begin set @counter=@counter+1 set @summedAmount=@amount update #tbl_name set joinedId=cast(@ClientId as varchar(10))+char(@counter), joinedAmount=@Amount where orderId=@orderId end fetch from cr into @orderId,@clientId,@amount end close cr deallocate cr go with CTE as ( select JoinedId, max(joinedAmount) mx from #tbl_name group by JoinedId ) update #tbl_name set joinedAmount = CTE.mx from #tbl_name join CTE on #tbl_name.JoinedId=CTE.JoinedId select * from #tbl_name drop table #tbl_name ```
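The cursor's logic — walk orders in id order and start a new group whenever the running total would exceed the limit — is a one-pass greedy that is easy to express client-side too. A sketch in plain Python on the question's data (like both answers, it does not try to repack orders 5–7 into group B; that would be a bin-packing optimization):

```python
# orders as (order_id, client_id, amount), already sorted by client and id
orders = [(1, 100001, 10), (2, 100001, 22), (3, 100001, 13), (4, 100001, 33),
          (5, 100001, 1), (6, 100001, 5), (7, 100001, 6)]
LIMIT = 33

groups = []   # (joined_id, member_order_ids, joined_amount)
letter, total, members, client = "A", 0, [], None
for oid, cid, amount in orders:
    # Close the open group when the client changes or the limit would burst
    if client is not None and (cid != client or total + amount > LIMIT):
        groups.append((f"{client}{letter}", members, total))
        letter = "A" if cid != client else chr(ord(letter) + 1)
        total, members = 0, []
    client = cid
    total += amount
    members.append(oid)
groups.append((f"{client}{letter}", members, total))
```

The output reproduces the grouping from the question: A = {1, 2} at 32, B = {3} at 13, C = {4} at 33, D = {5, 6, 7} at 12.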
SQL Group By Certain Amounts
[ "sql", "algorithm", "t-sql", "sql-server-2014" ]
I have the following table: ``` -------------------------------------------- Group Date Value1 Value2 Value3 -------------------------------------------- A 2014-01-01 10 10 5 B 2014-01-01 12 20 25 C 2014-01-01 20 40 50 ``` I want to have the following result: ``` ------------------------------------------- NewCol A B C ------------------------------------------- Value1 10 12 20 Value2 10 20 40 Value3 5 25 50 ``` How can I do this in sql server 2008? Thank you.
For interchanging rows and columns, you need to `UNPIVOT` (convert columns into row values) first and then `PIVOT` (rows to columns) based on the `UNPIVOT` result. ``` -- Here is the result SELECT * FROM ( -- Unpivot here using CROSS APPLY SELECT [Group], [Values],COLNAMES FROM YOURTABLE CROSS APPLY(VALUES (Value1,'Value1'),(Value2,'Value2'),(Value3,'Value3')) AS COLUMNNAMES([Values],COLNAMES) )TAB PIVOT ( -- Specify the values to hold in pivoted column MIN([Values]) -- Specify the name of columns FOR [Group] IN([A],[B],[C]) )P ORDER BY COLNAMES ``` **WORKING OF QUERY** You can use `CROSS APPLY` to `UNPIVOT`. The bare `Value1` refers to the column holding the values; `'Value1'` (in single quotes) is the hard-coded column name, which appears in the `COLNAMES` column. The usage of `CROSS APPLY` will generate the following result. * **[SQL FIDDLE](https://data.stackexchange.com/stackoverflow/query/293217/cross-apply)** ![enter image description here](https://i.stack.imgur.com/xRYUl.jpg) Now with the data generated from `CROSS APPLY`, you are going to `PIVOT`, which forms the following result. * **[SQL FIDDLE](https://data.stackexchange.com/stackoverflow/query/293218)** ![enter image description here](https://i.stack.imgur.com/uAMyT.jpg) Sometimes you cannot know the values in column `Group` in advance. In such cases you need to use dynamic SQL. The first step is to get the values of the `Group` column into a variable. ``` DECLARE @cols NVARCHAR (MAX) SELECT @cols = STUFF((SELECT ',' + QUOTENAME([Group]) FROM ( SELECT distinct [Group] from YOURTABLE ) c FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') ``` Now use the `PIVOT` query with dynamic SQL. Dynamic SQL is needed because SQL Server cannot take the column names from a variable any other way. 
``` DECLARE @query NVARCHAR(MAX) SET @query = ' SELECT * FROM ( -- Unpivot here using CROSS APPLY SELECT [Group], [Values],COLNAMES FROM YOURTABLE CROSS APPLY(VALUES (Value1,''Value1''),(Value2,''Value2''),(Value3,''Value3'')) AS COLUMNNAMES([Values],COLNAMES) ) x PIVOT ( -- Specify the values to hold in pivoted column MIN([Values]) -- Get the column names from variable FOR [Group] IN('+@cols+') ) p ORDER BY COLNAMES;' EXEC SP_EXECUTESQL @query ``` * **[SQL FIDDLE](https://data.stackexchange.com/stackoverflow/query/293220/dynamic-sql)** Hope you understand the concepts and got your result. Any clarifications, feel free to ask.
You can use the PIVOT operator. See the details below: ``` select 'Value1' NewCol, A, B, C from (select [Group], Value1 Value from YourTable) g pivot ( max(g.Value) for g.[Group] in (A, B, C)) p union all select 'Value2', A, B, C from (select [Group], Value2 Value from YourTable) g pivot ( max(g.Value) for g.[Group] in (A, B, C)) p union all select 'Value3', A, B, C from (select [Group], Value3 Value from YourTable) g pivot ( max(g.Value) for g.[Group] in (A, B, C)) p ``` Or You can combine PIVOT, and UNPIVOT operators: ``` select NewCol, A, B, C from ( select * from YourTable unpivot ( Value for NewCol in (Value1, Value2, Value3) ) up ) up pivot ( max(Value) for [Group] in (A, B, C) ) p ```
Column to Row SQL syntax
[ "sql", "sql-server", "sql-server-2008" ]
I have 3 tables ``` order (customerID, productID) product (productID, prodName, classID) classification (classID, className) ``` I need to get prodName and className where customerID=12345 ``` SELECT prodName FROM product AND className FROM classification LEFT JOIN order WHERE order.customerID=12345 ``` It's not working; I think I may be joining the wrong table or something. What's wrong? Thanks a bunch.
You need to specify which field each join is on: link `order` to `product` on `productID`, and `product` to `classification` on `classID`, like this: ``` SELECT prodName, className FROM "order" JOIN product ON product.productID = "order".productID JOIN classification ON classification.classID = product.classID WHERE "order".customerID = 12345 ``` Note that `order` is a reserved word, so the table name has to be quoted — `"order"` in standard SQL, backticks in MySQL, square brackets in SQL Server.
Your syntax is a little off; ``` SELECT prodName FROM product ,classification LEFT JOIN order ON(order.productID = product.productID) WHERE order.customerID = 12345 and classification.classID = product.classID ``` That will return your Prod Name. Have to be honest, I'm not sure why you're doing a left join here; you never do anything with it. LEFT JOIN will return you all of the records from Classification and ONLY the matching records from Order. The below would do the same thing without the pointless LEFT JOIN ``` SELECT prodName FROM product ,classification ,order WHERE order.customerID = 12345 AND product.classID = classification.classID AND order.productID = product.productID ``` In fact, the only reason I see for even joining the Classification table is to ensure that the order has a valid classification, which you should be verifying on the record's creation anyways. You might want to rethink what you're trying to accomplish here.
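The join chain the schema implies — order to product on `productID`, then product to classification on `classID` — can be verified quickly with Python's sqlite3 (sample values invented; the table is named `orders` here because `order` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customerID INT, productID INT);
    CREATE TABLE product (productID INT, prodName TEXT, classID INT);
    CREATE TABLE classification (classID INT, className TEXT);
""")
conn.execute("INSERT INTO orders VALUES (12345, 1)")
conn.execute("INSERT INTO product VALUES (1, 'widget', 7)")
conn.execute("INSERT INTO classification VALUES (7, 'gadgets')")

# One join per foreign key, each with its own ON condition
rows = conn.execute("""
    SELECT p.prodName, c.className
    FROM orders o
    JOIN product p        ON p.productID = o.productID
    JOIN classification c ON c.classID  = p.classID
    WHERE o.customerID = 12345
""").fetchall()
```

Each JOIN carries its own ON condition, which is exactly what the question's attempt was missing.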
joining/searching 3 tables
[ "", "sql", "left-join", "" ]
I've got this table, ``` Name Rating A 2 B 1 C 5 D 3 E 1 F 4 ``` and I've got a rating system ``` 1-Excellent, 2-Very Good, 3-Good, 4-OK, 5-Poor ``` I was wondering if I could replace the numeric values in the table to get the following result table. ``` Name Rating A Very Good B Excellent C Poor D Good E Excellent F OK ``` Thanks
Use a `CASE` statement. Of course, this will only work if your column is not set to a numeric value. ``` UPDATE tblRatings SET Rating = CASE Rating WHEN 1 THEN 'Excellent' WHEN 2 THEN 'Very Good' WHEN 3 THEN 'Good' WHEN 4 THEN 'OK' ELSE 'Poor' END ``` If it is, you'll need to use a `SELECT` statement: ``` SELECT CASE Rating WHEN 1 THEN 'Excellent' WHEN 2 THEN 'Very Good' WHEN 3 THEN 'Good' WHEN 4 THEN 'OK' ELSE 'Poor' END FROM tblRatings ```
I don't think it's a good idea to update your data in place; it's better to store the id of the rating, not a text representation of the data. But you can query your table and replace the int with text: ``` select Name, case Rating when 1 then 'Excellent' when 2 then 'Very Good' when 3 then 'Good' when 4 then 'OK' when 5 then 'Poor' end as Rating from <your table> ``` Or you can create a lookup table and join with it: ``` create table Rating (id int, [desc] nvarchar(128)) insert into Rating (id, [desc]) select 1, 'Excellent' union all select 2, 'Very good' union all select 3, 'Good' union all select 4, 'OK' union all select 5, 'Poor' select t.Name, R.[desc] as Rating from <your table> as t left outer join Rating as R on R.id = t.Rating ```
Replace column values
[ "", "sql", "sql-server", "" ]
Let's cut to the chase. I have a table which looks like this one (using SQL Server 2014): **DEMO:** **<http://sqlfiddle.com/#!6/75f4a/1/0>** ``` CREATE TABLE TAB ( DT datetime, VALUE float ); INSERT INTO TAB VALUES ('2015-05-01 06:00:00', 12), ('2015-05-01 06:20:00', 10), ('2015-05-01 06:40:00', 11), ('2015-05-01 07:00:00', 14), ('2015-05-01 07:20:00', 15), ('2015-05-01 07:40:00', 13), ('2015-05-01 08:00:00', 10), ('2015-05-01 08:20:00', 9), ('2015-05-01 08:40:00', 5), ('2015-05-02 06:00:00', 19), ('2015-05-02 06:20:00', 7), ('2015-05-02 06:40:00', 11), ('2015-05-02 07:00:00', 9), ('2015-05-02 07:20:00', 7), ('2015-05-02 07:40:00', 6), ('2015-05-02 08:00:00', 10), ('2015-05-02 08:20:00', 19), ('2015-05-02 08:40:00', 15), ('2015-05-03 06:00:00', 8), ('2015-05-03 06:20:00', 8), ('2015-05-03 06:40:00', 8), ('2015-05-03 07:00:00', 21), ('2015-05-03 07:20:00', 12), ('2015-05-03 07:40:00', 7), ('2015-05-03 08:00:00', 10), ('2015-05-03 08:20:00', 4), ('2015-05-03 08:40:00', 10) ``` I need to: * sum values hourly * select the smallest 'hourly sum' for each day * select the hour for which that sum occurred In other words, I want to have a table which looks like this: ``` DATE | SUM VAL | ON HOUR -------------------------- 2015-03-01 | 24 | 8:00 2015-03-02 | 22 | 7:00 2015-03-03 | 24 | 6:00 ``` The first two points are very easy (check out the sqlfiddle). I have a problem with the third one. I can't simply select Datepart(HOUR, DT) because it has to be aggregated. I was trying to use JOINs and a WHERE clause, but with no success (some values may occur in the table more than once, which threw an error). I'm kinda new with SQL and I got stuck. Need your help SO! :)
One way is to use the set with minimum hourly values as a derived table and join against that. I would do something like this: ``` ;WITH CTE AS ( SELECT Cast(Format(DT, 'yyyy-MM-dd HH:00') AS datetime) AS DT, SUM(VALUE) AS VAL FROM TAB GROUP BY Format(DT, 'yyyy-MM-dd HH:00') ) SELECT b.dt "Date", val "sum val", cast(min(a.dt) as time) "on hour" FROM cte a JOIN ( SELECT Format(DT,'yyyy-MM-dd') AS DT, MIN(VAL) AS DAILY_MIN FROM cte HOURLY GROUP BY Format(DT,'yyyy-MM-dd') ) b ON CAST(a.DT AS DATE) = b.DT and a.VAL = b.DAILY_MIN GROUP BY b.DT, a.VAL ``` This would get: ``` Date sum val on hour 2015-05-01 24 08:00:00.0000000 2015-05-02 22 07:00:00.0000000 2015-05-03 24 06:00:00.0000000 ``` I used min() for the time part as your sample data has the same low value for two separate hours on the 3rd. If you want both, remove the min() function from the outer select and the group by. Then you would get: ``` Date sum val on hour 2015-05-01 24 08:00:00.0000000 2015-05-02 22 07:00:00.0000000 2015-05-03 24 06:00:00.0000000 2015-05-03 24 08:00:00.0000000 ``` I'm sure it can be improved, but you should get the idea.
``` DECLARE @TAB TABLE ( DT DATETIME , VALUE FLOAT ); INSERT INTO @TAB VALUES ( '2015-05-01 06:00:00', 12 ), ( '2015-05-01 06:20:00', 10 ), ( '2015-05-01 06:40:00', 11 ), ( '2015-05-01 07:00:00', 14 ), ( '2015-05-01 07:20:00', 15 ), ( '2015-05-01 07:40:00', 13 ), ( '2015-05-01 08:00:00', 10 ), ( '2015-05-01 08:20:00', 9 ), ( '2015-05-01 08:40:00', 5 ), ( '2015-05-02 06:00:00', 19 ), ( '2015-05-02 06:20:00', 7 ), ( '2015-05-02 06:40:00', 11 ), ( '2015-05-02 07:00:00', 9 ), ( '2015-05-02 07:20:00', 7 ), ( '2015-05-02 07:40:00', 6 ), ( '2015-05-02 08:00:00', 10 ), ( '2015-05-02 08:20:00', 19 ), ( '2015-05-02 08:40:00', 15 ), ( '2015-05-03 06:00:00', 8 ), ( '2015-05-03 06:20:00', 8 ), ( '2015-05-03 06:40:00', 8 ), ( '2015-05-03 07:00:00', 21 ), ( '2015-05-03 07:20:00', 12 ), ( '2015-05-03 07:40:00', 7 ), ( '2015-05-03 08:00:00', 10 ), ( '2015-05-03 08:20:00', 4 ), ( '2015-05-03 08:40:00', 10 ); WITH cteh AS ( SELECT DT , CAST(dt AS DATE) AS D , SUM(VALUE) OVER ( PARTITION BY CAST(dt AS DATE), DATEPART(hh, DT) ) AS S FROM @TAB ), ctef AS ( SELECT * , ROW_NUMBER() OVER ( PARTITION BY D ORDER BY S ) AS rn FROM cteh ) SELECT D , S , CAST(DT AS TIME) AS H FROM ctef WHERE rn = 1 ``` Output: ``` D S H 2015-05-01 24 08:00:00.0000000 2015-05-02 22 07:00:00.0000000 2015-05-03 24 06:00:00.0000000 ```
SQL query - Find daily MIN value from hourly sums
[ "", "sql", "sql-server", "t-sql", "" ]
I need to write an SQL query to combine two tables... ``` Table: names +-------------+---------------+ | id | Name | +-------------+---------------+ | 1 | Bob | | 2 | Geoff | | 3 | Jim | +-------------+---------------+ Table: attributes +-------------+---------------+ | id | Attribute | +-------------+---------------+ | 1 | Age | | 2 | Height | | 3 | Weight | +-------------+---------------+ ``` ...so that each record in the **names** table has a copy of each record in the Attributes table. i.e. ``` +-------------+---------------+---------------+ | id | Name | Attribute | +-------------+---------------+---------------+ | 1 | Bob | Age | | 2 | Bob | Height | | 3 | Bob | Weight | | 4 | Geoff | Age | | 5 | Geoff | Height | | 6 | Geoff | Weight | | 7 | Jim | Age | | 8 | Jim | Height | | 9 | Jim | Weight | +-------------+---------------+---------------+ ``` Is there a way to do this?
``` SELECT n.name, a.attribute from names n cross join attributes a ``` The same can be achieved with the following query: ``` SELECT n.name, a.attribute from names n join attributes a ON 1=1 ``` The only thing is that you show id from 1 to 9 in the output but there are no such IDs in the sample data. But if you have a table with autoincrement, then after you insert this data the IDs will be as you expected. UPDATED: As suggested in the comments, if you need the id you can do the following: ``` SELECT (n.id-1)*3+a.id AS id, n.name, a.attribute from names n cross join attributes a ```
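For a quick sanity check outside MySQL (a sketch using Python's built-in sqlite3; the table and column names mirror the question), the cross join of 3 names and 3 attributes should always yield 3 x 3 = 9 rows:

```python
import sqlite3

# Build the two example tables in an in-memory database
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE names (id INTEGER, name TEXT);
    CREATE TABLE attributes (id INTEGER, attribute TEXT);
    INSERT INTO names VALUES (1, 'Bob'), (2, 'Geoff'), (3, 'Jim');
    INSERT INTO attributes VALUES (1, 'Age'), (2, 'Height'), (3, 'Weight');
""")

# Every name is paired with every attribute
rows = conn.execute("""
    SELECT n.name, a.attribute
    FROM names n CROSS JOIN attributes a
    ORDER BY n.id, a.id
""").fetchall()

print(len(rows))  # 9
print(rows[0])    # ('Bob', 'Age')
```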
If you want to get an incremented id, just use a variable: ``` select (@rn := @rn + 1) as id, n.name, a.attribute from names n cross join attributes a cross join (select @rn := 0) vars; ```
How can I combine these two tables in SQL so that each row of the first has a copy of every row in the second?
[ "", "mysql", "sql", "" ]
If I had a table with 3 columns, `Column A` `Column B` and `Column C`. is the query faster if my select statement is like `Select Column A, Column B, Column C from Table`? what if it's `Select Column A, Column C, Column B from Table`? Same for update and insert. `Update Table set Column A = '', Column B = '', Column C = ''` and `Update Table set Column A = '', Column C = '', Column B = ''` `Insert into Table (Column A, Column B, Column C) Values()` and `Insert into Table (Column A, Column C, Column B) Values()`
I don't believe *any* of the SQL standards dictate the performance requirements of individual statements so it's really totally under the control of the implementation. However, I'd be *very* surprised if there was a substantial difference since the bulk of the time would be simply retrieving the data and delivering it. Most DBMS' do a fair bit of analysis on statements before trying to execute them, so that they can reduce the effect on the retrieval phase. Things like figuring out whether all data can be retrieved by an index-only read, or choosing the correct index to use the minimum cardinality. So it's possible that your column order may not survive the transition from analysis to execution anyway (it'll have to be re-instated for a `select` when delivering the data to the user, but not so for `insert` or `update`). There may be a *minuscule* difference caused by re-ordering data from the order in which its stored in a record but, if it's significant, you should move to a better DBMS.
No, not in the SELECT or UPDATE columns. Places where order MIGHT matter are the GROUP BY/ORDER BY clauses. Predicates in the WHERE clause and JOIN conditions will be re-ordered by the optimizer based on cost.
Does the order of columns in an sql statement affect the speed of the query?
[ "", "mysql", "sql", "sql-server", "oracle", "" ]
I need to transform some strings from this format: ``` "1020202020" ``` To ``` "1-0-2-0-2-0-2-0-2-0" ``` How can I do that in a simple way? Thanks
Every day I wake up and think to myself "What impossible thing will CTEs make possible today?" ``` ;with cte as ( select '1020202020' inputstring, convert(varchar(max),'') outputstring union all select substring(inputstring,2,len(inputstring)), outputstring + left(inputstring,1) + '-' + case when len(inputstring) = 2 then right(inputstring,1) else '' end from cte where len(inputstring) > 1 ) select top 1 outputstring from cte order by len(outputstring) desc ```
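If the formatting can instead happen in application code rather than T-SQL, the transformation is a one-liner; a hypothetical Python equivalent:

```python
def hyphenate(s: str) -> str:
    """Insert '-' between every pair of adjacent characters."""
    return "-".join(s)

print(hyphenate("1020202020"))  # 1-0-2-0-2-0-2-0-2-0
```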
Your question is unclear on what the exact rules are for the placement of the hyphens. So, there might be some clever method using `replace()`. For your example string: ``` select replace(replace(col, '02', '-0-2'), '20', '2-0') ```
Add a '-' in a varchar SQL
[ "", "sql", "t-sql", "" ]
When trying to delete a user when logged in as the administrative user, I get the following error in my heroku logs: ``` 2015-03-24T07:47:23.506661+00:00 app[web.1]: Started DELETE "/users/1" for 128.252.25.47 at 2015-03-24 07:47:23 +0000 2015-03-24T07:47:23.534256+00:00 app[web.1]: SQL (4.4ms) DELETE FROM "users" WHERE "users"."id" = $1 [["id", 1]] 2015-03-24T07:47:23.517508+00:00 app[web.1]: User Load (1.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT 1 [["id", 1]] 2015-03-24T07:47:23.541747+00:00 app[web.1]: Completed 500 Internal Server Error in 31ms 2015-03-24T07:47:23.534529+00:00 app[web.1]: PG::ForeignKeyViolation: ERROR: update or delete on table "users" violates foreign key constraint "fk_rails_274c57dd65" on table "cars" 2015-03-24T07:47:23.534534+00:00 app[web.1]: : DELETE FROM "users" WHERE "users"."id" = $1 2015-03-24T07:47:23.544385+00:00 app[web.1]: : DELETE FROM "users" WHERE "users"."id" = $1): 2015-03-24T07:47:23.544388+00:00 app[web.1]: 2015-03-24T07:47:23.544384+00:00 app[web.1]: DETAIL: Key (id)=(1) is still referenced from table "cars". 2015-03-24T07:47:23.544387+00:00 app[web.1]: app/controllers/users_controller.rb:45:in `destroy' 2015-03-24T07:47:23.544390+00:00 app[web.1]: 2015-03-24T07:47:23.536639+00:00 app[web.1]: (1.7ms) ROLLBACK 2015-03-24T07:47:23.544377+00:00 app[web.1]: 2015-03-24T07:47:23.510694+00:00 app[web.1]: Processing by UsersController#destroy as HTML 2015-03-24T07:47:23.534532+00:00 app[web.1]: DETAIL: Key (id)=(1) is still referenced from table "cars". 
2015-03-24T07:47:23.510838+00:00 app[web.1]: Parameters: {"authenticity_token"=>"iKb5lOUvx8qDoGHtVWNgYRL/dIk5zWFU5kiQpWtGOnabA+D7Yg1gj86NHwYKMmFeK5hh4F/3jndyKHHDHApVoQ==", "id"=>"1"} 2015-03-24T07:47:23.515104+00:00 app[web.1]: User Load (2.5ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT 1 [["id", 2]] 2015-03-24T07:47:23.544382+00:00 app[web.1]: ActiveRecord::InvalidForeignKey (PG::ForeignKeyViolation: ERROR: update or delete on table "users" violates foreign key constraint "fk_rails_274c57dd65" on table "cars" ``` I'm getting the error because of the table "cars" where a user `has_many` cars, so I think the error isn't with deleting a user but the cars associated with the user. But I don't understand how to fix my database so a user can be deleted. Here are my schemas: **schema.rb** ``` create_table "cars", force: :cascade do |t| t.text "year" t.text "brand" t.text "model" t.text "vin" t.text "mileage" t.integer "user_id" t.datetime "created_at", null: false t.datetime "updated_at", null: false t.string "picture" end add_index "cars", ["user_id"], name: "index_cars_on_user_id" create_table "users", force: :cascade do |t| t.string "name" t.string "email" t.datetime "created_at", null: false t.datetime "updated_at", null: false t.string "password_digest" t.string "phone" t.string "remember_digest" t.boolean "admin", default: false end add_index "users", ["email"], name: "index_users_on_email", unique: true add_index "users", ["phone"], name: "index_users_on_phone", unique: true ```
Add cascading delete to your User model: ``` class User < ActiveRecord::Base .... has_many :cars, :dependent => :delete_all end ```
To add an alternative for those who don't want to delete associated records (maybe you want to keep the cars records for later use). ``` class User < ActiveRecord::Base .... has_many :cars, dependent: :nullify end ``` The associated record foreign\_key will be set to nil. Can be useful for people like me landing on your question!
Foreign key error: Administrative user not allowed to delete other users in Rails app
[ "", "sql", "ruby-on-rails", "ruby", "heroku", "" ]
I have two SQL tables that have a key based off First\_Name, Last\_Name & Date\_Of\_Birth. I am trying to write a simple Transact-SQL query to find exceptions where a Table1 Patient\_Key is not found in Table2. Table 1 contains 152758 records and will be a new dataset every month. Table 2 contains 8388 records and will continue to grow. So my query as it stands takes over half an hour to return zero results (which I knew it would, having manually queried each table separately for distinct Patient\_Keys). Here is the query as it stands: ``` SELECT T1.* FROM TABLE1 T1 WHERE upper(T1.FIRST_NAME) + UPPER(t1.LAST_NAME) + REPLACE(CONVERT(VARCHAR (10), T1.DATE_OF_BIRTH, 120), '-','') NOT IN (SELECT DISTINCT upper(T2.FIRST_NAME) + UPPER(T2.LAST_NAME) + REPLACE(CONVERT(VARCHAR (10), T2.DATE_OF_BIRTH, 120), '-','') FROM TABLE2 T2) ``` Is there a more efficient, cost-saving SQL method?
Checking for rows to exist across multiple keys works much better with a `WHERE NOT EXISTS` correlated subquery: ``` SELECT * FROM Table1 T1 WHERE NOT EXISTS ( SELECT 1 FROM Table2 T2 WHERE T2.FIRST_NAME = T1.FIRST_NAME AND T2.LAST_NAME = T1.LAST_NAME AND T2.DATE_OF_BIRTH = T1.DATE_OF_BIRTH ) ``` If your database is actually configured to use case-sensitive collation, you should use the `COLLATE` option to enforce case-insensitive comparisons. It's significantly more efficient. There should be an equivalent case-insensitive collation whatever your configuration. ``` SELECT * FROM Table1 T1 WHERE NOT EXISTS ( SELECT 1 FROM Table2 T2 WHERE T2.FIRST_NAME = T1.FIRST_NAME COLLATE SQL_Latin1_General_CP1_CI_AS AND T2.LAST_NAME = T1.LAST_NAME COLLATE SQL_Latin1_General_CP1_CI_AS AND T2.DATE_OF_BIRTH = T1.DATE_OF_BIRTH ) ``` If you have an index on `Table1 (FIRST_NAME, LAST_NAME, DATE_OF_BIRTH)` and `Table2 (FIRST_NAME, LAST_NAME, DATE_OF_BIRTH)`, you should have even better performance.
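The anti-join shape is easy to verify on a toy dataset (a sketch with Python's built-in sqlite3; the names and dates below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (FIRST_NAME TEXT, LAST_NAME TEXT, DATE_OF_BIRTH TEXT);
    CREATE TABLE Table2 (FIRST_NAME TEXT, LAST_NAME TEXT, DATE_OF_BIRTH TEXT);
    INSERT INTO Table1 VALUES ('Ann', 'Lee', '1980-01-01'), ('Bob', 'Ray', '1975-06-30');
    INSERT INTO Table2 VALUES ('Ann', 'Lee', '1980-01-01');
""")

# Rows in Table1 with no matching (first, last, dob) key in Table2
missing = conn.execute("""
    SELECT T1.FIRST_NAME, T1.LAST_NAME
    FROM Table1 T1
    WHERE NOT EXISTS (
        SELECT 1 FROM Table2 T2
        WHERE T2.FIRST_NAME = T1.FIRST_NAME
          AND T2.LAST_NAME = T1.LAST_NAME
          AND T2.DATE_OF_BIRTH = T1.DATE_OF_BIRTH
    )
""").fetchall()

print(missing)  # [('Bob', 'Ray')]
```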
I would recommend `left join` for this: ``` SELECT T1.* FROM TABLE1 T1 LEFT JOIN TABLE2 T2 ON t1.first_name = t2.first_name AND t1.last_name = t2.last_name AND t1.date_of_birth = t2.date_of_birth WHERE t2.first_name IS NULL; ``` The problem with your query are the concatenations. If you are in an environment that has case sensitive collations, then you should add computed columns in both tables that are single case. For this query, create an index on `table2(first_name, last_name, date_of_birth)`. That should give you the performance you need.
Transact SQL using EXCEPT vs INTERSECT
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have two tables, object and object\_data, with object referencing object\_data by foreign key (the relation is 1:1). For a set of objects, I need to null their object\_data references and delete the corresponding object\_data rows, like this: ``` DELETE FROM object_data WHERE id IN ( SELECT object_data_id FROM object WHERE ... ); UPDATE object SET object_data_id = NULL WHERE ...; ``` Problem is, the foreign key constraint doesn't allow deleting object\_data rows that are still referenced from object. My current solution is reading the results of the `SELECT` into a list, then nulling the foreign keys and then deleting the object\_data rows in reasonable-sized batches using IN operator. Is there a better solution? Adding a column that refers back from object\_data to object is not an option.
Yes, use CTEs (Common Table Expressions): ``` WITH tmp AS (SELECT object_data_id FROM object WHERE ...), upd AS (UPDATE object SET object_data_id = NULL WHERE ...) DELETE FROM object_data WHERE id IN (SELECT object_data_id FROM tmp); ``` The first CTE, called tmp, is executed first and remembers the data you need later. The second CTE, called upd, sets the fields to NULL. Finally, the DELETE uses the data from tmp to perform the delete.
Sounds like the perfect job for the `ON DELETE SET NULL` modifier on your FK constraint. [Per documentation:](http://www.postgresql.org/docs/current/interactive/sql-createtable.html) > `SET NULL` > > Set the referencing column(s) to null. ``` ALTER TABLE object DROP CONSTRAINT <fk_name_here>; ALTER TABLE object ADD CONSTRAINT <fk_name_here> FOREIGN KEY (object_data_id) REFERENCES object_data (object_data_id) ON DELETE SET NULL; ``` Guessing the PK name is `object_data_id`, too. Then all you need is: ``` DELETE FROM object_data WHERE id ... ``` References in `object` are set to NULL automatically. Aside, this sounds odd: > I have two tables, object and object\_data, with object referencing > object\_data by foreign key (the relation is 1:1) Typically, I would expect the reference to be the other way round from `object_data` *to* `object` in such a scenario, but that's just guessing from the table names.
PostgreSQL: deleting rows referenced from another table
[ "", "sql", "postgresql", "foreign-keys", "referential-integrity", "" ]
I have an article table which has id and date (month/year) columns. First of all I would like to count ids and group them by date, then I would like to see which id belongs to which date group, in a single query like this: ``` id date count ----------------- 1 01/2015 2 2 01/2015 2 3 02/2015 1 4 03/2015 4 5 03/2015 4 6 03/2015 4 7 03/2015 4 ``` I have 2 queries ``` Select Count(id) from article group by date ``` and ``` Select id from article ``` which give these results: ``` count date id date ------------- ---------- 2 01/2015 1 01/2015 1 02/2015 2 01/2015 4 03/2015 3 02/2015 ``` I need a single query like ``` select count(id), id, date from.... ``` which brings the id, count and date columns together to use in my C# code. Can someone help me with this?
``` SELECT id, date, COUNT(*) OVER (PARTITION BY date) AS Count FROM article ``` [Sql fiddle](http://sqlfiddle.com/#!6/64fd9/2)
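On engines without window-function support, joining back to a grouped subquery gives the same result; a quick check with Python's built-in sqlite3 (a sketch, reusing the question's sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (id INTEGER, date TEXT);
    INSERT INTO article VALUES
        (1, '01/2015'), (2, '01/2015'), (3, '02/2015'),
        (4, '03/2015'), (5, '03/2015'), (6, '03/2015'), (7, '03/2015');
""")

# Same output as COUNT(*) OVER (PARTITION BY date), via a grouped subquery
rows = conn.execute("""
    SELECT a.id, a.date, c.cnt
    FROM article a
    JOIN (SELECT date, COUNT(*) AS cnt FROM article GROUP BY date) c
      ON c.date = a.date
    ORDER BY a.id
""").fetchall()

print(rows[0])   # (1, '01/2015', 2)
print(rows[-1])  # (7, '03/2015', 4)
```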
Can't quite do that in one query, but you could use a CTE to produce a single result set: ``` create table #tt (id int null, dt varchar(8)) insert #tt values (1,'01/2015'), (2,'01/2015'), (3,'02/2015'), (4,'03/2015'), (5,'03/2015'), (6,'03/2015'), (7,'03/2015') ;with cteCount(d, c) AS ( select dt, count(id) from #tt group by dt ) select id, dt, c from #tt a inner join cteCount cc on a.dt = cc.d drop table #tt ``` results: ``` id dt c 1 01/2015 2 2 01/2015 2 3 02/2015 1 4 03/2015 4 5 03/2015 4 6 03/2015 4 7 03/2015 4 ```
Select ID, Count(ID) and Group by Date
[ "", "sql", "sql-server", "date", "count", "group-by", "" ]
I'm attempting to return `0.0` if the following function does not return anything: ``` CREATE OR REPLACE FUNCTION get_height(firstn VARCHAR, lastn VARCHAR) RETURNS FLOAT AS $$ DECLARE height FLOAT = 0.0; BEGIN SELECT into height AVG(((p.h_feet * 12) + p.h_inches) * 2.54) FROM player p WHERE p.firstname = firstn AND p.lastname = lastn; RETURN height; END; $$ LANGUAGE plpgsql; ``` I've tried searching for it and found that `COALESCE` does not work. Does anyone have any ideas how to solve this? Table structure: ``` create table player( firstname text ,lastname text ,h_feet INT ,h_inches INT ); ``` Example data: ``` insert into player values ('Jimmy','Howard',6,2); ```
Here is the script I used. As you can see I run PostgreSQL 9.4.1. I used HeidiSQL to launch the queries. Are you sure that you correctly updated your function? I just noted that you used a different function ('player\_height') in a comment instead of 'get\_height' in you original post. ``` select version(); -- PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bit delimiter // CREATE OR REPLACE FUNCTION get_height(firstn VARCHAR, lastn VARCHAR) RETURNS FLOAT AS $$ DECLARE height FLOAT = 0.0; BEGIN SELECT into height AVG(((p.h_feet * 12) + p.h_inches) * 2.54) FROM players p WHERE p.firstname = firstn AND p.lastname = lastn; return coalesce(height, 0.0); END; $$ LANGUAGE plpgsql; delimiter; CREATE TABLE players ( firstname varchar(40), lastname varchar(40), h_feet int, h_inches int); insert into players values ('Jimmy', 'Howard', 6, 2); select * from get_height('Jimmy', 'Howard'); -- gives 187.96 select * from get_height('Random', 'Guy'); -- gives 0 ```
### Explanation The root of the problem is the fuzzy definition of "not anything". > if the following function does not return anything **`NULL` is not *nothing***, it's just unknown what it is exactly. "Nothing" in terms of SQL would rather be **no row** - nothing returned at all. That typically happens when no qualifying row is found. But when using **aggregate functions**, that cannot happen because, [per documentation:](https://www.postgresql.org/docs/current/functions-aggregate.html) > ... these functions return a null value when no rows are selected. `avg()` returns `NULL` when no rows are found (so not "nothing"). You get a row with a `NULL` value as result - which **overwrites** your init value in the code you demonstrate. ## Solution Wrap the result in `COALESCE`. Demonstrating a much simpler SQL function: ``` CREATE OR REPLACE FUNCTION get_height_sql(firstn varchar, lastn varchar) RETURNS float LANGUAGE sql STABLE AS $func$ SELECT COALESCE(avg(((p.h_feet * 12) + p.h_inches) * 2.54)::float, 0) FROM player p WHERE p.firstname = firstn AND p.lastname = lastn $func$; ``` *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=5e397594c9e9ae7eb3c20d7c8f73bc2e)* Old [sqlfiddle](http://sqlfiddle.com/#!17/4d5cd/1) The same can be used in a PL/pgSQL function. This function can be `STABLE`, might help with performance in the context of bigger queries. ## Other cases If you actually *can* get **no row** from a query, a simple `COALESCE` **would fail**, because it's never executed. For a **single value** result you can just wrap the whole query like: ``` SELECT COALESCE((SELECT some_float FROM ... WHERE ... LIMIT 1), 0) AS result ``` * [How to display a default value when no match found in a query?](https://stackoverflow.com/questions/8200462/how-to-display-a-default-value-when-no-match-found-in-a-query/8200473#8200473) PL/pgSQL has the ability to check before actually returning from the function. 
This works for **multiple rows with one or more columns**, too. There is an [example in the manual](https://www.postgresql.org/docs/current/plpgsql-control-structures.html#id-1.8.8.8.3.4.2) demonstrating the use of `FOUND`: ``` ... RETURN QUERY SELECT foo, bar ...; IF NOT FOUND THEN RETURN QUERY VALUES ('foo_default'::text, 'bar_default'::text); END IF; ... ``` Related: * [Return setof record (virtual table) from function](https://stackoverflow.com/questions/955167/return-setof-record-virtual-table-from-function/17247118#17247118) * [PostgreSQL - Check foreign key exists when doing a SELECT](https://stackoverflow.com/questions/29155188/postgresql-check-foreign-key-exists-when-doing-a-select/29158104#29158104) To always return **exactly one row**, you can also use **pure SQL**: ``` SELECT foo, bar FROM tbl UNION ALL SELECT 'foo_default', 'bar_default' LIMIT 1; ``` If the first `SELECT` returns no row, the second `SELECT` returns a row with defaults.
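The aggregate-returns-one-NULL-row behaviour described above is the same across engines; a minimal demonstration using Python's built-in sqlite3 (a sketch with the question's `player` columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE player (h_feet INTEGER, h_inches INTEGER)")

# No rows qualify, yet the aggregate still returns exactly one row
# whose value is NULL (None in Python)
raw = conn.execute("SELECT AVG(h_feet) FROM player").fetchone()
print(raw)  # (None,)

# Wrapping the aggregate in COALESCE turns that NULL into the default
fixed = conn.execute("SELECT COALESCE(AVG(h_feet), 0.0) FROM player").fetchone()
print(fixed)  # (0.0,)
```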
How to return a value from a function if no value is found
[ "", "sql", "postgresql", "function", "null", "plpgsql", "" ]
I've checked a few questions about this already but they don't do quite what I want and involve multiple columns and conditions so I'm going to distill it down to its most basic form. ``` SERIAL DATE_TIME 1 01/01/2015 12:00:00 1 01/01/2015 15:00:00 2 01/01/2015 13:00:00 1 02/01/2015 12:00:00 2 02/01/2015 12:00:00 2 02/01/2015 09:00:00 ``` Each serial may have multiple rows in the table and multiple entries per day. I want to return the latest entry per day for each serial. So in this scenario I would want: ``` SERIAL DATE_TIME 1 01/01/2015 15:00:00 1 02/01/2015 12:00:00 2 01/01/2015 13:00:00 2 02/01/2015 12:00:00 ``` Ideally a solution that works with any DBMS.
I'm not sure on the *any RDBMS* part, but in T-SQL, and probably many others, you could do this: ``` SELECT Serial, MAX(Date_Time) AS Date_Time FROM [Table] GROUP BY Serial, CAST(Date_Time AS DATE) ``` From what I can find, it *sounds* like `CAST` is an ANSI standard, so it should work with "everything," but I'm not familiar enough with the standards to give you a definitive answer on that, specifically. Here's a [SQL Fiddle](http://sqlfiddle.com/#!6/656c3/2) of it working. Just for good measure, I assume you don't actually need this, but if you want it to look exactly the same as your example (formatting permitting), you could easily append an `ORDER BY Serial, Date_Time` to sort them as you did.
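A portable sketch of the idea using Python's built-in sqlite3, where SQLite's `DATE()` plays the role of `CAST(Date_Time AS DATE)` and ISO-formatted strings stand in for datetimes (the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (serial INTEGER, date_time TEXT);
    INSERT INTO readings VALUES
        (1, '2015-01-01 12:00:00'), (1, '2015-01-01 15:00:00'),
        (2, '2015-01-01 13:00:00'), (1, '2015-01-02 12:00:00'),
        (2, '2015-01-02 12:00:00'), (2, '2015-01-02 09:00:00');
""")

# Latest entry per serial per day: group on the date part only,
# then take the MAX timestamp within each group
rows = conn.execute("""
    SELECT serial, MAX(date_time) AS latest
    FROM readings
    GROUP BY serial, DATE(date_time)
    ORDER BY serial, latest
""").fetchall()

for r in rows:
    print(r)
# (1, '2015-01-01 15:00:00')
# (1, '2015-01-02 12:00:00')
# (2, '2015-01-01 13:00:00')
# (2, '2015-01-02 12:00:00')
```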
Will not work on ALL SQL versions. However will work on ones that support WINDOW functions: ``` CREATE TABLE #a ( serial INT, date_time DATETIME ); INSERT INTO #a VALUES (1, '01/01/2015 12:00:00'); INSERT INTO #a VALUES (1, '01/01/2015 15:00:00'); INSERT INTO #a VALUES (2, '01/01/2015 13:00:00'); INSERT INTO #a VALUES (1, '02/01/2015 12:00:00'); INSERT INTO #a VALUES (2, '02/01/2015 12:00:00'); INSERT INTO #a VALUES (2, '02/01/2015 09:00:00'); SELECT serial, date_time FROM ( SELECT serial, date_time, ROW_NUMBER() OVER(PARTITION BY serial, CAST(date_time AS DATE) ORDER BY date_time DESC) row_num FROM #a ) a WHERE a.row_num = 1 ``` Another option with aggregates: ``` SELECT serial, max_date_time FROM ( SELECT serial, CAST(date_time AS DATE) AS serial_date, MAX(date_time) AS max_date_time FROM #a GROUP BY serial, CAST(date_time AS DATE) ) a ```
Selecting most recent date per ID for every day
[ "", "sql", "database", "" ]
I'm trying to do a group by and return a boolean for whether the group contains a value. I have two tables, a Title table and an Items table. Title.ID is a foreign key in my Items table. My Items table has multiple format codes, and I need to select a boolean indicating whether a group contains a given format code. The SQL statement looks like: ``` Select t.ID, Any(i.Formatcode = 'DOD') as hasDODItem From Title t join Item i on i.TitleID = t.ID group by t.ID ``` I'm looking for a function that would work like Any(i.Formatcode = 'DOD') as hasDODItem
``` select t.ID, max(case when i.Formatcode = 'DOD' then 1 else 0 end) as hasDODItem from Title as t inner join Item as i on i.TitleID = t.ID group by t.ID ``` or you can do this with a subquery and `exists`: ``` select t.ID, case when exists ( select * from Item as i where i.TitleID = t.ID and i.Formatcode = 'DOD' ) then 1 else 0 end as hasDODItem from Title as t ```
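A quick check of the grouped-flag pattern (a sketch via Python's built-in sqlite3, with made-up sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Title (ID INTEGER);
    CREATE TABLE Item (TitleID INTEGER, Formatcode TEXT);
    INSERT INTO Title VALUES (1), (2);
    INSERT INTO Item VALUES (1, 'DOD'), (1, 'ABC'), (2, 'ABC');
""")

# MAX over a 0/1 CASE acts as "any row in the group matches"
rows = conn.execute("""
    SELECT t.ID,
           MAX(CASE WHEN i.Formatcode = 'DOD' THEN 1 ELSE 0 END) AS hasDODItem
    FROM Title t
    JOIN Item i ON i.TitleID = t.ID
    GROUP BY t.ID
    ORDER BY t.ID
""").fetchall()

print(rows)  # [(1, 1), (2, 0)]
```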
Use `case`: ``` Select t.ID, (case when i.Formatcode = 'DOD' then 1 else 0 end) as hasDODItem From Title t join Item i on i.TitleID = t.ID group by t.ID ``` EDIT: If you just want to know ids that have a particular item, then use `exists` rather than a `join`: ``` Select t.ID, (case when exists (select 1 from item i where i.TitleID = t.ID and i.Formatcode = 'DOD' ) then 1 else 0 end) as hasDODItem From Title t ; ``` The `join` is not necessary. I thought you wanted it for some reason.
SQL Group by with return boolean for any
[ "", "sql", "sql-server", "group-by", "" ]
--- **EDIT 2:** What I want to ask is: If I understood right, an `EXISTS` statement does not *filter* any tuples from a relation. So for what purposes is it used, since it does not *filter* any tuples? **EDIT:** To elaborate more, in the first example, the following is present in the subquery: ``` cities_stores.store_type = stores.store_type ``` but as far as I understood, if the subquery evaluates to `TRUE`, then what will be returned is *all* `DISTINCT store_type`s `FROM stores`, is that right? If so, how is that useful? --- [Here](https://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html), it is stated that a `[NOT] EXISTS` statement will evaluate to either `TRUE` or `FALSE`. In the two examples in the link, if I assume that the first example's `EXISTS` statement and the second example's `NOT EXISTS` statement evaluate to `TRUE`, then the examples are supposed to be equivalent to the following: Example 1: ``` SELECT DISTINCT store_type FROM stores WHERE TRUE; ``` Example 2: ``` SELECT DISTINCT store_type FROM stores WHERE TRUE; ``` If that's the case, then both queries are equivalent, provided that their `[NOT] EXISTS` statements evaluate to `TRUE`. So my question is, how is using a `[NOT] EXISTS` statement useful? How does it work? If what I stated is wrong, could you please point out the wrong places and explain how `[NOT] EXISTS` statements work?
The example you're referring to uses correlated sub-queries. <http://en.wikipedia.org/wiki/Correlated_subquery> Here's an excellent answer explaining the mechanics behind it. <https://stackoverflow.com/a/11592277/2707705>
Note this statement in the subquery: `"WHERE cities_stores.store_type = stores.store_type"`. This is where the inner table is checked against the outer table. In the examples, cities_stores is the inner table and stores is the outer table in the main query. > EDIT 2: What I want to ask is: If I understood right, an EXISTS statement does not filter any tuples from a relation. So for what purposes is it used, since it does not filter any tuples? The filter you are looking for is in the subquery - `"WHERE cities_stores.store_type = stores.store_type"`. Not every record in the table 'stores' will return true from the subquery.
Usage of EXISTS in MySQL
[ "", "mysql", "sql", "" ]
I have a table with time periods like (no overlap in time periods): ``` start_date end_date ----------------------------- 12-aug-14 12-nov-14 12-jan-15 12-apr-15 12-jun-15 12-aug-15 ... 5 more ``` I'm trying to find the in between time periods - something like: ``` 12-nov-14 12-jan-15 12-apr-15 12-jun-15 ... ``` However, my queries are giving all time period differences like: ``` 12-nov-14 12-jan-15 12-nov-14 12-jun-15 ``` My query was: ``` select l1.end_date, l2.start_date from lease l1, lease l2 where l1.place_no = 'P1' and l2.place_no = 'P1' and l2.start_date > l1.end_date order by l1.end_date asc; ``` Any ideas? Thanks!
`sort` your table, use `rownum` and then `join` them: ``` WITH CTE AS ( SELECT START_DATE, END_DATE, ROWNUM AS RN FROM ( SELECT START_DATE, END_DATE FROM TABLE_NAME ORDER BY 1,2) ) SELECT T1.END_DATE, T2.START_DATE FROM CTE T1 JOIN CTE T2 ON T2.RN=T1.RN+1 ```
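The same pairing of each end date with the next start date can be sketched in application code as well (hypothetical Python, assuming the periods are already sorted and non-overlapping, with the sample dates from the question):

```python
from datetime import date

# Sorted, non-overlapping lease periods
periods = [
    (date(2014, 8, 12), date(2014, 11, 12)),
    (date(2015, 1, 12), date(2015, 4, 12)),
    (date(2015, 6, 12), date(2015, 8, 12)),
]

# Pair each period's end with the NEXT period's start, the same idea
# as joining row N to row N+1 on the generated row numbers
gaps = [(end, nxt_start) for (_, end), (nxt_start, _) in zip(periods, periods[1:])]

for gap_start, gap_end in gaps:
    print(gap_start, gap_end)
# 2014-11-12 2015-01-12
# 2015-04-12 2015-06-12
```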
Use `lead()`. That is what the function is designed for: ``` select l.*, lead(start_date) over (partition by place_no order by start_date) as next_start_date, lead(start_date) over (partition by place_no order by start_date) - end_date as gap from lease l where l.place_no = 'P1'; ``` There is no need for a `join` or even for subqueries -- unless you want to eliminate the additional row that has a NULL value because there is no next value.
SQL to find date intervals
[ "", "sql", "oracle", "" ]
This question is part of an insert statement in which I am trying to select a value based on another value and insert it into a column. For example, in my table OnlineServers I have columns: ``` ID, ServerID, OnlineSince ``` In my second table ImportServers, I have this column with data (the lines after NewYork and Paris are actually empty): ``` ImportServerName NewYork London Paris Tokyo ``` This question is related to SQL Server. In my third table, which is a look-up table called ServerLookup, I have these columns with data: ``` ID, ServerName 0 Not specified 1 NewYork 2 London 3 Tokyo 4 Munich 5 Salzburg ``` Question: I want an SQL statement which selects ID '0' from the ServerLookup table if the value of the column ImportServerName is empty. What I have so far is: ``` insert into OnlineServers (ServerID, OnlineSince) select ( select ID from ServerLookup where ServerLookup.ServerName = ImportServers.ServerName or ServerLookup.ServerName = '' ), GETDATE() from ImportServers ``` The problem I am facing is that if the server name is matched, it also returns an extra row with an empty server name. How can I fix this problem? Thanks. PS: Forgive me if there is any typo in the code
``` INSERT INTO OnlineServers (ServerID, OnlineSince) SELECT CASE s.ImportServerName WHEN '' THEN 0 ELSE l.ID END AS ServerID, GETDATE() FROM ImportServers s LEFT JOIN ServerLookup l ON s.ImportServerName = l.ServerName; ``` This should do it. You `LEFT JOIN` so you get every record from ImportServers and use `CASE` to get 0 where ImportServerName is blank.
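A runnable sketch of this lookup logic against SQLite, using the sample data from the question (the OnlineSince/`GETDATE()` part is omitted because it is SQL Server specific):

```python
import sqlite3

# LEFT JOIN + CASE lookup: a blank ImportServerName maps to ID 0
# ('Not specified'), everything else resolves via ServerLookup.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ImportServers (ImportServerName TEXT);
    CREATE TABLE ServerLookup (ID INTEGER, ServerName TEXT);
    INSERT INTO ImportServers VALUES ('NewYork'), ('London'), (''), ('Tokyo');
    INSERT INTO ServerLookup VALUES
        (0, 'Not specified'), (1, 'NewYork'), (2, 'London'), (3, 'Tokyo');
""")
rows = conn.execute("""
    SELECT CASE s.ImportServerName WHEN '' THEN 0 ELSE l.ID END AS ServerID
    FROM ImportServers s
    LEFT JOIN ServerLookup l ON s.ImportServerName = l.ServerName
""").fetchall()
ids = [r[0] for r in rows]
print(sorted(ids))
```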
Maybe something like this: ``` SELECT FIRST(column_name) FROM table_name; ``` Limiting the return to the first match. Also, shouldn't the FROM be inside the select brackets? ``` select FIRST(ID) from ServerLookup where ServerLookup.ServerName = ImportServers.ServerName or ServerLookup.ServerName = '' from ImportServers) ```
SQL Server select column while inserting
[ "", "sql", "sql-server", "" ]
I have the following table: ``` Game | Name | Result | Stage 1 A W F 1 B L 0 2 C L F 2 D W 0 3 E L 0 3 F W 0 ``` The output I am looking for: ``` Game | Name | Result | Stage 1 A W F 2 D W F ``` I only want to see the winners (W) from the results of stage F. I can do this via joins (which isn't very fast): ``` SELECT * FROM ( SELECT * FROM MyTable WHERE Stage = 'F' ) AA JOIN MyTable ON AA.Game = MyTable.Game AND AA.Result <> MyTable.Result ``` ..but I am wondering if there is an easier and more efficient way to do it. Plus this requires I do some more filtering afterwards. Thanks in advance!
To perform a job of this sort without a self-join or an equivalent, you would want to use SQL window functions, which MySQL does not support. The join you are using is not too bad, but this would be a little simpler: ``` SELECT players.Game AS Game, players.Name AS Name, 'W' AS Result, 'F' as Stage FROM MyTable stage JOIN MyTable players ON stage.Game = players.Game WHERE stage.stage = 'F' AND players.result = 'W' ```
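This simplified join can be sanity-checked against SQLite with the sample rows posted in the question (a sketch, not a MySQL-specific test):

```python
import sqlite3

# Self-join sketch: the 'stage' alias keeps only games that have a
# stage-F row; the 'players' alias supplies that game's winner.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MyTable (Game INT, Name TEXT, Result TEXT, Stage TEXT);
    INSERT INTO MyTable VALUES
        (1, 'A', 'W', 'F'), (1, 'B', 'L', '0'),
        (2, 'C', 'L', 'F'), (2, 'D', 'W', '0'),
        (3, 'E', 'L', '0'), (3, 'F', 'W', '0');
""")
rows = conn.execute("""
    SELECT players.Game, players.Name
    FROM MyTable stage
    JOIN MyTable players ON stage.Game = players.Game
    WHERE stage.Stage = 'F' AND players.Result = 'W'
    ORDER BY players.Game
""").fetchall()
print(rows)
```

Game 3 drops out because it has no stage-F row, which matches the expected output in the question.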
With "The winners only from stage F" you only need: ``` SELECT * FROM MyTable WHERE stage="F" and result="W"; ``` Your own result example however also shows name "D" which is not a winner in stage F.
SQL Query - Combined rows with selective columns [MySQL]
[ "", "mysql", "sql", "" ]
![enter image description here](https://i.stack.imgur.com/akypR.png) I want to display data in a family tree, which I can easily get using the query below. The problem is the WHERE condition on city. If I search for the city *paris*, it displays only the rows that belong to *paris*, but I want the rows whose family head belongs to *paris* (if the family head belongs to *paris*, display all children of that head; the children themselves need not be from Paris). -1 is the root-level head. ``` ; WITH CTE AS ( SELECT ROW_NUMBER() OVER (ORDER BY c.ContactId) as MCID , 1 as CID ,c.contactId FROM ContactDetail c where c.FamilyHeadId = -1 UNION ALL SELECT cte.MCID as MCID , CONVERT(INT,( CONVERT(VARCHAR(10),CTE.CID) + CONVERT(VARCHAR(10),ROW_NUMBER() OVER (ORDER BY p.ContactId)) ) ) AS CID ,p.contactId FROM ContactDetail p INNER JOIN cte ON p.FamilyHeadId = CTE.ContactId ) select * from CTE inner join ContactDetail b on cte.contactid = b.contactid order by mcid,CID ```
You can move the filter on "Paris" into the CTE's initial anchor query (i.e. the search for family head will also be constrained by the city). This predicate must not however be repeated on the recursive query after the `UNION` in the CTE (since there is no requirement to constrain children). I've assumed a column `CityName` on `ContactDetail` but presumably your actual model has a normalized relationship to a City table or such: ``` ; WITH CTE AS ( SELECT ROW_NUMBER() OVER (ORDER BY c.ContactId) as MCID, 1 as CID, c.contactId FROM ContactDetail c WHERE c.FamilyHeadId = -1 AND c.City = 'Paris' UNION ALL SELECT cte.MCID as MCID, CONVERT(INT, (CONVERT(VARCHAR(10),CTE.CID) + CONVERT(VARCHAR(10),ROW_NUMBER() OVER (ORDER BY p.ContactId)))) AS CID ,p.contactId FROM ContactDetail p INNER JOIN cte ON p.FamilyHeadId = CTE.ContactId ) SELECT * FROM CTE INNER JOIN ContactDetail b on cte.contactid = b.contactid order by mcid, CID; ``` [SqlFiddle here](http://sqlfiddle.com/#!6/ce9d0/1) Note that you can also avoid the final join in the outer query back to the same table by emitting all the required fields from the 2 CTE queries as per this [updated SqlFiddle](http://sqlfiddle.com/#!6/ce9d0/2)
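The key idea, constraining the city only in the anchor member of the recursive CTE, can be sketched with SQLite's `WITH RECURSIVE`; the names, cities, and IDs below are made up for illustration:

```python
import sqlite3

# Heads are restricted to Paris in the anchor query only; children are
# pulled in by the recursive member regardless of their own city.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ContactDetail (ContactId INT, FamilyHeadId INT, Name TEXT, City TEXT);
    INSERT INTO ContactDetail VALUES
        (1, -1, 'Head1',  'Paris'),
        (2,  1, 'Child1', 'London'),
        (3, -1, 'Head2',  'Berlin'),
        (4,  3, 'Child2', 'Paris');
""")
rows = conn.execute("""
    WITH RECURSIVE cte AS (
        SELECT ContactId, Name FROM ContactDetail
        WHERE FamilyHeadId = -1 AND City = 'Paris'
        UNION ALL
        SELECT c.ContactId, c.Name
        FROM ContactDetail c JOIN cte ON c.FamilyHeadId = cte.ContactId
    )
    SELECT Name FROM cte ORDER BY Name
""").fetchall()
names = [r[0] for r in rows]
print(names)
```

Note that Child1 (London) is included because its head is in Paris, while Child2 (Paris) is excluded because its head is not.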
Actually a table variable could be better than a cursor, so I used one here. Please check this query; it should help you out. ``` declare @allheadCount int, @allheadCounttree int , @contactId int declare @city varchar(50), @name varchar(50) set @allheadCounttree =0 DECLARE @familyTree TABLE ( familyId int IDENTITY(1,1), contactId int , familyHeadId int, Name varchar(50) NOT NULL, city varchar(50) NOT NULL, headcity varchar(50) NOT NULL ) select @allheadCount = count(contactid) from contactdetail where familyheadid = -1 WHILE @allheadCount <> @allheadCounttree BEGIN select top 1 @contactid = contactId, @name=name, @city=city from contactdetail where familyheadid = -1 and contactid not in (select distinct contactid from @familytree ) insert into @familyTree (contactid,familyheadid,name,city,headcity) Values (@contactId,-1,@name,@city,@city) insert into @familyTree (contactid,familyheadid,name,city,headcity) select contactid,@contactId,name,city,@city from contactdetail where familyheadid = @contactid select @allheadCounttree = count(contactid) from @familyTree where familyheadid = -1 END select * from @familyTree where city = lower('Paris') or headcity = lower('paris') order by familyid ```
Display data in family tree view with where condition
[ "", "sql", "sql-server", "sql-server-2008", "" ]