I need to select one row only which has the highest count. How do I do that? This is my current code: ``` select firstname, lastname, count(*) as total from trans join work on trans.workid = work.workid join artist on work.artistid = artist.artistid where datesold is not null group by firstname, lastname; ``` Example current: ``` FIRSTNAME | LASTNAME | TOTAL ------------------------------ Tom | Cruise | 3 Angelina | Jolie | 9 Britney | Spears | 5 Ellie | Goulding | 4 ``` I need it to select only this: ``` FIRSTNAME | LASTNAME | TOTAL -------------------------------- Angelina | Jolie | 9 ```
In Oracle 12, you can do: ``` select firstname, lastname, count(*) as total from trans join work on trans.workid = work.workid join artist on work.artistid = artist.artistid where datesold is not null group by firstname, lastname order by count(*) desc fetch first 1 row only; ``` In older versions, you can do this with a subquery: ``` select twa.* from (select firstname, lastname, count(*) as total from trans join work on trans.workid = work.workid join artist on work.artistid = artist.artistid where datesold is not null group by firstname, lastname order by count(*) desc ) twa where rownum = 1; ```
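Oracle-specific `fetch first` / `rownum` spellings aside, the underlying pattern — aggregate, sort descending, keep one row — is portable. A minimal sketch using Python's built-in `sqlite3`, with a made-up table mirroring the counts in the question (SQLite spells the row limit `LIMIT 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (firstname TEXT, lastname TEXT)")
# one fake row per sold work, mirroring the totals from the question
rows = ([("Tom", "Cruise")] * 3 + [("Angelina", "Jolie")] * 9
        + [("Britney", "Spears")] * 5 + [("Ellie", "Goulding")] * 4)
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

top = conn.execute("""
    SELECT firstname, lastname, COUNT(*) AS total
    FROM sales
    GROUP BY firstname, lastname
    ORDER BY total DESC
    LIMIT 1  -- SQLite/MySQL/Postgres spelling of FETCH FIRST 1 ROW ONLY
""").fetchone()
print(top)
```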
You can add `order by total desc` and `fetch first 1 row only` (*since Oracle 12c R1 only*; in older versions you should use your result as a temp table and `select` from it with a `rownum = 1` limitation in the `where` clause), provided `total` can't be the same for different groups. The other way is to add a `having` clause, so you can list all people with the maximum `total`: ``` having count(*) = (select max(total) from (select count(*) as total from <your_query>) tmp) ``` or this: ``` having count(*) = (select count(*) as total from <your_query> order by total desc fetch first 1 row only) ```
select only one row that has the highest count in sql
[ "sql", "oracle", "select" ]
What are NLS strings in Oracle SQL, which show up as the difference between the char and nchar (and varchar2 and nvarchar2) data types? Thank you
Every Oracle database instance has 2 available character set configurations: 1. The default character set (used by char, varchar2, clob etc. types) 2. The national character set (used by nchar, nvarchar2, nclob, etc. types) Because the default character set could be configured to be a character set that doesn't support the full range of Unicode characters (such as Windows-1252), Oracle provides this *alternate* character set configuration as well, one that *is* guaranteed to support Unicode. So let's say your database uses Windows-1252 for its default character set (not that I'm recommending it), and UTF-8 for the national (or alternate) character set... Then if you have a table column where you don't need to support all kinds of weird Unicode characters, you can use a type such as varchar2 if you want to. And by doing so, you may be saving some space. But if you do have a specific need to store and support Unicode characters, then for that very specific instance, your column should be defined as nvarchar2, or some other type that uses the national character set. That said, if your database's default character set is already a character set that supports Unicode, then using the nchar, nvarchar2, etc. types is not really necessary. You can find more complete information on the topic [here](https://docs.oracle.com/database/121/NLSPG/ch2charset.htm#NLSPG002).
AFAIK, `NLS` stands for `National Language Support` which supports local languages (In other words supporting Localization). From [Oracle Documentation](http://docs.oracle.com/html/B13531_01/ap_b.htm) > National Language Support (NLS) is a technology enabling Oracle > applications to interact with users in their native language, using > their conventions for displaying data
What are NLS Strings in Oracle SQL?
[ "sql", "database", "oracle" ]
I want to write a query that lists the programs we offer at my university. A program consists of at least a major, and possibly an "option", a "specialty", and a "subspecialty". Each of these four elements is detailed with a code which relates them back to the major. One major can have zero or more options, one option can have zero or more specialties, and one specialty can have zero or more subspecialties. Conversely, a major is permitted to have no options associated with it. In the result set, a row must contain the previous element in order to have the next one, i.e. a row will not contain a major, no option, and a specialty. The appearance of a specialty associated with a major implies that there is also an option that is associated with that major. My problem lies in how the data is stored. All program data lies in one table that is laid out like this: ``` +----------------+---------------+------+ | program_name | program_level | code | +----------------+---------------+------+ | Animal Science | Major | 1 | | Equine | Option | 1 | | Dairy | Option | 1 | | CLD | Major | 2 | | Thesis | Option | 2 | | Non-Thesis | Option | 2 | | Development | Specialty | 2 | | General | Subspecialty | 2 | | Rural | Subspecialty | 2 | | Education | Major | 3 | +----------------+---------------+------+ ``` Desired output will look something like this: ``` +----------------+-------------+----------------+-------------------+------+ | major_name | option_name | specialty_name | subspecialty_name | code | +----------------+-------------+----------------+-------------------+------+ | Animal Science | Equine | | | 1 | | Animal Science | Dairy | | | 1 | | CLD | Thesis | Development | General | 2 | | CLD | Thesis | Development | Rural | 2 | | CLD | Non-Thesis | Development | General | 2 | | CLD | Non-Thesis | Development | Rural | 2 | | Education | | | | 3 | +----------------+-------------+----------------+-------------------+------+ ``` So far I've tried to create four queries that join on this "code", each selecting based on a different "program\_level". The fields aren't combining properly though.
I can't find anything simpler than this: ``` /* Replace @Programs with the name of your table */ SELECT majors.program_name, options.program_name, specs.program_name, subspecs.program_name, majors.code FROM @Programs majors LEFT JOIN @Programs options ON majors.code = options.code AND options.program_level = 'Option' LEFT JOIN @Programs specs ON options.code = specs.code AND specs.program_level = 'Specialty' LEFT JOIN @Programs subspecs ON specs.code = subspecs.code AND subspecs.program_level = 'Subspecialty' WHERE majors.program_level = 'Major' ``` EDIT: corrected typo "Speciality", it should work now.
Use sub queries to build up what you want. CODE: ``` SELECT(SELECT m.program_name FROM yourtable m WHERE m.program_level = 'Major' AND y.program_name = m.program_name) AS major_name, (SELECT o.program_name FROM yourtable o WHERE o.program_level = 'Option' AND y.program_name = o.program_name) AS Option_name, (SELECT s.program_name FROM yourtable s WHERE s.program_level = 'Specialty' AND y.program_name = s.program_name) AS Specialty_name, (SELECT ss.program_name FROM yourtable ss WHERE ss.program_level = 'Subspecialty' AND y.program_name = ss.program_name) AS Subspecialty_name, code FROM yourtable y ``` OUTPUT: ``` major_name Option_name Specialty_name Subspecialty_name code Animal Science (null) (null) (null) 1 (null) Equine (null) (null) 1 (null) Dairy (null) (null) 1 CLD (null) (null) (null) 2 (null) Thesis (null) (null) 2 (null) Non-Thesis (null) (null) 2 (null) (null) Development (null) 2 (null) (null) (null) General 2 (null) (null) (null) Rural 2 Education (null) (null) (null) 3 ``` SQL Fiddle: <http://sqlfiddle.com/#!3/9b75a/2/0>
Splitting SQL Columns into Multiple Columns Based on Specific Column Value
[ "sql", "hana" ]
This is a select I have: ``` select s.productid, s.fromsscc, l.receiptid from logstock s left join log l on l.id = s.logid where l.receiptid=1760 ``` with the following results: ``` |Productid |SSCC |RECEIPTID |363 |22849 |1760 |364 |22849 |1760 |1468 |22849 |1760 |1837 |22849 |1760 |384 |22849 |1760 |390 |22849 |1760 |370 |22849 |1760 |391 |22849 |1760 |371 |21557 |1760 |391 |21556 |1760 |390 |21555 |1760 |370 |21554 |1760 |389 |21553 |1760 ``` I need to transform this select into this outcome: ``` |Palet Type1 |Palet Type2 |1 |5 ``` The logic is: * if a single `SSCC` (22849 in the example) has more than one Productid, then it is Type 1 * if a single `SSCC` (21557,21556,21555,21554,21553 in the example) has only one Productid then it is type 2 How do I count how many SSCCs of each type I have (on the basis of productids)?
You have to group and count. You can use a common table expression to help simplify the query. ``` with types (sscc, type) as ( select s.sscc, case when count(s.productid) > 1 then 1 else 2 end as type from stock s where s.receiptid = 1760 group by s.sscc ) select (select count(*) from types where type = 1) as type_1, (select count(*) from types where type = 2) as type_2 ``` SQL fiddle : <http://sqlfiddle.com/#!3/85cea/5>
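This CTE approach can be exercised end-to-end with Python's `sqlite3` (table name and rows are invented here to match the sample in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (productid INT, sscc INT, receiptid INT)")
data = [(363, 22849), (364, 22849), (1468, 22849), (1837, 22849),
        (384, 22849), (390, 22849), (370, 22849), (391, 22849),
        (371, 21557), (391, 21556), (390, 21555), (370, 21554), (389, 21553)]
conn.executemany("INSERT INTO stock VALUES (?, ?, 1760)", data)

row = conn.execute("""
    WITH types (sscc, type) AS (
        SELECT sscc, CASE WHEN COUNT(productid) > 1 THEN 1 ELSE 2 END
        FROM stock
        WHERE receiptid = 1760
        GROUP BY sscc
    )
    SELECT (SELECT COUNT(*) FROM types WHERE type = 1) AS type_1,
           (SELECT COUNT(*) FROM types WHERE type = 2) AS type_2
""").fetchone()
print(row)  # one pallet of type 1, five of type 2
```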
This should work: ``` select SUM(CASE WHEN l.cnt > 1 THEN 1 ELSE 0 END) AS type1, SUM(CASE WHEN l.cnt = 1 THEN 1 ELSE 0 END) AS type2 from ( select sum(COUNT(*)) over (partition by sscc) as cnt, sscc from logstock group by sscc ) l ``` [fiddle](http://sqlfiddle.com/#!3/860bd/1/0) This part of the query: ``` select sum(COUNT(*)) over (partition by sscc) as cnt, sscc from logstock group by sscc ``` returns ``` cnt sscc 1 21553 1 21554 1 21555 1 21556 1 21557 8 22849 ``` since `(partition by sscc)` was used, we get how many times an sscc was repeated. And the outer query uses `SUM` with `CASE WHEN` to count how many records are repeated once versus more than once.
SQL count and group items by two columns
[ "sql", "sql-server" ]
I'm not an expert with MySQL and I'm having problems with this stored procedure. I'm trying to write the SP with conditions but I don't know what is wrong here; I get this error: > Error Code: 1064. You have an error in your SQL syntax; check the > manual that corresponds to your MySQL server version for the right > syntax to use near 'declare done int default 0; declare continue > handler for sqlstate '02000' set' at line 16 ``` delimiter $$ create procedure getListPrsn(IN idEquipo INT, IN tipo char, IN Puesto INT) begin declare varJefe int; declare eqpSupJefe int; declare jefeONo cursor for select tblpuesto.PtoLiderEqp from tblequipo inner join tblpuesto on (tblequipo.EqpID=tblpuesto.PtoEqp) inner join tblplaza on (tblpuesto.PtoID=tblplaza.PzaPto) inner join tblpersona on (tblplaza.PzaPrsn=tblpersona.PrsnID) where tblequipo.EqpID=idEquipo and tblpuesto.PtoID=Puesto; declare equipoSuperiorDeMiJefe cursor for select tblequipo.EqpEqpSup from tblequipo inner join tblpuesto on(tblequipo.EqpID=tblpuesto.PtoEqp) where tblpuesto.PtoID=Puesto; if tipo="jefe" then declare done int default 0; declare continue handler for sqlstate '02000' set done=1; open jefeONo; begin repeat fetch jefeONo into varJefe; until done end repeat; end; close jefeONo; if varJefe=1 then declare done int default 0; declare continue handler for sqlstate '02000' set done=1; open equipoSuperiorDeMiJefe; begin repeat fetch equipoSuperiorDeMiJefe into eqpSupJefe; until done end repeat; end; close equipoSuperiorDeMiJefe; call getLider(eqpSupJefe); else if varJefe=0 then call getLider(idEquipo); end if; end if; end if; end $$ delimiter ; ```
I fixed my SP. We have to check the declaration order: 1.- variables 2.- conditions 3.- cursors 4.- handlers ``` delimiter $$ create procedure getListPrsn(IN idEquipo INT, IN tipo CHAR, IN Puesto INT) begin declare varJefe int; declare eqpSupJefe int; declare done int default 0; case tipo when "jefe" then begin declare jefeONo cursor for select tblpuesto.PtoLiderEqp from tblequipo inner join tblpuesto on (tblequipo.EqpID=tblpuesto.PtoEqp) inner join tblplaza on (tblpuesto.PtoID=tblplaza.PzaPto) inner join tblpersona on (tblplaza.PzaPrsn=tblpersona.PrsnID) where tblequipo.EqpID=idEquipo and tblpuesto.PtoID=Puesto; declare continue handler for sqlstate '02000' set done=1; open jefeONo; repeat fetch jefeONo into varJefe; until done end repeat; close jefeONo; set done=0; if varJefe=1 then begin declare equipoSuperiorDeMiJefe cursor for select tblequipo.EqpEqpSup from tblequipo inner join tblpuesto on(tblequipo.EqpID=tblpuesto.PtoEqp) where tblpuesto.PtoID=Puesto; declare continue handler for sqlstate '02000' set done=1; open equipoSuperiorDeMiJefe; repeat fetch equipoSuperiorDeMiJefe into eqpSupJefe; until done end repeat; close equipoSuperiorDeMiJefe; call getLider(eqpSupJefe); end;#begin del if varJefe=1 else if varJefe=0 then call getLider(idEquipo); end if; end if; end;#begin del case JEFE end case; end $$ delimiter ; ```
The problem is in the part pointed out below, where you are trying to declare a local variable inside an `IF .. ELSE` block. You can set a variable inside an `if .. else` block, but you should declare it at the beginning ``` if varJefe=1 then declare done int default 0; <-- Here ``` You should declare the variable at the beginning like ``` create procedure getListPrsn(IN idEquipo INT, IN tipo char, IN Puesto INT) begin declare varJefe int; declare eqpSupJefe int; declare done int default 0; <-- declare it here ```
Error 1064 in stored procedure
[ "mysql", "sql", "stored-procedures" ]
I know that I can declare a **`varchar2`** using the number of the characters that it should be able to contain. However, in an `Oracle` database on which I am working, I found that a field (named PDF) is defined as follows: ``` VARCHAR2(1 BYTE) ``` What does this mean? How many characters can it contain? Another, related question: What is the difference between a **`VARCHAR`** and a **`VARCHAR2`**?
You can declare columns/variables as varchar2(n CHAR) and varchar2(n BYTE). n CHAR means the variable will hold n characters. In multi-byte character sets you don't always know how many bytes you want to store, but you do want to guarantee the storage of a certain number of characters. n BYTE simply means the number of bytes you want to store. varchar is deprecated. Do not use it. [What is the difference between varchar and varchar2?](https://stackoverflow.com/questions/1171196/what-is-the-difference-between-varchar-and-varchar2)
The `VARCHAR` [datatype](https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1822) is synonymous with the `VARCHAR2` datatype. To avoid possible changes in behavior, always use the `VARCHAR2` datatype to store variable-length character strings. If your database runs on a single-byte character set (e.g. `US7ASCII`, `WE8MSWIN1252` or `WE8ISO8859P1`) it does not make any difference whether you use `VARCHAR2(x BYTE)` or `VARCHAR2(x CHAR)`. It only makes a difference when your DB runs on a multi-byte character set (e.g. `AL32UTF8` or `AL16UTF16`). You can simply see it in this example: ``` CREATE TABLE my_table ( VARCHAR2_byte VARCHAR2(1 BYTE), VARCHAR2_char VARCHAR2(1 CHAR) ); INSERT INTO my_table (VARCHAR2_char) VALUES ('€'); 1 row created. INSERT INTO my_table (VARCHAR2_char) VALUES ('ü'); 1 row created. INSERT INTO my_table (VARCHAR2_byte) VALUES ('€'); INSERT INTO my_table (VARCHAR2_byte) VALUES ('€') Error at line 10 ORA-12899: value too large for column "MY_TABLE"."VARCHAR2_BYTE" (actual: 3, maximum: 1) INSERT INTO my_table (VARCHAR2_byte) VALUES ('ü') Error at line 11 ORA-12899: value too large for column "MY_TABLE"."VARCHAR2_BYTE" (actual: 2, maximum: 1) ``` `VARCHAR2(1 CHAR)` means you can store up to 1 character, no matter how many bytes it has. In case of Unicode one character may occupy up to 4 bytes. `VARCHAR2(1 BYTE)` means you can store a character which occupies max. 1 byte. If you don't specify either `BYTE` or `CHAR` then the default is taken from the `NLS_LENGTH_SEMANTICS` session parameter. Unless you have Oracle 12c where you can set `MAX_STRING_SIZE=EXTENDED`, the limit is `VARCHAR2(4000 CHAR)` **However**, `VARCHAR2(4000 CHAR)` does not mean you are guaranteed to store up to 4000 characters. The limit is still 4000 **bytes**, so in the worst case you may store only up to 1000 characters in such a field.
See this example (`€` in UTF-8 occupies 3 bytes): ``` CREATE TABLE my_table2(VARCHAR2_char VARCHAR2(4000 CHAR)); BEGIN INSERT INTO my_table2 VALUES ('€€€€€€€€€€'); FOR i IN 1..7 LOOP UPDATE my_table2 SET VARCHAR2_char = VARCHAR2_char ||VARCHAR2_char; END LOOP; END; / SELECT LENGTHB(VARCHAR2_char) , LENGTHC(VARCHAR2_char) FROM my_table2; LENGTHB(VARCHAR2_CHAR) LENGTHC(VARCHAR2_CHAR) ---------------------- ---------------------- 3840 1280 1 row selected. UPDATE my_table2 SET VARCHAR2_char = VARCHAR2_char ||VARCHAR2_char; UPDATE my_table2 SET VARCHAR2_char = VARCHAR2_char ||VARCHAR2_char Error at line 1 ORA-01489: result of string concatenation is too long ``` See also [Examples and limits of BYTE and CHAR semantics usage (NLS\_LENGTH\_SEMANTICS) (Doc ID 144808.1)](https://support.oracle.com/knowledge/Oracle%20Database%20Products/144808_1.html)
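The character-versus-byte distinction is easy to verify outside the database, too; for example in Python, using UTF-8 (the same encoding family as `AL32UTF8`):

```python
s = "€"  # the euro sign: one character, three bytes in UTF-8
chars = len(s)                   # what VARCHAR2(n CHAR) semantics count
bytes_ = len(s.encode("utf-8"))  # what VARCHAR2(n BYTE) semantics count
print(chars, bytes_)
```

This is exactly why `'€'` fits in `VARCHAR2(1 CHAR)` but overflows `VARCHAR2(1 BYTE)` in the example above.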
What does it mean when the size of a VARCHAR2 in Oracle is declared as 1 byte?
[ "sql", "oracle", "varchar", "sqldatatypes" ]
I have a BIGINT column that I want to do a partial match on. e.g. `@search = 1` should return all records where the first number is 1 (1, 11, 100 etc). Basically the same as a varchar LIKE. I have tried: ``` DECLARE @search VARCHAR SET @search = '1' ``` and ``` SET @search = '1%' ``` And used: ``` SELECT id FROM table WHERE CAST(id AS varchar) LIKE @search ``` Adding a `%` to `@search` doesn't help. Any ideas how to accomplish this? EDIT: it seems to be the variable. If I hard code the string in the `WHERE` clause I get the results I am looking for. ``` SELECT id FROM table WHERE CAST(id AS VARCHAR) LIKE '14%' ``` This gives me all records with an `id` of 14\* (14, 140, 1400 etc).
Try this instead: ``` DECLARE @search VARCHAR(10) SET @search = '1' SELECT id FROM table WHERE CAST(id AS VARCHAR(10)) LIKE @search + '%' ``` When casting to `VARCHAR`, you should always specify the length. If you don't define a length, SQL Server will assign one for you. Sometimes it will be `1`, other times it will be `30`. Read [**this**](https://sqlblog.org/2009/10/09/bad-habits-to-kick-declaring-varchar-without-length) for more information.
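The prefix-match pattern itself can be tried out with Python's `sqlite3` (so `VARCHAR` becomes `TEXT`, the `%` is concatenated inside the query, and the table and values here are made up; the length pitfall above is SQL Server-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (14,), (140,), (1400,), (215,)])

search = "14"  # the prefix to match, passed as a string parameter
ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE CAST(id AS TEXT) LIKE ? || '%' ORDER BY id",
    (search,))]
print(ids)
```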
Instead of using the `LIKE` operator try using the [LEFT](https://msdn.microsoft.com/en-us/library/ms177601.aspx) function. This will return the left part of a character string with the specified number of characters. ``` SELECT id FROM table WHERE LEFT(CAST(id AS varchar), 1) = '1' ``` I'm not certain, but I've got to assume this will have better performance than using the `LIKE` operator, especially since you know you just want to compare the beginning characters. Using a function in the WHERE clause can often cause poor performance because the query can't take advantage of any indexes that might exist on the column. However, in this case the query is calling the CAST function, so the benefit of the index is already lost. **Edit:** If the comparison needs to be for a variable number of digits, then you can use the LEN function to determine the number of characters for the LEFT function to return. ``` SELECT id FROM table WHERE LEFT(CAST(id AS varchar),LEN(@search)) = @search ```
SQL Like/Contains on BIGINT Column
[ "sql", "sql-server", "t-sql" ]
I have defined an array field in a PostgreSQL 9.4 database: ``` character varying(64)[] ``` Can I have an empty array, e.g. `{}`, as the default value of that field? What would be the syntax for setting it? I'm getting the following error when setting just brackets `{}`: ``` SQL error: ERROR: syntax error at or near "{" LINE 1: ...public"."accounts" ALTER COLUMN "pwd_history" SET DEFAULT {} ^ In statement: ALTER TABLE "public"."accounts" ALTER COLUMN "pwd_history" SET DEFAULT {} ```
You need to use the explicit `array` initializer and cast that to the correct type: ``` ALTER TABLE public.accounts ALTER COLUMN pwd_history SET DEFAULT array[]::varchar[]; ```
I tested both the accepted answer and the one from the comments. They both work. I'll graduate the comments to an answer as it's my preferred syntax. ``` ALTER TABLE public.accounts ALTER COLUMN pwd_history SET DEFAULT '{}'; ```
Empty array as PostgreSQL array column default value
[ "sql", "postgresql" ]
So if I have a table like this ``` id | value | detail ------------------- 12 | 20 | orange 12 | 30 | orange 13 | 16 | purple 14 | 50 | red 12 | 60 | blue ``` How can I get it to return this? ``` 12 | 20 | orange 13 | 16 | purple 14 | 50 | red ``` If I group by id and detail it returns both 12 | 20 | orange and 12 | 60 | blue
[SQL Fiddle](http://sqlfiddle.com/#!15/9a275/6) **PostgreSQL 9.3 Schema Setup**: ``` CREATE TABLE TEST( id INT, value INT, detail VARCHAR ); INSERT INTO TEST VALUES ( 12, 20, 'orange' ); INSERT INTO TEST VALUES ( 12, 30, 'orange' ); INSERT INTO TEST VALUES ( 13, 16, 'purple' ); INSERT INTO TEST VALUES ( 14, 50, 'red' ); INSERT INTO TEST VALUES ( 12, 60, 'blue' ); ``` **Query 1**: Not sure if Redshift supports this syntax: ``` SELECT DISTINCT FIRST_VALUE( id ) OVER wnd AS id, FIRST_VALUE( value ) OVER wnd AS value, FIRST_VALUE( detail ) OVER wnd AS detail FROM TEST WINDOW wnd AS ( PARTITION BY id ORDER BY value ) ``` **[Results](http://sqlfiddle.com/#!15/9a275/6/0)**: ``` | id | value | detail | |----|-------|--------| | 12 | 20 | orange | | 14 | 50 | red | | 13 | 16 | purple | ``` **Query 2**: ``` SELECT t.ID, t.VALUE, t.DETAIL FROM ( SELECT *, ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY VALUE ) AS RN FROM TEST ) t WHERE t.RN = 1 ``` **[Results](http://sqlfiddle.com/#!15/9a275/6/1)**: ``` | id | value | detail | |----|-------|--------| | 12 | 20 | orange | | 13 | 16 | purple | | 14 | 50 | red | ```
This is an easy task for a Windowed Aggregate Function, ROW\_NUMBER: ``` select * from ( select t.*, row_number() over (partition by id -- for each id order by value) as rn -- row with the minimum value from t ) as dt where rn = 1 ```
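The `ROW_NUMBER` approach runs unchanged on any engine with window functions; a quick check with Python's `sqlite3` (needs SQLite 3.25+, which ships with recent Python builds — table and data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INT, value INT, detail TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)", [
    (12, 20, "orange"), (12, 30, "orange"), (13, 16, "purple"),
    (14, 50, "red"), (12, 60, "blue")])

rows = conn.execute("""
    SELECT id, value, detail
    FROM (SELECT t.*, ROW_NUMBER() OVER (PARTITION BY id ORDER BY value) AS rn
          FROM test t) sub
    WHERE rn = 1   -- keep only the minimum-value row of each id
    ORDER BY id
""").fetchall()
print(rows)
```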
In PostgreSQL how can I return the entire row that corresponds with the min of a value?
[ "sql", "postgresql", "min", "amazon-redshift" ]
``` UPDATE `productData`.`productList` SET `dateLastUpdated` = CURDATE(), `totalReviews` = 100 WHERE `productList`.`name` = 'FirstchefproUltraSharpDualJuliennePeelerVegetablePeelerStainless'; ``` This query takes 2.93 seconds and only affects a single row. The table is ~75k rows long. I don't have much experience in SQL, is there a better way to do this? Updating a single row shouldn't take this long. I'm using the latest version of SQL on an Ubuntu server. If you need any more information just ask and I'll add it to the post. Thanks Edit: I actually didn't know that SQL and MySQL were different things. The select statement ``` SELECT * FROM `productList` WHERE `productList`.`name` = 'FirstchefproUltraSharpDualJuliennePeelerVegetablePeelerStainless'; ``` takes 2.88 seconds, so I guess it's a problem with my index. EDIT: Thank you for your help. When I try to create an index on name it says this: ``` ERROR 1170 (42000): BLOB/TEXT column 'name' used in key specification without a key length ``` EDIT: switched name to a varchar(64), it's much faster now. Thanks everyone
1. Either create an index on the `name` column (as written in some comments), or 2. use an indexed column in the `WHERE` statement, e.g. the product ID if your table contains something like that.
For this query: ``` UPDATE `productData`.`productList` SET `dateLastUpdated` = CURDATE(), `totalReviews` = 100 WHERE `productList`.`name` = 'FirstchefproUltraSharpDualJuliennePeelerVegetablePeelerStainless'; ``` You want an index on `productList(name)`: ``` create index idx_productlist_name on productList(name); ```
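One small way to confirm an index is actually picked up for a lookup like this is an `EXPLAIN QUERY PLAN` check — sketched here with SQLite via Python (exact plan wording varies by engine and version, so the check only looks for the index name; table layout is simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE productList (name VARCHAR(64), totalReviews INT)")
conn.execute("CREATE INDEX idx_productlist_name ON productList(name)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM productList WHERE name = 'x'"
).fetchall()
# the last column of each plan row is a human-readable description
uses_index = any("idx_productlist_name" in row[-1] for row in plan)
print(plan, uses_index)
```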
Single row UPDATE query in SQL takes more than 3 seconds
[ "mysql", "sql", "performance", "ubuntu" ]
I have the following SQL query: ``` SELECT ( SELECT SUM(c1) FROM t2 WHERE t1.c1 = t2.c1 AND t2.c3 = 'NEW' AND t2.c4 = TRUNC(SYSDATE) AND t2.c5 = 'N' ) SUM, ( SELECT sum(c2) FROM t2 WHERE t1.c1 = t2.c1 AND t2.c3 = 'OLD' AND t2.c4 = TRUNC(SYSDATE) ) SUM2 FROM t1 WHERE t1.c2 IN ('DUMMY', 'DUMMY2') ``` So, the `t2` table gets queried multiple times but with different `WHERE` clauses each time. This `t2` table is very large and so it takes some time to get the results. It would be good to only query this table once with all the different `WHERE` clauses and `SUM` results. Is this possible anyway? I got a suggestion to use WITH AS in the SQL, but it did not improve my execution time.
You could have several `sum` calls over `case` expressions in `t2` (grouped by `c1`), and then join that to `t1`: ``` SELECT sum1, sum2 FROM t1 JOIN (SELECT c1, SUM(CASE WHEN c3 = 'NEW' AND c4 = TRUNC(SYSDATE) AND c5 = 'N' THEN c1 ELSE NULL END) AS sum1, SUM(CASE WHEN c3 = 'OLD' AND c4 = TRUNC(SYSDATE) THEN c2 ELSE NULL END) AS sum2 FROM t2 GROUP BY c1) t2 ON t1.c1 = t2.c1 WHERE t1.c2 IN ('DUMMY', 'DUMMY2') ``` EDIT: The common conditions in the `case` expressions (i.e., `c4 = TRUNC(SYSDATE)`) can be extracted to a `where` clause, which should provide some performance gain: ``` SELECT sum1, sum2 FROM t1 JOIN (SELECT c1, SUM(CASE WHEN c3 = 'NEW' AND c5 = 'N' THEN c1 ELSE NULL END) AS sum1, SUM(CASE WHEN c3 = 'OLD' THEN c2 ELSE NULL END) AS sum2 FROM t2 WHERE c4 = TRUNC(SYSDATE) GROUP BY c1) t2 ON t1.c1 = t2.c1 WHERE t1.c2 IN ('DUMMY', 'DUMMY2') ```
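Conditional aggregation like this can be sanity-checked with `sqlite3` (sample rows invented; the Oracle-specific `TRUNC(SYSDATE)` filter is dropped for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (c1 INT, c2 INT, c3 TEXT)")
conn.executemany("INSERT INTO t2 VALUES (?, ?, ?)", [
    (1, 10, "NEW"), (1, 20, "OLD"), (1, 5, "NEW"), (2, 7, "OLD")])

rows = conn.execute("""
    SELECT c1,
           SUM(CASE WHEN c3 = 'NEW' THEN c1 END) AS sum1,  -- sums c1 over NEW rows
           SUM(CASE WHEN c3 = 'OLD' THEN c2 END) AS sum2   -- sums c2 over OLD rows
    FROM t2
    GROUP BY c1
    ORDER BY c1
""").fetchall()
print(rows)
```

A group with no matching rows yields `NULL` (Python `None`) for that sum, just as the `ELSE NULL` branch would.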
You can try this (note the subqueries must also select and group by `c1` so the joins have something to match on): ``` SELECT SUM1.val, SUM2.val FROM (SELECT * FROM t1 WHERE t1.c2 IN ('DUMMY', 'DUMMY2')) as t1 INNER JOIN ( SELECT c1, SUM(c1) as val FROM t2 WHERE t2.c3 = 'NEW' AND t2.c4 = TRUNC(SYSDATE) AND t2.c5 = 'N' GROUP BY c1 ) SUM1 ON t1.c1 = SUM1.c1 INNER JOIN ( SELECT c1, SUM(c2) as val FROM t2 WHERE t2.c3 = 'OLD' AND t2.c4 = TRUNC(SYSDATE) GROUP BY c1 ) SUM2 ON t1.c1 = SUM2.c1 ```
SQL Optimization: query table with different where clauses
[ "sql", "oracle", "select", "sql-optimization" ]
In a database table, I have two columns storing date and time in this format: ``` D30DAT D30TIM 140224 75700 ``` I need to update a new field to store the date in the format ``` 2014-02-24 07:57:00.000 ``` How can I use a SQL query to do it?
For Postgres and Oracle (assuming those columns are varchar): ``` select to_timestamp(dt, 'yymmdd hh24miss') from ( select d30dat||' '||case when length(d30tim) = 5 then '0'||d30tim else d30tim end as dt from x ) t; ``` The `case` expression adds a leading `0` if the time part only consists of 5 digits so that the format mask can be specified with always 2 digits for the hour. The blank between the two columns is essentially only a debugging aid and could be left out. The result is a *real* timestamp value that can easily be formatted using `to_char()` to the desired format. SQLFiddle for Postgres: <http://sqlfiddle.com/#!15/ac07a/2> SQLFiddle for Oracle: <http://sqlfiddle.com/#!4/ac07a2/4>
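The same zero-padding idea can be checked outside the database; a small Python sketch that left-pads the time to six digits and parses both integers:

```python
from datetime import datetime

def to_timestamp(d30dat: int, d30tim: int) -> datetime:
    # zero-pad so 75700 is read as 07:57:00, then parse date + time together
    return datetime.strptime(f"{d30dat:06d} {d30tim:06d}", "%y%m%d %H%M%S")

print(to_timestamp(140224, 75700))
```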
Try this function. It's not particularly fast or great, but it converts the fields you specified. ``` CREATE FUNCTION GetDateTimeFromINT ( @Date INT, @Time INT ) RETURNS DATETIME AS BEGIN DECLARE @YearNo VARCHAR(4) DECLARE @MonthNo VARCHAR(3) DECLARE @DayNo VARCHAR(2) DECLARE @HourNo VARCHAR(2) DECLARE @MinNo VARCHAR(2) DECLARE @SecNo VARCHAR(2) SET @YearNo = LEFT(CONVERT(VARCHAR,@Date), LEN(@Date)-4) SET @MonthNo = SUBSTRING(CONVERT(VARCHAR,@Date),LEN(@Date)-3,2) SET @DayNo = SUBSTRING(CONVERT(VARCHAR,@Date),LEN(@Date)-1,2) SET @HourNo = LEFT(CONVERT(VARCHAR,@Time), LEN(@Time)-4) SET @MinNo = SUBSTRING(CONVERT(VARCHAR,@Time),LEN(@Time)-3,2) SET @SecNo = SUBSTRING(CONVERT(VARCHAR,@Time),LEN(@Time)-1,2) SET @YearNo = '20' + @YearNo IF LEN(@HourNo) = 1 BEGIN SET @HourNo = '0' + @HourNo END SET @MonthNo = CASE WHEN @MonthNo = '01' THEN 'JAN' WHEN @MonthNo = '02' THEN 'FEB' WHEN @MonthNo = '03' THEN 'MAR' WHEN @MonthNo = '04' THEN 'APR' WHEN @MonthNo = '05' THEN 'MAY' WHEN @MonthNo = '06' THEN 'JUN' WHEN @MonthNo = '07' THEN 'JUL' WHEN @MonthNo = '08' THEN 'AUG' WHEN @MonthNo = '09' THEN 'SEP' WHEN @MonthNo = '10' THEN 'OCT' WHEN @MonthNo = '11' THEN 'NOV' WHEN @MonthNo = '12' THEN 'DEC' END RETURN CONVERT(DATETIME, @DayNo + '-' + @MonthNo + '-' + @YearNo +' ' + @HourNo + ':' + @MinNo + ':' + @SecNo) END GO ``` Call it like so: ``` SELECT *, dbo.GetDateTimeFromINT(D30DAT,D30TIM) OutputDT FROM SourceTable ``` If you need help with the update statement, let me know
Sql Datetime convert
[ "sql", "date" ]
Say I have records like this: ``` user_id user_data_field created ------- --------------- ------- 1 some data date_a 2 some data date_b 1 some data date_c 1 some data date_d 2 some data date_e ``` What should I do to get only the user records with the most recent dates (assuming that the most recent dates are not necessarily the ones at the bottom...they can be anywhere)? I am using an `in_` clause to fetch users in bulk in `sqlalchemy`: `session.query(Table).filter(Table.user_id.in_([1,2])).order_by(Table.created.desc())` which just sorts them by the created order. Moreover, if I just add the `first()` clause at the end of this statement, it gets me just 1 row. So, do I have a way in sqlalchemy to get it done? Thanks.
*@khan: your solution is almost working, but the flaw is described in the comment to your answer.* The code below solves this particular issue (but still relies on the fact that there would not be duplicate `created` values for the same `user_id`): ``` subq = ( session .query(MyTable.user_id, func.max(MyTable.created).label("max_created")) .filter(MyTable.user_id.in_([1, 2])) .group_by(MyTable.user_id) .subquery() ) q = ( session.query(MyTable) .join(subq, and_(MyTable.user_id == subq.c.user_id, MyTable.created == subq.c.max_created)) ) ```
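The SQL this emits boils down to a join against a grouped subquery; here is that query run directly with `sqlite3` (table layout and dates invented to match the question's sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (user_id INT, data TEXT, created TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "a", "2015-01-01"), (2, "b", "2015-02-01"), (1, "c", "2015-03-01"),
    (1, "d", "2015-04-01"), (2, "e", "2015-01-15")])

rows = conn.execute("""
    SELECT t.user_id, t.data, t.created
    FROM t
    JOIN (SELECT user_id, MAX(created) AS max_created
          FROM t
          WHERE user_id IN (1, 2)
          GROUP BY user_id) sub
      ON t.user_id = sub.user_id AND t.created = sub.max_created
    ORDER BY t.user_id
""").fetchall()
print(rows)  # one row per user: the latest record
```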
It sounds to me that the SQL query you're looking for would be something like: ``` SELECT user_id, MAX(created) FROM Table WHERE user_id IN (1, 2) GROUP BY user_id; ``` Now the deal is to translate it using sqlalchemy; I'm guessing something like this would do: ``` session.query(Table.user_id, func.max(Table.created)).filter(Table.user_id.in_([1,2])).group_by(Table.user_id).all() ``` <http://sqlalchemy.readthedocs.org/en/rel_1_0/core/functions.html?highlight=max#sqlalchemy.sql.functions.max>
Get the most recent record for a user
[ "sql", "sqlalchemy" ]
I have a SQL Server table called `Test` with this sample data: ``` LineNo BaseJanuary BaseFebruary BudgetJanuary BudgetFebruary 1 10000 20000 30000 40000 2 70000 80000 90000 100000 ``` I would like to create the below structure in a SQL Server view (or temporary table etc.) but I'm stuck... any ideas/suggestions would be appreciated! ``` LineNo Month Base Budget 1 January 10000 30000 2 January 70000 90000 1 February 20000 40000 2 February 80000 100000 ``` Note: The numbers are for example only, the data is dynamic.
``` select [LineNo], 'January' as [Month], BaseJanuary as Base, BudgetJanuary as Budget from test union select [LineNo], 'February' as [Month], BaseFebruary as Base, BudgetFebruary as Budget from test order by [LineNo], [Month] ``` (`LINENO` is a reserved keyword in T-SQL, so the column name needs brackets.)
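The union-per-month idea can be tried with `sqlite3` (column names simplified here — `LineNo` renamed `line` since it's reserved in T-SQL — and `UNION ALL` used because the rows are distinct anyway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (line INT, basejan INT, basefeb INT, budjan INT, budfeb INT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?, ?, ?)", [
    (1, 10000, 20000, 30000, 40000), (2, 70000, 80000, 90000, 100000)])

rows = conn.execute("""
    SELECT line, 'January' AS month, basejan AS base, budjan AS budget FROM test
    UNION ALL
    SELECT line, 'February', basefeb, budfeb FROM test
    ORDER BY month DESC, line  -- 'January' sorts after 'February', so DESC puts it first
""").fetchall()
print(rows)
```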
[`CROSS APPLY`](http://www.sqlservercentral.com/articles/CROSS+APPLY+VALUES+UNPIVOT/91234/) can be used to `UNPIVOT` data: ``` SELECT [LineNo], [Month], Base, Budget FROM test CROSS APPLY(VALUES -- unpivot columns into rows ('January', BaseJanuary, BudgetJanuary) -- generate row for jan , ('February', BaseFebruary, BudgetFebruary) -- generate row for feb ) ca ([Month], Base, Budget) ```
Single row to multiple columns and rows
[ "sql", "sql-server", "unpivot" ]
I already wrote this SQL statement to get person id, name and price for e.g. a plane ticket. ``` SELECT Person.PID, Person.Name, Preis FROM table.flug Flug INNER JOIN table.flughafen Flughafen ON zielflughafen = FHID INNER JOIN table.bucht Buchungen ON Flug.FID = Buchungen.FID INNER JOIN table.person Person ON Buchungen.PID = Person.PID WHERE Flug.FID = '10' ORDER BY Preis ASC; ``` My output is correct, but it should only be the line with min(Preis). If I change my code accordingly, I get an error... ``` SELECT Person.PID, Person.Name, min(Preis) FROM table.flug Flug ... ``` As output I need one single line: PID, Name and Price, where Price is the min(Preis).
Since you're already sorting your lines, just add a `limit` clause: ``` SELECT Person.PID, Person.Name, Preis FROM table.flug Flug INNER JOIN table.flughafen Flughafen ON zielflughafen = FHID INNER JOIN table.bucht Buchungen ON Flug.FID = Buchungen.FID INNER JOIN table.person Person ON Buchungen.PID = Person.PID WHERE Flug.FID = '10' ORDER BY Preis ASC LIMIT 1 ```
You need to group your result by `Person.PID` and `Person.Name` in order to select these fields in the same query where you're using aggregate function `min()`. ``` SELECT Person.PID, Person.Name, min(Preis) as Preis FROM table.flug Flug .... WHERE Flug.FID = '10' GROUP BY Person.PID, Person.Name ORDER BY 3 ASC; ```
Select MIN() in SQL
[ "", "mysql", "sql", "select", "min", "" ]
**My Question is:** > Germany (population 80 million) has the largest population of the > countries in Europe. Austria (population 8.5 million) has 11% of the > population of Germany. > > Show the name and the population of each country in Europe. Show the > population as a percentage of the population of Germany. **My answer:** ``` SELECT name,CONCAT(ROUND(population/80000000,-2),'%') FROM world WHERE population = (SELECT population FROM world WHERE continent='Europe') ``` What am I doing wrong? Thanks.
The question was incomplete and was taken from [here](http://www.sqlzoo.net/wiki/SELECT_within_SELECT_Tutorial) This is the answer ``` SELECT name, CONCAT(ROUND((population*100)/(SELECT population FROM world WHERE name='Germany'), 0), '%') FROM world WHERE population IN (SELECT population FROM world WHERE continent='Europe') ``` I was wondering about the sub-query, as it wasn't clear from the OP's question (at least to me). The reason for it is that the "world" table (as the name suggests, I have to admit) contains every country in the world, whereas we're interested only in the European ones. Moreover, the population of Germany has to be retrieved from the DB because it's not exactly 80,000,000; if you use that number you get back 101% as Germany's population.
When using SQL Server on SQLZoo, don't use `CONCAT`: I think SQLZoo uses a version of SQL Server that doesn't support `CONCAT`, and furthermore it looks like you have to do a `CAST`. Instead, concatenate with '+'. Also see [this post](https://stackoverflow.com/questions/10550307/how-do-i-use-the-concat-function-in-sql-server-2008-r2). I figure the script should be something like the one below (though I haven't got it to my desired state yet, because I want the result to look like 3%;0%;4%;etc. instead of 3.000000000000000%;0.000000000000000%;4.000000000000000%;etc. I started a new topic for that [here](https://stackoverflow.com/questions/32072858/show-no-decimals-result-in-sql-server)). `SELECT name, CAST(ROUND(population*100/(SELECT population FROM world WHERE name='Germany'), 0) as varchar(20)) +'%' FROM world WHERE population IN (SELECT population FROM world WHERE continent='Europe')`
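For anyone wanting to experiment outside SQLZoo, the accepted pattern (divide by a scalar subquery that fetches Germany's population) can be sketched with made-up numbers using Python's bundled sqlite3. The continent filter is written directly here for simplicity, and the population figures are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (name TEXT, continent TEXT, population INTEGER)")
conn.executemany("INSERT INTO world VALUES (?, ?, ?)",
                 [("Germany", "Europe", 80000000),
                  ("Austria", "Europe", 8800000),
                  ("Brazil", "South America", 200000000)])

# Percentage of Germany's population, computed via a scalar subquery.
rows = conn.execute("""
    SELECT name,
           CAST(ROUND(population * 100.0 /
                (SELECT population FROM world WHERE name = 'Germany')) AS INTEGER) || '%'
    FROM world
    WHERE continent = 'Europe'
    ORDER BY population DESC
""").fetchall()
print(rows)  # [('Germany', '100%'), ('Austria', '11%')]
```

The `* 100.0` forces floating-point division before rounding, which is the same reason the accepted answer multiplies by 100 inside the ROUND.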
Sqlzoo SELECT within SELECT Tutorial #5
[ "", "sql", "select", "" ]
I need to update `image_id` in the `user_group` table with the value of `image_id2` in `view_kantech_images` where the names match. My query is returning an error: ``` update user_group set image_id = (select vkm.image_id2 from view_kantech_matched as vkm where vkm.name like user_group.name) where name = view_kantech_matched.name ``` The error that it returns is: > Msg 4104, Level 16, State 1, Line 1 > The multi-part identifier "view\_kantech\_matched.name" could not be bound.
You could use the update-join syntax instead: ``` UPDATE ug SET ug.image_id = vkm.image_id2 FROM user_group ug JOIN view_kantech_matched vkm ON vkm.name = ug.name ```
try this ``` UPDATE im SET im.image_id = image_id2 FROM user_group im JOIN view_kantech_matched gm ON im.name = gm.name ```
SQL Server 2008 update query not working
[ "", "sql", "sql-server", "database", "sql-server-2008", "sql-update", "" ]
I have a one-to-many table, and if there are rows that share the same reference id (ParagraphID) I want to concatenate them, so that multiple LoginName values end up in the same row. This query does what I want it to do, but there is a problem: it replaces the first char, since the STUFF function requires a replace value. **My question:** How can I do this without replacing the first char? ``` SELECT DISTINCT ParagraphID , STUFF(( SELECT N'|' + CAST([LoginName] AS VARCHAR(255)) FROM [dbo].[CM_Signature] f2 WHERE f1.ParagraphID = f2.ParagraphID FOR XML PATH ('')), 1, 2, '') AS FileNameString FROM [dbo].[CM_Signature] f1 ``` ![enter image description here](https://i.stack.imgur.com/I0DaO.png) Expected value: ``` Daniel | Emma ```
Here is what you can use: ``` SELECT DISTINCT ParagraphID , STUFF(( SELECT N' | ' + CAST([LoginName] AS VARCHAR(255)) FROM [dbo].[CM_Signature] f2 WHERE f1.ParagraphID = f2.ParagraphID FOR XML PATH ('')), 1, 1, '') AS FileNameString FROM [dbo].[CM_Signature] f1 ``` Note the `STUFF("...", 1, 1, '')` instead of `STUFF("...", 1, 2, '')`. Because you need to replace 1 char instead of 2 (To remove the first `|`). Output: ``` Daniel|Emma ``` Also, if you want to have spaces before and after the `|`, just use this query: ``` SELECT DISTINCT ParagraphID , STUFF(( SELECT N' | ' + CAST([LoginName] AS VARCHAR(255)) FROM [dbo].[CM_Signature] f2 WHERE f1.ParagraphID = f2.ParagraphID FOR XML PATH ('')), 1, 3, '') AS FileNameString FROM [dbo].[CM_Signature] f1 ``` Note that this time we removed 3 chars (`STUFF("...", 1, 3, '')`). Output: ``` Daniel | Emma ```
You were removing two characters instead of one: ``` SELECT DISTINCT ParagraphID , STUFF(( SELECT N'|' + CAST([LoginName] AS VARCHAR(255)) FROM [dbo].[CM_Signature] f2 WHERE f1.ParagraphID = f2.ParagraphID FOR XML PATH ('')), 1, 1, '') AS FileNameString FROM [dbo].[CM_Signature] f1 SELECT DISTINCT ParagraphID , STUFF(( SELECT N'|' + CAST([name] AS VARCHAR(255)) FROM mytable f2 WHERE f1.paragraphid = f2.paragraphid FOR XML PATH ('')), 1, 1, '') AS FileNameString FROM mytable f1 ```
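The off-by-one is easier to see with the concatenated string written out: `STUFF(s, 1, n, '')` deletes `n` characters starting at position 1, and the `FOR XML` result begins with one separator. A small Python sketch (the `stuff` helper is a minimal re-implementation for illustration, not the T-SQL function itself):

```python
def stuff(s, start, length, replacement):
    # Minimal re-implementation of T-SQL STUFF, with 1-based indexing.
    return s[:start - 1] + replacement + s[start + length - 1:]

# FOR XML PATH('') with an N'|' separator yields a leading separator:
concatenated = "|Daniel|Emma"
print(stuff(concatenated, 1, 2, ""))  # 'aniel|Emma'  -- first letter eaten
print(stuff(concatenated, 1, 1, ""))  # 'Daniel|Emma' -- correct

# With the three-character separator N' | ', remove three characters:
spaced = " | Daniel | Emma"
print(stuff(spaced, 1, 3, ""))        # 'Daniel | Emma'
```

The rule of thumb: the number of characters you strip must equal the length of the separator you prepended.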
many rows into a single column with SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to optimize the following query. According to the execution plan, the sort in the inner query has a high cost. Could the following query be rewritten so that it's easy to read and performs well? ``` select CL.col1, CL.col2 FROM CLAIM CL WITH (NOLOCK) INNER JOIN MEMBER MEM WITH (NOLOCK) ON MEM.MEMID=CL.MEMID LEFT JOIN PAYVACATION PV WITH (NOLOCK) ON CL.CLAIMID = PV.CLAIMID and pv.paymentid = (select top 1 PAYVACATION.paymentid from PAYVACATION WITH (NOLOCK), payment WITH (NOLOCK) where payvoucher.claimid = cl.claimid and PAYVACATION.paymentid = payment.paymentid order by payment.paystatusdate desc) ```
``` ;WITH CTE AS ( select CL.col1, CL.col2, cl.claimid FROM CLAIM CL WITH (NOLOCK) INNER JOIN MEMBER MEM WITH (NOLOCK) ON MEM.MEMID=CL.MEMID LEFT JOIN PAYVACATION PV WITH (NOLOCK) ON CL.CLAIMID = PV.CLAIMID ), CTE2 AS ( select PAYVACATION.paymentid , PAYVACATION.claimid ,ROW_NUMBER() OVER (PARTITION BY PAYVACATION.claimid ORDER BY payment.paystatusdate desc) rn from PAYVACATION WITH (NOLOCK) INNER JOIN payment WITH (NOLOCK) ON PAYVACATION.paymentid = payment.paymentid INNER JOIN CTE ON PAYVACATION.claimid = CTE.claimid ) SELECT CL.col1, CL.col2 FROM CTE CL INNER JOIN CTE2 C2 ON C2.claimid = CL.claimid AND C2.rn = 1 ```
There are a couple of things you'll need to fix before we can properly answer this question. 1. Make sure the query works as it is. The version you've given us will not compile because of `payvoucher.claimid`. We can guess what it should be, but there's no use putting effort into it when it turns out to be something different. 2. You probably run this in a case-insensitive environment and it probably works there, but as a rule you should try to keep your table, field and variable names 'case-consequent'. (As a .NET practitioner this should be second nature anyway =) 3. It would help to have the table definitions, indexes and a guesstimate of the number of records involved and, if possible, the way the data interacts. (lots of this connects to just a couple of that, etc...) 4. An added bonus would be if you could tell us your *expectations* and also what other processes are on these tables and how badly our solution may affect those. (we probably can make the SELECT super-fast but at the cost of making the INSERT/UPDATE/DELETE fairly slow) (and finally, get rid of the `NOLOCK` hints, or change them into their synonym `READUNCOMMITTED` and think about whether you still like them there as much as you do right now)
rewrite query to remove inner query in tsql to optimize
[ "", "sql", "sql-server", "t-sql", "query-optimization", "" ]
I recently came across VBA UPDATE statements; I have been using `Recordset.Edit` and `Recordset.Update` to not only edit my existing data but to update it. I want to know the difference between the two approaches: `recordset.update` and an `UPDATE` SQL statement executed from VBA. I think they both do the same thing, but I can't figure out which one is more efficient and why. Example code below: ``` 'this is with an sql update statement dim someVar as String, anotherVar as String, cn As New ADODB.Connection someVar = "someVar" anotherVar = "anotherVar" stSQL = "Update tableOfRec set columna = '" & someVar & "', columnb = '" & anotherVar & "' where columnc = 20" cn.Execute stSQL ``` This is for recordset (Edit and Update): ``` dim thisVar as String, someOthVar as String, rs as Recordset thisVar = "thisVar" someOthVar = "someOtherVar" set rs = currentDb.openRecordset("select columna, columnb from tableOfRec where columnc = 20") do While not rs.EOF rs.Edit rs!columna = thisVar rs!columnb = someOthVar rs.update rs.MoveNext loop ```
Assuming `WHERE columnc = 20` selects 1000+ rows, as you mentioned in a comment, executing that `UPDATE` statement should be noticeably faster than looping through a recordset and updating its rows one at a time. The latter strategy is a RBAR (Row By Agonizing Row) approach. The first strategy, executing a single (valid) `UPDATE`, is a "set-based" approach. In general, set-based trumps RBAR with respect to performance. However your 2 examples raise other issues. My first suggestion would be to use DAO instead of ADO to execute your `UPDATE`: ``` CurrentDb.Execute stSQL, dbFailonError ``` Whichever of those strategies you choose, make sure *columnc* is indexed.
The SQL method is usually the fastest for bulk updates, but the syntax is often clumsy. The VBA method, however, has the distinct advantages that the code is cleaner, and the recordset can be used before or after the update/edit without requerying the data. This can make a huge difference if you have to do long-winded calculations between updates. Also, the recordset can be passed ByRef to supporting functions or further processing.
Recordset.Edit or Update sql vba statement fastest way to update?
[ "", "sql", "vba", "ms-access", "" ]
I've managed to get the data out and include NULL values by using a left outer join. This is my current query: ``` select s.user, a.id, a.datetime as date, a.total_time from steam_accounts s left outer join activity a on a.steam_id = s.id where s.user_id = 1 ``` This returns this: ![enter image description here](https://i.stack.imgur.com/rcV7S.png) Which is almost perfect. But now I need to filter the results with `max(a.id)` and include null values if there are no matches from the outer join. Here's what I've tried: ``` select s.id, s.user, max(a.id), a.datetime as date, a.total_time from steam_accounts s left outer join activity a on a.steam_id = s.id where s.user_id = "1" ``` Result: ![enter image description here](https://i.stack.imgur.com/BPHGK.png) All the null values disappeared. I only wanted to filter out the first two results from the previous query. This is my desired result: ![enter image description here](https://i.stack.imgur.com/y3J3l.png) Any help is much appreciated. Thanks
Alas, MySQL doesn't have `OUTER APPLY` or `LATERAL JOIN`, so it will be less efficient, than it could have been. It seems that something like this should produce what you want: ``` SELECT s.id ,s.user ,ActivityIDs.MaxActivityID ,activity.datetime as date ,activity.total_time FROM steam_accounts s LEFT JOIN ( SELECT a.steam_id ,max(a.id) AS MaxActivityID FROM activity a GROUP BY a.steam_id ) AS ActivityIDs ON ActivityIDs.steam_id = s.id LEFT JOIN activity ON activity.id = ActivityIDs.MaxActivityID WHERE s.user_id = 1 ``` For each `steam_account` we find one `activity` with max ID in the first `LEFT JOIN`. Then we fetch the rest of `activity` details using found ID in the second `LEFT JOIN`.
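A sketch of this two-LEFT-JOIN pattern using Python's bundled sqlite3, with minimal made-up data, showing that an account with no activity at all keeps its row with NULLs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE steam_accounts (id INTEGER, user TEXT, user_id INTEGER);
    CREATE TABLE activity (id INTEGER, steam_id INTEGER, total_time INTEGER);
    INSERT INTO steam_accounts VALUES (10, 'alpha', 1), (11, 'beta', 1);
    INSERT INTO activity VALUES (100, 10, 5), (101, 10, 9);
""")

# First LEFT JOIN finds the max activity id per account;
# second LEFT JOIN fetches that row's details. Account 11 stays as NULLs.
rows = conn.execute("""
    SELECT s.id, s.user, ids.max_activity_id, a.total_time
    FROM steam_accounts s
    LEFT JOIN (SELECT steam_id, MAX(id) AS max_activity_id
               FROM activity
               GROUP BY steam_id) ids ON ids.steam_id = s.id
    LEFT JOIN activity a ON a.id = ids.max_activity_id
    WHERE s.user_id = 1
    ORDER BY s.id
""").fetchall()
print(rows)  # [(10, 'alpha', 101, 9), (11, 'beta', None, None)]
```

The key point is that the aggregation happens in the derived table, so the outer LEFT JOINs never filter rows away.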
Use `max(coalesce(a.id, 0))` Any aggregation done on results with null will always return null
LEFT OUTER JOIN get max() and include NULL values
[ "", "mysql", "sql", "join", "" ]
This is the last problem I have to deal with in my application and I hope someone will help, because I'm clueless; I did my research and cannot find a proper solution. I have a 'University Administration' application. I need to make a report with a few tables included. The problem is the SQL query I have to finish. The query needs to MAKE A LIST OF THE BEST 'n' STUDENTS, and the condition for a student to be 'best' is their grade AVERAGE. I have 3 columns (students.stID & examines.grades). I need to get an average of my 'examines.grades' column, sort the table from the highest average grade to the lowest, and filter the 'n' best averages. The user enters the filter number and, as I said, the app needs to show the 'n' best averages. The problem is my SQL knowledge (not MySQL literally, but T-SQL). This is what I've done with my SQL query, but the problem lies in the "SELECT TOP", because when I press my button the app takes the average only from the TOP 'n' rows selected. ``` SELECT TOP(@topParam) student.ID, AVG(examines.grades) FROM examines INNER JOIN student ON examines.stID = student.stID WHERE (examines.grades > 1) ``` For example: ``` StudentID Grade 1 2 2 5 1 5 2 2 2 4 2 2 ``` Expected output: ``` StudentID Grade_Average 1 3.5 2 3.25 ```
Being impatient, I think this is what you are looking for. You didn't specify which SQL Server version you're using, though. ``` DECLARE @topParam INT = 3; -- Default DECLARE @student TABLE (StudentID INT); -- Just for testing purpose DECLARE @examines TABLE (StudentID INT, Grades INT); INSERT INTO @student (StudentID) VALUES (1), (2); INSERT INTO @examines (StudentID, Grades) VALUES (1, 2), (2, 5), (1, 5), (2, 2), (2, 4), (2, 2); SELECT DISTINCT TOP(@topParam) s.StudentID, AVG(CAST(e.grades AS FLOAT)) OVER (PARTITION BY s.StudentID) AS AvgGrade FROM @examines AS e INNER JOIN @student AS s ON e.StudentID = s.StudentID WHERE e.grades > 1 ORDER BY AvgGrade DESC; ``` If you'll provide some basic data, I'll adapt the query to your needs. Result: ``` StudentID AvgGrade -------------------- 1 3.500000 2 3.250000 ``` **Quick explanation:** the query computes each student's grade average with a windowed AVG and then sorts by it. Another tip: you could use the `WITH TIES` option in the `TOP` clause to get more students if multiple students tie for the 3rd position. If you'd like to make a procedure as I suggested in the comments, use this snippet: ``` CREATE PROCEDURE dbo.GetTopStudents ( @topParam INT = 3 ) AS BEGIN BEGIN TRY SELECT DISTINCT TOP(@topParam) s.StudentID, AVG(CAST(e.grades AS FLOAT)) OVER (PARTITION BY s.StudentID) AS AvgGrade FROM examines AS e INNER JOIN student AS s ON e.StudentID = s.StudentID WHERE e.grades > 1 ORDER BY AvgGrade DESC; END TRY BEGIN CATCH SELECT ERROR_NUMBER(), ERROR_MESSAGE(); END CATCH END ``` And later call it like this; it's a good way to encapsulate your logic. ``` EXEC dbo.GetTopStudents @topParam = 3; ```
You should use the `group by` clause for counting average grades (in case `examines.grades` has an integer type, you should `cast` it to the floating-point type) for each `student.ID` and `order by` clause to limit your output to only `top` n with *highest* average grades: ``` select top(@topParam) student.ID , avg(cast(examines.grades as float)) as avg_grade from examines join student on examines.stID = student.stID where (examines.grades > 1) group by student.ID order by avg_grade desc ```
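Both answers boil down to the same core: cast to a floating type, average per student, sort descending, keep the top n. Checked against the sample data from the question with Python's bundled sqlite3 (LIMIT playing the role of TOP):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE examines (StudentID INTEGER, Grade INTEGER)")
conn.executemany("INSERT INTO examines VALUES (?, ?)",
                 [(1, 2), (2, 5), (1, 5), (2, 2), (2, 4), (2, 2)])

top_n = 2
rows = conn.execute("""
    SELECT StudentID, AVG(Grade * 1.0) AS avg_grade  -- * 1.0 forces float division
    FROM examines
    WHERE Grade > 1
    GROUP BY StudentID
    ORDER BY avg_grade DESC
    LIMIT ?
""", (top_n,)).fetchall()
print(rows)  # [(1, 3.5), (2, 3.25)]
```

This reproduces the expected output from the question: student 1 averages (2+5)/2 = 3.5 and student 2 averages (5+2+4+2)/4 = 3.25.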
Query including TOP and AVG
[ "", "sql", "sql-server", "average", "" ]
I was writing a MySQL filter query which has a primary table and another table which holds multiple records against each record of the primary table (I will call this table child). I am trying to write a query which fetches records of the primary table based on its values in the child table. If there were only one condition on the child table I would be able to do it simply by joining, but I have 2 conditions which fall on the same field. ``` For ex. table 1: id name url 1 XXX http://www.yahoo.com 2 YYY http://www.google.com 3 ZZZ http://www.bing.com table 2: id masterid optionvalue 1 1 2 2 1 7 3 2 7 4 2 2 5 3 2 6 3 6 ``` My query has to return unique master records only when both of the 2 different conditions match in the second table. I wrote a query with IN... ``` select * from table1 left join table2 on table1.id=table2.masterid where table2.optionvalue IN(2,7) group by table1.id; ``` This gets me all 3 records because IN is basically checking 'OR', but in my case I should not get the 3rd master record because it has values 2,6 (there is no 7). If I write the query with 'AND' then I am not getting any records... ``` select * from table1 left join table2 on table1.id=table2.masterid where table2.optionvalue = 2 and table2.optionvalue = 7; ``` This returns no records, as the AND fails because I am checking for different values in the same column of the same row. I want to write a query which fetches master records whose child records hold both 2 and 7 in optionvalue on different rows. Any help would be much appreciated.
Indeed, as AsConfused hinted, you need two joins to TABLE2 using aliases -- `both of these are tested`: ``` -- find t1 where it has 2 and 7 in t2 select t1.* from table1 t1 join table2 ov2 on t1.id=ov2.masterid and ov2.optionValue=2 join table2 ov7 on t1.id=ov7.masterid and ov7.optionValue=7 -- find t1 where it has 2 and 7 in t2, and no others in t2 select t1.*, ovx.id from table1 t1 join table2 ov2 on t1.id=ov2.masterid and ov2.optionValue=2 join table2 ov7 on t1.id=ov7.masterid and ov7.optionValue=7 LEFT OUTER JOIN table2 ovx on t1.id=ovx.masterid and ovx.optionValue not in (2,7) WHERE ovx.id is null ```
You can try something like this (no performance guarantees, and assumes you only want exact matches): ``` select table1.* from table1 join (select masterid, group_concat(optionvalue order by optionvalue) as opt from table2 group by masterid) table2_group on table1.id=table2_group.masterid where table2_group.opt='2,7'; ``` <http://sqlfiddle.com/#!9/673094/9>
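A quick check of the double-self-join idea against the question's sample data, using Python's bundled sqlite3: only master records whose child rows contain both a 2 and a 7 survive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, name TEXT, url TEXT);
    CREATE TABLE table2 (id INTEGER, masterid INTEGER, optionvalue INTEGER);
    INSERT INTO table1 VALUES (1, 'XXX', 'http://www.yahoo.com'),
                              (2, 'YYY', 'http://www.google.com'),
                              (3, 'ZZZ', 'http://www.bing.com');
    INSERT INTO table2 VALUES (1, 1, 2), (2, 1, 7), (3, 2, 7),
                              (4, 2, 2), (5, 3, 2), (6, 3, 6);
""")

# One join per required optionvalue; a master id only survives both joins
# if it has a child row with 2 AND a child row with 7.
rows = conn.execute("""
    SELECT t1.id, t1.name
    FROM table1 t1
    JOIN table2 ov2 ON t1.id = ov2.masterid AND ov2.optionvalue = 2
    JOIN table2 ov7 ON t1.id = ov7.masterid AND ov7.optionvalue = 7
    ORDER BY t1.id
""").fetchall()
print(rows)  # [(1, 'XXX'), (2, 'YYY')] -- id 3 only has 2 and 6
```

Note that if a master had, say, two child rows with value 2, the join would multiply rows, so a DISTINCT (or GROUP BY) is a sensible addition with real data.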
Advanced Mysql Query to get master record if two conditions matches on different rows of child records
[ "", "mysql", "sql", "database", "" ]
I have a query like this: ``` SELECT RIGHT(id, 1) id_root FROM user WHERE LENGTH(id) = 3 and LEFT(id, 1) = '0' ``` How do I convert that query to Active Record in CodeIgniter? My problem is with the syntax for `RIGHT(id, 1)` and also for `LEFT(id, 1) = '0'`
You can produce your query like this ``` $this->db->from('user'); $this->db->select('RIGHT(id, 1) id_root',false); $this->db->where('LENGTH(id)',3,false); $this->db->where('LEFT(id, 1) =','0',false); $results=$this->db->get()->result(); ``` Remember, if you want to use a MySQL function in your select query that CodeIgniter's escaping would break, pass false as the 2nd parameter so that CodeIgniter does not protect/convert your fields. The same goes for where: if you use a MySQL function or anything else CodeIgniter's escaping would break, pass false as the 3rd parameter so that CodeIgniter does not convert your fields. See details in the [documentation](http://www.codeigniter.com/user_guide/database/query_builder.html#looking-for-specific-data) The simplest way to get any query result is `$this->db->query('YOUR_QUERY')` But I prefer using CI Active Record's functions.
``` $result_arr = $this->db ->select("RIGHT(id, 1) id_root", FALSE) ->from("user") ->where( array("LENGTH(id)"=> 3, "LEFT(id, 1) =" => 0) )->get() ->result_array(); ``` Or you can simply use `$this->db->query("You SQL");` ``` $query = "SELECT RIGHT(id, 1) id_root FROM user WHERE LENGTH(id) = ? and LEFT(id, 1) = ? "; $result_arr = $this->db->query($query, array(3, 0))->result_array(); ```
How to convert RIGHT LEFT functions to codeigniter active record
[ "", "mysql", "sql", "codeigniter", "activerecord", "" ]
I am trying (and failing) to correctly order my recursive CTE. My table consists of a parent-child structure where one task can relate to another on a variety of different levels. For example I could create a task (this is the parent), then create a sub-task from this and then a sub-task from that sub-task and so forth.. Below is some test data that I have included. Currently it's ordered by `Path` which orders it alphabetically. ![Task Hierarchy](https://i.stack.imgur.com/Oc5iS.png) So if I were to create a task. It would give me a TaskID for that task (Say 50) - I could then create 5 subtasks for that main task (51,52,53,54,55). I could then add sub-tasks to the 5 subtasks (51->56) (53->57) but when I want the order I would need it returning So the order I would require it * 50 + 51 - 56 + 52 + 53 - 57 + 54 + 55 Proper order of the test data ![True Order](https://i.stack.imgur.com/4FPAf.png) Here is the code that I have been using ``` DECLARE @TaskID NUMERIC(10,0) SET @TaskID = 38 ;WITH cte AS ( SELECT t.TaskID ,t.ParentID ,t.Title ,CONVERT(VARCHAR(MAX),'') AS [Nest] ,CONVERT(VARCHAR(MAX),'') AS [Path] ,t.CreatedDate FROM tasks.Tasks t WHERE t.ParentID IS NULL AND t.TaskID = @TaskID UNION ALL SELECT sub.TaskID ,sub.ParentID ,sub.Title ,cte.[Nest] + CONVERT(VARCHAR(MAX),sub.TaskID) AS [Nest] ,cte.[Path] + ',' + CONVERT(VARCHAR(MAX),sub.TaskID) AS [Path] ,sub.CreatedDate FROM tasks.Tasks sub INNER JOIN cte ON cte.TaskID = sub.ParentID ) SELECT TaskID ,ParentID ,Title ,Nest ,[Path] ,CreatedDate FROM ( SELECT cte.TaskID ,cte.ParentID ,cte.Title ,NULLIF(LEN(cte.[Path]) - LEN(REPLACE(cte.[Path], ',', '')),0) Nest ,CONVERT(VARCHAR(25),@TaskID) + cte.[Path] AS [Path] ,cte.CreatedDate FROM cte )a ORDER BY a.[Path] ``` I have a feeling it will be blindingly obvious but I'm really not sure how to proceed. I thought about more recursion, functions, splitting the string with no success. Apologies if I'm not being clear
The easiest way would be to pad the keys to a fixed length, e.g. `038,007` will be ordered before `038,012`. But the padding length would have to be safe for the largest TaskID, although you could keep your `path` trimmed for readability and create an extra padded field for sorting. A somewhat safer version would be to do the same, but create a padded path from row_numbers, where the padding size has to be big enough to support the maximum number of sub items. ``` DECLARE @TaskID NUMERIC(10,0) SET @TaskID = 38 declare @maxsubchars int = 3 --not more than 999 sub items ;with cte as ( SELECT t.TaskID ,t.ParentID ,t.Title ,0 AS [Nest] ,CONVERT(VARCHAR(MAX),t.taskid) AS [Path] ,CONVERT(VARCHAR(MAX),'') OrderPath ,t.CreatedDate FROM tasks.Tasks t WHERE t.ParentID IS NULL AND t.TaskID = @TaskID union all SELECT sub.TaskID ,sub.ParentID ,sub.Title ,cte.Nest + 1 ,cte.[Path] + ',' + CONVERT(VARCHAR(MAX),sub.TaskID) ,cte.OrderPath + ',' + right(REPLICATE('0', @maxsubchars) + CONVERT(VARCHAR,ROW_NUMBER() over (order by sub.TaskID)), @maxsubchars) ,sub.CreatedDate FROM tasks.Tasks sub INNER JOIN cte ON cte.TaskID = sub.ParentID ) select taskid, parentid, title,nullif(nest,0) Nest,Path, createddate from cte order by OrderPath ``` You could probably go fancier than a fixed sub-item length, determining the number of sub items and basing the padding on that. Or you could use numbered rows based on the number of siblings and traverse in reverse direction (just spouting some untested thoughts), but using a simple ordered path is likely enough.
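The padded-path idea ports to any engine with recursive CTEs. A sketch with Python's bundled sqlite3, using the 50/51/.../57 example from the question; `printf('%03d', ...)` does the zero-padding (three digits chosen only because the sample ids fit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (TaskID INTEGER, ParentID INTEGER)")
conn.executemany("INSERT INTO tasks VALUES (?, ?)",
                 [(50, None), (51, 50), (52, 50), (53, 50), (54, 50),
                  (55, 50), (56, 51), (57, 53)])

# Each row's OrderPath is the chain of zero-padded ancestor ids, so a plain
# text sort yields depth-first order with siblings in TaskID order.
rows = conn.execute("""
    WITH RECURSIVE cte(TaskID, OrderPath) AS (
        SELECT TaskID, printf('%03d', TaskID)
        FROM tasks WHERE ParentID IS NULL
        UNION ALL
        SELECT t.TaskID, cte.OrderPath || ',' || printf('%03d', t.TaskID)
        FROM tasks t JOIN cte ON t.ParentID = cte.TaskID
    )
    SELECT TaskID FROM cte ORDER BY OrderPath
""").fetchall()
print([r[0] for r in rows])  # [50, 51, 56, 52, 53, 57, 54, 55]
```

This is exactly the traversal order the question asks for: each sub-task appears directly under its parent.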
If the topmost CTE (as in the below query) is your table structure, then the below code could be the solution. ``` WITH CTE AS ( SELECT 7112 TASKID ,NULL PARENTID UNION ALL SELECT 7120 TASKID ,7112 PARENTID UNION ALL SELECT 7139 TASKID ,7112 PARENTID UNION ALL SELECT 7150 TASKID ,7112 PARENTID UNION ALL SELECT 23682 TASKID ,7112 PARENTID UNION ALL SELECT 7100 TASKID ,7112 PARENTID UNION ALL SELECT 23691 TASKID ,7112 PARENTID UNION ALL SELECT 23696 TASKID ,7112 PARENTID UNION ALL SELECT 23700 TASKID ,23696 PARENTID UNION ALL SELECT 23694 TASKID ,23691 PARENTID UNION ALL SELECT 23689 TASKID ,7120 PARENTID UNION ALL SELECT 7148 TASKID ,23696 PARENTID UNION ALL SELECT 7126 TASKID ,7120 PARENTID UNION ALL SELECT 7094 TASKID ,7120 PARENTID UNION ALL SELECT 7098 TASKID ,7094 PARENTID UNION ALL SELECT 23687 TASKID ,7094 PARENTID ) ,RECURSIVECTE AS ( SELECT TASKID, CONVERT(NVARCHAR(MAX),convert(nvarchar(20),TASKID)) [PATH] FROM CTE WHERE PARENTID IS NULL UNION ALL SELECT C.TASKID, CONVERT(NVARCHAR(MAX),convert(nvarchar(20),R.[PATH]) + ',' + convert(nvarchar(20),C.TASKID)) FROM RECURSIVECTE R INNER JOIN CTE C ON R.TASKID = C.PARENTID ) SELECT C.TASKID, REPLICATE(' ', (LEN([PATH]) - LEN(REPLACE([PATH],',','')) + 2) ) + '.' + CONVERT(NVARCHAR(20),C.TASKID) FROM RECURSIVECTE C ORDER BY [PATH] ``` Try running this query in Text output mode in SSMS so that you can see the difference.
T-SQL Ordering a Recursive Query - Parent/Child Structure
[ "", "sql", "t-sql", "recursion", "hierarchy", "" ]
I'm trying to amend the following SQL code into a pivot table. The original data looks like so: ``` PerilCode B C BI EQ 179166451986 27296144046 9067728654 WS 182394050346 28745459712 9148728654 SL 114374574342 12703142574 293860386 TC 182394050346 28745459712 9148728654 WF 182394050346 28745459712 9148728654 FF 182394050346 28745459712 9148728654 ST 182394050346 28745459712 9148728654 ``` The code is below: ``` SELECT PL.PerilCode, SUM(ReplacementValueA) AS 'B', SUM(ReplacementValueC) AS 'C', SUM(ReplacementValueD) AS 'BI' FROM [SE-SQLTO-0300].[AIRExposure_London].[dbo].[tLocation] L INNER JOIN [SE-SQLTO-0300].[AIRExposure_London].[dbo].[tExposureSet] ES ON L.ExposureSetSID = ES.ExposureSetSID INNER JOIN [SE-SQLTO-0300].[AIRProject].[dbo].[tExposureViewDefinition] EVD ON ES.ExposureSetSID = EVD.ExposureSetSID INNER JOIN [SE-SQLTO-0300].[AIRProject].[dbo].[tExposureView] EV ON EVD.ExposureViewSID = EV.ExposureViewSID INNER JOIN [SE-SQLTO-0300].[AIRProject].[dbo].[tProjectExposureViewXref] PEV ON EV.ExposureViewSID = EV.ExposureViewSID INNER JOIN [SE-SQLTO-0300].[AIRProject].[dbo].[tProject] P ON PEV.ProjectSID = P.ProjectSID INNER JOIN [SE-SQLTO-0300].[AIRExposure_London].[dbo].[tLocTerm] LT ON L.LocationSID = LT.LocationSID INNER JOIN [SE-SQLTO-0300].[AIRReference].[dbo].[tPerilSetXref] PSX ON LT.PerilSetCode = PSX.PerilSetCode INNER JOIN [SE-SQLTO-0300].[AIRReference].[dbo].[tPeril] PL ON PSX.PerilCode = PL.PerilCode WHERE P.ProjectName = 'Pricing' AND EV.ExposureViewName = 'CAP Maxed' GROUP BY PL.PerilCode ``` Ideally what I'm trying to get the pivot to look like is like so: ``` EQ WS SL TC WF FF ST B 179,166,451,986 182,394,050,346 114,374,574,342 182,394,050,346 182,394,050,346 182,394,050,346 182,394,050,346 C 27,296,144,046 28,745,459,712 12,703,142,574 28,745,459,712 28,745,459,712 28,745,459,712 28,745,459,712 BI 9,067,728,654 9,148,728,654 293,860,386 9,148,728,654 9,148,728,654 9,148,728,654 9,148,728,654 ```
You will first need to unpivot your data, and then pivot it again: ``` SELECT * FROM (/*your current query here*/) t UNPIVOT(v FOR col IN([B],[C],[BI])) u PIVOT (MAX(v) FOR PerilCode IN([EQ],[WS],[SL],[TC],[WF],[FF],[ST])) p ```
This can also be achieved using CROSS APPLY: ``` declare @t table (Perilcode varchar(2),B BIGINT,C BIGINT,BI BIGINT) insert into @t(Perilcode,B,C,BI)values ('EQ',179166451986,27296144046,9067728654), ('WS',182394050346,28745459712,9148728654),('SL',114374574342,12703142574,293860386), ('TC',182394050346,28745459712,9148728654),('WF',182394050346,28745459712,9148728654),('FF',182394050346,28745459712,9148728654), ('ST',182394050346,28745459712,9148728654) ;with CTE AS( select col,col1,col2 from @t CROSS APPLY (Select Perilcode As Col,B As Col1,'B' as Col2 UNION ALL SELECT Perilcode As Col, C As Col1,'C' as Col2 UNION ALL SELECT Perilcode As Col, BI As Col1,'BI' as Col2 )A(col,Col1,col2) GROUP BY col,Col1,col2) select P.col2 As Perilcode,REPLACE(CONVERT(VARCHAR,CAST([EQ]AS MONEY),1),'.00','')[EQ], REPLACE(CONVERT(VARCHAR,CAST([WS]AS MONEY),1),'.00','')[WS], REPLACE(CONVERT(VARCHAR,CAST([SL]AS MONEY),1),'.00','')[SL], REPLACE(CONVERT(VARCHAR,CAST([TC]AS MONEY),1),'.00','')[TC], REPLACE(CONVERT(VARCHAR,CAST([WF]AS MONEY),1),'.00','')[WF], REPLACE(CONVERT(VARCHAR,CAST([FF]AS MONEY),1),'.00','')[FF], REPLACE(CONVERT(VARCHAR,CAST([ST]AS MONEY),1),'.00','')[ST] from ( Select col,col1,col2 from CTE)P PIVOT(MAX(Col1) FOR COL IN([EQ],[WS],[SL],[TC],[WF],[FF],[ST]))P ```
SQL Pivot Table with SUM
[ "", "sql", "sql-server", "" ]
I have 2 tables in the same database. I want to merge them based on the common `id` column. Because the tables are too huge I am not sure if there are duplicates. How is it possible to merge these two tables into one based on the `id` and be sure that there are no duplicates? ``` SELECT * FROM table1,table2 JOIN GROUP BY id ```
I am not sure what exactly you want, but anyway, this is your code: ``` SELECT * FROM table1,table2 JOIN GROUP BY id ``` I just edited your query: ``` SELECT * FROM table1 JOIN table2 on table2.id = table1.id GROUP BY table1.id -- here you have to name the table whose column you group by; at the moment this is table1 ```
What do you mean by merging two tables? Do you want records and columns from both tables, or columns from one and records from both? Either way, you will only need to change the join clause. You could do a join on the columns you wish: ``` SELECT DISTINCT * FROM table1 tb1 JOIN table2 tb2 ON tb1.id = tb2.id ``` Now if you want columns from only table1, do a LEFT JOIN. If you want columns from only table2, do a RIGHT JOIN. If you want columns from both tables, use the query as is. DISTINCT ensures that you get only a single row if there are multiple rows with the same data (but this DISTINCT checks the values of all columns in a row, whether they are different or the same). UNION won't help if the tables have a different number of columns. If you don't know about joins, use a Cartesian product instead: ``` select distinct * from table1 tb1, table2 tb2 where tb1.id = tb2.id ``` where id is the column that is common between the tables. Here, if you want columns from only table1, do ``` select distinct tb1.* ``` Similarly, replace tb1 with tb2 in the above statement if you just want table2's columns: ``` select distinct tb2.* ``` If you want columns from both, just write '\*'. In either case, i.e. the joins and products above, if you need selective columns just use a table alias. E.g. consider: table1 has id, foo, bar as columns; table2 has id, name, roll no, age. If you want only id, foo, name from both tables in the result, do this: ``` select distinct tb1.id, tb1.foo, tb2.name from table1 tb1 join table2 tb2 on tb1.id=tb2.id ``` The same goes for the Cartesian product query. tb1 and tb2 are, BTW, called table aliases. If you want data from both tables even if they have nothing in common, just do ``` select distinct * from table1 , table2 ``` Note that this **cannot** be achieved using a join, as a join requires a common column to join 'on'.
Merge two tables to one and remove duplicates
[ "", "mysql", "sql", "sql-merge", "" ]
I have an Orders table with the Date\_ordered column. I am trying to select the average price of all of the items ordered that were purchased in the month of December. I used ``` select *, avg(price) from orders where monthname(date_ordered) = "december" ``` However it is only coming up with one result, when my table has 4 instances of the date being xxxx-12-xx Note: There are multiple years included in the data but they are irrelevant to the query I need
When you put `avg()` into the `select`, you turn the query into an aggregation query. Without a `group by`, SQL *always* returns one row. If you want the average as well as the other data, then use a `join` or subselect: ``` select o.*, oo.avgp from orders o cross join (select avg(price) as avgp from orders where month(date_ordered) = 12) oo where month(o.date_ordered) = 12; ```
`avg()` in your query is a group function. If there is no `GROUP BY` clause in your query, it causes the group function to be applied on all selected rows. So you are getting average of the four prices in that field. And the average is only one.
Mysql: Searching for data by month only yields one result not all of them
[ "", "mysql", "sql", "" ]
I've a database value that when inserted into a SQL variable, shows with question mark at the end !! can't find a reason?! ``` declare @A varchar(50) = 'R2300529‏' select @A ``` Results: `R2300529?` any explanation? i'm using `SQL server 2012`.
There is an unrecognizable character in your string: ![enter image description here](https://i.stack.imgur.com/FTTyY.png) That character is what produces the `?`. Delete the value and retype it; see my screenshot above.
I'm assuming you copy/pasted this value from somewhere. Either that, or you're making some brain teaser here. But copy/pasting the exact script you supplied reveals an additional character: 0x3F, which is a `?` based on the Hex to ASCII conversion. I'd recommend just retyping your script and not copy/pasting.
SQL varchar variable inserts question mark
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am trying to convert IP addresses from numeric (eg 4183726815 ) to the actual traditional IP display (eg 192.168.1.230) in MS SQL Server. I have found other code (below), however I then get an error. ``` CREATE FUNCTION dbo.IntegerToIPAddress (@IP AS bigint) RETURNS varchar(15) AS BEGIN DECLARE @Octet1 tinyint DECLARE @Octet2 tinyint DECLARE @Octet3 tinyint DECLARE @Octet4 tinyint DECLARE @RestOfIP bigint SET @Octet1 = @IP / 16777216 SET @RestOfIP = @IP - (@Octet1 * 16777216) SET @Octet2 = @RestOfIP / 65536 SET @RestOfIP = @RestOfIP - (@Octet2 * 65536) SET @Octet3 = @RestOfIP / 256 SET @Octet4 = @RestOfIP - (@Octet3 * 256) RETURN(CONVERT(varchar, @Octet1) + '.' + CONVERT(varchar, @Octet2) + '.' + CONVERT(varchar, @Octet3) + '.' + CONVERT(varchar, @Octet4)) END ``` And when I run ``` SELECT dbo.IntegerToIPAddress(2130806436) ``` It returns 127.1.134.164. Which is exactly what is required. However, when I run my data ip's , I get the following error: eg ``` SELECT dbo.IntegerToIPAddress2(3232235780) ``` > Arithmetic overflow error converting expression to data type int. Any ideas? Or should I try another approach from the above code? Thanks folks! :)
Change variable declaration from `Tinyint` to `Bigint` as: ``` DECLARE @Octet1 bigint DECLARE @Octet2 bigint DECLARE @Octet3 bigint DECLARE @Octet4 bigint ```
The line `SET @RestOfIP = @IP - (@Octet1 * 16777216)` treats the constant 16777216 as an int and therefore computes the multiplication result as int. You need to tell it that the result will be bigint: ``` SET @RestOfIP = @IP - (@Octet1 * CONVERT(bigint, 16777216)) ```
Convert IP's in Numeric to traditional IP display
[ "", "sql", "sql-server", "ip", "" ]
TBL\_Transaction type: ``` Credit Debit ``` TBL\_Account Type: ``` 1 Saving Account 2 Loan Account 3 NRI Account ``` TBL\_Transaction: ``` Sr. Name AccType TransType DateOfTransaction Amount 1 mano saving credit 2014/12/01 5000 2 john Loan Debit 2014/12/02 10000 ``` I want the records for account holders who have credit transactions only; they should have no debits on the dates in question. Result: Name AccountType. I tried this query, but it does not work: ``` SELECT NAME, ACCOUNTTYPENAME, ........... GROUP BY NAME, ACCOUNTTYPENAME WHERE TRANSACTIONTYPENAME LIKE 'CREDIT%' AND NOT LIKE 'DEBIT%' ```
Try this query (adding the `FROM` clause for your table, which was missing): ``` SELECT NAME, ACCOUNTTYPENAME FROM TBL_Transaction WHERE TRANSACTIONTYPENAME LIKE 'CREDIT%' OR (TRANSACTIONTYPENAME LIKE 'CREDIT%' and (TRANSACTIONTYPENAME LIKE 'DEBIT%' and DateOfTransaction ='2014/12/01')) GROUP BY NAME, ACCOUNTTYPENAME ```
Try this ``` SELECT NAME,ACCOUNTTYPENAME,........... WHERE TRANSACTIONTYPENAME LIKE 'CREDIT%' AND TRANSACTIONTYPENAME NOT LIKE 'DEBIT%' GROUP BY NAME,ACCOUNTTYPENAME ```
How to use like and NOT like for same field in SQL server
[ "", "sql", "sql-server", "" ]
I'm currently working on a report that shows me all postcodes covered by our sales team. Each team covers over 100 postcodes. What I would like to do is create a report that brings back the clients within the postcode. Currently, my code looks like this. ``` SELECT * FROM tbl_ClientFile WHERE CLNTPOST1 LIKE ('B79%') OR CLNTPOST1 LIKE ('BB1%') OR CLNTPOST1 LIKE ('BB10%') OR CLNTPOST1 LIKE ('BB11%') OR CLNTPOST1 LIKE ('BB12%') OR CLNTPOST1 LIKE ('BB18%') OR CLNTPOST1 LIKE ('BB2%') OR CLNTPOST1 LIKE ('BB3%') OR CLNTPOST1 LIKE ('BB4%') OR CLNTPOST1 LIKE ('BB5%') OR CLNTPOST1 LIKE ('BB6%') OR CLNTPOST1 LIKE ('BB8%') OR CLNTPOST1 LIKE ('BB9%') OR CLNTPOST1 LIKE ('BB94%') OR CLNTPOST1 LIKE ('BD1%') OR CLNTPOST1 LIKE ('BD10%') OR CLNTPOST1 LIKE ('BD11%') OR CLNTPOST1 LIKE ('BD12%') OR CLNTPOST1 LIKE ('BD13%') OR CLNTPOST1 LIKE ('BD14%') OR CLNTPOST1 LIKE ('BD15%') OR CLNTPOST1 LIKE ('BD16%') OR CLNTPOST1 LIKE ('BD17%') OR CLNTPOST1 LIKE ('BD18%') OR CLNTPOST1 LIKE ('BD19%') OR CLNTPOST1 LIKE ('BD2%') OR CLNTPOST1 LIKE ('BD20%') OR CLNTPOST1 LIKE ('BD21%') OR CLNTPOST1 LIKE ('BD22%') OR CLNTPOST1 LIKE ('BD3%') OR CLNTPOST1 LIKE ('BD4%') OR CLNTPOST1 LIKE ('BD5%') OR CLNTPOST1 LIKE ('BD6%') ``` What I was hoping for is that there is a faster and easier way of doing this. Any suggestions would be greatly appreciated. Is there a way to create a variable for each sales Team like `@SalesTeam1 = SELECT * FROM tbl_ClientFile WHERE POSTCODE1 like '' or like ''` Just fishing for ideas really. Cheers
``` WITH CTE AS ( SELECT VALUE FROM ( VALUES ('B79'), ('BB1'), ('BB10'), ('BB11'), ('BB12'), ('BB18'), ('BB2'), ('BB3'), ('BB4'), ('BB5'), ('BB6'), ('BB8'), ('BB9'), ('BB94'), ('BD1'), ('BD10'), ('BD11'), ('BD12'), ('BD13'), ('BD14'), ('BD15'), ('BD16'), ('BD17'), ('BD18'), ('BD19'), ('BD2'), ('BD20'), ('BD21'), ('BD22'), ('BD3'), ('BD4'), ('BD5'), ('BD6') ) V(VALUE) ) SELECT * FROM tbl_ClientFile T WHERE EXISTS ( SELECT TOP 1 1 FROM CTE WHERE T.CLNTPOST1 LIKE CTE.VALUE + '%') ```
One of possible solutions. Create a table `Prefix(v varchar(4))` where you insert those values. Then a solution would be: ``` SELECT * FROM tbl_ClientFile cf JOIN Prefix p on cf.CLNTPOST1 LIKE p.v + '%' ``` To exclude duplicates if some prefix includes some another prefix like `BB1`, `BB10`, `BB15`...: ``` SELECT DISTINCT cf.* FROM tbl_ClientFile cf JOIN Prefix p on cf.CLNTPOST1 LIKE p.v + '%' ```
SQL Multiple LIKE Statements
[ "", "sql", "sql-server", "t-sql", "variables", "sql-like", "" ]
I am looking for a "better" way to perform a query in which I want to show a single player who he has played previously and the associated win-loss record for each such opponent. Here are the tables involved stripped down to essentials: ``` create table player (player_id int, username text); create table match (winner_id int, loser_id int); insert into player values (1, 'john'), (2, 'mary'), (3, 'bob'), (4, 'alice'); insert into match values (1, 2), (1, 2), (1, 3), (1, 4), (1, 4), (1, 4) , (2, 1), (4, 1), (4, 1); ``` Thus, john has a record of 2 wins and 1 loss vs mary; 1 win and 0 losses vs bob; and 3 wins and 2 losses vs alice. ``` create index idx_winners on match(winner_id); create index idx_winners on match(loser_id); ``` I am using Postgres 9.4. Something in the back of my head tells me to consider `LATERAL` somehow but I'm having a hard time understanding the "shape" of such. The following is the query I am using currently but something "feels off". Please help me learn and improve this. ``` select p.username as opponent, coalesce(r.won, 0) as won, coalesce(r.lost, 0) as lost from ( select m.winner_id, m.loser_id, count(m.*) as won, ( select t.lost from ( select winner_id, loser_id, count(*) as lost from match where loser_id = m.winner_id and winner_id = m.loser_id group by winner_id, loser_id ) t ) from match m where m.winner_id = 1 -- this would be a parameter group by m.winner_id, m.loser_id ) r join player p on p.player_id = r.loser_id; ``` This works as expected. Just looking to learn some tricks or better yet proper techniques to do the same. ``` opponent won lost -------- --- ---- alice 3 2 bob 1 0 mary 2 1 ```
### Query The query is not as simple as it looks at first. The shortest query string does not necessarily yield best performance. This should be **as fast as it gets**, being as short as possible for that: ``` SELECT p.username, COALESCE(w.ct, 0) AS won, COALESCE(l.ct, 0) AS lost FROM ( SELECT loser_id AS player_id, count(*) AS ct FROM match WHERE winner_id = 1 -- your player_id here GROUP BY 1 -- positional reference (not your player_id) ) w FULL JOIN ( SELECT winner_id AS player_id, count(*) AS ct FROM match WHERE loser_id = 1 -- your player_id here GROUP BY 1 ) l USING (player_id) JOIN player p USING (player_id) ORDER BY 1; ``` Result exactly as requested: ``` username | won | lost ---------+-----+----- alice | 3 | 2 bob | 1 | 0 mary | 2 | 1 ``` [**SQL Fiddle**](http://sqlfiddle.com/#!15/fbc6d/1) - with more revealing test data! The key feature is the [**`FULL [OUTER] JOIN`**](https://stackoverflow.com/questions/28034827/what-type-of-join-to-use/28035613#28035613) between the two subqueries for losses and wins. This produces a table of all players our candidate has played against. The `USING` clause in the join condition conveniently merges the two `player_id` columns into *one*. After that, a single `JOIN` to `player` to get the name, and [**`COALESCE`**](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL) to replace NULL with 0. Voilà. ### Index Would be even faster with two multicolumn **indexes**: ``` CREATE INDEX idx_winner on match (winner_id, loser_id); CREATE INDEX idx_loser on match (loser_id, winner_id); ``` *Only* if you get [index-only scans](https://wiki.postgresql.org/wiki/Index-only_scans) out of this. Then Postgres does not even visit the `match` table *at all* and you get super-fast results. With two `integer` columns you happen to hit a *local optimum*: these indexes have just the same size as the simple ones you had.
Details: * [Is a composite index also good for queries on the first field?](https://dba.stackexchange.com/a/27493/3684) ### Shorter, but slow You could run correlated subqueries like [@Giorgi suggested](https://stackoverflow.com/a/30985052/939860), just working *correctly*: ``` SELECT * FROM ( SELECT username , (SELECT count(*) FROM match WHERE loser_id = p.player_id AND winner_id = 1) AS won , (SELECT count(*) FROM match WHERE winner_id = p.player_id AND loser_id = 1) AS lost FROM player p WHERE player_id <> 1 ) sub WHERE (won > 0 OR lost > 0) ORDER BY username; ``` Works fine for *small* tables, but doesn't scale. This needs a sequential scan on `player` and two index scans on `match` per existing player. Compare performance with `EXPLAIN ANALYZE`.
Solution with correlated subquery: ``` SELECT *, (SELECT COUNT(*) FROM match WHERE loser_id = p.player_id), (SELECT COUNT(*) FROM match WHERE winner_id = p.player_id) FROM dbo.player p WHERE player_id <> 1 ``` Solution with `UNION` and conditional aggregation: ``` SELECT t.loser_id , SUM(CASE WHEN result = 1 THEN 1 ELSE 0 END) , SUM(CASE WHEN result = -1 THEN 1 ELSE 0 END) FROM ( SELECT * , 1 AS result FROM match WHERE winner_id = 1 UNION ALL SELECT loser_id , winner_id , -1 AS result FROM match WHERE loser_id = 1 ) t GROUP BY t.loser_id ```
Add up conditional counts on multiple columns of the same table
[ "", "sql", "postgresql", "join", "aggregate-functions", "" ]
I want to find all the dealers who haven't had an order in 2015 yet. I know this query doesn't work but I thought it might be helpful to understand what I want to do. In this example I want to get just "Bob" as a result. He is the only dealer in this example to not have an order in 2015 yet. ``` SELECT d.`name` FROM z_dealer d LEFT JOIN z_order o ON (d.promo_code = o.promo_code) WHERE o.promo_code IS NULL AND o.date_ordered > '2015-01-01 00:00:00' ``` Here is the table data... ``` mysql> Select * from z_order; +----+-------+------------+---------------------+ | id | total | promo_code | date_ordered | +----+-------+------------+---------------------+ | 1 | 10 | holiday | 2014-06-22 09:06:50 | | 2 | 20 | special | 2015-06-22 09:07:04 | | 3 | 15 | holiday | 2015-03-01 09:07:23 | | 4 | 45 | special | 2014-09-03 09:07:33 | | 5 | 16 | laborday | 2014-06-22 09:09:01 | +----+-------+------------+---------------------+ mysql> select * from z_dealer; +----+------+------------+ | id | name | promo_code | +----+------+------------+ | 1 | John | holiday | | 2 | Suzy | special | | 3 | Bob | laborday | +----+------+------------+ ```
``` select d.`name` from z_dealer d where (select count(*) from z_order o WHERE o.promo_code = d.promo_code AND o.date_ordered > '2015-01-01') = 0 ```
``` SELECT d.`name` FROM z_dealer d LEFT JOIN z_order o ON (d.promo_code = o.promo_code) WHERE o.promo_code IS NULL AND o.date_ordered > 2015-01-01 00:00:00 ``` A date value between quotes (' ') is read as a string value, not a DateTime, so the code above works. And if you want your result "Bob", remove the WHERE clause from the query, as its predicate may alter your results. P.S. You should not use hyphens in the date value, as that is a culture-specific format. Have a look at [This](https://technet.microsoft.com/en-us/library/ms191307.aspx) for good practices.
How do I find records in one table that do exist in another but based on a date?
[ "", "mysql", "sql", "" ]
**mytable.timecreated** is a BIGINT column on a SQL Server 2008 R2 database and it contains a unix timestamp. This query: ``` SELECT DATEADD(s,mdl_user_enrolments.timestart,'19700101 02:00') as TheDate FROM mytable ``` gives me the date in this format: ``` 2015-05-30 20:30:05.000 ``` How do I retrieve just the Month and Year from that timestamp, like this: ``` May 2015 ``` The following SQL seems to do the job, but is there a simpler way to do it? ``` SELECT CAST( CASE WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 1 Then 'January' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 2 Then 'February' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 3 Then 'March' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 4 Then 'April' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 5 Then 'May' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 6 Then 'June' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 7 Then 'July' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 8 Then 'August' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 9 Then 'September' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 10 Then 'October' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 11 Then 'November' WHEN month(dateadd(s,timecreated,'19700101 02:00')) = 12 Then 'December' END AS NVARCHAR(10)) + ' ' + CAST(YEAR(dateadd(s,timecreated,'19700101 02:00')) AS NVARCHAR(5)) AS MyDate FROM mytable ```
How about using `datename()`? ``` select (datename(month, dateadd(second, timecreated, '19700101 02:00')) + ' ' + datename(year, dateadd(second, timecreated, '19700101 02:00')) ) ``` I'm leaving `02:00` in this because that is how you phrased your question. The correct conversion, though, would use `00:00`, unless you are trying to correct for a time zone. Also note that the values returned by `datename()` depend on your internationalization settings, so they are not necessarily in English.
There are some built in `CONVERT` functions you can use. I don't know if there is one for the specific format that you want, but 106 is close enough that can be combined with the `RIGHT` function to get what you want, like this: ``` SELECT RIGHT(CONVERT(VARCHAR(11),mdl_user_enrolments.timestart,106), 8) as MyDate FROM MyTable ```
Extract Month-Year from a unix timestamp
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I am writing a query to get all players for all teams. Instead of looping in the application, I decided to get the players of all teams in a single query using `array_agg()`. I have written the query as follows: ``` SELECT team_id, array_agg(team_name) AS teamname, array_agg(player_id||'##'||player_name) AS playerdetails FROM team INNER JOIN players ON team_id = player_team GROUP BY team_id ``` This query gives me the result as below, in the result set the `teamname` is being repeated (exactly to the no. of players) ``` team_id team_name playerdetails 1 {Australia,Australia,Australia,Australia} {"5##Glenn Donald McGrath","6##Shane Warne","2##Steve Waugh","1##Adam Gilchrist"} 2 {India,India,India,India} {"8##Kapil Dev","11##Saurav Ganguly","3##Rahul Dravid","9##Sachin Tendulkar"} 3 {"South Africa","South Africa","South Africa","South Africa"} {"12##Gary Kristen","4##Shaun Pollock","7##Jacques Kallis","10##Alan Donald"} ``` Is there any way to return the result like this ``` team_id team_name playerdetails 1 Australia {"5##Glenn Donald McGrath","6##Shane Warne","2##Steve Waugh","1##Adam Gilchrist"} ``` I have achieved it using a subquery, but want to know if its possible to write it without the subquery ``` SELECT team_id, teamname[1], playerdetails FROM ( SELECT team_id, array_agg(team_name) AS teamname, array_agg(player_id||'##'||player_name) AS playerdetails FROM team INNER JOIN players ON team_id = player_team GROUP BY team_id) AS tempresult ``` The [sqfiddle](http://sqlfiddle.com/#!15/a885d) is here. 
And I am using Postgresql 8.4 **[EDIT]** I was actually thinking of hack to the `GROUP BY` limitation `column "team.team_status" must appear in the GROUP BY clause or be used in an aggregate function` when try to retrieve the a column which was not been specified in group by ``` SELECT team_id, array_agg(team_name) AS teamname, array_agg(player_id||'##'||player_name) AS playerdetails, team_status -- could be replaced by something like array_agg(team_status)[0] or customfunction(team_status) FROM team INNER JOIN players ON team_id = player_team GROUP BY team_id ```
It was actually a mistake on my part... the answer to my first question lies in that query itself. I just have to add the enclosing parentheses in `(array_agg(team_name))[1]`; earlier I tried it without the brackets. ``` SELECT team_id, (array_agg(team_name))[1] AS teamname, array_agg(player_id||'##'||player_name) AS playerdetails FROM team INNER JOIN players ON team_id = player_team GROUP BY team_id ```
Very simply, do not aggregate the `team_name` but `GROUP BY` it: ``` SELECT team_id, team_name, array_agg(player_id||'##'||player_name) AS playerdetails FROM team JOIN players ON team_id = player_team GROUP BY team_id, team_name; ```
Return first element in array_agg()
[ "", "sql", "postgresql", "" ]
I am trying to reference the **a.pointer** inside a subquery but it looks like mysql doesn't allow to reference fields outside a subquery. ``` UPDATE order_items AS a SET a.id_parent = ( SELECT id FROM(( select id from order_items as b where (b.pointer = a.pointer) and (b.id_parent = 0)) )x) WHERE a.id_parent > 9999 ``` What is the best way to access the a.pointer inside the subquery?
Looks like you are trying to do a self join here; something like ``` UPDATE order_items a JOIN order_items b ON b.pointer = a.pointer AND b.id_parent = 0 SET a.id_parent = b.id WHERE a.id_parent > 9999; ```
Try using a join for this (in MySQL the `JOIN` must come before `SET`, and the subquery needs to expose `pointer` so it can be joined on): ``` UPDATE order_items as a JOIN (select id, pointer from order_items where id_parent = 0) b ON b.pointer = a.pointer SET a.id_parent = b.id WHERE a.id_parent > 9999 ```
Reference field outside the subgroup (Unknown column in where clause)
[ "", "mysql", "sql", "database", "subquery", "" ]
Here's my current situation: I'm presented with a long list of data, containing names and values, with a corresponding value ID. The number of IDs can be varying for data presented, but is constant for the whole set. (This set will have exactly 2 Values per name). The thing is, the names repeat, but the IDs do not. My question then becomes, How can I convert a table that looks like this: ``` Name | ID | Value Sam | 1 | 15 Sam | 2 | 6 Bob | 1 | 9 Bob | 2 | 11 ``` Into something more like this: ``` Name | Value1 | Value2 Sam | 15 | 6 Bob | 9 | 11 ``` Note: I'm finding a really hard time figuring out a title for this question.
You can use PIVOT for this. Take a look at this code snippet: ``` SELECT pvt.Name, pvt.[1] as Value1, pvt.[2] as Value2 FROM yourTable PIVOT ( MAX(Value) FOR id IN([1],[2]) ) as pvt ``` You'll just need to add your additional id's to the `IN()` and the `SELECT`. If you need a dynamic pivot, you can use this one over here: ``` DECLARE @sql nvarchar(max), @columnlist nvarchar(max) SELECT @columnlist = COALESCE(@columnlist + N',['+CONVERT(nvarchar(max),cols.id)+']', N'['+CONVERT(nvarchar(max),cols.id)+']' ) FROM (SELECT DISTINCT id FROM yourTable) as cols -- this is your part SET @sql = N' SELECT pvt.* FROM yourTable PIVOT ( MAX(Value) FOR id IN('+@columnlist+') ) as pvt' EXEC(@sql) ```
If you can be sure that the set will only ever have two values and these will always have an `ID` of either 1 or 2, you could do something like the following: ``` CREATE TABLE #Table (Name VARCHAR(5), ID INT, Value INT) INSERT INTO #Table VALUES ('Sam', 1, 15), ('Sam', 2, 6), ('Bob', 1, 9), ('Bob', 2, 11) SELECT t.Name, t.Value AS Value1, t2.Value AS Value2 FROM #Table t INNER JOIN #Table t2 ON t2.Name = t.Name AND t2.ID = 2 WHERE t.ID = 1 DROP TABLE #Table ``` Note that the temporary table is just here to illustrate the result. The important thing is the `SELECT` statement. This joins the table to itself to get the value for Value2.
Microsoft SQL Server: Adding New Column To Row Instead Of Repeated Values
[ "", "sql", "sql-server", "" ]
I have this mysql table structure: ``` ------------------------------------ | item_id | meta_key | meta_value | ------------------------------------ 159 category Bungalow 159 location Lagos 159 price 45000 160 category Bungalow 160 location Abuja 160 price 53500 ... 350 category Bungalow 350 location Lagos 350 price 32000 ``` What I'd like to do is select several rows matching two or more criteria based on the `meta_key` column. For example, say I wanted to select the `item_id` for every 'Bungalow' located in 'Lagos'. How would I go about doing that? Here's my attempt, which is not working: ``` SELECT `item_id` FROM `item_meta` WHERE `meta_key` = 'category' AND `meta_value` = 'Bungalow' AND `meta_key` = 'location' AND `meta_value` = 'Lagos' ``` Any help will be appreciated.
If you are looking to find the records matching both criteria, here is a way of doing it: ``` select `item_id` FROM `item_meta` where ( `meta_key` = 'category' and `meta_value` = 'Bungalow' ) or ( `meta_key` = 'location' AND `meta_value` = 'Lagos' ) group by `item_id` having count(*)=2 ```
**Try it as below, with brackets; otherwise you will not get the expected result:** ``` SELECT `item_id` FROM `item_meta` WHERE (`meta_key` = 'category' AND `meta_value` = 'Bungalow') OR (`meta_key` = 'location' AND `meta_value` = 'Lagos') ```
MySQL: Select several rows based on several keys on a given column
[ "", "mysql", "sql", "database", "" ]
I have two tables, one an abbreviated calendar: ``` create table dbo.calendar ( [date] datetime ) insert into dbo.calendar values ('20150101 00:00:00 AM') -- 1/1/15 insert into dbo.calendar values ('20150102 00:00:00 AM') -- 1/2/15 insert into dbo.calendar values ('20150103 00:00:00 AM') -- 1/3/15 insert into dbo.calendar values ('20150104 00:00:00 AM') -- 1/4/15 ``` and the other a time clock ``` create table dbo.timeclock ( id integer, punchtime datetime, punchtype varchar(25) ) --employee 1 insert into dbo.timeclock values (1,'20150102 08:00:00 AM','in') -- 8am 1/2/15 insert into dbo.timeclock values (1,'20150102 05:00:00 PM','out') -- 5pm 1/2/15 insert into dbo.timeclock values (1,'20150103 08:00:00 AM','in') -- 8am 1/3/15 insert into dbo.timeclock values (1,'20150103 05:00:00 PM','out') -- 5pm 1/3/15 --employee 2 insert into dbo.timeclock values (2,'20150103 08:00:00 AM','in') -- 8am 1/3/15 insert into dbo.timeclock values (2,'20150103 05:00:00 PM','out') -- 5pm 1/3/15 ``` What I'm wanting is to create in dbo.timeclock a 'no time entered' record for each day that each employee has no records for each date in the calendar. The end table would look like this: ``` id punchtime type -------------------------------------------- 1 20150101 00:00:00 AM no time entered 1 20150102 08:00:00 AM in 1 20150102 05:00:00 PM out 1 20150103 08:00:00 AM in 1 20150103 05:00:00 PM out 1 20150104 00:00:00 AM no time entered 2 20150101 00:00:00 AM no time entered 2 20150102 00:00:00 AM no time entered 2 20150103 08:00:00 AM in 2 20150103 05:00:00 PM out 2 20150104 00:00:00 AM no time entered ``` I was able to do this using cursors, but it runs far too slow and I know it's not the 'correct' way of doing it. Thanks for looking!
1. Create a table that contains all the possible dates in the range, @alldates 2. Then insert your missing records with something like this: Query ``` INSERT INTO dbo.timeclock SELECT userDateCross.id, userDateCross.[DATE], 'no time entered' FROM ( SELECT [DATE], id FROM @alldates d CROSS JOIN ( SELECT DISTINCT id FROM dbo.timeclock ) ids ) userDateCross LEFT JOIN dbo.timeclock ON timeclock.id = userDateCross.id AND userDateCross.[DATE] = CAST(timeclock.punchtime AS DATE) WHERE timeclock.punchtype IS NULL ```
I would create a trigger that auto-populates your `dbo.timeclock` table when you insert into `dbo.calendar` with the default value `no time entered`. Then when you update your `dbo.timeclock` table use another trigger to remove this value after the addition of new values to the table. This will automatically handle what you are describing as you input new values to your tables without having to think about it again. Trigger for insert into `dbo.calendar` ``` CREATE TRIGGER timecard AFTER INSERT ON dbo.calendar FOR EACH ROW BEGIN INSERT INTO dbo.timecard SELECT emp.id, NEW.date, "no time entered" FROM dbo.employee AS emp END; ``` Trigger for insert into `dbo.timecard` ``` CREATE TRIGGER newTime AFTER INSERT ON dbo.timecard FOR EACH ROW BEGIN DELETE FROM dbo.timecard WHERE punchtime=NEW.punchtime AND id=NEW.id AND punchtype="no time entered" END; ```
How can I create missing date records for each employee based off a limited calendar?
[ "", "mysql", "sql", "t-sql", "" ]
I need to find entries in my SQL table that only ever appear with a certain value. For example: ``` DeviceID Transmission -------- ------------ 000329 Inventory 000980 Inventory 004406 Starting 000980 Stopping 000329 Inventory 004406 Inventory ``` Now I need to find all DeviceIDs that only have Inventory Transmissions and never Starting or Stopping. In this case 000329.
You can select all `Transmission = 'Inventory'` ids and filter out those that exist with `Transmission in ('Starting', 'Stopping')`: ``` select distinct(DeviceID) from YourTable WHERE Transmission = 'Inventory' and DeviceID not in ( select distinct(DeviceID) from YourTable WHERE Transmission in('Starting', 'Stopping') ); ``` SQL Fiddle: <http://sqlfiddle.com/#!9/81896/12>
You can use `NOT EXISTS` to exclude DeviceID's that also have Starting or Stopping. ``` select DeviceID from tablename t1 where not exists (select 1 from tablename t2 where t1.DeviceID = t2.DeviceID and t2.Transmission in ('Starting','Stopping')) and t1.Transmission = 'Inventory' ```
How to get entries from SQL that only appear in certain value?
[ "", "mysql", "sql", "" ]
I want to group `C,C++,C# and VB,VB.NET` each as a single category with a summarized result. Do I need to substitute the team name with a unified value `(e.g. replace C,C++,C# with "C related")` and then do the grouping? Which one is better in terms of performance? Do I need to use stored procedures? Can someone please advise me how to do it? Thanks! PS: I am using SQL server **Original table** ``` INSERT INTO Table1 ([Team], [Room]) VALUES ('Java', 'Room A'), ('Java', 'Room B'), ('Java', 'Room C'), ('Java', 'Room A'), ('Java', 'Room C'), ('C++', 'Room A'), ('C++', 'Room B'), ('C++', 'Room C'), ('C', 'Room A'), ('C', 'Room B'), ('C', 'Room C'), ('VB', 'Room B'), ('VB', 'Room C'), ('C#', 'Room A'), ('C#', 'Room B'), ('C#', 'Room C'), ('VB.Net', 'Room A'), ('VB.Net', 'Room B'), ('VB.Net', 'Room C') ``` **Expected output** ``` ('Team', 'Room', 'Count'), ('Java', 'Room A', 2), ('Java', 'Room B', 1), ('Java', 'Room C', 2), ('C related', 'Room A', 3), ('C related', 'Room B', 3), ('C related', 'Room C', 3), ('VB related', 'Room A', 1), ('VB related', 'Room B', 2), ('VB related', 'Room C', 2) ```
I would have another table that maps the language to a language family. That is, Java would map to a value of, say, 1, the C-related to 2, and the VB-related to 3. Then, you can join the two tables and group by the language family. This should perform better than grouping by the strings. If you are interested in improving the performance even more, you could denormalize and add a column to your table for the language family.
You can use a `group by` query with a `case` statement: ``` select (case when team in ('C', 'C++', 'C#') then 'C related' when team in ('VB', 'VB.NET') then 'VB related' else team end) as team, room, count(*) from table1 group by (case when team in ('C', 'C++', 'C#') then 'C related' when team in ('VB', 'VB.NET') then 'VB related' else team end), room; ```
How to group two categories as a one group in SQL
[ "", "sql", "sql-server", "" ]
I have a **query** that gives: ``` itemid deadlineneeded delievrydate quantity 200 15/07/15 14/07/15 5 200 15/07/15 14/07/15 10 200 15/07/15 13/07/15 25 201 15/07/15 14/07/15 30 200 14/07/15 10/07/15 3 201 15/07/15 15/07/15 100 ``` It gives the information from multiple tables. Basically it shows when items arrive at the warehouse (delievrydate) and how many. The deadlineneeded column is when I'm going to use the items. My goal is to find out the total quantity of each itemid that arrives at least 1 day before deadlineneeded. For example, with the data above I want to get: ``` itemid deadlineneeded quantity 200 15/07/15 43 (5+10+25+3) 200 14/07/15 3 //5+10+25 not included because deliverydate 201 15/07/15 30 //100 not included because deliverydate ``` How do I get that?
``` SELECT itemid, deadlineneeded, sum(quantity) AS total_quantity FROM <your table> WHERE (deadlineneeded - delievrydate)::int >= 1 GROUP BY 1, 2 ORDER BY 1, 2; ``` This uses a "delievrydate" (looks like a typo to me) that is *at least* 1 day before the "deadlineneeded" date, as your sample data is suggesting.
``` select distinct date_trunc('day',deadlineneeded),sum from (select *,sum(quantity) over (partition by deadlineneeded-delievrydate) from tablename) t where deadlineneeded-delievrydate = '1 day'; ```
postgresql complex group by in query
[ "", "sql", "postgresql", "" ]
Consider the following scenario: ``` Area Code Count BP 90-99 10 CL 78-87 10 ``` I need ten records generated for both of these. ``` Area Code BP 90 BP 91 BP 92 BP 93 BP 94 BP 95 BP 96 BP 97 BP 98 and so on. ``` In oracle, this can be easily done by using connect by level. How to do this using MySQL. Please note that I do have the number of iterations to be made in the third column called count.
You need to have a table of numbers. This can be generated as a derived table on-the-fly: ``` select area, (substring_index(code, '-', 1) + n.n - 1) as code from (select 1 as n union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 union all select 10 ) n join scenario s on n.n <= s.count; ``` You need to be sure that the list of numbers is big enough for the maximum count in the tables. It is handy if you have a table of numbers available. Normally, an `auto_increment` column can help generate such a table.
Use number table and try this approach ``` Create table numbers(number int) insert into numbers(number) select 1 union all select 2 union all . . . select 100 select t1.area, left(code,locate('-',code)-1)*1 from table as t1 inner join numbers as t2 on 1=1 where left(code,locate('-',code)-1)*1 +number <=substring(code,locate('-',code)+1,length(code))*1 ```
How to generate multiple records for every record in a table using a column value in MySQL?
[ "", "mysql", "sql", "database", "loops", "iteration", "" ]
I am trying to write a query that gets a list of companies that were created this month and have at least one order. I have two tables, dbo.Companies and dbo.Orders, linked by CompanyID. I have written a partial script but cannot figure out how to complete it. ``` SELECT COUNT(*) FROM dbo.Companies WHERE Month(CreatedDateTime) = MONTH(getDate()) ``` That gives me all the companies that were created this month, but I am unsure how to add the final part: 'that have at least one order'.
The query is relatively straightforward: ``` SELECT COUNT(*) FROM dbo.Companies WHERE Month(CreatedDateTime) = MONTH(getDate()) AND YEAR(CreatedDateTime) = YEAR(getDate()) AND EXISTS (SELECT 1 FROM dbo.Orders WHERE Orders.CompanyID = Companies.CompanyID) ``` You can also use an Inner Join on Companies and Orders. Additionally, you need to check both month and year for companies created in the current month. If you only check month, then a Company created in the current month of ANY year will match `MONTH(getDate())`
`EXISTS` solution: ``` SELECT * FROM dbo.Companies c where Month(CreatedDateTime) = MONTH(getDate()) and exists (select 1 from dbo.Orders o where o.CompanyID = c.CompanyID) ``` `JOIN` version: ``` select distinct c.* FROM dbo.Companies c JOIN dbo.Orders o ON o.CompanyID = c.CompanyID WHERE Month(CreatedDateTime) = MONTH(getDate()) ```
SQL Select All Companies that have been created this month with at least one order
[ "", "sql", "sql-server", "select", "" ]
Please look at the below tables and result. Check my query and help me to get the result as below. ``` Table : incident ---------------- incident_id usr_id item_id 10059926 191 61006 Table: act_reg -------------- act_reg_id act_type_id incident_id usr_id act_type_sc 454244 1 10059926 191 ASSIGN 471938 115 10059926 191 TRAVEL TIME 473379 40 10059926 191 FOLLOW UP 477652 115 10059926 191 TRAVEL TIME 477653 107 10059926 191 SITE ARRIVAL 489091 5000054 10059926 191 ADD_ATTCHMNTS Result(Need to get) ------------------- incident_id usr_id item_id Attachment 10059926 191 61006 Yes ``` My query: ``` SELECT incident.incident_id,incident.usr_id,incident.item_id,Attachemt FROM incident RIGHT JOIN act_reg ON incident.incident_id=act_reg.incident_id ```
If I understand your requirement correctly, you want only rows from table `incident` plus an additional column indicating whether there are attachments, which is the case if there is at least one record in table `act_reg` with the same `incident_id` + `usr_id` and `act_type_sc=ADD_ATTCHMNTS`. I would use `CASE WHEN EXISTS`: ``` SELECT incident_id, usr_id, item_id, Attachment = CASE WHEN EXISTS ( SELECT 1 FROM act_reg a WHERE i.incident_id = a.incident_id AND i.usr_id = a.usr_id AND a.act_type_sc = 'ADD_ATTCHMNTS' ) THEN 'Yes' ELSE 'No' END FROM incident i ``` `Demo`
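The same pattern in portable syntax (the T-SQL `Attachment = CASE ...` alias form becomes `CASE ... END AS Attachment`, which works in most dialects); a sqlite3 sketch with two of the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE incident (incident_id INTEGER, usr_id INTEGER, item_id INTEGER);
INSERT INTO incident VALUES (10059926, 191, 61006);
CREATE TABLE act_reg (act_reg_id INTEGER, act_type_id INTEGER,
                      incident_id INTEGER, usr_id INTEGER, act_type_sc TEXT);
INSERT INTO act_reg VALUES
    (454244, 1,       10059926, 191, 'ASSIGN'),
    (489091, 5000054, 10059926, 191, 'ADD_ATTCHMNTS');
""")

rows = conn.execute("""
    SELECT i.incident_id, i.usr_id, i.item_id,
           CASE WHEN EXISTS (SELECT 1 FROM act_reg a
                             WHERE a.incident_id = i.incident_id
                               AND a.usr_id = i.usr_id
                               AND a.act_type_sc = 'ADD_ATTCHMNTS')
                THEN 'Yes' ELSE 'No' END AS Attachment
    FROM incident i
""").fetchall()
print(rows)
```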
Something like this should help ``` SELECT incident.incident_id,incident.usr_id,incident.item_id, 'Yes' Attachemt FROM incident where exists (select * from act_reg where incident.incident_id = act_reg.incident_id and act_reg.act_type_sc = 'ADD_ATTCHMNTS' ) ```
SQL Join Query Selection
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I know we can simply create a new table and copy the old table by doing ``` select * into tbl2 from tbl1 ``` I would like to check if table tbl2 exists; if it does, copy all the rows, and if it doesn't, create a new one without having to specify the column names in tbl2, since I'm copying all the columns from tbl1 into tbl2. Then I would like to drop the old table (tbl1). ``` if not exists(select * from tbl1 where id = 1) create table tbl2() drop tbl1 go ```
``` IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'tbl2')) BEGIN -- tbl2 exists, so just copy rows INSERT INTO tbl2 SELECT * FROM tbl1; END ELSE BEGIN -- tbl2 doesn't exist, so create new table tbl2 and copy rows SELECT * INTO tbl2 FROM tbl1; DROP TABLE tbl1; END ``` This won't copy indexes, keys, etc. though - just the column names on create (in the else branch).
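To experiment with the same branching logic outside SQL Server, here is a hypothetical SQLite translation driven from Python (`SELECT ... INTO` becomes `CREATE TABLE ... AS SELECT`, and `INFORMATION_SCHEMA` becomes `sqlite_master`; table names are interpolated into the SQL, so only pass trusted identifiers):

```python
import sqlite3

def copy_rows(conn, src, dst):
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (dst,)).fetchone()
    if exists:
        # destination exists: just copy the rows
        conn.execute(f"INSERT INTO {dst} SELECT * FROM {src}")
    else:
        # destination missing: create it from the source, then drop the source
        conn.execute(f"CREATE TABLE {dst} AS SELECT * FROM {src}")
        conn.execute(f"DROP TABLE {src}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO tbl1 VALUES (?, ?)", [(1, "a"), (2, "b")])
copy_rows(conn, "tbl1", "tbl2")  # tbl2 did not exist: created, tbl1 dropped
print(conn.execute("SELECT * FROM tbl2").fetchall())
```

Like the T-SQL version, `CREATE TABLE ... AS SELECT` copies only column names and data, not indexes or keys.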
``` If object_id ("tempdb..#tbl2") is null Select * into #tbl2 from tbl1 Else Insert #tbl2 select * from tbl1 ``` If you don't want it to be temp table get rid of # and tempdb
copy table and drop it
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "stored-procedures", "" ]
I have two SQL tables: `users` & `userGroup` with data like: **users**: ``` user | age | gender testUserA, 25, Male testUserB, 30, Female testUserC, 35, Male ``` **userGroup**: ``` user | group testUserA, groupA testUserB, groupA testUserC, groupB ``` How would I phrase a query to list the male users in groupA?
All you need is a `join` and a `where` clause to filter `gender` and `group`: ``` select u.user from user u join userGroup ug on u.user = ug.user where u.gender = 'Male' and ug.group = 'groupA' ```
This query would join the two tables together and also filter by gender. ``` SELECT u.User, u.Gender, ug.Group FROM user u INNER JOIN userGroup ug ON u.user = ug.user WHERE u.gender = 'Male' AND ug.Group = 'groupA'; ```
SQL query for matching a column across two tables
[ "", "sql", "" ]
I have the following `Group By` SQL: ``` SELECT s.Login_Name, COUNT(s.s1CIDNumber) FROM [dbSuppHousing].[dbo].[tblSurvey] s group by s.Login_Name ``` I want to know how I can join this result set to another table(`tblUsers`) to add user\_id to the result set (`tblSurvey` and `tblUsers` have 1-1 relationship on Login\_Name) I tried the following: ``` Select u.User_Id from tblUsers u,a.Login_Name inner join (SELECT s.Login_Name Login_Name, COUNT(s.s1CIDNumber)as abc FROM [dbSuppHousing].[dbo].[tblSurvey] s group by s.Login_Name) a on u.Login_Name=a.Login_Name ``` I get errors. The problem is columns of **`a`** are not visible outside. For example `a.abc`
You have a mistake here: `from tblUsers u,a.Login_Name`. Move `a.Login_Name` out of the from clause and into the select list: ``` Select u.User_Id, a.Login_Name from tblUsers u inner join (SELECT s.Login_Name Login_Name, COUNT(s.s1CIDNumber)as abc FROM [dbSuppHousing].[dbo].[tblSurvey] s group by s.Login_Name) a on u.Login_Name=a.Login_Name ```
Your `from` clause is in the wrong place: ``` Select u.User_Id, a.* from tblUsers u inner join (SELECT s.Login_Name, COUNT(s.s1CIDNumber) as abc FROM [dbSuppHousing].[dbo].[tblSurvey] s GROUP BY s.Login_Name ) a ON u.Login_Name = a.Login_Name; ``` 
How to join result of a Group by Query to another table
[ "", "sql", "t-sql", "" ]
I have a `projects` table with the following columns `id`, `project_name`, `remix_of` A project can be a remix of another project and the id of the project being remixed is stored in the `remix_of` column ``` id | project_name | remix_of ---------------------------- 1 | 1st Project | 0 2 | 2nd Project | 0 3 | 3rd Project | 2 4 | 4th Project | 1 5 | 5th Project | 2 ``` I want to query all the projects, ordering by the amount of times it has been remixed, in this case it would be: ``` 2nd Project | 2 remixes 1st Project | 1 remix 3rd Project | 0 4th Project | 0 5th Project | 0 ``` I'm using MySQL and have little experience with SQL overall.
You can do this with a `left join` or correlated subquery: ``` select p.project_name, (select count(*) from projects pr where pr.remix_of = p.id) as remixes from projects p order by remixes desc; ```
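Since the answer mentions both forms, here is a small sqlite3 sketch (data from the question) showing the correlated subquery and the equivalent `LEFT JOIN` / `GROUP BY` formulation producing the same counts; a tiebreak on `id` is added so the row order is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (id INTEGER, project_name TEXT, remix_of INTEGER);
INSERT INTO projects VALUES
    (1, '1st Project', 0), (2, '2nd Project', 0), (3, '3rd Project', 2),
    (4, '4th Project', 1), (5, '5th Project', 2);
""")

# correlated subquery, as in the answer above
subquery = conn.execute("""
    SELECT p.project_name,
           (SELECT COUNT(*) FROM projects pr WHERE pr.remix_of = p.id) AS remixes
    FROM projects p
    ORDER BY remixes DESC, p.id
""").fetchall()

# equivalent self LEFT JOIN; COUNT(r.id) counts 0 when there is no match
left_join = conn.execute("""
    SELECT p.project_name, COUNT(r.id) AS remixes
    FROM projects p
    LEFT JOIN projects r ON r.remix_of = p.id
    GROUP BY p.id, p.project_name
    ORDER BY remixes DESC, p.id
""").fetchall()

print(subquery)
```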
``` DROP TABLE IF EXISTS my_table; CREATE TABLE my_table (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,project_name VARCHAR(20) NOT NULL ,remix_of INT NULL ); INSERT INTO my_table VALUES (1,'1st Project',NULL), (2,'2nd Project',NULL), (3,'3rd Project',2), (4,'4th Project',1), (5,'5th Project',2); SELECT x.project_name , COUNT(y.id) remixes FROM my_table x LEFT JOIN my_table y ON y.remix_of = x.id GROUP BY x.project_name; +--------------+---------+ | project_name | remixes | +--------------+---------+ | 1st Project | 1 | | 2nd Project | 2 | | 3rd Project | 0 | | 4th Project | 0 | | 5th Project | 0 | +--------------+---------+ ```
Order table based on specific column count
[ "", "mysql", "sql", "laravel", "" ]
I have a table that records hits on a link for every day, along with an "origin" column ``` hitdate originid hitcount ================================== 2011-01-05 2 25 2011-01-05 3 3 2011-04-06 2 1 2011-04-06 6 11 2011-05-06 7 9 2011-05-09 2 25 2011-07-10 3 3 2011-17-11 2 1 2011-01-12 6 11 2012-01-06 7 9 ``` What I want is to programmatically sum the hitcount for every 3-month period, for each originid, from the first day of the set (which is May 1st 2011) until today's date. The results I want to get: ``` Trimester originid hitcount ============================================== 2011-01-05 to 2011-31-07 1 0 2 26 3 3 6 11 2011-01-08 to 2011-31-10 .. .. ```
I think a possible source of trouble with using `datepart(qq, ...` as the other answers do is that (1) this doesn't seem to account for the possibility of the date set spanning multiple years, and (2) it makes assumptions about the earliest date in your data set. I've tried to write a solution that will work for any start date, even if it's not the first day of a month. It's a bit more complicated, so perhaps someone can suggest a way to simplify it. ``` -- Sample data from the question. declare @hits table (hitdate date, originid bigint, hitcount int); insert @hits values (convert(date, '20110501', 112), 2, 25), (convert(date, '20110501', 112), 3, 3), (convert(date, '20110604', 112), 2, 1), (convert(date, '20110604', 112), 6, 11), (convert(date, '20110605', 112), 7, 9), (convert(date, '20110905', 112), 2, 25), (convert(date, '20111007', 112), 3, 3), (convert(date, '20111117', 112), 2, 1), (convert(date, '20111201', 112), 6, 11), (convert(date, '20120601', 112), 7, 9); -- Define the date range that we're concerned with. declare @beginDate date, @endDate date; select @beginDate = min(hitdate), @endDate = convert(date, getdate()) from @hits; -- Build a list of three-month periods that encompass the date range defined -- above. Each period will be inclusive on the lower end of the range and -- exclusive on the upper end. declare @Trimester table (beginDate date, endDate date); while @beginDate <= @endDate begin insert @Trimester values (@beginDate, dateadd(m, 3, @beginDate)); set @beginDate = dateadd(m, 3, @beginDate); end; -- Finally, assign each hit to one of these periods. 
select [Trimester] = convert(varchar, T.beginDate) + ' to ' + convert(varchar, dateadd(d, -1, T.endDate)), H.originid, hitcount = sum(H.hitcount) from @hits H inner join @Trimester T on H.hitdate >= T.beginDate and H.hitdate < T.endDate group by T.beginDate, T.endDate, H.originid order by T.beginDate, H.originid; ``` The result set is: ![Query results](https://i.stack.imgur.com/yIBp3.png) One difference between my result set and yours is that yours contains an entry saying that there were zero hits for `originid = 1`. I haven't done anything with this since I don't know where you're storing the set of valid `originid` values, but it should be fairly easy for you to tweak this query to support that if you want, just by meddling with the joins in the final step. **Update:** A little more on my concerns with using `datepart`: * Using `datepart(qq, hitdate)` by itself assumes that quarter boundaries are calendar quarters, e.g. 1 Apr–30 Jun, as has already been noted in another answer. * Using something like `datepart(qq, dateadd(mm,1,hitdate))` shifts the quarter by a month so you get one starting on 1 May as in your sample data, but suppose you want to start in June instead of May. You have to change the query. * `datepart(qq, ...` by itself will return the same value for two dates that are in the same quarter but different years. So hits on 1 May 2011 and those on 1 May 2012 will be grouped together. You can adjust for this, but at the time of this writing, the other answers aren't doing so. * Using `datepart` in any of the ways described above assumes that quarters begin on the first day of a month. Suppose the earliest date in your result set was 15 May. My query will treat 15 May–14 Aug as the first quarter, 15 Aug–14 Nov as the second quarter, etc. That last one, I think, might be the most difficult to overcome if you want a solution that uses `datepart(qq, ...)`. 
If you're comfortable with the assumption that the start date is always the first day of a month, then a simpler solution than I have suggested is possible. In the query above, if you want to use a different start date, you simply assign the date you want to `@beginDate`.
Use DATEPART to break the date into quarters. As your 3 month periods are not standard quarters (Jan-Mar, Apr-June), you can use DATEADD in order to offset your hitdate so that it falls in the appropriate three month period: ``` Select DATEPART(qq,dateadd(mm,1,hitdate)) as QRTR, originid, SUM(hitcount) from tablename group by DATEPART(qq,dateadd(mm,1,hitdate)), originid ```
How to query data for every x month in SQL?
[ "", "sql", "sql-server", "" ]
I'm new to SQL, so forgive me if I am understanding this incorrectly. I have one table named Employees, where I have columns for: ``` EmployeeID - This is Int Identity (1,1) FirstName LastName ``` I have a second table named Calendar, with the calendar information populated for the next 50 years. Columns are: ``` FullDate Year Month Week DayofWeek DayofMonth DayofYear TypeofDay - (Weekday, Weekend, Bank holiday etc) ``` Now for where I'm a little confused - I think I need a third table (Table3) that links the two above tables, so I have a table with something like: ``` TableId FullDate - linked from calendar table EmployeeID - linked from employee table FirstName - linked from employee table LastName - linked from employee table Shift1Text Shift2Text ``` If my understanding is correct, I can then use a command like: ``` select * from Table3 where FullDate = **Chosen Date / DateRange** ``` so I would end up with an output along the lines of: ``` Table ID | FullDate | EmployeeID | FirstName | LastName | Shift1Text | Shift2Text ---------+------------+------------+-----------+----------+------------+------------ 1 | 22/06/2015 | 1 | Joe | Blogs | SomeText | SomeText 2 | 22/06/2015 | 2 | Fred | Smith | SomeText | SomeText 3 | 22/06/2015 | 3 | Bob | Jones | SomeText | SomeText 4 | 23/06/2015 | 1 | Joe | Blogs | SomeText | SomeText 5 | 23/06/2015 | 2 | Fred | Smith | SomeText | SomeText ``` and so on... The problem is, I have no idea how to go about linking the tables in this way, or automatically populating the rows of the third table with the date and employee info from the first 2 tables.
Better to add an ID to your Calendar table, so your schema should be something like this: ![enter image description here](https://i.stack.imgur.com/T3y07.png) *Note:* in the EmployeeDate table you don't need to add FirstName or LastName, because you have EmployeeID, which references the Employee table where you can easily get those fields. And your query will be like this: ``` Select * from Employee as E Join EmployeeDate as ED on E.EmployeeID = ED.EmployeeID Join DateTable D on D.Dateid = ED.Dateid Where D.fulldate = <your full Date> ``` Thanks
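In case the schema diagram doesn't render, here is a guessed minimal version of the same three-table design as a sqlite3 sketch (table and column names are taken from the query above; the shift columns and sample values come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee     (EmployeeID INTEGER PRIMARY KEY,
                           FirstName TEXT, LastName TEXT);
CREATE TABLE DateTable    (Dateid INTEGER PRIMARY KEY, fulldate TEXT);
CREATE TABLE EmployeeDate (EmployeeID INTEGER, Dateid INTEGER,
                           Shift1Text TEXT, Shift2Text TEXT);
INSERT INTO Employee VALUES (1, 'Joe', 'Blogs'), (2, 'Fred', 'Smith');
INSERT INTO DateTable VALUES (10, '2015-06-22'), (11, '2015-06-23');
INSERT INTO EmployeeDate VALUES (1, 10, 'SomeText', 'SomeText'),
                                (2, 10, 'SomeText', 'SomeText'),
                                (1, 11, 'SomeText', 'SomeText');
""")

# the answer's join, filtered to one date
rows = conn.execute("""
    SELECT D.fulldate, E.EmployeeID, E.FirstName, E.LastName,
           ED.Shift1Text, ED.Shift2Text
    FROM Employee E
    JOIN EmployeeDate ED ON E.EmployeeID = ED.EmployeeID
    JOIN DateTable D     ON D.Dateid = ED.Dateid
    WHERE D.fulldate = '2015-06-22'
    ORDER BY E.EmployeeID
""").fetchall()
print(rows)
```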
You need to avoid duplicating data between tables, and instead do a join in your query. The link exists with the employee ID in the Calender table, so you don't need the third table. Roughly speaking, your query would look something like this: ``` select c.fulldate, e.employeeid, e.firstname, e.lastname (other columns....) from employees e join calendar c on c.employeeid = e.employeeid ``` Otherwise, look into using a view (if my-sql can use them?)
Newbie to linking SQL tables
[ "", "sql", "sql-server", "" ]
I have two SQL tables (I am using SQLite). ``` Table1 (code TEXT) Table2 (code TEXT, codeTable1 TEXT) ``` How can I fetch all the table1's content which has at least one row in the table 2 with the codeTable1 not null?
I think you want a correlated subquery: ``` select t1.* from Table1 t1 where exists (select 1 from Table2 t2 where t2.code = t1.code and t2.codeTable1 is not null ); ``` This seems like a pretty direct translation of your requirements.
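Since the question is about SQLite, the correlated `EXISTS` can be run exactly as written; a small sketch with made-up sample data (an `ORDER BY` is added only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (code TEXT);
CREATE TABLE Table2 (code TEXT, codeTable1 TEXT);
INSERT INTO Table1 VALUES ('A'), ('B'), ('C');
INSERT INTO Table2 VALUES ('A', 'ref1'), ('B', NULL), ('C', 'ref3');
""")

rows = conn.execute("""
    SELECT t1.*
    FROM Table1 t1
    WHERE EXISTS (SELECT 1 FROM Table2 t2
                  WHERE t2.code = t1.code
                    AND t2.codeTable1 IS NOT NULL)
    ORDER BY t1.code
""").fetchall()
print(rows)
```

'B' is filtered out because its only Table2 row has a NULL codeTable1.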
A simple join should do it: Here's the query: ``` select * from table1 join table2 on table1.code = table2.table1_code where table1.code is not null ``` Here's the full example: ``` use example; drop table if exists table1; drop table if exists table2; create table table1 ( code varchar(64) ); create table table2 ( code varchar(64), table1_code varchar(64) references table1(code) ); insert into table1 values('CODE1'); insert into table1 values('CODE2'); insert into table1 values('CODE3'); insert into table1 values('CODE4'); insert into table1 values('CODE5'); insert into table1 values(null); insert into table2 values('VAL1', 'CODE1'); insert into table2 values('VAL3', 'CODE3'); insert into table2 values('VAL5', 'CODE5'); insert into table2 values(null, null); insert into table2 values(null, null); insert into table2 values(null, null); select * from table1 join table2 on table1.code = table2.table1_code where table1.code is not null + --------- + --------- + ---------------- + | code | code | table1_code | + --------- + --------- + ---------------- + | CODE1 | VAL1 | CODE1 | | CODE3 | VAL3 | CODE3 | | CODE5 | VAL5 | CODE5 | + --------- + --------- + ---------------- + 3 rows ```
How to select columns in a table which have values not null in another table
[ "", "sql", "sqlite", "" ]
I am trying to retrieve data from one table and then insert it into another table. This is a sample of the first table, with the following data. tbl1 is the table that holds the data. The two columns, Manager and Employee, basically mean, for example: Josh is managed by Vik, and so on. An employee can also be a manager to another employee. For example, Josh is the manager of Nirvan and Deva. ``` +---------+-------------+ | tbl1 | +---------+-------------+ | Manager | Employee | +---------+-------------+ | Vik | Josh | +---------+-------------+ | Vik | Cindy | +---------+-------------+ | Vik | Alvin | +---------+-------------+ | Vik | Kim | +---------+-------------+ | Josh | Nirvan | +---------+-------------+ | Josh | Deva | +---------+-------------+ | Cindy | Mervyn | +---------+-------------+ | Nirvan | Reeta | +---------+-------------+ | Nirvan | Zaki | +---------+-------------+ | Nirvan | Sunny | +---------+-------------+ ``` What I want is to insert all these records into another table with the following columns: Id (which is set to IDENTITY/AUTONUM), Name (name of employee/manager), ParentId (of the manager which a particular employee has to report to). So for example, I should be getting something of this sort: ``` ID Name ParentId 1 Vik 0 2 Josh 1 3 Cindy 1 4 Alvin 1 5 Kim 1 6 Nirvan 2 7 Deva 2 8 Mervyn 3 9 Reeta 6 10 Zaki 6 11 Sunny 6 ``` I am having difficulty getting the right SQL to retrieve this data from the first table and insert it into the other table.
``` INSERT INTO tbl2 ( Name ,parentId ) SELECT DISTINCT manager ,0 FROM tbl1 WHERE manager NOT IN ( SELECT employee FROM tbl1 ) INSERT INTO tbl2 SELECT DISTINCT employee ,0 FROM tbl1 UPDATE tbl2 SET parentid = parent.id FROM tbl2 INNER JOIN tbl1 ON tbl2.Name = tbl1.employee INNER JOIN tbl2 parent ON parent.Name= tbl1.manager ```
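The same three-step load (roots first, then all employees, then resolve parents) can be sketched in SQLite via Python; T-SQL's `UPDATE ... FROM` is rewritten as a correlated subquery and IDENTITY becomes AUTOINCREMENT, so this is a translation, not the original T-SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (Manager TEXT, Employee TEXT);
INSERT INTO tbl1 VALUES
    ('Vik','Josh'), ('Vik','Cindy'), ('Vik','Alvin'), ('Vik','Kim'),
    ('Josh','Nirvan'), ('Josh','Deva'), ('Cindy','Mervyn'),
    ('Nirvan','Reeta'), ('Nirvan','Zaki'), ('Nirvan','Sunny');
CREATE TABLE tbl2 (Id INTEGER PRIMARY KEY AUTOINCREMENT,
                   Name TEXT, ParentId INTEGER);

-- step 1: top-level managers (no manager of their own) get ParentId 0
INSERT INTO tbl2 (Name, ParentId)
    SELECT DISTINCT Manager, 0 FROM tbl1
    WHERE Manager NOT IN (SELECT Employee FROM tbl1);

-- step 2: every employee, ParentId filled in afterwards
INSERT INTO tbl2 (Name, ParentId)
    SELECT DISTINCT Employee, 0 FROM tbl1;

-- step 3: resolve each name's manager to the manager's generated Id
UPDATE tbl2 SET ParentId = COALESCE(
    (SELECT p.Id
     FROM tbl1 t JOIN tbl2 p ON p.Name = t.Manager
     WHERE t.Employee = tbl2.Name), 0);
""")

rows = conn.execute("SELECT Id, Name, ParentId FROM tbl2 ORDER BY Id").fetchall()
ids = {name: (i, parent) for i, name, parent in rows}
print(rows)
```

Note the generated Ids for employees depend on insertion order, so the relationships (not the exact Id values) are the stable part of the result.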
You have to do it in two steps. One to create all your ID fields. Then you can join with this table to match up the employee to find the manager and the manager's ID: ``` insert into MyNewTable (Name, ParentID) select Manager, ParentID from tbl1 union select Employee, ParentID from tbl1 update MyNewTable set MyNewTable.ParentId = Managers.Id from MyNewTable join tbl1 on tbl1.Employee = MyNewTable.Name join MyNewTable Managers on MyNewTable.Name = Managers.Manager ```
Retrieve data from one table and insert into another table
[ "", "sql", "asp.net", "sql-server", "" ]
I've got a database that contains release versions of software and I want to be able to pull back all versions that are greater than the current version, ordered by version number. However, the releases are sorted in a custom (but standard) way - from alpha version to beta version to main release to patch. So here's an example of the ordering: ``` 100a1 100a4 100b1 100 100p1 101 101p3 etc. ``` Is it possible to form an SQL query that pulls back this data given the custom ordering or does > only work for given orderings like integers and dates? I'm working with MSSQL if that makes any difference.
Here is my code example. Not the shortest one but it holds many demo input/output and can be further simplified if you understand what I want. ``` CREATE TABLE #versions(version nvarchar(10)) INSERT INTO #versions(version) VALUES(N'100a1'),(N'100a4'),(N'100b1'),(N'100p1'),(N'100'),(N'101'),(N'101p3') -- Just an example using substrings etc. how to get the SELECT version, SUBSTRING(version,1, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version)-1 ELSE LEN(version) END ) as version_number, SUBSTRING(version, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version) ELSE 0 END, PATINDEX(N'%[0-9]%', SUBSTRING(version,1, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version)-1 ELSE LEN(version) END ) ) ) as version_suffix, SUBSTRING(version, PATINDEX(N'%[a-z]%', SUBSTRING(version, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version) ELSE LEN(version) END, LEN(version) ) ), PATINDEX(N'%[0-9]%', SUBSTRING(version,1, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version)-1 ELSE LEN(version) END ) ) ) as version_sub FROM #versions -- Now your code: ;WITH vNumber AS( SELECT version,SUBSTRING(version,1, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version)-1 ELSE LEN(version) END ) as version_number FROM #versions ), vSuffix AS( SELECT version, SUBSTRING(version, CASE WHEN PATINDEX(N'%[a-z]%',version) > 0 THEN PATINDEX(N'%[a-z]%',version) ELSE LEN(version) END, LEN(version) ) as version_suffix FROM #versions ) SELECT dat.version FROM ( SELECT vn.version, vn.version_number, CASE SUBSTRING(vn.version, CASE WHEN PATINDEX(N'%[a-z]%',vn.version) > 0 THEN PATINDEX(N'%[a-z]%',vn.version) ELSE 0 END, 1 ) WHEN N'a' THEN 1 WHEN N'b' THEN 2 WHEN N'' THEN 3 WHEN N'p' THEN 4 END as version_suffix, SUBSTRING(vn.version, PATINDEX(N'%[a-z]%', vs.version_suffix ), PATINDEX(N'%[0-9]%', SUBSTRING(vn.version,1, CASE WHEN PATINDEX(N'%[a-z]%',vn.version) > 0 THEN 
PATINDEX(N'%[a-z]%',vn.version)-1 ELSE LEN(vn.version) END ) ) ) as version_sub FROM vNumber as vn INNER JOIN vSuffix as vs ON vn.version = vs.version ) AS dat ORDER BY dat.version_number, dat.version_suffix, dat.version_sub DROP TABLE #versions ``` This is my input: ``` version ---------- 100a1 100a4 100b1 100p1 100 101 101p3 ``` And this is the result: ``` version ---------- 100a1 100a4 100b1 100 100p1 101 101p3 ``` Anyway. I would suggest splitting those values into separate columns. It will make your life much easier. :-)
As long as you can actually describe how the ordering is supposed to work, sure. The two basic approaches are: * Convert the value into something ordinal. For example, you could use something like `order by left([Version] + '__', 5)`. Making a single integer out of the more complex value also works. * Separate the value into multiple values that are each ordinal, and use all of those in the `order by`, in any order you want. This is the more idiomatic way of handling this in SQL - basically, why are you using one value `101p1` when you're logically working with `101, p, 1`? Parsing is a bit tricky to handle in SQL, because SQL really is designed for normalized data sets - and you're effectively storing multiple values in one column. If your rules aren't too complicated, though, this should still be doable. It's not going to be awfully pretty, though :D For fixed length values, this is pretty simple, of course - that's the equivalent of using e.g. `001p01` as filenames in the file system - the alphabetical ordering *is* the correct ordering. You could then simply use `order by` on the whole value, or split it into parts based on `substring`s. For values with separators, it's a bit uglier, but still pretty easy - `1.p.1` can be split relatively easily, and then you can order by each of the parts in sequence. However, your system seems to be a better fit for humans than machines - there's no real hints to follow. Basically, it seems that you're looking at a pattern of "numbers, letters, numbers... treat numbers as numbers, and letters as letters". This is actually quite tricky to handle in T-SQL. It might be worth it to bring in the help of the CLR, and regular expressions in particular - I'm not sure if you'll be able to handle this in general for an unlimited amount of number/letter groups anyway, though. 
The simplest way by far seems to be to simply separate the version column into multiple columns, each with just one value - something like `MajorVersion, Level, Revision` or something like that, corresponding to `101, Alpha, 3`.
SQL greater than custom ordering
[ "", "sql", "sql-server", "" ]
How do I write a query for this result? For each `PKT_NO`, I need to find the `EXP_POINT` value from the sub table that lies in the middle (i.e. the nearby/median value) for that `PKT_NO`. ![enter image description here](https://i.stack.imgur.com/yIZ1R.jpg) ``` SELECT * FROM TABLE_MAIN T1 CROSS APPLY ( SELECT * FROM TABLE_SUB xT2 WHERE T2.PKT_NO = xT2.PKT_NO AND xT2.EXP_POINT BETWEEN T1.EXP_POINT-0.100 AND T1.EXP_POINT + 0.100 )EX ``` I tried this query but I am not getting the proper result.
My solution assumes that there can be any number of rows in `TABLE_SUB`. I have assumed that when you specify the middle row, that you mean that the rows are sorted by `EXT_POINT`, if this is not what you need you will have to amend the `ORDER BY` clause of the `PARTITION` statement in the `RowOrderPerPkt CTE`. If there are an odd number of rows then it takes the middle row per `PKT`. If there are an even number of rows then it takes the middle two rows per `PKT` and takes the average of them. So here it is: [SQL Fiddle](http://sqlfiddle.com/#!3/393a5/3) **MS SQL Server 2008 Schema Setup**: ``` CREATE TABLE TABLE_MAIN (PKT INT,EXT DECIMAL(10,3)) INSERT INTO TABLE_MAIN (PKT,EXT) VALUES (1101,0.508), (1102,1.998), (1103,0.423) CREATE TABLE TABLE_SUB (PKT INT,EXT DECIMAL(10,3)) INSERT INTO TABLE_SUB (PKT,EXT) VALUES (1101,0.504), (1101,0.505), (1101,0.510) ,(1102,1.990), (1102,1.995), (1102,2.005) ,(1103,0.504), (1103,0.505), (1103,0.510), (1103,1.990) ``` **Query 1**: ``` ;WITH RowOrderPerPkt AS ( SELECT PKT, EXT, ROW_NUMBER() OVER (PARTITION BY PKT ORDER BY EXT) AS RN FROM TABLE_SUB ), NumRowsPerPkt AS ( SELECT PKT, COUNT(*) AS MaxRows FROM TABLE_SUB GROUP BY PKT ) -- TABLE_SUB with an odd number of rows per PKT -- Simply take the middle row -- i.e. MaxRows / 2 + 1 SELECT T1.PKT, T1.EXT, ROPP.EXT FROM TABLE_MAIN T1 INNER JOIN RowOrderPerPkt ROPP ON ROPP.PKT = T1.PKT INNER JOIN NumRowsPerPkt NRPP ON NRPP.PKT = ROPP.PKT WHERE NRPP.MaxRows % 2 = 1 AND ROPP.RN = NRPP.MaxRows / 2 + 1 UNION -- TABLE_SUB with an even number of rows per PKT -- Simply take the middle 2 rows and find the average -- i.e. 
get the rows MaxRows / 2 and MaxRows / 2 + 1 SELECT T1.PKT, T1.EXT, AVG(ROPP.EXT) FROM TABLE_MAIN T1 INNER JOIN RowOrderPerPkt ROPP ON ROPP.PKT = T1.PKT INNER JOIN NumRowsPerPkt NRPP ON NRPP.PKT = ROPP.PKT WHERE NRPP.MaxRows % 2 = 0 AND (ROPP.RN = NRPP.MaxRows / 2 OR ROPP.RN = NRPP.MaxRows /2 + 1) GROUP BY T1.PKT, T1.Ext ``` **[Results](http://sqlfiddle.com/#!3/393a5/3/0)**: ``` | PKT | EXT | EXT | |------|-------|--------| | 1101 | 0.508 | 0.505 | | 1102 | 1.998 | 1.995 | | 1103 | 0.423 | 0.5075 | ```
``` declare @t table (PKT INT,EXT DECIMAL(10,3)) insert into @t (PKT,EXT)values (1101,0.508),(1102,1.998) declare @tt table (PKT INT,EXT DECIMAL(10,3)) insert into @tt (PKT,EXT)values (1101,0.504),(1101,0.505),(1101,0.510),(1102,1.990),(1102,1.995),(1102,2.005) select * from @t T CROSS APPLY (select ROW_NUMBER()OVER(PARTITION BY PKT ORDER BY EXT )RN, PKT,EXT FROm @tt )TT WHERE T.PKT = TT.PKT AND TT.RN = 2 ```
How to write query for this using between in SQL Server 2008?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am trying to write a SQL query that returns each user along with a flag for whether a certain permission is enabled. I know I can do this: ``` select u.ID, u.Name, (select p.Value from Permissions p where p.UserID = u.ID AND p.Key = 'CanEdit') as IsPermissionEnabled from Users u ``` But it's not exactly what I need. Can I do something like this: ``` select u.ID, u.Name, ((select p.Value from Permissions p where p.UserID = u.ID AND p.Key = 'CanEdit') = 'True') as IsPermissionEnabled from Users u ``` It didn't work for me. So, how do I change my query to make it work?
Try this query: ``` select u.ID, u.Name, case when p.value>0 then 'True' else '' end as IsPermissionEnabled from Users u left join Permissions p on p.UserID = u.ID and p.key='CanEdit' ```
Surely you should just join to the table to get `p.Value`. Then you can do with it whatever you like: ``` SELECT u.ID, u.Name, p.Value as IsPermissionEnabled FROM Users u LEFT OUTER JOIN Permissions p ON p.UserID = u.ID AND p.Key = 'CanEdit'; ```
SQL Subquery column equals operation
[ "", "sql", "sql-server", "t-sql", "" ]
I am working with two SQL queries that I would like to combine into one, so that the result of the first query is in the first column and the result of the second in the second column. How can I achieve this? I tried UNION, but it puts the results in two rows, which is not what I want. ``` select count(*) as ColumnA from Inventory i, Sale s where i.vin=s.vin and i.condition='new' select count(*) as ColumnB from Inventory i, Sale s where i.vin=s.vin and i.condition='used' order by 1 desc; ```
You can combine the two sub-queries in one query like this: ``` select (select count(*) from Inventory i, Sale s where i.vin=s.vin and i.condition='new') as New, (select count(*) from Inventory i, Sale s where i.vin=s.vin and i.condition='used') as Used ``` What were you trying to achieve with the order by statement?
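A quick sqlite3 sketch of the two scalar subqueries side by side, with made-up data (a bare SELECT with no FROM works in SQLite just as in SQL Server; the aliases are renamed NewCount/UsedCount here only for clarity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Inventory (vin TEXT, condition TEXT);
CREATE TABLE Sale (vin TEXT);
INSERT INTO Inventory VALUES ('v1', 'new'), ('v2', 'new'), ('v3', 'used');
INSERT INTO Sale VALUES ('v1'), ('v2'), ('v3');
""")

# each scalar subquery becomes one column of a single one-row result
row = conn.execute("""
    SELECT (SELECT COUNT(*) FROM Inventory i JOIN Sale s ON i.vin = s.vin
            WHERE i.condition = 'new')  AS NewCount,
           (SELECT COUNT(*) FROM Inventory i JOIN Sale s ON i.vin = s.vin
            WHERE i.condition = 'used') AS UsedCount
""").fetchone()
print(row)
```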
You can obtain both counts at the same time with a little bit different query, which will be slightly more efficient than combining two queries: ``` SELECT SUM(CASE WHEN i.condition = 'new' THEN 1 ELSE 0 END), SUM(CASE WHEN i.condition = 'used' THEN 1 ELSE 0 END) FROM Inventory i JOIN Sale s ON i.vin = s.vin ```
Combining the results from two separate queries in to two columns
[ "", "sql", "sql-server", "" ]
I'm making a web page in ASP.NET MVC which compares prices from different shops. I have a one-to-many with products and the shops, where the SHOP has one PRODUCT and a PRODUCT has many SHOPs. The problem is that the product is the same but the price is different. Example: ``` 3 shops sell one fork. Shop 1: $10 Shop 2: $20 Shop 3: $30 ``` Is the best way to make a new product for each shop, or can I somehow vary the price?
Ideally what you want is a many-to-many relationship between your `Shop` and `Product` entities: ``` public class Shop { public int ShopId {get; set;} public virtual ICollection<ShopProduct> ShopProducts {get; set;} } public class Product { public int ProductId {get; set;} public string Name {get; set;} public virtual ICollection<ShopProduct> ShopProducts {get; set;} } public class ShopProduct { public int ProductId {get; set;} public int ShopId {get; set;} public virtual Product Product {get; set;} public virtual Shop Shop {get; set;} public decimal Price {get; set;} } ``` By the above example, each `Shop` can have many `Product`s and each `Product` can exist in many `Shop`s. At the junction of each `Shop`-`Product` combination, you specify the price. Let me show an example: ``` ID | Shop Name -------------- 1 Shop 1 2 Shop 2 3 Shop 3 ID | Product Name ----------------- 1 Fork ``` So now we can do this (your example from above): ``` ProductId | ShopId | Price ---------------------------------------------------- 1 1 10.00 <- fork for Shop 1 @ $10 1 2 20.00 <- fork for Shop 2 @ $20 1 3 30.00 <- fork for Shop 3 @ $30 ``` You can now add as many shops and products as you want, and link them in the many-to-many table. A table like this is called a **junction table** in database parlance.
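Relationally, those entity classes map to three tables; here is a minimal sqlite3 sketch of the junction-table design and the query that reads price per shop (table and column names mirror the classes above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Shop        (ShopId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Product     (ProductId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE ShopProduct (ProductId INTEGER, ShopId INTEGER, Price REAL,
                          PRIMARY KEY (ProductId, ShopId));
INSERT INTO Shop VALUES (1, 'Shop 1'), (2, 'Shop 2'), (3, 'Shop 3');
INSERT INTO Product VALUES (1, 'Fork');
INSERT INTO ShopProduct VALUES (1, 1, 10.0), (1, 2, 20.0), (1, 3, 30.0);
""")

# one product, three shops, three prices - the price lives on the junction row
rows = conn.execute("""
    SELECT s.Name, p.Name, sp.Price
    FROM ShopProduct sp
    JOIN Shop s    ON s.ShopId = sp.ShopId
    JOIN Product p ON p.ProductId = sp.ProductId
    ORDER BY sp.Price
""").fetchall()
print(rows)
```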
I would create 3 tables for that ``` Table Shop - ShopId (PK) - Name - ... Table Product - ProductId (PK) - Name - ... Table Price - ShopId (FK) - ProductId (FK) - Price - Date (optional if you want price history) ``` If you want price history then fields `ShopId, ProductId, Date` have to form unique constraint
Database object with different data
[ "", "sql", "asp.net", "asp.net-mvc", "database", "entity-framework-6", "" ]
I have a query to display the year value from a table and default to the current year. ``` select distinct year(startdate) as syear, year(startdate) as eyear, 1 as sOrder from [TTS].[dbo].[Class] where year(startdate) <> year(getdate()) union all select year(getdate()) as syear, year(getdate()) as eyear, 0 as sOrder order by sOrder asc, syear desc ``` ![enter image description here](https://i.stack.imgur.com/DhbGv.png) Here the default year is 2015 (with sOrder 0). Now I need to display the default year based on the month value. E.g. if the year is 2015 and the month is Jan, Feb, Mar, or April, then the default year is 2015; else the default year is 2016. I have tried the below query ``` select distinct year(startdate) as syear, year(startdate) as eyear, 1 as sOrder from [TTS].[dbo].[Class] where year(startdate) <> year(getdate()) union all select year(getdate())+1 as syear, year(getdate())+1 as eyear, 0 as sOrder where month(getdate()) not in(1,2,3,4) union all select year(getdate()) as syear, year(getdate()) as eyear, 0 as sOrder where month(getdate())in(1,2,3,4) order by sOrder asc, syear desc ``` But year values are getting duplicated. Can anyone help me with this?
SQL Server is correct in what it's doing as you are requesting an additional row to be returned which if ran now 2015-06-22 would return "2016" Your distinct only works on the first select you've done so these are your options: 1) Use cte's with distincts ``` with subq1 (syear, eyear, sOrder) as (select year(getdate())+1 as syear, year(getdate())+1 as eyear, 0 as sOrder where month(getdate()) not in(1,2,3,4) union all select year(getdate()) as syear, year(getdate()) as eyear, 0 as sOrder where month(getdate())in(1,2,3,4)), subq2 as (select distinct year(startdate) as syear, year(startdate) as eyear, 1 as sOrder from [TTS].[dbo].[Class]) Select distinct syear, eyear, sOrder from subq1 UNION ALL select syear, eyear, sOrder from subq2 s2 where s2.syear <> (select syear from subq1) order by sOrder asc, syear desc ``` SQL Fiddle for this can be found here: <http://sqlfiddle.com/#!6/f89b3a/3/0> 2) You could try use a grouping: ``` select distinct syear, eyear, min(sOrder) from (select distinct year(startdate) as syear, year(startdate) as eyear, 1 as sOrder from [TTS].[dbo].[Class] union all select year(getdate())+1 as syear, year(getdate())+1 as eyear, 0 as sOrder where month(getdate()) not in(1,2,3,4) union all select year(getdate()) as syear, year(getdate()) as eyear, 0 as sOrder where month(getdate())in(1,2,3,4)) subq Group by syear, eyear order by min(sOrder) asc, syear desc ``` SQL Fiddle for this can be found here: <http://sqlfiddle.com/#!6/f89b3a/5/0> I personally would recommend option 1 over option 2.
Replacing your code (where you tried to find the current year) with the one below should work. ``` SELECT YEAR(DATEADD(MM,-4,GETDATE())) + 1 like SELECT DISTINCT year(startdate) AS syear ,year(startdate) AS eyear ,1 AS sOrder FROM [TTS].[dbo].[Class] WHERE year(startdate) <> year(getdate()) UNION ALL SELECT YEAR(DATEADD(MM, - 4, GETDATE())) + 1 AS syear ,YEAR(DATEADD(MM, - 4, GETDATE())) + 1 AS eyear ,0 AS sOrder ORDER BY sOrder ASC ,syear DESC ```
Default the year based on month value
[ "", "sql", "sql-server", "" ]
I have found answers regarding searching for duplicates of an entry in one column, but I want to search for duplicates of a combination of entries of two columns. I have the columns 'Area', 'Status', and 'Update Date' as parts of my table. There are 3 different areas and 2 statuses, but there are duplicates among combinations. By that, I mean that (area1,status1) could have been updated at times t1, t2, and t3. I want to find the earliest instance (first entry in the Update Date column) FOR EACH (area,status) combination for a total of 6 rows in my output. Is this possible? For example, for this one \_id, I need 6 rows. The code below outputs a table of 12 rows for the given \_id because there are (area,status) duplicates: ``` select _id, area, status, update_date from history where area in ('a', 'b', 'c') and status in ('done','pending') and request_id = 123 order by update_date desc ``` EDIT: Sorry. I know this is confusing. I'll write it out the long way to hopefully clear things up. For \_id1, I need the most up-to-date (a1,s1), (a1,s2), (a2,s1), (a2,s2), (a3,s1), and (a3,s2). Then I need those for \_id2, then \_id3, and so on. Hope that makes sense and helps a bit more.
You can use the `rank` window function to assign a rank per unique combination of `area` and `status`, and then take the first (=earliest) one for each: ``` SELECT _id, area, status, update_date FROM (SELECT _id, area, status, update_date, RANK() OVER (PARTITION BY area, status ORDER BY update_date ASC) AS rk FROM history WHERE area IN ('a', 'b', 'c') AND status IN ('done', 'pending') AND request_id = 123) t WHERE rk = 1 ORDER BY update_date DESC ```
You can use a subselect for this, as an example: ``` SELECT DISTINCT update_dat._id, h.area, h.status, update_dat.update_date FROM history h OUTER APPLY( SELECT TOP 1 _id, update_date FROM history m WHERE m.area = h.area AND m.status = h.status ORDER BY update_date ) as update_dat WHERE h.area in ('a', 'b', 'c') and h.status in ('done','pending') and request_id = 123 ```
Can I write an SQL query to filter duplicate entries of a combination of columns?
[ "", "sql", "sql-server", "select", "" ]
I have the following data: ``` Name | Condition Mike | Good Mike | Good Steve | Good Steve | Alright Joe | Good Joe | Bad ``` I want to write an if statement, if `Bad` exists, I want to classify the name as `Bad`. If `Bad` does not exist but `Alright` Exists, then classify as `Alright`. If only `Good` exists, then classify as `good`. So my data would turn into: ``` Name | Condition Mike | Good Steve | Alright Joe | Bad ``` Is this possible in SQL?
An Access query would be easy if you first create a table which maps *Condition* to a *rank* number. ``` Condition rank --------- ---- Bad 1 Alright 2 Good 3 ``` Then a `GROUP BY` query would give you the minimum *rank* for each *Name*: ``` SELECT y.Name, Min(c1.rank) AS MinOfrank FROM [YourTable] AS y INNER JOIN conditions AS c1 ON y.Condition = c1.Condition GROUP BY y.Name; ``` If you want to display the *Condition* string for those ranks, join back to the *conditions* table again: ``` SELECT sub.Name, sub.MinOfrank, c2.Condition FROM ( SELECT y.Name, Min(c1.rank) AS MinOfrank FROM [YourTable] AS y INNER JOIN conditions AS c1 ON y.Condition = c1.Condition GROUP BY y.Name ) AS sub INNER JOIN conditions AS c2 ON sub.MinOfrank = c2.rank; ``` Performance should be fine with indexes on those *conditions* fields. Seems to me this approach could also work in those other databases (MySQL and SQL Server) tagged in the question.
You can use a case statement to rank the conditions then max() or min() to summarize the results before returning them back to the user in the same format. Query: ``` SELECT [Name] , case min(case condition when 'bad' then 0 when 'alright' then 1 else 2 end) when 0 then 'bad' when 1 then 'alright' when 2 then 'good' end as Condition from mytable group by [name] ```
Aggregate Text data using SQL
[ "", "mysql", "sql", "sql-server", "ms-access", "" ]
I'm trying to write a simple "undo"-system for a table with the following structure: ``` id element position start_date end_date 1 1 23 01/01/2015 05/01/2015 2 2 36 01/01/2015 NULL 3 1 17 05/01/2015 NULL ``` So, when a new action is added for an existing element, it adds a new row with the new position and the current date, and changes the end\_date of the previous row of that element to the current date. Now, to undo this, I delete the row of the element where there is no end\_date defined (this is the latest action for that element), but then I have to change the end\_date of the previous row to `null`. So, in pseudolanguage, the query needed would be: ``` UPDATE [table] SET end_date = NULL WHERE element = 1 AND start_date = 'highest_start_date_to_be_found_of_this_element' ``` Question: what to type instead of `highest_start_date_to_be_found_of_this_element` to make it work?
The subquery gives you the latest start_date among all entries whose element matches the value used in the update statement, so you only need to change the value in one place. ``` UPDATE t SET t.end_date = NULL FROM [table] t WHERE t.element = 1 AND t.start_date = (select max(sub.start_date) from [table] sub where sub.element = t.element) ```
One method is to use a subquery with `TOP` and `ORDER BY`: ``` update t set end_date = NULL from (select top 1 t.* from table t order by start_date desc ); ``` This will update only one row. If you want all rows with the maximum start date, then use `with ties` in the subquery.
Update where column has highest value
[ "", "sql", "sql-server", "" ]
Can someone help me with a sql query? I'm looking for the most recent date (datum). It has the same OBJID but not the same RADOMGID. ``` BESADR ARSMEDEL METOD DATUM FNR BYGGID OBJID RADOMGID bladv 410 spar 20040930 40014686 1 14778 13640 bladv 340 m 19860505 40014686 1 14778 1026 ``` Thanks.
Do a `GROUP BY`, with `MAX` to find each OBJID's max datum: ``` select OBJID, max(DATUM) from tablename group by OBJID ``` If you want the whole row (with max datum), use `NOT EXISTS` to exclude older rows: ``` select * from tablename t1 where not exists (select 1 from tablename t2 where t2.OBJID = t1.OBJID and t2.datum > t1.datum) ``` Will return both rows if there's a tie.
If you want just the latest, assuming the table is called my\_object\_list ``` select BESADR, ARSMEDEL, METOD, max(DATUM) DATUM, FNR, BYGGID, OBJID, RADOMGID from my_object_list mol Where Objid = '14778' group by BESADR, ARSMEDEL, METOD, FNR, BYGGID, OBJID, RADOMGID having max(datum) = mol.datum; ```
SQL query, MAX date, group by
[ "", "sql", "oracle", "" ]
I'm trying to retrieve the event ID and endDate from a list that includes multiple endDates for the same ID. That step was easy enough using max() and grouping, but where I am struggling is then restricting those results to only those where the max end date has already passed. I believe this requires the use of nested SQL, but I just can't get it to work. Records: ``` eventID | endDate 1 05/01/2015 1 05/01/2014 1 05/01/2013 2 05/01/2016 3 07/01/2016 4 05/01/2014 4 05/01/2013 ``` Desired results (where today = June 2015): ``` eventID | endDate 1 05/01/2015 4 05/01/2014 ``` This code gets me the list of all events and their max end date, and from this I need to get the ContentID and EndDate where the EndDate is before today. I've tried adding a where statement to limit the date but that gets executed prior to the max(event.EndDate) which doesn't get me the results I need. ``` select event.ContentID, max(event.EndDate) as EndDate from event group by event.ContentID ``` Any help would be much appreciated!
Use the having clause instead of where to filter based on aggregate formulas such as max(): ``` SELECT event.ContentID ,max(event.EndDate) AS EndDate FROM event GROUP BY event.ContentID HAVING max(event.EndDate) < getdate() ```
On SQL-Server you could use this: ``` SELECT event.ContentID, max(event.EndDate) as EndDate FROM event GROUP BY event.ContentID HAVING MAX(event.EndDate) <= GETDATE() ``` on MySQL: ``` SELECT event.ContentID, max(event.EndDate) as EndDate FROM event GROUP BY event.ContentID HAVING MAX(event.EndDate) <= CURDATE() ```
SQL nested select, simple issue
[ "", "sql", "" ]
I have two tables ``` Users(user_id, name) Competition(comp_id, user_id, score) ``` I need to select the user_id that occurs the most in competition. Can I do it with max(count)? Is there any way to get an exception if two users have the same number of occurrences? I tried: ``` SELECT MAX(numberr), USER_ID FROM ( SELECT USER_ID, COUNT(COMP_ID) AS numberr FROM COMPETITION GROUP BY USER_ID ) GROUP BY USER_ID; ``` But the result I get is: ``` MAX(numberr) USER_ID 4 1 2 2 6 3 ```
You can get it done with `COUNT`: ``` SELECT * FROM (SELECT USER_ID,COUNT(comp_id) as numberr FROM COMPETITION GROUP BY USER_ID ORDER BY COUNT(comp_id) DESC) T1 WHERE rownum=1; ```
To get *multiple* users with the max count use a RANK: ``` SELECT numberr, USER_ID FROM ( SELECT USER_ID, COUNT(COMP_ID) AS numberr ,RANK() OVER (PARTITION BY USER_ID ORDER BY COUNT(COMP_ID) DESC) rnk FROM COMPETITION GROUP BY USER_ID ) dt WHERE rnk = 1; ```
select id that occur the most
[ "", "sql", "oracle", "count", "max", "" ]
Either get only those userIDs with the specified @LastName, or if @LastName = 'All' then get all userIDs, even if they have NULL values as their LastName. ``` SELECT userID INTO #table FROM users WHERE LastName = CASE WHEN @LastName = 'All' THEN LastName ELSE @LastName END ``` The above query only returns those userIDs where the LastName is not NULL.
Another shorter form `where @LastName in ('All', LastName);` [SqlFiddle](http://sqlfiddle.com/#!9/d7a58/9)
Change your condition to this instead: ``` where (@LastName = 'All' and 1=1) or (LastName = @LastName) ``` The idea is that when `@LastName` = 'All' then your query should not have any filters. The 1=1 condition will always be true, and together with the other check should return all results. The other path the query can take is filtering by a specific Last Name, which should definitely exclude null values. Update: The 1=1 condition is redundant. You can rewrite the condition as: ``` where (@LastName = 'All') or (LastName = @LastName) ``` [Demo](http://sqlfiddle.com/#!9/d7a58/4)
WHERE If condition is not met get all values including NULL
[ "", "mysql", "sql", "" ]
I am trying to join two tables: one is a unique feature, the second is readings taken on several dates that relate to the unique features. I want all of the records in the first table plus the most recent reading. I was able to get the results I was looking for before adding the shape field, by using the code ``` SELECT Table1.Name, Table1.ID, Table1.Shape, Max(Table2.DATE) as Date FROM Table1 LEFT OUTER JOIN Table2 ON Table1.ID = table2.ID GROUP BY Table1.Name, Table1.ID, Table1.Shape ``` The shape field is a geometry type and I get the error > 'The type "Geometry" is not comparable. It can not be use in the Group By Clause' So I need to go about it a different way, but I am not sure how. Below is a sample of the two tables and the desired results. ``` Table1 Name| ID |Shape AA1 | 1 | X BA2 | 2 | Y CA1 | 3 | Z CA2 | 4 | Q Table2 ID | Date 1 | 5/27/2013 1 | 6/27/2014 2 | 5/27/2013 2 | 6/27/2014 3 | 5/27/2013 3 | 6/27/2014 ``` My desired result is ``` Name| ID |Shape |Date AA1 | 1 | X | 6/27/2014 BA2 | 2 | Y | 6/27/2014 CA1 | 3 | Z | 6/27/2014 CA2 | 4 | Q | Null ```
You can do the aggregation on `Table2` in a CTE, finding the `MAX(DATE)` for each ID, and then join that result to `Table1`: ``` WITH AggregatedTable2(ID, MaxDate) AS ( SELECT ID, MAX(DATE) FROM Table2 GROUP BY ID ) SELECT t1.ID, t1.Name, t1.Shape, t2.MaxDate FROM Table1 t1 LEFT JOIN AggregatedTable2 t2 ON t1.ID = t2.ID ```
Try casting geometry as a varchar. ``` Select Table1.Name, Table1.ID, cast(Table1.Shape as varchar(1)) AS Shape, Max(Table2.DATE) as Date FROM Table1 LEFT OUTER JOIN Table2 ON Table1.ID = table2.ID Group By Table1.Name, Table1.ID, cast(Table1.Shape as varchar(1)) ```
Joining two sql tables with a one to many relationship, but want the max of the second table
[ "", "sql", "sql-server", "max", "outer-join", "" ]
I have two tables with the same number of columns and no primary keys (I know, this is not my fault). Now I need to delete all rows from table A that exist in table B (they are equal, each one with 30 columns). The most immediate way I thought of is to do an `INNER JOIN` and solve my problem. But writing conditions for all columns (worrying about `NULL`) is not elegant (maybe because my tables are not elegant either). I want to use `INTERSECT`, but I am not sure how to do it. This is my first question: I tried ([SQL Fiddle](http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/1882)): ``` declare @A table (value int, username varchar(20)) declare @B table (value int, username varchar(20)) insert into @A values (1, 'User 1'), (2, 'User 2'), (3, 'User 3'), (4, 'User 4') insert into @B values (2, 'User 2'), (4, 'User 4'), (5, 'User 5') DELETE @A FROM (SELECT * FROM @A INTERSECT SELECT * from @B) A ``` But all rows were deleted from table `@A`. This led me to my second question: why does the command `DELETE @A FROM @B` delete all rows from table `@A`?
Try this: ``` DELETE a FROM @A a WHERE EXISTS (SELECT a.* INTERSECT SELECT * FROM @B) ``` Delete from @A where, for each record in @A, there is a match where the record in @A intersects with a record in @B. This is based on Paul White's [blog post](http://web.archive.org/web/20180422151947/http://sqlblog.com:80/blogs/paul_white/archive/2011/06/22/undocumented-query-plans-equality-comparisons.aspx) using INTERSECT for inequality checking. [SQL Fiddle](http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/1922)
To answer your first question, you can delete based on `join`: ``` delete a from @a a join @b b on a.value = b.value and a.username = b.username ``` The second case is really strange. I remember a similar case here and many complaints about this behaviour. I will try to find that question.
DELETE WITH INTERSECT
[ "", "sql", "sql-server", "sql-server-2012", "sql-delete", "intersect", "" ]
I've got a dataset similar to the test data below: ``` create table #colors (mon int, grp varchar(1), color varchar(5)) insert #colors values (201501,'A','Red'), (201502,'A','Red'), (201503,'A','Red'), (201504,'A','Red'), (201505,'A','Red'), (201506,'A','Red'), (201501,'B','Red'), (201502,'B','Red'), (201503,'B','Blue'), (201504,'B','Blue'), (201505,'B','Blue'), (201506,'B','Blue'), (201501,'C','Red'), (201502,'C','Red'), (201503,'C','Blue'), (201504,'C','Green'), (201505,'C','Green'), (201506,'C','Green'), (201501,'D','Red'), (201502,'D','Red'), (201503,'D','Blue'), (201504,'D','Blue'), (201505,'D','Red'), (201506,'D','Red') ``` I'd like to know the path each group took in terms of color, as well as the latest month a category was a particular color prior to a change in color. In this way, the month associated with a color serves as the upper temporal bound for the category-color combination. I've tried to do this with a CTE and the `row_number()` function, as in the code below, but it doesn't quite work. 
Here is the sample code: ``` ; with colors (grp, color, mon, rn) as ( select grp , color , mon , row_number() over (partition by grp order by mon asc) rn from ( select grp , color , max(mon) mon from #colors group by grp, color ) as z ) select grp , firstColor , firstMonth , secondColor , secondMonth , thirdColor , thirdMonth from ( select c1.grp , c1.color firstColor , c1.mon firstMonth , c2.color secondColor , c2.mon secondMonth , c3.color thirdColor , c3.mon thirdMonth , row_number() over (partition by c1.grp order by c1.mon asc) rn from colors c1 left outer join colors c2 on ( c1.grp = c2.grp and c1.color <> c2.color and c1.rn = c2.rn - 1 ) left outer join colors c3 on ( c1.grp = c3.grp and c2.color <> c3.color and c2.rn = c3.rn - 1 ) ) as d where rn = 1 order by grp ``` Which results in the following **(incorrect)** result set: ![result set](https://i.stack.imgur.com/h9f5D.png) As you can see, there is no indication that group D's original color was red-- it should be Red (201502) --> Blue (201504) --> Red (201506). This is because of the use of the `max()` function, but removing it necessitates modifying the join logic in a way I haven't been able to deduce. I've tried removing the `max()` function and changing the partition on the `row_number()` to include the color, but I think this reduces to the same sets logically. How do I account for the scenario when there are fewer categories than changes between those categories?
I'd take a different approach, and generally I'd avoid "predefining" the number of months in the columns (if possible). Here's a solution that can work for separating months into rows, but it actually then combines the result into the expected output format: ``` WITH nCTE (mon, grp, color, n) AS ( SELECT *, ROW_NUMBER() OVER(PARTITION BY grp ORDER BY mon) n FROM colors ), monthsCTE (mon, grp, color, n) AS ( SELECT l.mon, l.grp, l.color, ROW_NUMBER() OVER(PARTITION BY l.grp ORDER BY l.mon) n FROM nCTE l LEFT JOIN nCTE r ON l.grp = r.grp AND l.n = r.n - 1 WHERE l.color != r.color OR r.color IS NULL ) SELECT m1.grp, m1.color, m1.mon, m2.color, m2.mon, m3.color, m3.mon FROM monthsCTE m1 LEFT JOIN monthsCTE m2 ON m1.grp = m2.grp AND m2.n = 2 LEFT JOIN monthsCTE m3 ON m1.grp = m3.grp AND m3.n = 3 WHERE m1.n = 1 ORDER BY 1 ``` And a [fiddle](http://sqlfiddle.com/#!6/07391/24) You can use the "inside" of the monthsCTE instead of the outer `SELECT` to get the result in separate rows (then you don't need the `ROW_NUMBER...` part), or leave it like this... > EDIT: It's actually easier to do what you REALLY wanted. Just remove the `GROUP BY` clause (and the interrupting `MAX()` functions). > EDIT2: As noted by Me.Name, old solution would fail over years. Corrected code fragment & fiddle.
Using a slightly different approach, with first using the lead window function to determine if the color changed, and only then ranking the rows, based on where the color changed: ``` ;with nextcols as ( select grp, color, mon, lead(color, 1, 'none') over (partition by grp order by mon ) nextcol from #colors ) , ranked as ( select *, ROW_NUMBER() over (partition by grp order by mon) MonthIndex from nextcols where color <> nextcol ) --perhaps you could go pivoting here, but joining on the monthindex works select r1.grp, r1.color firstCol, r1.mon firstMon, r2.color secondCol, r2.mon secondMon, r3.color thirdCol, r3.mon thirdMon from ranked r1 left join ranked r2 on r2.grp=r1.grp and r2.MonthIndex = 2 left join ranked r3 on r3.grp=r1.grp and r3.MonthIndex = 3 where r1.MonthIndex = 1 ``` [Fiddle](http://sqlfiddle.com/#!6/5f01f/1)
Finding max value prior to a change between n categories across a window for m>n changes between categories
[ "", "sql", "sql-server", "group-by", "window-functions", "" ]
I have the query: ``` SELECT substring(Code,1,6) as Code From Table Group by substring(Code,1,6) Order By Code ``` However I'm trying to add a column that is the calculated sum of the rows that have been grouped in this query. Example: ``` Code Number S05080A 20 S05080B 20 S70331A 8 S70331B 4 ``` Would then become: ``` Code Sum S05080 40 S70331 12 ``` Hopefully someone can help me out with this one, would be much appreciated!
``` SELECT substring(Code,1,6) as Code, SUM(Number) AS [Sum] -- Add this line From Table Group by substring(Code,1,6) Order By Code ```
You've already done the hard work - simply add `SUM(Number)` as another column and alias it ``` SELECT substring(Code,1,6) as Code, SUM(Number) AS Sum From Table Group by substring(Code,1,6) Order By Code ```
Calculating the sum of rows that have been grouped in SQL
[ "", "sql", "sql-server", "" ]
I hope you will be able to help me out. I just started learning SQL and while applying my knowledge at work, I got stuck. I have a SQL database with multiple tables, which contain various data for properties of the items offered for sale. So far I have successfully created a query which pulls most of the information needed. Unfortunately, the last table became problematic for me. The table is formatted like this: ``` | fkStockItemId | PropertyName | PropertyValue | PropertyType | | ItemSKU-1 | Item Style | SB-01123 | Attribute | | ItemSKU-1 | Item Size | X-Small | Attribute | | ItemSKU-1 | Item Color | Red | Attribute | | ItemSKU-2 | Item Style | AA-66002 | Attribute | | ItemSKU-2 | Item Size | Medium | Attribute | | ItemSKU-2 | Item Color | Green | Attribute | | ItemSKU-3 | Item Style | 110445 | Attribute | | ItemSKU-3 | Item Size | Small | Attribute | ``` The output I am trying to get is like this: ``` | SKU | Item Style | Item Size | Item Color | | ItemSKU-1 | SB-01123 | X-Small | Red | | ItemSKU-2 | AA-66002 | Medium | Green | | ItemSKU-3 | 110445 | Small | *Null* | ``` Please note that the last column "PropertyType" is for technical purposes and does not need to be queried. This is what I have got so far: ``` SELECT si.ItemNumber, si.ItemTitle, si.ItemDescription, si.RetailPrice, si.Weight, sl.Quantity, c.CategoryName, siep.ProperyValue, siep.ProperyName FROM StockItem si LEFT OUTER JOIN StockLevel sl ON si.pkStockItemID = sl.fkStockItemId LEFT OUTER JOIN ProductCategories c ON si.CategoryId = c.CategoryId LEFT OUTER JOIN StockItem_ExtendedProperties siep ON si.pkStockItemID = siep.fkStockItemId WHERE siep.ProperyName = 'Item Style' ``` Tables "StockLevel" and "ProductCategories" show results just fine. If you notice, the last "StockItem_ExtendedProperties" JOIN and "siep.ProperyValue", "siep.ProperyName" coupled with "WHERE siep.ProperyName = 'Item Style'" only allowed me to query 1 property. Thank you for your help and time!
Move the propertyname filter to the ON clause from the WHERE clause. Then join again for each property: ``` SELECT si.ItemNumber, si.ItemTitle, si.ItemDescription, si.RetailPrice, si.Weight, sl.Quantity, c.CategoryName, style.ProperyValue as style, size.ProperyValue as size FROM StockItem si LEFT OUTER JOIN StockLevel sl ON si.pkStockItemID = sl.fkStockItemId LEFT OUTER JOIN ProductCategories c ON si.CategoryId = c.CategoryId LEFT OUTER JOIN StockItem_ExtendedProperties style ON si.pkStockItemID = style.fkStockItemId AND style.ProperyName = 'Item Style' LEFT OUTER JOIN StockItem_ExtendedProperties size ON si.pkStockItemID = size.fkStockItemId AND size.ProperyName = 'Item Size' ```
To get each value in a column like that, you can first write a subquery for each property, and then `JOIN` them all together, like this: ``` SELECT m1.fkStockItemId, m1.propertyValue AS 'Item Style', m2.propertyValue AS 'Item Size', m3.propertyValue AS 'Item Color' FROM( SELECT fkStockItemId, propertyValue FROM myTable WHERE propertyName = 'Item Style') m1 LEFT JOIN( SELECT fkStockItemId, propertyValue FROM myTable WHERE propertyName = 'Item Size') m2 ON m2.fkStockItemId = m1.fkStockItemId LEFT JOIN( SELECT fkStockItemId, propertyValue FROM myTable WHERE propertyName = 'Item Color') m3 ON m3.fkStockItemId = m2.fkStockItemId; ``` Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/8d509/13) example.
How to create a SQL query for the following operation?
[ "", "mysql", "sql", "database", "qsqlquery", "isqlquery", "" ]
I have the following piece of code that's just to make sure that the temporary table doesn't exist. If the table exists I want to truncate it. ``` CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) --I create this just to test my try-catch BEGIN TRY CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) END TRY BEGIN CATCH PRINT N'#LookupLinks already existed and was truncated.'; TRUNCATE TABLE #LookupLinks END CATCH ``` What I want this to do: 1. The temp-table is created 2. Attempt to create it again 3. The error sends us into the catch 4. The table is truncated and everything continues as normal What happens: ERROR: There is already an object named '#LookupLinks' in the database. What am I doing wrong here?
This is because SQL Server parses and validates the whole batch. So when parsing the second `CREATE TABLE` statement, it errors out saying: > There is already an object named '#LookupLinks' in the database. See this example: ``` IF 1 = 1 BEGIN CREATE TABLE #temp(col INT) END ELSE BEGIN CREATE TABLE #temp(col INT) END ``` It produces an error saying: > There is already an object named '#temp' in the database. The workaround is to use `Dynamic SQL`. ``` -- CREATE the table for testing IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL DROP TABLE #LookupLinks CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) -- Final query IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL BEGIN TRUNCATE TABLE #LookupLinks PRINT N'#LookupLinks already existed and was truncated.' END ELSE BEGIN DECLARE @sql NVARCHAR(MAX) = '' SELECT @sql = ' CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) )' EXEC sp_executesql @sql PRINT N'#LookupLinks was created.' END ``` --- If you do not have the first `CREATE TABLE` statement,your query will work just fine. Or if you put a `GO` before the `BEGIN TRY`. ``` IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL DROP TABLE #LookupLinks -- DROP FIRST CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) --I create this just to test my try-catch GO BEGIN TRY CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) END TRY BEGIN CATCH PRINT N'#LookupLinks already existed and was truncated.'; TRUNCATE TABLE #LookupLinks END CATCH ``` Still, it's because SQL server parses and validates the whole batch. The `GO` statement will put the statements into their own batches, thus the error is now not happening. Even CeOnSql's [answer](https://stackoverflow.com/a/31020763/2203084) will work fine.
I think what you really want to achieve is this: ``` IF OBJECT_ID('tempdb..#LookupLinks') IS NOT NULL --Table already exists BEGIN TRUNCATE TABLE #LookupLinks PRINT N'#LookupLinks already existed and was truncated.'; END ELSE BEGIN CREATE TABLE #LookupLinks( [SyncID] uniqueidentifier, [Name] nvarchar(50), [SQLTable] nvarchar(50) ) END ``` ![enter image description here](https://i.stack.imgur.com/w9fOv.jpg)
TRY...CATCH doesn't seem to work
[ "", "sql", "sql-server", "error-handling", "sql-server-2012", "try-catch", "" ]
I am executing a dynamic SQL query, where parameters are passed dynamically. I first wrote something similar to the following code. ``` DECLARE @Id nvarchar(max); DECLARE @query nvarchar(max); SET @Id ='RMH_108' SET @query ='Select * FROM [A06].[Syn_RMDemand] WHERE RMHierarchyId =' + @Id PRINT @query EXEC(@query) ``` > ![Error](https://i.stack.imgur.com/9NZRV.png) Then I wrote this. ``` DECLARE @Id nvarchar(max); DECLARE @query nvarchar(max); SET @Id ='RMH_108' SET @query ='Select * FROM [A06].[Syn_RMDemand] WHERE RMHierarchyId = '''+@Id+'''' PRINT @query EXEC(@query) ``` > ![ERROR](https://i.stack.imgur.com/MSpOA.png) This time the code executed successfully. Since the parameter can be empty, I need to convert it to null. I modified the code and wrote this ``` DECLARE @Id nvarchar(max); DECLARE @query nvarchar(max); SET @Id = '' SET @Id = NULLIF(@Id,'') -- COMMENTED SET @Id ='RMH_108' SET @query ='Select * FROM [A06].[Syn_RMDemand] WHERE RMHierarchyId = '''+@Id+'''' PRINT @query EXEC(@query) ``` > ![Error](https://i.stack.imgur.com/i5xsL.png) No errors. Query crashed. Has anybody come across this sort of problem?
No need to convert it to NULL; just check against your value: ``` DECLARE @Id nvarchar(max); DECLARE @query nvarchar(max); SET @Id = '' -- COMMENTED SET @Id ='RMH_108' SET @query ='Select * FROM [A06].[Syn_RMDemand] WHERE RMHierarchyId = '''+@Id+'''' PRINT @query EXEC(@query) ```
You should use `sp_executeSQL` and remove string concatenation like this. I am assuming that if `NULL` is passed you want all the rows to be returned. **Query** ``` DECLARE @Id nvarchar(max); DECLARE @query nvarchar(max); SET @Id ='RMH_108' SET @query ='Select * FROM [A06].[Syn_RMDemand] WHERE RMHierarchyId = @Id OR @ID IS NULL' PRINT @query EXEC sp_executeSQL @query,N'@Id NVARCHAR(MAX)',@Id ```
Passing Null value parameter to dynamic SQL query. How can this be successfully executed?
[ "", "sql", "sql-server", "" ]
I need to filter the result of a join query between two tables, but I don't have a condition for the "where clause". What I need is to filter based on id_project like this: 1. if id_project is equal to 24 *(24 is the default project)* then it should return only rows with id_project =24. **here rows 1,3...10 will be selected** 2. if id_project is equal to 25, then I need those rows which have id_project=25 plus those rows which have id_project=24 and not id_project=25, **so rows number 2 to 11 will be selected** With this query: ``` SELECT tp.id_tag, tp.id_project, tp.NJTagName, tp.node_level , tl.id_level FROM instrumentation.dbo.tag_project tp INNER JOIN instrumentation.dbo.tag_level tl ON tl.id_tag=tp.id_tag -- where tl.id_level=69 and tp.node_level=1 ``` I get this result: ![enter image description here](https://i.stack.imgur.com/dD3cu.jpg) How can I change my query to do this?
Thinking a little more about your request, it boils down to: for every id_tag, give me the row for the requested ID, or for 24 if the requested one is not available. This can be done in one query, using a ranking with ROW_NUMBER in which you prefer the requested ID over 24.

```
select *
from
(
  select tp.*,
         row_number() over(partition by id_tag
                           order by case when id_project = 24 then 2 else 1 end) as rn
  from tag_project tp
  where id_project in (24, @REQUESTED_ID)
) ranked
where rn = 1;
```

Here is your original query changed accordingly:

```
SELECT id_tag, id_project, NJTagName, node_level, id_level
FROM
(
  SELECT tp.id_tag, tp.id_project, tp.NJTagName, tp.node_level, tl.id_level,
         row_number() over (partition by tp.id_tag
                            order by case when tp.id_project = 24 then 2 else 1 end) as rn
  FROM instrumentation.dbo.tag_project tp
  INNER JOIN instrumentation.dbo.tag_level tl ON tl.id_tag = tp.id_tag
  WHERE tl.id_level = 69
    AND tp.node_level = 1
    AND tp.id_project in (24, @REQUESTED_ID)
) ranked
WHERE rn = 1;
```
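The ranking trick above (keep one row per id_tag, preferring the requested project over the default 24) can be sketched in plain Python; the sample tag/project pairs below are invented for illustration:

```python
# (id_tag, id_project) pairs; project 24 is the default fallback
rows = [(1, 24), (2, 24), (2, 25), (3, 24), (3, 25), (4, 99)]
requested = 25

best = {}
for id_tag, id_project in rows:
    if id_project not in (24, requested):
        continue  # mirrors: where id_project in (24, @REQUESTED_ID)
    # prefer the requested project; 24 only survives when nothing else exists
    if id_tag not in best or (id_project == requested and best[id_tag] == 24):
        best[id_tag] = id_project

print(best)  # {1: 24, 2: 25, 3: 25}
```

Tag 1 falls back to the default project, tags 2 and 3 get the requested one, and tag 4 (which belongs to neither project) is excluded, exactly as the `rn = 1` filter does in the SQL version.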
So depending on whether the user asks for ID 24 or not, there are two different queries to execute. Here is the query for ID 24:

```
select *
from tag_project
where id_project = 24;
```

Here is the query for @OTHERID. We use UNION ALL to combine the 24 records with the @OTHERID records, and we use NOT EXISTS in order to avoid certain records.

```
select *
from tag_project
where id_project = @OTHERID
union all
select *
from tag_project tp
where id_project = 24
and not exists
(
  select *
  from tag_project tp2
  where tp2.id_tag = tp.id_tag
  and tp2.id_project = @OTHERID
);
```
conditional where clause
[ "", "sql", "sql-server", "inner-join", "" ]
![myTable](https://i.stack.imgur.com/fTfRB.png)

Basically I have a table named myTable with the columns Name and Class, and I would like to summarize it like the table below, with a count for each name of the number of 1s, 2s and 3s it has. Below is the code I currently have, but while it works on a small number of entries, if the table has 20,000 lines it takes a very long time to execute.

```
Select DISTINCT(Name),
(Select count(*) from `myTable` d1 where d1.Name = dd.Name and Class=1),
(Select count(*) from `myTable` d2 where d2.Name = dd.Name and Class=2),
(Select count(*) from `myTable` d3 where d3.Name = dd.Name and Class=3)
from `myTable` dd
Order by Name;
```
It is classic conditional aggregation:

```
Select Name,
       sum(case when Class = 1 then 1 else 0 end),
       sum(case when Class = 2 then 1 else 0 end),
       sum(case when Class = 3 then 1 else 0 end)
from myTable
group by Name
```
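The same single-pass pattern can be tried out with SQLite from Python; the sample names and classes here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table myTable (Name text, Class integer)")
conn.executemany(
    "insert into myTable values (?, ?)",
    [("Ann", 1), ("Ann", 1), ("Ann", 2), ("Bob", 3)],
)

# One scan of the table; each SUM(CASE ...) counts one class
result = conn.execute("""
    select Name,
           sum(case when Class = 1 then 1 else 0 end) as c1,
           sum(case when Class = 2 then 1 else 0 end) as c2,
           sum(case when Class = 3 then 1 else 0 end) as c3
    from myTable
    group by Name
    order by Name
""").fetchall()

print(result)  # [('Ann', 2, 1, 0), ('Bob', 0, 0, 1)]
```

Unlike the three correlated subqueries in the question, this reads the table once, which is why it scales to tables with many thousands of rows.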
Do a `GROUP BY` with conditional counting:

```
select Name,
       count(case when class = 1 then 1 end),
       count(case when class = 2 then 1 end),
       count(case when class = 3 then 1 end)
from `myTable`
group by Name;
```

**Note:** `DISTINCT` is not a function on a column, it works on the whole selected rows. (It doesn't matter if the column name is put within parentheses or not.)

```
select distinct(col1), col2, ...
```

is the same as

```
select distinct col1, col2, ...
```

and also the same as

```
select distinct col1, (col2), ...
```
mySQL count table efficiently
[ "", "mysql", "sql", "database", "" ]
I'm trying to find similar values in an array, and not just one value but a group of them, such that the sum of their element-wise differences is the lowest possible value.

EXAMPLE: 0 2 4 6 8 9 11 15 16 19, pick 5 numbers

RESULT: 4 6 8 9 11 or 2 4 6 8 9

The sum of the element-wise differences of both groups is 7.

The problem is that I need to select such a group of 1500 numbers from an array of 2927 numbers, and I'm not sure whether an algorithm that takes the group at indexes 0-1500, sums the differences, then moves on by i+1 until it reaches the group 1427-2927 is efficient (at the end I would check for the smallest sum and which group it belongs to).

Note that the numbers are sorted (it doesn't matter if ASC or DESC) and I'm trying to do this in PostgreSQL.

Thanks in advance.
[SQL Fiddle](http://sqlfiddle.com/#!15/a3f63/18)

**PostgreSQL 9.3 Schema Setup**: A small dataset of random data:

```
CREATE TABLE test
(
  id INT,
  population INT
);

INSERT INTO TEST VALUES ( 1, 12 );
INSERT INTO TEST VALUES ( 2, 11 );
INSERT INTO TEST VALUES ( 3, 14 );
INSERT INTO TEST VALUES ( 4, 6 );
INSERT INTO TEST VALUES ( 5, 7 );
INSERT INTO TEST VALUES ( 6, 7 );
INSERT INTO TEST VALUES ( 7, 1 );
INSERT INTO TEST VALUES ( 8, 15 );
INSERT INTO TEST VALUES ( 9, 14 );
INSERT INTO TEST VALUES ( 10, 14 );
INSERT INTO TEST VALUES ( 11, 15 );
INSERT INTO TEST VALUES ( 12, 12 );
INSERT INTO TEST VALUES ( 13, 11 );
INSERT INTO TEST VALUES ( 14, 3 );
INSERT INTO TEST VALUES ( 15, 8 );
INSERT INTO TEST VALUES ( 16, 1 );
INSERT INTO TEST VALUES ( 17, 1 );
INSERT INTO TEST VALUES ( 18, 2 );
INSERT INTO TEST VALUES ( 19, 3 );
INSERT INTO TEST VALUES ( 20, 5 );
```

**Query 1**:

```
WITH ordered_sums AS
(
  SELECT ID,
         POPULATION,
         ROW_NUMBER() OVER ( ORDER BY POPULATION ) AS RN,
         POPULATION - LAG(POPULATION,4) OVER ( ORDER BY POPULATION ) AS DIFFERENCE
  FROM test
),
minimum_rn AS
(
  SELECT DISTINCT FIRST_VALUE( RN ) OVER wnd AS optimal_rn
  FROM ordered_sums
  WINDOW wnd AS ( ORDER BY DIFFERENCE )
)
SELECT ID,
       POPULATION
FROM ordered_sums o
INNER JOIN minimum_rn m
ON ( o.RN BETWEEN m.OPTIMAL_RN - 4 AND m.OPTIMAL_RN )
```

**[Results](http://sqlfiddle.com/#!15/a3f63/18/0)**:

```
| id | population |
|----|------------|
| 10 |         14 |
|  9 |         14 |
|  3 |         14 |
| 11 |         15 |
|  8 |         15 |
```

The query above will select `5` rows - to change it to select `N` rows then change the `4`s in the `LAG` function and in the last line to `N-1`.
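Since the values are sorted, the best group of k numbers is always a window of k consecutive elements, and the sum of consecutive differences inside a window equals last minus first; that is exactly what the `LAG(POPULATION, 4)` difference computes. A quick Python check on the example from the question:

```python
data = [0, 2, 4, 6, 8, 9, 11, 15, 16, 19]  # already sorted
k = 5

# spread of the window starting at i is data[i + k - 1] - data[i]
# (equal to the sum of the consecutive differences inside the window)
best_i = min(range(len(data) - k + 1), key=lambda i: data[i + k - 1] - data[i])
group = data[best_i:best_i + k]
spread = data[best_i + k - 1] - data[best_i]

print(group, spread)  # [2, 4, 6, 8, 9] 7
```

This is an O(n) scan once the data is sorted, so the asker's worry about checking every window is unfounded: there are only n - k + 1 windows, each checked in constant time.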
This solution should work: ROW_NUMBER() to get the order, a self-join on +1499, then order by the difference of sizes in each pair.

```
DECLARE @cities TABLE (
    city VARCHAR(512)
    ,size INT
    ,rownum INT
    )

INSERT INTO @cities
SELECT *
    ,row_number() OVER ( ORDER BY size ) rownum
FROM rawdata

SELECT *
    ,d.size - c.size difference
FROM @cities c
INNER JOIN @cities d ON c.rownum + 1499 = d.rownum
WHERE c.rownum <= 2927 - 1499
ORDER BY d.size - c.size
```
Find group of N similar numbers in group of N+M numbers
[ "", "sql", "algorithm", "" ]
I have the following SQL, which should throw an error:

> Ambiguous column name

since the `conferencetitle` column is in 2 tables, `course` and `coursesession`. Please note I'm selecting this column only once.

```
SELECT CourseSessionID, CourseSessionNum, usrCourseSessionNum, CourseSession.ConferenceTitle,
       ConferenceDescription, MaxParticipants, NumParticipants,
       openSeats = ISNULL(MaxParticipants, 0) - ISNULL(NumParticipants, 0),
       WaitingList, WaitingListCap, SessionCancelled, StartDate, SchedTime, SchedDate, StartTime,
       ShowThruDate, CourseSession.RegStartDate, ExtraCostCreditInfo, Days, IsMaster, IsMasterNoCredit,
       CutOffDate, DisableConfRes, MasterSessionId,
       Case When Days = 1 Then 'Scheduled Date/Time' Else 'Scheduled Date(s)/Time(s)' End As daysMessage,
       Location.Name, econtent, InstructorId, PaymentInstr, SpecialInstr, ContactName, ContactEmail,
       ContactPhone, ConfCWFunctionType, ConfCWFunctionType_opt, PageNumber, VideoTeleConference,
       CourseType, ConferenceRequiredInd, DisableConfRes, ShowSeatsRemainingOnPublic, LinkedSessionID,
       CourseSession.CourseNum, PublicStandardKey, PublicMasterKey, PublicBreakoutKey
FROM CourseSession
LEFT OUTER JOIN Location ON CourseSession.LocNum = Location.LocNum
INNER JOIN Course On CourseSession.CourseNum = Course.CourseNum
WHERE Course.CourseNum = 1944
  AND (Course.DisplayStartingDate IS NULL OR DATEDIFF(n, Course.DisplayStartingDate, GETDATE()) >= 0)
  AND (CourseSession.DisplayDate IS NULL OR DATEDIFF(n, CourseSession.DisplayDate, GETDATE()) >= 0)
  AND ((CourseType = 0 AND CourseSession.ShowThruDate + ' 11:59:59 PM' >= GetDate())
       OR (CourseType = 1 And Course.ConfShowThruDate + ' 11:59:59 PM' >= GetDate()
           AND CourseSession.ShowThruDate + ' 11:59:59 PM' >= GetDate()))
  AND MasterSessionID Is Null
  AND SessionCancelled = 0
Order By PageNumber, ConferenceSortOrder, StartDate, StartTime, LinkedSessionID, ConferenceTitle,
         CASE WHEN numparticipants < Maxparticipants THEN 0 ELSE 1 END, Location.Name
```

I have a similar (actually the same) database structure for different clients who use the app. I observed that this query gives the `'Ambiguous column name'` error only for a few clients, while for the others it works fine. I'm unable to figure out the reason for this.

**EDIT**: I run the query on the same version of SQL Server 2008.

1. Changing the ORDER BY clause fixes this; but **I need to find out why the error is not there** on other databases, even though I run the same query on them and they have the same db structure.
> Changing the order by clause fixes this;but i need to find why the
> error is not there on other databases even though i run the same query
> on them which have same db structure.

The behavior has changed between SQL Server 2000 and SQL Server 2005. Read the Microsoft answer to the following connect item for more details on why it was done:

* [Behaviour Change 2000->2005: Ambiguous column name error using a column twice](https://connect.microsoft.com/SQLServer/feedback/details/203628/behaviour-change-2000-2005-ambiguous-column-name-error-using-a-column-twice)

> Thank you for the report. This change is by design and was done as
> part of an overall effort to make binding of column names more
> predictable and align it with the SQL Standard. Among the other
> changes were: not to ignoring column prefixes and limiting column
> references to 2-part names instead of allowing more prefixes. This was
> done to help uncover queries that were not running as the author
> intended or had typos or cut/paste errors. In most of the investigated
> cases, this has proven to be beneficial. As you note, the original
> behavior is maintained in backwards compat mode, so if you need this
> query to continue running as-is, you'll need to use the compat mode.

What you can do to check if this is really the case is to run the following query:

```
SELECT compatibility_level
FROM sys.databases
WHERE name = 'DatabaseName'
```

If the result is 80 or below, then you are running in SQL Server 2000 compatibility, so you won't experience this error. See the 3rd item of [Differences Between Compatibility Level 80 and Level 90](https://msdn.microsoft.com/en-us/library/bb510680%28v=sql.100%29.aspx) for additional details.
The problem is in the `ORDER` statement, where you don't specify which `ConferenceTitle` to use:

```
ORDER BY PageNumber, ConferenceSortOrder, StartDate, StartTime, LinkedSessionID, Course.ConferenceTitle
```
SQL Server doesn't detect ambiguity
[ "", "sql", "sql-server", "sql-server-2008", "" ]
(SQL Server 2012 being used)

I found some topics on query optimization, and on comparing EXISTS to COUNT, but I couldn't find this exact problem. I have a query that looks something like this:

```
select * from tblAccount as acc
join tblUser as user on acc.AccountId = user.AccountId
join tblAddress as addr on acc.AccountId = addr.AccountId
... **a few more joins**
where acc.AccountId in
(
  select accountid from
  (
    select accountid, count(*) as c
    from tblUser
    where flag = 1
    group by accountId
  ) as tbl
  where c != 1
)
```

This query runs in an instant (although the db is quite big, around 70Gb). When I wrap the query in an EXISTS as in:

```
if exists
(
  **Exact same query as above**
)
begin
  RAISERROR('Account found without exactly one flagged user.', 16, 1);
end
else
begin
  print 'test passed.'
end
```

Suddenly the query takes about 5-6 seconds to complete. I've tried specifying IF EXISTS (SELECT TOP 1 FROM... and also tried NOT EXISTS (which was even slower), but neither works to speed this up.

If the normal select query completes basically instantly, then does anyone know why wrapping it in the EXISTS causes so much extra computation? And/or does anyone have any ideas for a workaround (I'm just trying to throw an error if any records are found at all by the original query)?

Thanks!
Did you try running the original query with TOP 1? most likely it will be just as slow. Sometimes when the optimizer thinks that something is very likely and going to return a vast set of data with little effort (i.e. almost all records are going to get returned), it chooses mostly loop joins because it only needs to get the first one and a loop join is good for only getting a couple records. When that turns out to not be true, it takes forever and a day to get results. In your case, it sounds like it's very rare, so this choice hurts badly. Try instead doing something like `SELECT @count = COUNT(*) FROM ...` and then checking if that count is non-zero.
I've fought this issue as well. The query was 10 ms when I ran it on its own, but once I put it in the IF EXISTS it went to 4 minutes. No matter what I tried, it didn't go back to 10 ms.

The issue was reproduced on 4 different servers but not on 2 servers. The servers all had the same db backup and the same patch level of MSSQL 2012; they were on different OSes and varying hardware.

I tried:

* adjusting the max memory grant - no effect
* changing the threshold for parallelism - no effect
* rewriting the query to make it simpler - no effect
* using top 1 - no effect
* clearing the cache between changes - no effect
* breaking the query into some indexed views where I could (can't do it for the parts using an outer join) - no effect
* applying the recommended missing index - reduced time from 4 to 3 minutes, but still not the 10 ms I expected
* changing the outer join to a where not in (sub-query) - no effect
* running sp_updatestats - no effect

The only solution that worked for me was to put the results in a temp table and do the IF EXISTS against that temp table:

```
SELECT TOP 1 1 AS junk
INTO #me
FROM yourCrazyQueryHere

IF EXISTS ( SELECT 1 FROM #me )
    SELECT GETDATE()
```

Hope this helps.
SQL Server: IF EXISTS massively slowing down a query
[ "", "sql", "sql-server", "sql-server-2012", "query-optimization", "exists", "" ]
I have the following data in a table:

```
Country     State      RequestStatus
India       MP         0
India       MP         1
Usa         At         0
Usa         At         0
```

How do I get all the Completed, Pending and Total requests, grouped by country and state, in an MSSQL query?

```
0: Pending
1: Complete
```

Output should be something like below:

```
Country  State  Total  Pending  Complete
India    MP     2      1        1
USA      At     2      2        0
```
```
SELECT Country, State,
       COUNT(*) AS Total,
       SUM(CASE WHEN RequestStatus = 0 THEN 1 ELSE 0 END) AS Pending,
       SUM(CASE WHEN RequestStatus = 1 THEN 1 ELSE 0 END) AS Complete
FROM @Test T
GROUP BY Country, State
```
Use this:

```
select country, [state], count(*) as Total,
       sum(case when requeststatus = 0 then 1 else 0 end) as Pending,
       sum(case when requeststatus = 1 then 1 else 0 end) as Complete
from tbl
group by country, [state]
```

We first group by the country and state to get aggregated data. Then, for calculating the total based on a specific condition, we use the `case` construct to filter out the matching records, e.g. if we want to count only the Pending requests, we only increment the counter when `requeststatus = 0`.

[Demo](http://rextester.com/GGBWYI4404)
How to Use where Clause in Count function
[ "", "sql", "sql-server-2008", "" ]
I have data like this:

```
---------------------------
| code | other column
---------------------------
| C    | a
| null | a
| A    | a
| null | a
| null | a
----------------------------
```

How can I write a query to get a row_number that does not count the rows where the column is null?

```
----------------------------------
| id | code | other column |
----------------------------------
| 1  | C    | a
|    | null | a
| 2  | A    | a
|    | null | a
|    | null | a
----------------------------------
```
Well, not specifically. But you can get what you want by using conditional logic:

```
select (case when code is not null
             then row_number() over (partition by (case when code is not null then 1 else 0 end)
                                     order by . . .
                                    )
        end) as id
```

It is not clear to me what the `order by` is for the `row_number()`, which is what the `. . .` means.
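The intended numbering (count only non-NULL codes, leave the NULL rows unnumbered) can be sketched in plain Python on the sample data from the question:

```python
rows = [("C", "a"), (None, "a"), ("A", "a"), (None, "a"), (None, "a")]

numbered = []
next_id = 0
for code, other in rows:
    if code is None:
        numbered.append((None, code, other))  # NULL rows get no id
    else:
        next_id += 1                          # only non-NULL rows advance the counter
        numbered.append((next_id, code, other))

print([r[0] for r in numbered])  # [1, None, 2, None, None]
```

This is the behavior the CASE-wrapped ROW_NUMBER reproduces in SQL: the partition on `code is not null` keeps the counter running over the non-NULL rows only, and the outer CASE blanks out the id for the rest.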
If you need to order on code (descending in your example) with NULLs last:

```
select decode(code, null, null, row_number() over (order by code DESC NULLS LAST)) rn,
       code
from test;
```

If you need to order on the OTHER column:

```
select decode(code, null, null,
              row_number() over (order by decode(code, null, null, 'x') NULLS LAST, other DESC)) rn,
       code, other
from test;
```
Can row_number() ignore null in oracle
[ "", "sql", "oracle", "" ]
I have a list of names, **John, Rupert, Cassandra, Amy**, and I want to get the names which do not exist in the table: **Cassandra, Amy**.

How should I write such a query?

My table:

```
+----+--------+-----------+------+
| id | name   | address   | tele |
+----+--------+-----------+------+
| 1  | Rupert | Somewhere | 022  |
| 2  | John   | Doe       | 029  |
| 3  | Donald | Armstrong | 021  |
| 4  | Bob    | Gates     | 022  |
+----+--------+-----------+------+
```
Think in sets. You add names to the result set with UNION ALL; you remove names from the result set with EXCEPT.

```
select 'John'
union all
select 'Rupert'
union all
select 'Cassandra'
union all
select 'Amy'
except
select name from mytable;
```
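The same set semantics can be tried with SQLite from Python (the question is tagged PostgreSQL, but UNION ALL and EXCEPT behave the same way for this purpose):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table mytable (name text)")
conn.executemany("insert into mytable values (?)",
                 [("Rupert",), ("John",), ("Donald",), ("Bob",)])

# build the candidate set with UNION ALL, subtract existing names with EXCEPT
missing = conn.execute("""
    select 'John'
    union all select 'Rupert'
    union all select 'Cassandra'
    union all select 'Amy'
    except
    select name from mytable
""").fetchall()

print(sorted(missing))  # [('Amy',), ('Cassandra',)]
```

Compound selects are evaluated left to right, so all four candidate names are combined first and the table's names are subtracted at the end; EXCEPT also de-duplicates, which is harmless here.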
Build up a list of your names to check and do a left join to the `users` table:

```
with to_check (name) as (
  values
    ('John'),
    ('Rupert'),
    ('Cassandra'),
    ('Amy')
)
select tc.name as missing_name
from to_check tc
  left join the_table tt on tt.name = tc.name
where tt.name is null;
```

SQLFiddle example: <http://sqlfiddle.com/#!15/5c4f5/1>
Rows which do not exist in a table
[ "", "sql", "postgresql", "" ]
I have a string variable which I can use to filter for certain days of the week, or the whole weekend, using

```
where datepart(dw,date) in (@weekday)
```

where I declare `@weekday` as `nvarchar`.

When I have `@weekday = '1'` or any other single day it works perfectly, but when I try `@weekday = '1,7'` it fails to convert the data type to `int`. Any ideas how to fix this?
You can't use it that way. You only have these options:

1. Run the whole script in a dynamic SQL statement, which would be bad.
2. Change your WHERE to something like `where datepart(dw,date) = @weekday1 OR datepart(dw,date) = @weekday2`.
3. Change your `nvarchar` variable to a table variable, insert 1 and 7 into it, and use it in a `JOIN` inside your query to filter the rows.
You are attempting to use `@weekday` as if you had a piece of dynamic SQL, such that your where clause would be:

```
where datepart(dw, date) in (1, 7)
```

However, what is actually happening is:

```
where datepart(dw, date) in ('1, 7')
```

This is trying to compare `datepart(dw, date)` to `'1, 7'`, which is obviously doomed to fail because one is an `INT` and the other is an `NVARCHAR`.

Try either using dynamic SQL or finding another solution to your problem. You could for example try something like this (it may not work out of the box, but investigate it):

```
where (@weekday = '1, 7' AND datepart(dw,date) in (1, 7))
   OR (@weekday != '1, 7' AND datepart(dw,date) = @weekday)
```

This sees if `@weekday` is set to `'1, 7'`, and if so manually checks for a `datepart` in 1 and 7, or if not performs your original check for a normal weekday.
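A common client-side alternative is to split the comma-separated string and generate one placeholder per value; this sketch uses Python with SQLite purely for illustration (in T-SQL the equivalent would be a table variable, or STRING_SPLIT on SQL Server 2016+):

```python
import sqlite3

weekday = "1,7"  # comma-separated user input
values = [int(v) for v in weekday.split(",")]

conn = sqlite3.connect(":memory:")
conn.execute("create table t (d integer)")
conn.executemany("insert into t values (?)", [(i,) for i in range(1, 8)])

# one '?' placeholder per value, so each day is bound as a real integer
placeholders = ",".join("?" * len(values))
rows = conn.execute(f"select d from t where d in ({placeholders}) order by d",
                    values).fetchall()

print(rows)  # [(1,), (7,)]
```

Because every value is bound as its own integer parameter, there is no string-to-int conversion failure and no SQL injection risk from the user input.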
SQL Server filter for weekend only in variable
[ "", "sql", "sql-server", "" ]
I am trying to write an SQLite statement to get the closest datetime to a user input (from a WPF datepicker).

I have a table IRquote(rateId, quoteDateAndTime, quoteValue).

For example, if the user enters 10/01/2000 and the database has fixings stored only for 08/01/2000, 07/01/2000 and 14/01/2000, it would return 08/01/2000, that being the closest date to 10/01/2000.

Of course, I'd like it to work not only with dates but also with times.

I tried this query, but it returns the row with the furthest date, not the closest one:

```
SELECT quoteValue FROM IRquote
WHERE rateId = '" + pRefIndexTicker + "'
ORDER BY abs(datetime(quoteDateAndTime) - datetime('" + DateTimeSQLite(pFixingDate) + "')) ASC
LIMIT 1;
```

Note that I have a function DateTimeSQLite to transform the user input to the right format.

I don't get why this does not work. How could I do it?

Thanks for your help.
To get the closest date, you will need to use the **[strftime('%s', datetime)](https://www.sqlite.org/lang_datefunc.html)** SQLite function.

With [this](http://sqlfiddle.com/#!5/40935/5) example/demo, you will get the closest date to your given date. Note that the date `2015-06-25 10:00:00` is the input datetime that the user selected.

```
select t.ID, t.Price, t.PriceDate,
       abs(strftime('%s','2015-06-25 10:00:00') - strftime('%s', t.PriceDate)) as 'ClosestDate'
from Test t
order by abs(strftime('%s','2015-06-25 10:00:00') - strftime('%s', PriceDate))
limit 1;
```

SQL explanation: We use `strftime('%s') - strftime('%s')` to calculate the difference, in seconds, between the two dates (*Note:* it has to be '%s', not '%S'). Since this can be either positive or negative, we also need to use the `abs` function to make it all positive, to ensure that our `order by` and subsequent `limit 1` sections work correctly.
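Here is the same technique run end-to-end against the question's IRquote table, using Python's built-in sqlite3 module (the sample dates and values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table IRquote (rateId text, quoteDateAndTime text, quoteValue real)")
conn.executemany("insert into IRquote values (?, ?, ?)", [
    ("X", "2000-01-07 00:00:00", 1.0),
    ("X", "2000-01-08 00:00:00", 2.0),
    ("X", "2000-01-14 00:00:00", 3.0),
])

# order by absolute distance in seconds from the requested datetime
row = conn.execute("""
    select quoteDateAndTime, quoteValue
    from IRquote
    where rateId = ?
    order by abs(strftime('%s', quoteDateAndTime) - strftime('%s', ?))
    limit 1
""", ("X", "2000-01-10 00:00:00")).fetchone()

print(row)  # ('2000-01-08 00:00:00', 2.0)
```

Binding the user's datetime as a parameter also removes the string concatenation from the original C# snippet, which is both safer and easier to read.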
If the table is big, and there is an index on the datetime column, this will use the index to get the 2 closest rows (above and below the supplied value) and will be more efficient:

```
select * from
(
  select * from
  (
    select t.ID, t.Price, t.PriceDate
    from Test t
    where t.PriceDate <= datetime('2015-06-23 10:00:00')
    order by t.PriceDate desc
    limit 1
  ) d
  union all
  select * from
  (
    select t.ID, t.Price, t.PriceDate
    from Test t
    where t.PriceDate > datetime('2015-06-23 10:00:00')
    order by t.PriceDate asc
    limit 1
  ) a
) x
order by abs(julianday('2015-06-23 10:00:00') - julianday(PriceDate))
limit 1;
```

Tested in [SQLfiddle](http://sqlfiddle.com/#!5/e068a/2).
SQLite query to get the closest datetime
[ "", "sql", "sqlite", "datetime", "" ]
I'm dumbfounded that this question has not been asked meaningfully already. How does one go about creating an equivalent of a function like `LTRIM` or `RTRIM` in SQL for carriage returns and line feeds ONLY at the start or end of a string?

Obviously `REPLACE(REPLACE(@MyString,char(10),''),char(13),'')` removes ALL carriage returns and line feeds, which is NOT what I'm looking for. I just want to remove leading or trailing ones.
Find the first character that is *not* `CHAR(13)` or `CHAR(10)` and subtract its position from the string's length.

**LTRIM()**

```
SELECT RIGHT(@MyString, LEN(@MyString) - PATINDEX('%[^' + CHAR(13) + CHAR(10) + ']%', @MyString) + 1)
```

**RTRIM()**

```
SELECT LEFT(@MyString, LEN(@MyString) - PATINDEX('%[^' + CHAR(13) + CHAR(10) + ']%', REVERSE(@MyString)) + 1)
```
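For comparison, the leading/trailing-only semantics being built here is exactly what Python's strip family does when given an explicit character set, which makes a handy cross-check for expected results:

```python
s = "\r\n\r\nhello\r\nworld\r\n"

left = s.lstrip("\r\n")   # trim leading CR/LF only
right = s.rstrip("\r\n")  # trim trailing CR/LF only
both = s.strip("\r\n")    # trim both ends; the inner CR/LF survives

print(repr(both))  # 'hello\r\nworld'
```

Note that the argument to `strip` is a set of characters, not a substring, mirroring the `[^...]` character class in the PATINDEX pattern above.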
The following functions are enhanced versions of the `trim` functions you can use. Copied from [sqlauthority.com](http://blog.sqlauthority.com/2008/10/10/sql-server-2008-enhenced-trim-function-remove-trailing-spaces-leading-spaces-white-space-tabs-carriage-returns-line-feeds/). These functions remove trailing spaces, leading spaces, white space, tabs, carriage returns, line feeds etc.

**Trim Left**

```
CREATE FUNCTION dbo.LTrimX(@str VARCHAR(MAX)) RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @trimchars VARCHAR(10)
    SET @trimchars = CHAR(9)+CHAR(10)+CHAR(13)+CHAR(32)
    IF @str LIKE '[' + @trimchars + ']%'
        SET @str = SUBSTRING(@str, PATINDEX('%[^' + @trimchars + ']%', @str), LEN(@str))
    RETURN @str
END
```

**Trim Right**

```
CREATE FUNCTION dbo.RTrimX(@str VARCHAR(MAX)) RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @trimchars VARCHAR(10)
    SET @trimchars = CHAR(9)+CHAR(10)+CHAR(13)+CHAR(32)
    IF @str LIKE '%[' + @trimchars + ']'
        SET @str = REVERSE(dbo.LTrimX(REVERSE(@str)))
    RETURN @str
END
```

**Trim both Left and Right**

```
CREATE FUNCTION dbo.TrimX(@str VARCHAR(MAX)) RETURNS VARCHAR(MAX)
AS
BEGIN
    RETURN dbo.LTrimX(dbo.RTrimX(@str))
END
```

**Using the functions**

```
SELECT dbo.TRIMX(@MyString)
```

If you do use these functions, you might also consider changing from `varchar` to `nvarchar` to support more encodings.
Remove only leading or trailing carriage returns
[ "", "sql", "sql-server", "string", "t-sql", "" ]
I have the following model structure:

Model Visitor

```
class Visitor < ActiveRecord::Base
  has_many :triggers
end
```

Model Trigger

```
class Trigger < ActiveRecord::Base
  belongs_to :visitor, :inverse_of => :triggers
  belongs_to :event, :inverse_of => :triggers
end
```

Model Event

```
class Event < ActiveRecord::Base
  has_many :triggers
end
```

I am trying to set up a custom association in the Visitor model like so:

```
has_many :triggers_that_pass_some_condition, :class_name => "Trigger",
         :conditions => ["triggers.some_column >= events.some_column"]
```

The problem is that it doesn't work. I am guessing I have to do some kind of join to compare columns of two separate models (that are associated with each other).

I have tried `triggers.some_column >= triggers.event.some_column`. That does not work either.

Does anyone have any suggestions? Thanks!
Thanks to the clue from Darpa, I eventually settled on this:

```
has_many :custom_trigger, {:class_name => "Trigger",
                           :include => :event,
                           :conditions => ["triggers.some_column >= events.another_column"]}
```
Make sure you first add the correct association between `Visitor` and `Trigger` in your model setup. From there, you can add a custom association as follows:

```
class Visitor < ActiveRecord::Base
  has_many :approved_triggers, -> { includes(:events).where("events.something = ?", true).references(:events) },
           class_name: 'Trigger', inverse_of: :visitor
end

class Trigger < ActiveRecord::Base
  belongs_to :visitor, inverse_of: :triggers
end
```

Right now your `Trigger` class holds no association to a `Visitor`.
has_many :condition multiple models how to use join here?
[ "", "sql", "ruby-on-rails", "join", "activerecord", "associations", "" ]
## Performance problem updating a big MySQL MyISAM table, making a column ascending based on an index on the same table

My problem is that the server has only 4 GB of memory.

I have to do an update query like this one: [previously asked question](https://stackoverflow.com/questions/19459643/mysql-update-incremented-not-auto-incremented-column-values)

Mine is this:

```
set @orderid = 0;
update images im
set im.orderid = (select @orderid := @orderid + 1)
ORDER BY im.hotel_id, im.idImageType;
```

On `im.hotel_id, im.idImageType` I have an ascending index. On `im.orderid` I have an ascending index too.

The table has 21 million records and is a MyISAM table. The table is this:

```
CREATE TABLE `images` (
  `photo_id` int(11) NOT NULL,
  `idImageType` int(11) NOT NULL,
  `hotel_id` int(11) NOT NULL,
  `room_id` int(11) DEFAULT NULL,
  `url_original` varchar(150) COLLATE utf8_unicode_ci NOT NULL,
  `url_max300` varchar(150) COLLATE utf8_unicode_ci NOT NULL,
  `url_square60` varchar(150) COLLATE utf8_unicode_ci NOT NULL,
  `archive` int(11) NOT NULL DEFAULT '0',
  `orderid` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`photo_id`),
  KEY `idImageType` (`idImageType`),
  KEY `hotel_id` (`hotel_id`),
  KEY `hotel_id_idImageType` (`hotel_id`,`idImageType`),
  KEY `archive` (`archive`),
  KEY `room_id` (`room_id`),
  KEY `orderid` (`orderid`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
```

The problem is the performance: the update hangs for several minutes, and the server disk goes busy too.

My question is: is there a better way to achieve the same result? Do I have to partition the table or do something else to increase the performance? I cannot modify the server hardware, but I can tune the MySQL server settings.

Best regards.
### Tanks to every body. Yours answers help me much. I think that now I have found a better solution. This problem involve in two critical issue: * efficient paginate on large table * update large table. To go on efficient paginate on large table I have found a solution by make a previous update on the table but doing so I fall in issues on the 51 minute time needed to the updates and consequent my java infrastructure time out (spring-batch step). Now by yours help, I found two solution to paginate on large table, and one solution to update large table. To reach this performance the server need memory. I try this solution on develop server using ***32 GB*** memory. # common solution step To paginate follow a fields tupla like I needed I have make one index: ``` KEY `hotel_id_idImageType` (`hotel_id`,`idImageType`) ``` to achieve the new solution we have to change this index by add the primary key part to the index tail `KEY hotel_id_idImageType (hotel_id,idImageType, primary key fields)`: ``` drop index hotel_id_idImageType on images; create index hotelTypePhoto on images (hotel_id, idImageType, photo_id); ``` This is needed to avoid touch table and use only the index file ... ***Suppose we want the 10 records after the 19000000 record.*** ***The decimal point is this `,` in this answers*** # solution 1 This solution is very practice and not needed the extra field `orderid` and you have not to do any update before the pagination: ``` select * from images im inner join (select photo_id from images order by hotel_id, idImageType, photo_id limit 19000000,10) k on im.photo_id = k.photo_id; ``` To make the table k on my ***21 million table records*** need only 1,5 sec because it use only the three field in index `hotelTypePhoto` so haven't to access to the table file and work only on index file. The order was like the original required (hotel\_id, idImageType) because is included in (hotel\_id, idImageType, photo\_id): same subset... 
The join take no time so every first time the paginate is executed on the same page need only 1,5 sec and this is a good time if you have to execute it in a batch one on 3 months. On production server using ***4 GB memory*** the same query take 3,5 sec. Partitioning the table do not help to improve performance. If the server take it in cache the time go down or if you do a jdbc params statment the time go down too (I suppose). If you have to use it often, it have the advantage that it do not care if the data change. # solution 2 This solution need the extra field `orderid` and need to do the orderid update one time by batch import and the data have not to change until the next batch import. Then you can paginate on the table in 0,000 sec. ``` set @orderid = 0; update images im inner join ( select photo_id, (@orderid := @orderid + 1) as newOrder from images order by hotel_id, idImageType, photo_id ) k on im.photo_id = k.photo_id set im.orderid = k.newOrder; ``` The table k is fast almost like in the first solution. This all update take only 150,551 sec much better than 51 minute!!! (150s vs 3060s) After this update in the batch you can do the paginate by: ``` select * from images im where orderid between 19000000 and 19000010; ``` or better ``` select * from images im where orderid >= 19000000 and orderid< 19000010; ``` this take 0,000sec to execute first time and all other time. ## Edit after Rick comment # Solution 3 This solution is to avoid extra fields and offset use. But need too take memory of the last page read like in [this solution](http://mysql.rjweb.org/doc.php/pagination) ***This is a fast solution and can work on online server production using only 4GB memory*** Suppose you need to read last ten records after 20000000. There is two scenario to take care: * You can start read it from the first to the 20000000 if you need all of it like me and update some variable to take memory of last page read. * you have to read only the last 10 after 20000000. 
In this second scenario you have to do a pre query to find the start page: ``` select hotel_id, idImageType, photo_id from images im order by hotel_id, idImageType, photo_id limit 20000000,1 ``` It give to me: ``` +----------+-------------+----------+ | hotel_id | idImageType | photo_id | +----------+-------------+----------+ | 1309878 | 4 | 43259857 | +----------+-------------+----------+ ``` This take 6,73 sec. So you can store this values in variable to next use. Suppose we named `@hot=1309878, @type=4, @photo=43259857` Then you can use it in a second query like this: ``` select * from images im where hotel_id>@hot OR ( hotel_id=@hot and idImageType>@type OR ( idImageType=@type and photo_id>@photo ) ) order by hotel_id, idImageType, photo_id limit 10; ``` The first clause `hotel_id>@hot` take all records after the actual first field on scrolling index but lost some record. To take it we have to do the OR clause that take on the first index field all remained unread records. This take only 0,10 sec now. But this query can be optimized (bool distributive): ``` select * from images im where hotel_id>@hot OR ( hotel_id=@hot and (idImageType>@type or idImageType=@type) and (idImageType>@type or photo_id>@photo ) ) order by hotel_id, idImageType, photo_id limit 10; ``` that become: ``` select * from images im where hotel_id>@hot OR ( hotel_id=@hot and idImageType>=@type and (idImageType>@type or photo_id>@photo ) ) order by hotel_id, idImageType, photo_id limit 10; ``` that become: ``` select * from images im where (hotel_id>@hot OR hotel_id=@hot) and (hotel_id>@hot OR (idImageType>=@type and (idImageType>@type or photo_id>@photo)) ) order by hotel_id, idImageType, photo_id limit 10; ``` that become: ``` select * from images im where hotel_id>=@hot and (hotel_id>@hot OR (idImageType>=@type and (idImageType>@type or photo_id>@photo)) ) order by hotel_id, idImageType, photo_id limit 10; ``` Are they the same data we can get by the limit? 
For a quick, ***not exhaustive***, test run: ``` select im.* from images im inner join ( select photo_id from images order by hotel_id, idImageType, photo_id limit 20000000,10 ) k on im.photo_id=k.photo_id order by im.hotel_id, im.idImageType, im.photo_id; ``` This takes 6.56 sec and returns the same data as the query above, so the test is positive. With this solution you spend 6.73 sec only the first time, when you need to seek to the first page to read (and not at all if you read from the beginning, as I do). Reading every other page takes only 0.10 sec, a very good result. Thanks to Rick for his hint about a solution based on storing the last page read. # Conclusion With ***solution 1*** you have no extra field but spend 3.5 sec on every page. With ***solution 2*** you have an extra field and need a big-memory server (32 GB tested) for the 150 sec update, but then you read a page in 0.000 sec. With ***solution 3*** you have no extra field but have to store a pointer to the last page read; if you do not start reading from the first page, you spend 6.73 sec on the first page. Then you spend only 0.10 sec on all the other pages. Best regards ## Edit 3 ***Solution 3*** is exactly the one suggested by Rick. I'm sorry: in my previous ***solution 3*** I made a mistake, and when I coded the right solution and then applied some boolean rules such as the distributive property, I ended up with the same solution as Rick! regards
You can use some of these: 1. Switch the engine to InnoDB; it locks only the affected rows on update, not the whole table. 2. Create a temp table with `photo_id` and the correct `orderid`, and then update your table from this temp table: ``` update images im, temp tp set im.orderid = tp.orderid where im.photo_id = tp.photo_id ``` This will be the fastest way, and while you fill your temp table there are no locks on the primary table. 3. You can drop the indexes before the mass update. After each single update the indexes are rebuilt, and that takes a long time.
Update mysql big table hang too time
[ "", "mysql", "sql", "performance", "myisam", "" ]
I used the `round()` function to show a file size converted from bytes to MB. The data was like the following: ``` file_size ---------- 2463 ``` I used this to convert it: ``` round(document.file_size / (1024*1024), 3) file_size ``` Now the data shows like this: ``` file_size ---------- .002 ``` But what I want is this: ``` file_size ---------- 0.002 ``` Just the leading zero before the decimal point. How can I get this?
Try `TO_CHAR(round(document.file_size / (1024*1024), 3), '990.000') file_size` (note that the format mask must be a quoted string). You can find more information here: <https://asktom.oracle.com/pls/apex/f?p=100:11:0::NO::P11_QUESTION_ID:2739503983093>
You want a leading zero... try this: ``` to_char(round(document.file_size / (1024*1024), 3), '0999') file_size ``` <http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm>
how to show zero on the left before decimal point when using round()?
[ "", "sql", "oracle", "" ]
``` SELECT name , ( SUM(CASE WHEN TransTypeName LIKE 'credit%' THEN amount ELSE 0 END) - SUM(CASE WHEN TransTypeName LIKE 'Debit%' THEN amount ELSE 0 END) ) * 5 / 100 AS Interest FROM ..... ``` This query returns some negative values; I want to convert the negative values to zero. How do I write that query?
Something like this maybe? ``` SELECT t1.name AS Name, CASE WHEN t1.Interest > 0 THEN Interest ELSE 0 END AS Interest FROM (select name, (sum(case when TransTypeName like 'credit%' then amount else 0 end) - sum(case when TransTypeName like 'Debit%' then amount else 0 end)) *5/100 as Interest from ...........) as t1 ```
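The clamping `CASE` pattern is easy to sanity-check outside SQL Server; here is a small sketch against SQLite via Python's built-in `sqlite3` (the table and values are invented for illustration):

```python
import sqlite3

# In-memory database with a hypothetical interest table (illustration only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT, interest REAL)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("a", -5.0), ("b", 7.5), ("c", 0.0)])

# Clamp negative values to zero with the same CASE pattern as above.
rows = con.execute(
    "SELECT name, CASE WHEN interest > 0 THEN interest ELSE 0 END "
    "FROM t ORDER BY name"
).fetchall()
print(rows)  # [('a', 0), ('b', 7.5), ('c', 0)]
```

The same shape works in SQL Server; only the surrounding subquery from the answer changes.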
Just give this ``` select greatest(:yourval,0) ``` (Works in Azure and SQL Server 2022.)
How to convert negative value into zero
[ "", "sql", "sql-server", "sql-server-2008", "" ]
So I have a table `Movement` that has a column called `IDNumber`. What I want to do is count the number of times an ID appears in `IDNumber`, for each ID in that column. I could do this individually for each ID like this: ``` SELECT COUNT(*)FROM movement WHERE IDNumber = 379; SELECT COUNT(*)FROM movement WHERE IDNumber = 654; SELECT COUNT(*)FROM movement WHERE IDNumber = 789; SELECT COUNT(*)FROM movement WHERE IDNumber = 878; ``` But I want to do this for *all* the ID's in `IDNumber`, and in one query, because the actual dataset that I'm working on is much bigger. So how would I do this? When I tried to do this, I created another table `IDNumbers` with a column `ID` that just listed all the distinct ID's present in `Movement`, and I did this: ``` INSERT INTO IDCount(`ID`,`Count`) SELECT n.ID, m.COUNT(*) FROM movement m JOIN IDNumbers n ON n.ID = m.IDNumber ``` and I want the results to be like: ``` IDNumber COUNT 379 2 654 1 789 1 878 1 ``` This is the **[SQL Fiddle](http://sqlfiddle.com/#!9/0a886/2)** for it. But I know that's completely wrong... how would I do it? Any help would be greatly appreciated, thanks!!
How about: ``` SELECT IDNumber, Count(IDNumber) FROM movement GROUP BY IDNumber ``` You didn't say it was a requirement that it be inserted into a table, right?
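The grouped count can be reproduced end to end with SQLite through Python's `sqlite3` module; this self-contained sketch mirrors the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE movement (IDNumber INTEGER)")
con.executemany("INSERT INTO movement VALUES (?)",
                [(379,), (379,), (654,), (789,), (878,)])

# One GROUP BY pass instead of one COUNT query per ID.
counts = dict(con.execute(
    "SELECT IDNumber, COUNT(*) FROM movement GROUP BY IDNumber"
).fetchall())
print(counts)  # 379 appears twice, every other ID once
```

The `INSERT INTO ... SELECT` variant from the question then just wraps this `SELECT`.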
Try looking at the group by function with MySQL ``` SELECT COUNT(*), IdNumber FROM movement GROUP BY IdNumber; ``` You can even make it fancier by adding more conditions with a where clause ``` SELECT COUNT(*), IdNumber FROM movement WHERE IDNumber < 1000 GROUP BY IdNumber; ``` or ``` SELECT COUNT(*), IdNumber FROM movement WHERE IDNumber IN (1,2,3,4) GROUP BY IdNumber; ``` To save the results into a table - ``` INSERT INTO target_table SELECT COUNT(*), IdNumber FROM movement GROUP BY IdNumber; ``` the target table would have the columns `count`, and `IDNumber`
SQL - Confusion with INSERT INTO...SELECT and COUNT(*)?
[ "", "mysql", "sql", "select", "count", "sql-insert", "" ]
``` +------------+---------------+---------------+----------------+ | Product ID | Part Sequence | Part Material | Description | +------------+---------------+---------------+----------------+ | 1 | 1 | Steel | Part A | | 1 | 2 | CFK | Part B | | 1 | 3 | CFK | Part B Variant | | 1 | 4 | Steel | Part C | | 1 | 5 | GFK | Part D | | 1 | 6 | Plastic | Part E | | 2 | 1 | Steel | Part A | | 2 | 2 | CFK | Part B | | 2 | 3 | Steel | Part F | | 2 | 4 | CFK | Part B | | 2 | 5 | Steel | Part G | | 2 | 6 | Silicon | Part D+ | | 2 | 7 | Plastic | Part E | +------------+---------------+---------------+----------------+ ``` (the ordering by Product ID and Part Sequence is only done for readability, my db table is unorderd) I need to query all rows for each product id with a part sequence equal or higher to the last steel part. So for the table above the expected result would be: ``` +------------+---------------+---------------+----------------+ | Product ID | Part Sequence | Part Material | Description | +------------+---------------+---------------+----------------+ | 1 | 4 | Steel | Part C | | 1 | 5 | GFK | Part D | | 1 | 6 | Plastic | Part E | | 2 | 5 | Steel | Part G | | 2 | 6 | Silicon | Part D+ | | 2 | 7 | Plastic | Part E | +------------+---------------+---------------+----------------+ ``` I could use the solution from [SQL Select only rows with Max Value on a Column FILTERED by Column](https://stackoverflow.com/questions/25245008/sql-select-only-rows-with-max-value-on-a-column-filtered-by-column) to find the last steel part and then filter anything with a higher part sequence, but I'm hoping for a more efficient solution.
Using a windowed aggregate function (requiring only a single pass over the table) rather than using a join (which will require two passes over the table): [It will also return products where there are no steel parts - if this is an issue then a similar windowed query can be use to filter those rows out.] [SQL Fiddle](http://sqlfiddle.com/#!4/a3a24/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE TEST ( Product_ID, Part_Sequence, Part_Material, Description ) AS SELECT 1, 1, 'Steel', 'Part A' FROM DUAL UNION ALL SELECT 1, 2, 'CFK', 'Part B' FROM DUAL UNION ALL SELECT 1, 3, 'CFK', 'Part B Variant' FROM DUAL UNION ALL SELECT 1, 4, 'Steel', 'Part C' FROM DUAL UNION ALL SELECT 1, 5, 'GFK', 'Part D' FROM DUAL UNION ALL SELECT 1, 6, 'Plastic', 'Part E' FROM DUAL UNION ALL SELECT 2, 1, 'Steel', 'Part A' FROM DUAL UNION ALL SELECT 2, 2, 'CFK', 'Part B' FROM DUAL UNION ALL SELECT 2, 3, 'Steel', 'Part F' FROM DUAL UNION ALL SELECT 2, 4, 'CFK', 'Part B' FROM DUAL UNION ALL SELECT 2, 5, 'Steel', 'Part G' FROM DUAL UNION ALL SELECT 2, 6, 'Silicon', 'Part D+' FROM DUAL UNION ALL SELECT 2, 7, 'Plastic', 'Part E' FROM DUAL UNION ALL SELECT 3, 1, 'Silicon', 'Part A' FROM DUAL UNION ALL SELECT 3, 2, 'Plastic', 'Part B' FROM DUAL; ``` **Query 1**: ``` SELECT Product_ID, Part_Sequence, Part_Material, Description FROM ( SELECT t.*, COALESCE( SUM( CASE Part_Material WHEN 'Steel' THEN 1 ELSE 0 END ) OVER ( PARTITION BY Product_ID ORDER BY Part_Sequence ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING ), 0 ) AS isInvalid FROM TEST t ) WHERE isInvalid = 0 ``` **[Results](http://sqlfiddle.com/#!4/a3a24/1/0)**: ``` | PRODUCT_ID | PART_SEQUENCE | PART_MATERIAL | DESCRIPTION | |------------|---------------|---------------|-------------| | 1 | 4 | Steel | Part C | | 1 | 5 | GFK | Part D | | 1 | 6 | Plastic | Part E | | 2 | 5 | Steel | Part G | | 2 | 6 | Silicon | Part D+ | | 2 | 7 | Plastic | Part E | | 3 | 1 | Silicon | Part A | | 3 | 2 | Plastic | Part B | ```
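As a cross-check on the window query's intent, the "everything from the last steel part onward" rule can be written as a few lines of plain Python over the question's sample rows. This is only a reference sketch, not a replacement for the SQL:

```python
# (product_id, part_sequence, part_material) rows from the question.
rows = [
    (1, 1, "Steel"), (1, 2, "CFK"), (1, 3, "CFK"),
    (1, 4, "Steel"), (1, 5, "GFK"), (1, 6, "Plastic"),
    (2, 1, "Steel"), (2, 2, "CFK"), (2, 3, "Steel"),
    (2, 4, "CFK"), (2, 5, "Steel"), (2, 6, "Silicon"), (2, 7, "Plastic"),
]

# Highest steel sequence per product; keep rows at or after it.
# A product with no steel parts keeps all its rows (default 0),
# matching the windowed query's behaviour.
last_steel = {}
for pid, seq, mat in rows:
    if mat == "Steel":
        last_steel[pid] = max(seq, last_steel.get(pid, 0))

kept = [(pid, seq) for pid, seq, _ in rows if seq >= last_steel.get(pid, 0)]
print(kept)  # [(1, 4), (1, 5), (1, 6), (2, 5), (2, 6), (2, 7)]
```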
Try the below: ``` Select * from product join (Select Max(Part_Sequence) as sequence, productid from product group by productid) d on product.Part_Sequence = d.sequence and product.productid = d.productid ``` If you want the query for a specific part, just apply a where clause to the inner and outer query for that part.
SQL - Select rows with values greater than max value for id and category
[ "", "sql", "oracle", "" ]
This is a sample of my table; I have removed a lot of columns. ``` id orid to_id seq 1 1 5 A12 2 2 6 A12 3 3 7 A12 4 4 _ A12 <--- I want to find this row 5 5 _ A13 6 6 _ A13 7 7 _ A13 ``` I want to find the extra data, using these two subqueries: ``` (select * from ForgeRock where seq = 'A13') a (select * from ForgeRock where seq != 'A13') b ``` I tried the following, but it is not what I want. How can I do it? Thanks. ``` select b.* from (select * from ForgeRock where seq = 'A13') a right join (select * from ForgeRock where seq != 'A13') b on a.to_id = b.to_id; ``` This is my [sqlfiddle](http://sqlfiddle.com/#!9/3279d/5)
I found the answer. This is what I want, and this is my code. ``` select b.* from (select * from ForgeRock where seq = 'A13') a right join (select * from ForgeRock where seq != 'A13') b on a.id = b.to_id where b.to_id = '_'; ``` This is the [sqlfiddle](http://sqlfiddle.com/#!9/3279d/17)
I am guessing that you want to find rows that have no connection to another row. ``` select fr.* from ForgeRock fr where fr.to_id is NULL and not exists (select 1 from ForgeRock fr2 where fr.from_id = fr2.to_id ); ```
select the extra row data
[ "", "mysql", "sql", "" ]
I'm using a stored procedure with a CTE and doing some paging. I also want to return an output parameter with the total count of the returned query before my paging. My problem is that I get an error that "OrderedSet" is not a valid object name. ``` @ft INT, @page INT, @pagesize INT, @count INT OUTPUT AS BEGIN DECLARE @offset INT SET @offset = @page * @pagesize -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here WITH OrderedSet AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY Id DESC) AS 'Index' FROM tbl_BulkUploadFiles buf WHERE buf.FileType = @ft ) SELECT * FROM OrderedSet WHERE [Index] BETWEEN @offset AND (@offset + @pagesize) SET @count = (SELECT COUNT(*) FROM OrderedSet) END ``` So my issue is on the last line, error is that last OrderedSet is not a valid object name. Thanks in advance for any help!
Here are 2 approaches that avoid copying and pasting all the CTEs multiple times. **Return total rows as column of result set** Benefit here is that you can calculate total rows without multiple queries and temp tables, but you have to add logic to your front end to get the total row count from the first row of the result set before iterating over it to display the paged set. Another consideration is that you must account for no rows being returned, so set your total row count to 0 if no rows returned. ``` ;WITH OrderedSet AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY Id DESC) AS Seq, ROW_NUMBER() OVER (ORDER BY Id) AS SeqRev FROM tbl_BulkUploadFiles buf WHERE buf.FileType = @ft ) SELECT *, Seq + SeqRev - 1 AS [TotalCount] FROM OrderedSet WHERE Seq BETWEEN @offset AND (@offset + @pagesize) ``` **Utilize a temp table** While there is a cost of a temp table, if your database instance follows best practices for tempdb (multiple files for multi-cores, reasonable initial size, etc), 200k rows may not be a big deal since the context is lost after the stored proc completes, so the 200k rows don't exist for too long. However, it does present challenges if these stored procs are called quite often concurrently - doesn't scale too well. However, you are not keeping the entire table - just the paged rows, so hopefully your page sizes are much smaller than 200k rows. The approach below tries to minimize the tempdb cost being able to calculate the row count by getting only the first row due to the method of ASC and DESC ROW\_NUMBERs. 
``` ;WITH OrderedSet AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY Id DESC) AS Seq, ROW_NUMBER() OVER (ORDER BY Id) AS SeqRev FROM @buf buf --tbl_BulkUploadFiles buf WHERE buf.FileType = @ft ) SELECT * INTO #T FROM OrderedSet WHERE Seq BETWEEN @offset AND (@offset + @pagesize) SET @count = COALESCE((SELECT TOP 1 SeqRev + Seq - 1 FROM #T), 0) SELECT * FROM #T ``` Note: The method used above for calculating row counts was adapted from [How to reference one CTE twice?](https://stackoverflow.com/questions/2136659/how-to-reference-one-cte-twice) and <http://www.sqlservercentral.com/articles/T-SQL/66030/>.
Only alternative I see is repeating the query as inline view ``` select @count = numrows FROM ( SELECT count(*) as numrows, ROW_NUMBER() OVER (ORDER BY Id DESC) AS 'Index' FROM tbl_BulkUploadFiles buf WHERE buf.FileType = @ft ) XXX WHERE [Index] BETWEEN @offset AND (@offset + @pagesize) ```
SQL stored procedure SET output param using COUNT(*) ON a CTE
[ "", "sql", "sql-server", "stored-procedures", "pagination", "common-table-expression", "" ]
I am working with SQL. I want to select a `substring` from the string `Rayan`. When I execute `select substring(col1, 1, 5) from table1` it works fine. My question is: if I give a length greater than the string's actual length, will it show the same result? i.e., `select substring(col1, 1, 6) from table1` result: `Rayan` What I need: if the given length is greater than the string length, it should not return anything. Does anyone have any idea? Fiddle: <http://sqlfiddle.com/#!2/90438/2>
You can check the length of the column and conditionally show no output in the event that the length you are trying to select (in characters) be *greater* than this length: ``` SELECT id, CASE WHEN CHAR_LENGTH(col1) <= 10 THEN SUBSTRING(col1, 1, 10) ELSE '' END AS cond_col1 FROM table1 ``` You can replace the value `10` with whatever makes sense in your use case (you didn't give us this information either in the OP or the Fiddle), e.g. a column.
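The conditional can be tried out quickly with SQLite via Python's `sqlite3`. Note the small dialect swap: SQLite's `LENGTH`/`SUBSTR` stand in here for MySQL's `CHAR_LENGTH`/`SUBSTRING`; the logic mirrors the answer above with a cutoff of 10:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER, col1 TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, "Rayan"), (2, "Rayan Gosling")])

# Show the prefix only when the string is at most 10 characters long,
# otherwise show an empty string, as in the answer's CASE expression.
rows = con.execute("""
    SELECT id,
           CASE WHEN LENGTH(col1) <= 10 THEN SUBSTR(col1, 1, 10) ELSE '' END
    FROM table1 ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Rayan'), (2, '')]
```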
You are using the substring function and specifying a maximum length. SUBSTRING ( expression ,start , length ) **Arguments** **expression** Is a character, binary, text, ntext, or image expression. **start** Is an integer or bigint expression that specifies where the returned characters start. If start is less than 1, the returned expression will begin at the first character that is specified in expression. In this case, the number of characters that are returned is the largest value of either the sum of start + length- 1 or 0. If start is greater than the number of characters in the value expression, a zero-length expression is returned. **length** Is a positive integer or bigint expression that specifies how many characters of the expression will be returned. If length is negative, an error is generated and the statement is terminated. If the sum of start and length is greater than the number of characters in expression, the whole value expression beginning at start is returned.
Select substring in sql
[ "", "mysql", "sql", "postgresql", "" ]
There are many *slightly similar* questions, but none solve precisely this problem. "[Find All Rows With Null Value(s) in Any Column](https://stackoverflow.com/questions/14488859/find-all-rows-with-null-values-in-any-column)" is the closest one I could find and offers an answer for SQL Server, but I'm looking for a way to do this in PostgreSQL. How can I select only the rows that have NULL values in *any* column? I can get all the column names easily enough: ``` select column_name from information_schema.columns where table_name = 'A'; ``` but it's unclear how to check multiple column names for NULL values. Obviously this won't work: ``` select* from A where ( select column_name from information_schema.columns where table_name = 'A'; ) IS NULL; ``` And [searching](https://www.google.com/search?q=postgres+check+multiple+columns+for+null+values&oq=postgres+check+multiple+columns+for+null+values&aqs=chrome..69i57.22589j0j7&sourceid=chrome&es_sm=91&ie=UTF-8#newwindow=1&q=postgres+SQL+check+multiple+columns+for+null+values) has not turned up anything useful.
You can use `NOT(<table> IS NOT NULL)`. From [the documentation](http://www.postgresql.org/docs/9.4/static/functions-comparison.html) : > If the expression is row-valued, then IS NULL is true when the row > expression itself is null or when all the row's fields are null, while > IS NOT NULL is true when the row expression itself is non-null and all > the row's fields are non-null. So given table `t`, ``` SELECT * FROM t; β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ f1 β”‚ f2 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ (null) β”‚ 1 β”‚ β”‚ 2 β”‚ (null) β”‚ β”‚ (null) β”‚ (null) β”‚ β”‚ 3 β”‚ 4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ (4 rows) ``` here are all the possible variants: ``` SELECT * FROM t SELECT * FROM t WHERE (t IS NULL); WHERE (t IS NOT NULL); β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ f1 β”‚ f2 β”‚ β”‚ f1 β”‚ f2 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ (null) β”‚ (null) β”‚ β”‚ 3 β”‚ 4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ (1 row) (1 rows) SELECT * FROM t SELECT * FROM t WHERE NOT (t IS NULL); WHERE NOT (t IS NOT NULL); β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ f1 β”‚ f2 β”‚ β”‚ f1 β”‚ f2 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ (null) β”‚ 1 β”‚ β”‚ (null) β”‚ 1 β”‚ β”‚ 2 β”‚ (null) β”‚ β”‚ 2 β”‚ (null) β”‚ β”‚ 3 β”‚ 4 β”‚ β”‚ (null) β”‚ (null) β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ (3 rows) (3 rows) ```
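The row-valued `IS NULL` test is PostgreSQL-specific. For engines without it, or when only certain columns matter, the check can be spelled out per column; a small SQLite sketch of the explicit form, using the answer's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (f1 INTEGER, f2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(None, 1), (2, None), (None, None), (3, 4)])

# Explicit per-column equivalent of Postgres's NOT (t IS NOT NULL)
# for a known column list.
rows = con.execute(
    "SELECT f1, f2 FROM t WHERE f1 IS NULL OR f2 IS NULL ORDER BY rowid"
).fetchall()
print(rows)  # [(None, 1), (2, None), (None, None)]
```

For a wide table the `OR` list can be generated from `information_schema.columns`, but in Postgres itself the row-valued form above is shorter.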
An alternative solution which **might** be of use in some edge cases could be the following (all of the code below is available on the fiddle [here](https://dbfiddle.uk/3DWDB3QB)): ## Setup: ``` CREATE TABLE t ( f1 INT, f2 INT, f3 TEXT, f4 INT, f5 TEXT -- -- arbitrary number of fields... -- ); ``` Populate: ``` INSERT INTO t VALUES (null, 1, 'blah', 5, 'xxx'), (2, null, 'yyy', 6, 'wwww'), (null, null, 'ggg', 99, 'wwww'), (3, 4, 'yyy', 17, 'yuoyo'); -- no NULLs ``` ## Row SQL: First, we run: ``` SELECT ROW(t) FROM t; ``` Result: ``` row ("(,1,blah,5,xxx)") ("(2,,yyy,6,wwww)") ("(,,ggg,99,wwww)") ("(3,4,yyy,17,yuoyo)") ``` Hmmm... a bit messy, so now we run: ``` SELECT ROW(t.*) FROM t; ``` Result: ``` row (,1,blah,5,xxx) (2,,yyy,6,wwww) (,,ggg,99,wwww) (3,4,yyy,17,yuoyo) ``` OK, this is bit tidier - so, now , the solution that I'm proposing. Please bear in mind that I'm not suggesting that it is better than the accepted answer (see performance analysis below). What I **am** suggesting is that my solution **might** be of use in some tricky edge cases. So, we run: ``` WITH cte AS ( SELECT ROW(t.*)::TEXT FROM t ) SELECT * FROM cte WHERE row ~ '^\(,|,,|,\)'; ``` Explanation of the regex: * `^` - the caret character is an [anchor](https://www.regular-expressions.info/quickstart.html) - it means the start of the string. * `\` - is the escape character. * `(` - the character being escaped. It's a [metacharacter](https://www.regular-expressions.info/quickstart.html) (i.e. special meaning in regexes) - normally for [`capturing groups`](https://www.regular-expressions.info/brackets.html), but being escaped, it's now a literal bracket. * `,` - a literal comma. * `|` - the pipe character - [alternation](https://www.regular-expressions.info/quickstart.html) in regexes - i.e. `OR` - taken altogether, `^\(,|,,` means either `(,` OR `,,`. * `|,\)` - wrapping it up, the final OR and then matching `,)`. So, the regex matches **either** `(,` OR `,,` OR `,)`. 
Basically, this pulls out any row where there is a `NULL` **either** at the beginning of the row string, OR in the middle OR at the end of the string. * \*\* a word of warning, this regex could fail if you have "weird" text strings with random characters - it is conceivable that you may get one of the combinations above - *caveat emptor*! Result: ``` row (,1,blah,5,xxx) -- matches (, (2,,yyy,6,wwww) -- matches ,, (,,ggg,99,wwww) -- matches both (, and ,, -- -- Note that the row with no NULLs has been excluded, as desired -- ``` ## Performance analysis: Queries should always be checked for performance, so we run: ``` EXPLAIN (ANALYZE, VERBOSE, BUFFERS, SETTINGS) WITH cte AS ( SELECT ROW(t.*)::TEXT FROM t ) SELECT * FROM cte WHERE row ~ '^\(,|,,|,\)'; ``` Result (details snipped, see [fiddle](https://dbfiddle.uk/3DWDB3QB)): ``` Planning Time: 0.038 ms Execution Time: 0.062 ms ``` and the same for `SELECT * FROM t WHERE NOT (t IS NOT NULL);`: ``` Planning Time: 0.039 ms Execution Time: 0.018 ms ``` So, we can see that the accepted answer takes ~ 30% of the time that this regex solution does. Regexes are very powerful, but with greater power comes greater complexity - apply regexes with caution! A great site to learn about them is to be found [here](https://www.regular-expressions.info/).
How to find all rows with a NULL value in any column using PostgreSQL
[ "", "sql", "postgresql", "" ]
I want to change the format of dates to HH:MI:SS AM/PM in Oracle SQL. I want to use it in a CASE expression like this: ``` case when to_char(a.dtime,'HH:MI:SS') >= '12:00:00' then to_char(a.dtime,'HH:MI:SS PM') else null end as date ``` but SQL does not show me all times greater than 12:00; it shows only those matching 12:%%:%%. It does not work with to_date either. Any suggestions? Thanks a lot in advance.
You can use [`TO_CHAR( datetime, format )`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions180.htm) to format a `DATE` column. The format options are given [here](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm#i34924). This would give you the time part of a date value (12-hour clock plus AM/PM): ``` SELECT TO_CHAR( column_name,'HH:MI:SS AM') FROM your_table ``` **Edit - Addressing the update** You can do: ``` SELECT CASE WHEN TO_NUMBER( TO_CHAR( a.dtime, 'HH24' ) ) >= 12 THEN TO_CHAR( a.dtime, 'HH:MI:SS PM' ) END AS "date" FROM table_name ```
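`TO_CHAR` is Oracle-specific, but the same 12-hour formatting idea exists in other stacks under different format strings. As an analogy only, Python's `strftime` uses `%I`/`%p` where Oracle uses `HH`/`AM`, and the "afternoon only" filter reduces to a plain hour check:

```python
from datetime import datetime

dt = datetime(2015, 6, 25, 14, 30, 5)

# %I = 12-hour clock with leading zero, %p = AM/PM marker (C locale).
formatted = dt.strftime("%I:%M:%S %p")
print(formatted)  # 02:30:05 PM

# The CASE's >= 12 condition is just "is this an afternoon time?".
is_afternoon = dt.hour >= 12
```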
``` SELECT to_char(sysdate,'dd/MM/yyyy hh:mi:ss am') FROM dual ```
SQL How do I extract time from a date to HH:MI:SS PM/AM?
[ "", "sql", "oracle", "" ]
I have a list of components which periodically report status. I want to write a query to find a list of the most recent status grouped by component. Typically I would use the solution described by the accepted answer for this question: [MySQL order by before group by](https://stackoverflow.com/questions/14770671/mysql-order-by-before-group-by) However there can be multiple statuses reported each second, so there is no guarantee that I will retrieve the most recent. Therefore I would like to find the status with the most recent timestamp and in the case of duplicate timestamps, the one with the highest PK. Ideally I would like to have a query like the following: ``` SELECT * FROM component_status ORDER BY component_status.timestamp DESC, component_status.component_status_id DESC GROUP BY component_status.component_id; ``` However you cannot perform a GROUP BY after the ORDER BY. Has anybody had a similar problem and found a solution?
I ended up going with the following query to solve my problem: ``` SELECT component_status.* FROM component_status JOIN (SELECT MAX(component_status_id) AS component_status_id FROM component_status JOIN (SELECT MAX(timestamp) AS timestamp, component_id FROM component_status WHERE timestamp <= NOW() GROUP BY component_id) AS most_recent_status USING (component_id) WHERE component_status.timestamp = most_recent_status.timestamp GROUP BY component_id) AS most_recent_status USING (component_status_id) ``` With a composite index on component\_id and timestamp the query is instantaneous.
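The tie-breaking logic (latest timestamp first, then highest primary key) is easy to verify on a toy dataset. Here is a SQLite sketch via Python with a duplicate timestamp included on purpose; the column is named `ts` here simply to avoid quoting a reserved-looking name:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE component_status (
    component_status_id INTEGER PRIMARY KEY,
    component_id INTEGER,
    ts TEXT)""")
con.executemany("INSERT INTO component_status VALUES (?, ?, ?)", [
    (1, 10, "2015-01-01 12:00:00"),
    (2, 10, "2015-01-01 12:00:00"),  # same second: higher PK must win
    (3, 20, "2015-01-01 11:00:00"),
    (4, 20, "2015-01-01 10:00:00"),
])

# Max PK among the rows holding each component's max timestamp,
# mirroring the nested structure of the query above.
rows = con.execute("""
    SELECT cs.component_id, MAX(cs.component_status_id)
    FROM component_status cs
    JOIN (SELECT component_id, MAX(ts) AS max_ts
          FROM component_status GROUP BY component_id) latest
      ON cs.component_id = latest.component_id AND cs.ts = latest.max_ts
    GROUP BY cs.component_id
    ORDER BY cs.component_id
""").fetchall()
print(rows)  # [(10, 2), (20, 3)]
```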
You can use variables to simulate ``` ROW_NUMBER() OVER (PARTITION BY component_id ORDER BY `timestamp` DESC, component_status_id DESC) ``` window function: ``` SELECT component_id, component_status_id, `timestamp` FROM ( SELECT component_id, component_status_id, `timestamp`, @row_number:= IF (@cid <> component_id, IF (@cid := component_id, 1, 1), IF (@cid := component_id, @row_number + 1, @row_number + 1)) AS rn FROM component_status CROSS JOIN (SELECT @row_number:= 0, @cid := -1) vars ORDER BY `timestamp` DESC, component_status_id DESC ) t WHERE rn = 1 ``` `rn=1` in the outer query selects the most recent record per `component_id`. In case there are two or more records having the same `timestamp`, then the one having the greatest `component_status_id` will be selected. [**Demo here**](http://sqlfiddle.com/#!9/b0da1/8)
Find most recent record MySQL per component
[ "", "mysql", "sql", "group-by", "sql-order-by", "" ]
I have a table PostingPeriod that uses a company calendar to track all working days. Simplified, it looks like this: ``` Date Year Quarter Month Day IsWorkingDay 25.06.2015 2015 2 6 25 1 26.06.2015 2015 2 6 26 1 27.06.2015 2015 2 6 27 0 ``` I have another table that contains all purchase lines with the Orderdate, confirmed delivery date from the vendor and the maximum allowed timeframe in working days between orderdate and deliverydate: ``` PurchID OrderDate ConfDelivery DeliveryDays 1234 14.04.2015 20.05.2015 30 1235 14.04.2015 24.05.2015 20 ``` I want to create a new column that returns the maximum allowed Date (regardless of workday or not) for each order. The usual approach (Workingdays / 5 to get weeks, multiplied by 7 to get days) doesn't work, as all holidays etc need to be taken into consideration. As this is for a DWH that will feed an OLAP database, performance is not an issue.
You could do this by assigning each working day an arbitrary index using `ROW_NUMBER`, e.g. ``` SELECT Date, WorkingDayIndex = ROW_NUMBER() OVER(ORDER BY Date) FROM dbo.Calendar ``` Which will give you something like: ``` Date WorkingDayIndex ----------------------------- 2015-04-27 80 2015-04-28 81 2015-04-29 82 2015-04-30 83 2015-05-01 84 2015-05-05 85 2015-05-06 86 2015-05-07 87 ``` Then if you want to know the date that is *n* working days from a given date, find the date with an index *n* higher, i.e. 2015-04-27 has an index of 80, therefore 5 working days later would have an index of 85 which yields 2015-05-05. **FULL WORKING EXAMPLE** ``` /***************************************************************************************************************************/ -- CREATE TABLES AND POPULATE WITH TEST DATA SET DATEFIRST 1; DECLARE @Calendar TABLE (Date DATE, IsWorkingDay BIT); INSERT @Calendar SELECT TOP 365 DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY object_id), '20141231'), 1 FROM sys.all_objects; UPDATE @Calendar SET IsWorkingDay = 0 WHERE DATEPART(WEEKDAY, Date) IN (6, 7) OR Date IN ('2015-01-01', '2015-04-03', '2015-04-06', '2015-05-04', '2015-05-25', '2015-08-31', '2015-12-25', '2015-12-28'); DECLARE @T TABLE (PurchID INT, OrderDate DATE, ConfDeliveryDate DATE, DeliveryDays INT); INSERT @T VALUES (1234, '20150414', '20150520', 30), (1235, '20150414', '20150524', 20); /***************************************************************************************************************************/ -- ACTUAL QUERY WITH WorkingDayCalendar AS ( SELECT *, WorkingDayIndex = ROW_NUMBER() OVER(ORDER BY Date) FROM @Calendar WHERE IsWorkingDay = 1 ) SELECT * FROM @T AS t INNER JOIN WorkingDayCalendar AS c1 ON c1.Date = t.OrderDate INNER JOIN WorkingDayCalendar AS c2 ON c2.WorkingDayIndex = c1.WorkingDayIndex + t.DeliveryDays; ``` If this is a common requirement, then you could just make `WorkingDayIndex` a fixed field on your calendar table so you don't need to calculate 
it each time it is required.
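The working-day-index idea is engine-independent, so it can be sketched in a few lines of plain Python. The calendar below is invented for illustration (weekdays only, with the 2015-05-04 and 2015-05-25 holidays from the worked example skipped):

```python
from datetime import date, timedelta

# Build a toy calendar of working days.
holidays = {date(2015, 5, 4), date(2015, 5, 25)}
calendar = []
d = date(2015, 4, 27)
while d <= date(2015, 5, 31):
    if d.weekday() < 5 and d not in holidays:
        calendar.append(d)
    d += timedelta(days=1)

# Index each working day, mirroring ROW_NUMBER() OVER (ORDER BY Date).
index_of = {day: i for i, day in enumerate(calendar, start=1)}

def add_working_days(start, n):
    # The date whose working-day index is n higher than start's.
    return calendar[index_of[start] - 1 + n]

print(add_working_days(date(2015, 4, 27), 5))  # 2015-05-05
```

This matches the worked example above: 5 working days after 2015-04-27 lands on 2015-05-05.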
Starting from OrderDate, this finds the date you reach after advancing N (DeliveryDays) working days. If I understood correctly, you want something like this: ``` select PurchID, OrderDate, ConfDelivery, DeliveryDay, myDays.[Date] myWorkingDayDeliveryDate from Purchases p outer apply ( select [Date] from ( select ROW_NUMBER() OVER ( ORDER BY Date ) myDays, [Date] from PostingPeriod pp where IsWorkingDay = 1 and pp.date >= p.OrderDate ) myDays where myDays = p.DeliveryDay ) myDays ```
Add X number of Working days to a date
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
Given table: ``` Agreement: id: int type: string version: int ``` Given Records like: ``` 1 | Terms | 1 2 | Terms | 2 3 | Privacy | 1 ``` I'd like a query that outputs: ``` 2 | Terms | 2 3 | Privacy | 1 ``` I've tried all manner of distinct and self joins to max(version), I just can't seem to crack it.
The easiest thing would probably be to use the `rank` window function: ``` SELECT id, type, version FROM (SELECT id, type, version, RANK() OVER (PARTITION BY type ORDER BY version DESC) AS rk FROM agreement) t WHERE rk = 1 ```
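Where window functions aren't available, the classic alternative is a join back to the grouped maximum. A runnable SQLite check with the question's data, via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE agreement (id INTEGER, type TEXT, version INTEGER)")
con.executemany("INSERT INTO agreement VALUES (?, ?, ?)",
                [(1, "Terms", 1), (2, "Terms", 2), (3, "Privacy", 1)])

# Join each row to its type's max version; only the latest rows survive.
rows = con.execute("""
    SELECT a.id, a.type, a.version
    FROM agreement a
    JOIN (SELECT type, MAX(version) AS version
          FROM agreement GROUP BY type) m
      ON a.type = m.type AND a.version = m.version
    ORDER BY a.id
""").fetchall()
print(rows)  # [(2, 'Terms', 2), (3, 'Privacy', 1)]
```

Note that, unlike `RANK`, this returns multiple rows per type if the max version is duplicated, which may or may not be what you want.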
It should work: ``` select max(id) , type , max(version) from Agreement group by type ```
Select all records of type 'x' that are the highest 'version'
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
In postgresql, I have a `case` statement that I need to add a "not equals" clause. When `v1` equals v2, I want it to say 1, when `v1` **DOES NOT EQUAL** `v2`, I would like to say 2. ``` create table test ( v1 varchar(20), v2 varchar(20) ); insert into test values ('Albert','Al'),('Ben','Ben') select case v1 when v2 then 1 else 3 end from test ``` I tried using `!=` or `<>`, but that does not seem to work. Does anyone have any idea how to use not equals in a `case` statement in SQL?
You could always just be more explicit with your case statement. Here's an example... ``` select case when v1 = v2 then 1 when v1 <> v2 then 2 end from test ```
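A runnable check of the explicit form, here against SQLite via Python's `sqlite3`. Note the NULL behaviour: a NULL on either side satisfies neither `=` nor `<>`, so such rows fall through to NULL unless you add a branch for them:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (v1 TEXT, v2 TEXT)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [("Albert", "Al"), ("Ben", "Ben"), (None, "x")])

rows = con.execute("""
    SELECT CASE WHEN v1 = v2 THEN 1
                WHEN v1 <> v2 THEN 2
           END
    FROM test ORDER BY rowid
""").fetchall()
print(rows)  # [(2,), (1,), (None,)] -- the NULL row matches neither branch
```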
First, you start by [reading the documentation](http://www.postgresql.org/docs/8.1/static/functions-conditional.html). You'll note that the SQL `case` function takes one of two forms: ``` case {expression} when {value-1} then {result-1} ... when {value-N} then {result-N} [ else {default-result} ] end ``` or ``` case when {boolean-condition-1} then {result-1} ... when {boolean-condition-N} then {result-N} [ else {default-result] end ``` So, you can say something like ``` select * , case when v1 = v2 then 1 when v1 != v2 then 2 when v1 is null and v2 is not null then 2 when v1 is not null and v2 is null then 2 else 1 -- both v1 and v2 are null end as are_equal from test ``` Note that * You cannot mix the two forms, and * The `else` clause is optional: if not specified, the return value from the function for any values not matching a `when` clause is `null`, and * Because `null` fails all tests save for explicit tests for nullity via `is [not] null`, if you need to check for `null`, you either have to use the 2nd form (`... case when x is null then y else z end`) or have the null fall through and be handled by an `else` clause.
Not equals when using CASE statement in SQL
[ "", "sql", "postgresql", "case", "" ]