I was debugging someone else's query and came across a very weird statement that looked like it shouldn't work at all. I distilled this down from the original query to this: ``` DECLARE @c TABLE (id INT); DECLARE @y TABLE (name VARCHAR(50) PRIMARY KEY); INSERT INTO @c VALUES (1); SELECT c.* FROM @c c WHERE id NOT IN ( SELECT id FROM @y WHERE id IS NOT NULL); ``` But how can this possibly work? I added the constraint that id IS NOT NULL, but removing it doesn't appear to change the behaviour. You can also remove the PRIMARY KEY on the table variable; that was just a ploy to show that the execution plan somehow uses the index!? This is the "lite" version: ``` DECLARE @c TABLE (id INT); DECLARE @y TABLE (name VARCHAR(50)); INSERT INTO @c VALUES (1); SELECT * FROM @c WHERE id NOT IN (SELECT id FROM @y); ``` When executed in SQL Server 2008 R2 this returns an answer of 1.
I'm not sure why, but when the sub-query references nonexistent columns, and is essentially invalid on its own, the outer select statement still runs. The in / not in predicate with the sub-query is essentially disregarded. I've seen this before but never found out why. However, I just had a look around and found this link: <https://connect.microsoft.com/SQLServer/feedback/details/542289/subquery-with-error-does-not-cause-outer-select-to-fail> Where someone mentions the following: > I agree that the behavior is confusing but it is ANSI standard > behavior for column name resolution at different scopes. > > See this KB article for more info: > <http://support.microsoft.com/kb/298674> > > The reason this confused me even more is that I used a column name > that does not exist in *either* table and hence got the “expected” > (invalid column name) error. So, if you are using a column name that > cannot be resolved in the inner scope (SELECT Table1Id FROM Table2) > but can be resolved in the outer scope (SELECT \* FROM Table1 WHERE ...), > it will be resolved and bound at that scope. > > This appears confusing in the example you gave but the same logic > applies if you use such a construct in the WHERE clause of the inner > query. E.g. consider an inner query looking like this: > > ... (SELECT Table2Id FROM Table2 WHERE Table2Id = Table1Id) > > The query by itself would fail but it works since Table1Id will be > bound to Table1 in the outer query.
In a subquery, if the column name you mention doesn't exist in the tables that form the subquery, SQL Server will next look for the column in the enclosing query (unless you've qualified it with an alias). So, the `id` mentioned in your subquery is actually the `id` column from the `@c` table. That `id` value would be returned once for every row in `@y` - but given that `@y` is empty, the subquery returns an empty set, so the `NOT IN` considers no values and is trivially satisfied. --- This is why it's a really good habit to get into to use table aliases in subqueries - that way, if you accidentally name a column that only exists in an outer table, you get an error rather than an unexpected result: ``` SELECT * FROM @c WHERE id NOT IN (SELECT y.id FROM @y y); ``` produces an error.
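The resolution rule is ANSI behaviour rather than a SQL Server quirk, so it can be reproduced with any engine that follows it. A minimal sketch using Python's built-in `sqlite3` (SQLite applies the same outer-scope binding; the table names mirror the question without the `@` prefix):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE c (id INTEGER);
    CREATE TABLE y (name TEXT);
    INSERT INTO c VALUES (1);
""")

# "id" does not exist in y, so it binds to c.id from the outer query;
# the subquery yields an empty set and NOT IN is trivially true.
rows = conn.execute(
    "SELECT * FROM c WHERE id NOT IN (SELECT id FROM y)").fetchall()
print(rows)  # [(1,)]

# With an alias, the bad reference fails instead of silently resolving:
try:
    conn.execute("SELECT * FROM c WHERE id NOT IN (SELECT y.id FROM y y)")
except sqlite3.OperationalError as e:
    print(e)  # no such column: y.id
```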
Using an IN statement that looks like it shouldn't work, but it somehow executes
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have a rather simple SSIS package that I've used many times to import a tab delimited file into a single table in a database. I attached a new source file to the package and attempted to run the package. * The package starts * A cmd prompt appears briefly, then disappears [?!] * The process then exits, on the Flat File Source component. [??!] * Output displays as follows: > SSIS package "C:\Users...\Conversion\LoadHistory.dtsx" > starting. > > Information: 0x4004300A at Load Data to Legacy > Database - Test, SSIS.Pipeline: Validation phase is beginning. > > Information: 0x4004300A at Load Data to Legacy Database - > Test, SSIS.Pipeline: Validation phase is beginning. > > Information: > 0x40043006 at Load Data to Legacy Database - Test, > SSIS.Pipeline: Prepare for Execute phase is beginning. Information: > 0x40043007 at Load Data to Legacy Database - Test, > SSIS.Pipeline: Pre-Execute phase is beginning. > > Information: 0x402090DC > at Load Data to Legacy Database - Test, Flat File Source > [14]: The processing of file > "C:\Users...\Conversion\Production\Historicals\Source\_2341.txt" has started. > > Information: 0x4004300C at Load > Data to Legacy Database - Test, SSIS.Pipeline: Execute > phase is beginning. > > **SSIS package "C:\Users...\Conversion\LoadHistory.dtsx" > finished: Canceled.** > > **The program '[4380] DtsDebugHost.exe: DTS' has > exited with code 0 (0x0).** The file appears to adhere to the format specs I am expecting. The only concern I can think of is that the file originally was encoded as UCS-2 Little Endian and we are expecting a UTF-8 or ANSI format. I used Notepad++ to re-encode the file as UTF-8 and the file passed the initial meta-data checks as a result, so I have to assume that is resolved. I am not sure what could be causing the package to automatically cancel. Has anyone experienced this before?
I found the issue. It appears the file being used as a source was to blame after all. The UTF-8 format, while passing the meta-data check, appears to be at fault. I converted the file to ANSI format, as a shot in the dark, and was able to import the file normally without the above anomalies. I am still not sure why the command prompt was opening, however. Thank you for the responses.
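For anyone hitting the same wall: the re-encoding step can be scripted rather than done by hand in Notepad++. A hedged sketch in Python (the file names are hypothetical, and `cp1252` is assumed to be what "ANSI" means on the target system):

```
import os
import tempfile

# Simulate the vendor's delivery format: UCS-2 / UTF-16 LE.
src = os.path.join(tempfile.mkdtemp(), "history_utf16.txt")  # hypothetical name
dst = os.path.join(os.path.dirname(src), "history_ansi.txt")
with open(src, "w", encoding="utf-16-le", newline="") as f:
    f.write("col1\tcol2\nabc\t123\n")

# Re-encode to Windows-1252 ("ANSI" on a typical Western system);
# errors="replace" makes any character with no ANSI equivalent visible.
with open(src, encoding="utf-16-le") as fin, \
     open(dst, "w", encoding="cp1252", errors="replace", newline="") as fout:
    fout.write(fin.read())

print(open(dst, "rb").read())  # b'col1\tcol2\nabc\t123\n' -- plain single-byte text
```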
The problem started after an OS upgrade; changing Visual Studio to run as Administrator fixed it for me.
SSIS Package Cancels instantly on Debug
[ "", "sql", "ssis", "etl", "flat-file", "" ]
I am using the UNION syntax to retrieve a product code and description from several databases. I want to retrieve only a unique product code, even if this product code has several descriptions. I want to retrieve only the first result. To do that, I am using this script: ``` SELECT * FROM (SELECT tab1.code, tab1.description FROM tab1 UNION SELECT tab2.code, tab2.description FROM tab2 UNION SELECT tab3.code, tab3.description FROM tab3) ``` Unfortunately, this script will retrieve several rows per product code if the specific product has more than one description. How can this be modified to retrieve only the first occurrence with a description?
Your query might be more efficient with a `full outer join`, if there are *no* duplicates within a table and `description` does not take on `NULL` values: ``` SELECT coalesce(tab1.code, tab2.code, tab3.code) as code, coalesce(tab1.description, tab2.description, tab3.description) as description FROM tab1 full outer join tab2 on tab2.code = tab1.code full outer join tab3 on tab3.code = coalesce(tab1.code, tab2.code); ``` This saves the duplicate elimination step (or aggregation) and allows better use of indexes.
If you want ANY one description, you can go with `max` or `min`, like this: ``` select code, max(description) from (your set of unions) group by code ``` In this case, you can change UNION to UNION ALL to skip the duplicate-elimination sort. If you really want the first one, you need to indicate the ordering explicitly: ``` select code, description from ( select code, description, ord, min(ord) over (partition by code) min_ord from ( select code, description, 1 as ord from table1 union all select code, description, 2 as ord from table2 union all select code, description, 3 as ord from table3 ) ) where ord = min_ord ```
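The "first occurrence wins" logic can be sanity-checked outside Oracle. A sketch using Python's `sqlite3` (requires SQLite 3.25+ for window functions; table layout and sample rows are made up), here with `ROW_NUMBER` ordering by source table, which picks one row per code the same way:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab1 (code TEXT, description TEXT);
    CREATE TABLE tab2 (code TEXT, description TEXT);
    CREATE TABLE tab3 (code TEXT, description TEXT);
    INSERT INTO tab1 VALUES ('P1', 'first desc');
    INSERT INTO tab2 VALUES ('P1', 'second desc'), ('P2', 'only desc');
""")

# Tag each source with an ord, then keep the lowest-ord row per code.
rows = conn.execute("""
    SELECT code, description FROM (
        SELECT code, description,
               ROW_NUMBER() OVER (PARTITION BY code ORDER BY ord) AS rn
        FROM (
            SELECT code, description, 1 AS ord FROM tab1
            UNION ALL SELECT code, description, 2 FROM tab2
            UNION ALL SELECT code, description, 3 FROM tab3
        )
    ) WHERE rn = 1 ORDER BY code
""").fetchall()
print(rows)  # [('P1', 'first desc'), ('P2', 'only desc')]
```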
Union with no duplicate only for first column
[ "", "sql", "oracle", "union", "" ]
For each value in a column, how can I divide the value by the average of that column and return the result in a new column? Here is my table: ``` A B C _ _ _ 1 4 #((1/2)+(1/8))/2 # this is the data I hope to get in col C 2 9 #((2/2)+(9/8))/2 3 11 #((3/2)+(11/8))/2 ``` The formula I want in column C is: ( A/AVG(A) + B/AVG(B) )/2 Here is my MySQL query: ``` update table f set f.C=(((f.A)/(SELECT AVG(f.A)))+((f.B)/(SELECT AVG(f.B))))/2; ``` I ended up getting only 1 in all the rows for col C. Thanks
You can do this in one statement by using an `update` with a `join`. Computing the averages once in a joined derived table also sidesteps MySQL's restriction against selecting from the table being updated: ``` update table f cross join (select avg(a) as avga, avg(b) as avgb from table ) var set f.C = (f.A/var.avga + f.B/var.avgb) / 2; ```
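The arithmetic itself can be sanity-checked with Python's `sqlite3`. SQLite, unlike MySQL, allows the scalar subquery to read the table being updated, so the question's formula can be written directly (sample values are the ones from the question; AVG(A) = 2, AVG(B) = 8):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE f (A REAL, B REAL, C REAL);
    INSERT INTO f VALUES (1, 4, NULL), (2, 9, NULL), (3, 11, NULL);
""")

# The formula from the question: ( A/AVG(A) + B/AVG(B) ) / 2
conn.execute("""
    UPDATE f SET C = (A / (SELECT AVG(A) FROM f)
                    + B / (SELECT AVG(B) FROM f)) / 2
""")
vals = conn.execute("SELECT C FROM f").fetchall()
print(vals)  # [(0.5,), (1.0625,), (1.4375,)]
```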
what you expect is this: ``` ( A/SUM(A)+B/SUM(B) )/2 ``` And not ``` ( A/AVG(A)+B/AVG(B) )/2 ``` Therefore, do: ``` update table f set f.C=(((f.A)/(SELECT SUM(f.A)))+((f.B)/(SELECT SUM(f.B))))/2; ```
MySQL: take each value in a column divided by the average of the entire column
[ "", "mysql", "sql", "sql-update", "subquery", "" ]
I have some problems with creating a database view. I would like to join several tables into a single view. I would like to get each column as a row, instead of a column. Here is a small example of two tables: > **Tables** ``` | Table1: Id | Table1: Type | | Table2: Id | Table2:Type | |:----------:|:------------:| |:----------:|:-----------:| | 1 | A | | 1 | C | | 1 | B | | 2 | D | | 2 | B | | 2 | E | ``` If I use the following SQL statement, I get, as expected, the following result: > **SQL Statement** ``` SELECT table1.id, table1.Type AS Type1, table2.Type AS Type2 FROM table1 INNER JOIN table2 ON table2.id = table1.id ``` > **Result** ``` | Id | Type1 | Type2 | |:--: |:-----: |:-----: | | 1 | A | C | | 1 | B | C | | 2 | B | D | | 2 | B | E | ``` I know that this is correct. But I would like to get something like: ``` | Id | Type | |:--: |:-----: | | 1 | A | | 1 | B | | 1 | C | | 2 | B | | 2 | D | | 2 | E | ``` This view displays a single row for each `Id`, `Type` combination. How can I join the tables in that way?
Try this: ``` SELECT id, Type FROM table1 UNION ALL SELECT id, Type FROM table2 ORDER BY ID,TYPE ``` Result: ``` ID TYPE 1 A 1 B 1 C 2 B 2 D 2 E ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!2/59d6b6/2).
``` SELECT id, Type from table1 union all SELECT id, Type from table2 order by id, type ```
How to join tables in rows instead of columns?
[ "", "sql", "join", "view", "" ]
I'm not really good at SQL. I want to insert two values retrieved from a PHP form, and a third value from another table: ``` insert into tab1(A,B,C) values('foo','bar',select id from tab2 where name = "Doe") ``` I've been through the MySQL docs; they say it's possible to do this, but there is no example... Can you help me? Thanks
You can use [INSERT INTO ... SELECT](http://dev.mysql.com/doc/refman/5.0/en/insert-select.html) syntax here. It could look like: ``` INSERT INTO tab1(A,B,C) SELECT 'foo','bar', id from tab2 where name = "Doe" ```
You should use `INSERT INTO SELECT`, so query will be like this: ``` INSERT INTO tab1(A,B,C) SELECT 'foo', 'bar', `id` FROM tab2 where name = 'Doe' ``` More information [here](http://dev.mysql.com/doc/refman/5.0/en/insert-select.html)
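A quick sketch of the same `INSERT ... SELECT` shape, runnable with Python's built-in `sqlite3` (the table layout and the id value 42 are made up for the demo):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab1 (A TEXT, B TEXT, C INTEGER);
    CREATE TABLE tab2 (id INTEGER, name TEXT);
    INSERT INTO tab2 VALUES (42, 'Doe');
""")

# No VALUES clause at all: the SELECT supplies every column,
# mixing string literals with the looked-up id.
conn.execute("""
    INSERT INTO tab1 (A, B, C)
    SELECT 'foo', 'bar', id FROM tab2 WHERE name = 'Doe'
""")
rows = conn.execute("SELECT * FROM tab1").fetchall()
print(rows)  # [('foo', 'bar', 42)]
```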
How to do this SQL request
[ "", "mysql", "sql", "select", "insert", "" ]
So I'm trying to convert a timestamp to seconds. I read that you could do it this way: ``` to_char(to_date(10000,'sssss'),'hh24:mi:ss') ``` But it turns out that this way you can't go over 86399 seconds. This is my date format: +000000000 00:00:00.000000 What's the best approach to converting this to seconds? (This is the result of subtracting two dates to find the difference.)
It looks like you're trying to find the total number of seconds in an `interval` (which is the datatype returned when you subtract two `timestamps`). In order to convert the `interval` to seconds, you need to `extract` each component and convert them to seconds. Here's an example: ``` SELECT interval_value, (EXTRACT (DAY FROM interval_value) * 24 * 60 * 60) + (EXTRACT (HOUR FROM interval_value) * 60 * 60) + (EXTRACT (MINUTE FROM interval_value) * 60) + EXTRACT (SECOND FROM interval_value) AS interval_in_sec FROM (SELECT SYSTIMESTAMP - TRUNC (SYSTIMESTAMP - 1) AS interval_value FROM DUAL) ```
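The same day/hour/minute/second decomposition can be cross-checked in Python, where `timedelta.total_seconds()` plays the role the hand-rolled `EXTRACT` arithmetic plays in Oracle (the sample interval is arbitrary):

```
from datetime import timedelta

# An interval of 1 day, 2 hours, 3 minutes, 4.5 seconds.
iv = timedelta(days=1, hours=2, minutes=3, seconds=4, microseconds=500000)

# Mirror the EXTRACT-per-component arithmetic from the answer:
manual = (iv.days * 24 * 60 * 60                      # DAY component
          + (iv.seconds // 3600) * 60 * 60            # HOUR component
          + ((iv.seconds % 3600) // 60) * 60          # MINUTE component
          + iv.seconds % 60 + iv.microseconds / 1e6)  # SECOND component

print(manual, iv.total_seconds())  # 93784.5 93784.5
```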
You can convert a timestamp to a date by adding a number (zero in our case); Oracle then downgrades the type from timestamp to date, e.g.: ``` select systimestamp+0 as sysdate_ from dual ``` And the difference in seconds between two timestamps: ``` SQL> select 24*60*60* ((SYSTIMESTAMP+0) -(TO_TIMESTAMP('16-MAY-14 14:10:10.123000','DD-MON-RR HH24:MI:SS.FF')+0) ) diff_ss from dual; DIFF_SS ---------- 15140 ```
oracle sql date format to only seconds
[ "", "sql", "oracle", "" ]
I have a table similar to the following: ``` CREATE TABLE stats ( name character varying(15), q001001 numeric(9,0), q001002 numeric(9,0), q001003 numeric(9,0), q001004 numeric(9,0), q001005 numeric(9,0) ) ``` I need to query this table for the sums of various fields within, like this: ``` SELECT sum(q001001) as total001, sum(q001002) as total002, sum(q001005) as total005 FROM stats; ``` This produces a result with ONE row of data, and THREE columns. However, I need the results to be listed the other way around, for reporting purposes. I need THREE rows and ONE column (well, two actually, the first being the field that was sum'd) like this: ``` FieldName | SUM ----------+------- q001001 | 12345 q001002 | 5432 q001005 | 986 ``` I'd like to use some SQL like this, where the `field_name` (from a lookup table of the field names in the stats table) is used in a sub-query: ``` select l.field_name, (select sum(l.field_name) from stats) from stats_field_names_lookup as l where l.field_name in ('Q001001', 'Q001002', 'Q001005'); ``` The thinking here is that `sum(l.field_name)` would be replaced by the actual field name in question, for each of those in the `WHERE` clause, and then evaluated to provide the correct sum'd result value. This, however, fails with the following error: > function sum(character varying) does not exist because the value there is a text/character. How can I cast that character value to an unquoted string to be evaluated properly? This SQL works. But, of course, gives the same sum'd values for each `field_name`, since it is hard coded as `q001001` here. ``` select l.field_name, (select sum(q001001) from stats) from stats_field_names_lookup as l where l.field_name in ('Q001001', 'Q001002', 'Q001005'); ``` So, I think the idea is sound in theory. I just need help figuring out how to get that character/string to be understood as a `field_name`. Anyone have any ideas?
I don't know how to specify column names dynamically, but I suggest this approach: ``` SELECT 'q001001' as FieldName, sum(q001001) as SUM FROM stats UNION SELECT 'q001002' as FieldName, sum(q001002) as SUM FROM stats UNION SELECT 'q001003' as FieldName, sum(q001003) as SUM FROM stats; ``` It's easy and solves your original problem.
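The shape of this answer is easy to verify with Python's `sqlite3` (sample rows chosen to produce the sums from the question). `UNION ALL` is used instead of `UNION`, since the labelled rows cannot collide and this skips the duplicate-elimination sort:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stats (name TEXT, q001001 INTEGER, q001002 INTEGER);
    INSERT INTO stats VALUES ('a', 10000, 5000), ('b', 2345, 432);
""")

# One labelled row per summed column.
rows = conn.execute("""
    SELECT 'q001001' AS fieldname, SUM(q001001) AS total FROM stats
    UNION ALL
    SELECT 'q001002', SUM(q001002) FROM stats
""").fetchall()
print(rows)  # [('q001001', 12345), ('q001002', 5432)]
```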
## Basic query It's inefficient to calculate each sum separately. Do it in a single `SELECT` and "cross-tabulate" results. To keep the answer "short" I reduced to two columns in the result. Expand as needed. ### Quick & dirty Unnest two arrays with equal number of elements in parallel. Details about this technique [here](https://stackoverflow.com/questions/12414750/is-there-something-like-a-zip-function-in-postgresql-that-combines-two-arrays/12414884#12414884) and [here](https://stackoverflow.com/questions/16992339/why-is-postgresql-array-access-so-much-faster-in-c-than-in-pl-pgsql/16994266#16994266). ``` SELECT unnest('{q001001,q001002}'::text[]) AS fieldname ,unnest(ARRAY[sum(q001001), sum(q001002)]) AS result FROM stats; ``` "Dirty", because unnesting in parallel is a non-standard Postgres behavior that is frowned upon by some. Works like a charm, though. Follow the links for more. ### Verbose & clean Use a [CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html) and `UNION ALL` individual rows: ``` WITH cte AS ( SELECT sum(q001001) AS s1 ,sum(q001002) AS s2 FROM stats ) SELECT 'q001001'::text AS fieldname, s1 AS result FROM cte UNION ALL SELECT 'q001002'::text, s2 FROM cte; ``` "Clean" because it's purely standard SQL. ### Minimalistic Shortest form, but it's also harder to understand: ``` SELECT unnest(ARRAY[ ('q001001', sum(q001001)) ,('q001002', sum(q001002))]) FROM stats; ``` This operates with an array of **anonymous records**, which are hard to unnest (but possible). 
### Short To get individual columns with original types, declare a type in your system: ``` CREATE TYPE fld_sum AS (fld text, fldsum numeric) ``` You can do the same for the session temporarily by creating a temp table: ``` CREATE TEMP TABLE fld_sum (fld text, fldsum numeric); ``` Then: ``` SELECT (unnest(ARRAY[ ('q001001'::text, sum(q001001)::numeric) ,('q001002'::text, sum(q001002)::numeric)]::fld_sum[])).* FROM stats; ``` Performance for all four variants is basically the same because the expensive part is the aggregation. [**SQL Fiddle**](http://sqlfiddle.com/#!15/c395e/1) demonstrating all variants (based on [fiddle provided by @klin](https://stackoverflow.com/a/23717374/939860)). ## Automate with PL/pgSQL function ### Quick & Dirty Build and execute code like outlined in the corresponding chapter above. ``` CREATE OR REPLACE FUNCTION f_list_of_sums1(_tbl regclass, _flds text[]) RETURNS TABLE (fieldname text, result numeric) AS $func$ BEGIN RETURN QUERY EXECUTE ( SELECT ' SELECT unnest ($1) ,unnest (ARRAY[sum(' || array_to_string(_flds, '), sum(') || ')])::numeric FROM ' || _tbl) USING _flds; END $func$ LANGUAGE plpgsql; ``` * Being "dirty", this is also ***not*** safe against SQL injection. Only use it with verified input. Below version is safe. Call: ``` SELECT * FROM f_list_of_sums1('stats', '{q001001, q001002}'); ``` ### Verbose & clean Build and execute code like outlined in the corresponding chapter above. ``` CREATE OR REPLACE FUNCTION f_list_of_sums2(_tbl regclass, _flds text[]) RETURNS TABLE (fieldname text, result numeric) AS $func$ BEGIN -- RAISE NOTICE '%', ( -- to get debug output uncomment this line .. RETURN QUERY EXECUTE ( -- .. 
and comment this one SELECT 'WITH cte AS ( SELECT ' || string_agg( format('sum(%I)::numeric AS s%s', _flds[i], i) ,E'\n ,') || ' FROM ' || _tbl || ' ) ' || string_agg( format('SELECT %L, s%s FROM cte', _flds[i], i) , E'\nUNION ALL\n') FROM generate_subscripts(_flds, 1) i ); END $func$ LANGUAGE plpgsql; ``` Call like above. ### Major points * Implements the efficient code path to aggregate all sums in a *single* scan. * Works for *any* table, not just `stats`. * Works for *any* numeric columns (not just `numeric`). * Safe against SQL injection which is a ***must*** for dynamic SQL. `format()` and `regclass` explained: [Table name as a PostgreSQL function parameter](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349) * About unnesting an array with row numbers: [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450) * [Related answers demonstrating dynamic SQL in plpgsql built with `string_agg()`.](https://stackoverflow.com/search?q=%5Bplpgsql%5D%20string_agg%20format%20execute) [**SQL Fiddle**](http://sqlfiddle.com/#!15/c395e/1) demonstrating all variants. ## Aside: table definition The data type `numeric(9,0)` is a rather inefficient choice for a table definition. Since you are not storing fractional digits and no more than 9 decimal digits, use a plain **`integer`** instead. It does the same with only **4 bytes** of storage (instead of 8-12 bytes for `numeric(9,0)`). If you need numeric precision in calculations you can always cast the column at negligible cost. Also, [I don't use `varchar(n)` unless I have to. Just use `text`.](https://stackoverflow.com/questions/8524873/change-postgresql-columns-used-in-views/8527792#8527792) So I'd suggest: ``` CREATE TABLE stats ( name text ,q001001 int ,q001002 int , ... ); ```
Dynamic fieldnames in subquery?
[ "", "sql", "postgresql", "aggregate-functions", "dynamic-sql", "unpivot", "" ]
I have two query outputs as follows. **Query 1 output:** A, B, C. **Query 2 output:** 1, 2, 3, 4, 5. Now I am looking to join these two outputs to produce the following **combined output:** A | 1, B | 2, C | 3, NULL | 4, NULL | 5. Note: there is no relation between the outputs of queries 1 and 2. Thanks in advance, mkRabbani
The relation is based on the order of the values from table A and B, so we `LEFT JOIN` the results from A (containing the numbers) to the results from B (containing the characters) on the ordered index. ``` DECLARE @a TABLE (col int); DECLARE @b TABLE (col char(1)); INSERT INTO @a VALUES (1); INSERT INTO @a VALUES (2); INSERT INTO @a VALUES (3); INSERT INTO @a VALUES (4); INSERT INTO @a VALUES (5); INSERT INTO @b VALUES ('A'); INSERT INTO @b VALUES ('B'); INSERT INTO @b VALUES ('C'); SELECT B.col, A.col FROM ( SELECT col, ROW_NUMBER() OVER(ORDER BY col) AS RowNum FROM @a ) AS A LEFT JOIN ( SELECT col, ROW_NUMBER() OVER(ORDER BY col) AS RowNum FROM @b ) AS B ON A.RowNum = B.RowNum ```
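The pairing-by-row-order trick above can be reproduced with Python's `sqlite3` (requires SQLite 3.25+ for `ROW_NUMBER`; the tables mirror the answer's setup without the `@` prefix):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (col INTEGER);
    CREATE TABLE b (col TEXT);
    INSERT INTO a VALUES (1), (2), (3), (4), (5);
    INSERT INTO b VALUES ('A'), ('B'), ('C');
""")

# Number each result set by its own order, then LEFT JOIN on that index;
# the longer side (a) keeps all rows, so b's side becomes NULL past row 3.
rows = conn.execute("""
    SELECT b.col, a.col
    FROM (SELECT col, ROW_NUMBER() OVER (ORDER BY col) AS rn FROM a) AS a
    LEFT JOIN (SELECT col, ROW_NUMBER() OVER (ORDER BY col) AS rn FROM b) AS b
      ON a.rn = b.rn
    ORDER BY a.rn
""").fetchall()
print(rows)  # [('A', 1), ('B', 2), ('C', 3), (None, 4), (None, 5)]
```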
You can get the desired result by using `ROW_NUMBER()` and a full outer join. Please check the SQL Fiddle, in which I have reproduced the desired result: <http://sqlfiddle.com/#!3/21009/6/0>
Combine 2 query output in one result set
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "join", "" ]
I'm very new to Oracle. I'd like to generate another table from the original one. Here is my original table: ``` Function | Machine | Value ============================ A | M1 | VALID A | M2 | INVALID B | M1 | VALID B | M2 | INVALID C | M1 | INVALID C | M2 | VALID ``` Here is the result table I want to generate: ``` Function | M1 | M2 ============================ A | VALID | INVALID B | VALID | INVALID C | INVALID | VALID ``` Is this possible? I appreciate any suggestion.
Use [pivot](http://www.oracle-developer.net/display.php?id=506) ``` select * from table1 pivot ( max("Value") for "Machine" in ('M1', 'M2') ) ``` [fiddle](http://www.sqlfiddle.com/#!4/234b4/3)
In case you are using a lower version of oracle say 10, where pivot function is not supported, the following query would be of help to you: ``` with tab as ( SELECT 'A' FUNCTION, 'M1' MACHINE, 'VALID' VALUE FROM DUAL UNION SELECT 'A' FUNCTION, 'M2' MACHINE, 'INVALID' VALUE FROM DUAL UNION SELECT 'B' FUNCTION, 'M1' MACHINE, 'VALID' VALUE FROM DUAL UNION SELECT 'B' FUNCTION, 'M2' MACHINE, 'INVALID' VALUE FROM DUAL UNION SELECT 'C' FUNCTION, 'M1' MACHINE, 'INVALID' VALUE FROM DUAL UNION SELECT 'C' FUNCTION, 'M2' MACHINE, 'VALID' VALUE FROM DUAL ) SELECT FUNCTION, max(case when MACHINE = 'M1' THEN VALUE ELSE ' ' END) M1, max(case when MACHINE = 'M2' THEN VALUE ELSE ' ' END) M2 FROM tab group by FUNCTION order by FUNCTION; ```
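The conditional-aggregation approach in this answer is portable to practically any engine. A sketch with Python's `sqlite3` (the column is named `func` here to sidestep `FUNCTION`, which is a keyword in some dialects):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (func TEXT, machine TEXT, value TEXT);
    INSERT INTO t VALUES
      ('A','M1','VALID'),   ('A','M2','INVALID'),
      ('B','M1','VALID'),   ('B','M2','INVALID'),
      ('C','M1','INVALID'), ('C','M2','VALID');
""")

# CASE without ELSE yields NULL, which MAX ignores, so each machine's
# value lands in its own column.
rows = conn.execute("""
    SELECT func,
           MAX(CASE WHEN machine = 'M1' THEN value END) AS M1,
           MAX(CASE WHEN machine = 'M2' THEN value END) AS M2
    FROM t GROUP BY func ORDER BY func
""").fetchall()
print(rows)
# [('A', 'VALID', 'INVALID'), ('B', 'VALID', 'INVALID'), ('C', 'INVALID', 'VALID')]
```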
Rotate/Generate table in Oracle
[ "", "sql", "oracle", "" ]
My database structure looks like this: ``` CREATE TABLE categories ( name VARCHAR(30) PRIMARY KEY ); CREATE TABLE additives ( name VARCHAR(30) PRIMARY KEY ); CREATE TABLE beverages ( name VARCHAR(30) PRIMARY KEY, description VARCHAR(200), price NUMERIC(5, 2) NOT NULL CHECK (price >= 0), category VARCHAR(30) NOT NULL REFERENCES categories(name) ON DELETE CASCADE ON UPDATE CASCADE ); CREATE TABLE b_additives_xref ( bname VARCHAR(30) REFERENCES beverages(name) ON DELETE CASCADE ON UPDATE CASCADE, aname VARCHAR(30) REFERENCES additives(name) ON DELETE CASCADE ON UPDATE CASCADE, PRIMARY KEY(bname, aname) ); INSERT INTO categories VALUES ('Cocktails'), ('Biere'), ('Alkoholfreies'); INSERT INTO additives VALUES ('Kaliumphosphat (E 340)'), ('Pektin (E 440)'), ('Citronensäure (E 330)'); INSERT INTO beverages VALUES ('Mojito Speciale', 'Cocktail mit Rum, Rohrzucker und Minze', 8, 'Cocktails'), ('Franziskaner Weißbier', 'Köstlich mildes Hefeweizen', 6, 'Biere'), ('Augustiner Hell', 'Frisch gekühlt vom Fass', 5, 'Biere'), ('Coca Cola', 'Coffeeinhaltiges Erfrischungsgetränk', 2.75, 'Alkoholfreies'), ('Sprite', 'Erfrischende Zitronenlimonade', 2.50, 'Alkoholfreies'), ('Karaffe Wasser', 'Kaltes, gashaltiges Wasser', 6.50, 'Alkoholfreies'); INSERT INTO b_additives_xref VALUES ('Coca Cola', 'Kaliumphosphat (E 340)'), ('Coca Cola', 'Pektin (E 440)'), ('Coca Cola', 'Citronensäure (E 330)'); ``` [SqlFiddle](http://sqlfiddle.com/#!15/cc45a) What I am trying to achieve is to list all beverages and their attributes (`price`, `description` etc.) and add another column `additives` from the `b_additives_xref` table, that holds a concatenated string with all additives contained in each beverage. 
My query currently looks like this and is almost working (I guess): ``` SELECT beverages.name AS name, beverages.description AS description, beverages.price AS price, beverages.category AS category, string_agg(additives.name, ', ') AS additives FROM beverages, additives LEFT JOIN b_additives_xref ON b_additives_xref.aname = additives.name GROUP BY beverages.name ORDER BY beverages.category; ``` The output looks like: ``` Coca Cola | Coffeeinhaltiges Erfrischungsgetränk | 2.75 | Alkoholfreies | Kaliumphosphat (E 340), Pektin (E 440), Citronensäure (E 330) Karaffe Wasser | Kaltes, gashaltiges Wasser | 6.50 | Alkoholfreies | Kaliumphosphat (E 340), Pektin (E 440), Citronensäure (E 330) Sprite | Erfrischende Zitronenlimonade | 2.50 | Alkoholfreies | Kaliumphosphat (E 340), Pektin (E 440), Citronensäure (E 330) Augustiner Hell | Frisch gekühlt vom Fass | 5.00 | Biere | Kaliumphosphat (E 340)[...] ``` Which, of course, is wrong since only 'Coca Cola' has existing rows in the `b_additives_xref` table. Except for the row 'Coca Cola' all other rows should have 'null' or 'empty field' values in the column 'additives'. What am I doing wrong?
I believe you are looking for this: ``` SELECT B.name AS name, B.description AS description, B.price AS price, B.category AS category, string_agg(A.name, ', ') AS additives FROM Beverages B LEFT JOIN b_additives_xref xref ON xref.bname = B.name LEFT JOIN additives A ON A.name = xref.aname GROUP BY B.name ORDER BY B.category; ``` Output ``` NAME DESCRIPTION PRICE CATEGORY ADDITIVES Coca Cola Coffeeinhaltiges Erfrischungsgetränk 2.75 Alkoholfreies Kaliumphosphat (E 340), Pektin (E 440), Citronensäure (E 330) ``` The problem was that you had a Cartesian product between your `beverages` and `additives` tables: ``` FROM beverages, additives ``` Every record got paired with every other record. Both tables need to be explicitly joined through the xref table.
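The corrected join chain can be sketched with Python's `sqlite3`, where `GROUP_CONCAT` stands in for Postgres's `string_agg` (the data is reduced to two beverages; note the beverage without xref rows gets NULL instead of the wrong additives):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE beverages (name TEXT PRIMARY KEY);
    CREATE TABLE additives (name TEXT PRIMARY KEY);
    CREATE TABLE b_additives_xref (bname TEXT, aname TEXT);
    INSERT INTO beverages VALUES ('Coca Cola'), ('Sprite');
    INSERT INTO additives VALUES ('Pektin (E 440)'), ('Citronensäure (E 330)');
    INSERT INTO b_additives_xref VALUES
      ('Coca Cola', 'Pektin (E 440)'), ('Coca Cola', 'Citronensäure (E 330)');
""")

# Each table joins through the xref explicitly -- no Cartesian product.
rows = conn.execute("""
    SELECT b.name, GROUP_CONCAT(a.name, ', ')
    FROM beverages b
    LEFT JOIN b_additives_xref x ON x.bname = b.name
    LEFT JOIN additives a ON a.name = x.aname
    GROUP BY b.name ORDER BY b.name
""").fetchall()
print(rows)  # Sprite gets NULL; Coca Cola gets both additives comma-joined
```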
Some advice on your ### Schema ``` CREATE TABLE category ( category_id int PRIMARY KEY ,category text UNIQUE NOT NULL ); CREATE TABLE beverage ( beverage_id serial PRIMARY KEY ,beverage text UNIQUE NOT NULL -- maybe not unique? ,description text ,price int NOT NULL CHECK (price >= 0) -- in Cent ,category_id int NOT NULL REFERENCES category ON UPDATE CASCADE -- not: ON DELETE CASCADE ); CREATE TABLE additive ( additive_id serial PRIMARY KEY ,additive text UNIQUE ); CREATE TABLE bev_add ( beverage_id int REFERENCES beverage ON DELETE CASCADE ON UPDATE CASCADE ,additive_id int REFERENCES additive ON DELETE CASCADE ON UPDATE CASCADE ,PRIMARY KEY(beverage_id, additive_id) ); ``` * Never use "name" as name. It's a terrible, non-descriptive name. * Use small surrogate primary keys, preferably [`serial`](http://www.postgresql.org/docs/current/interactive/datatype-numeric.html#DATATYPE-SERIAL) columns for big tables or simple `integer` for small tables. Chances are, the names of beverages and additives are not strictly unique and you want to change them from time to time, which makes them bad candidates for the primary key. `integer` columns are also smaller and faster to process. * If you only have a handful of categories with no additional attributes, consider an [`enum`](http://www.postgresql.org/docs/current/interactive/datatype-enum.html) instead. * It's good practice to use the same (descriptive) name for foreign key and primary key when they hold the same values. * I never use the plural form as table name unless a single row holds multiple instances of something. Shorter, just a meaningful, leaves the plural for actual plural rows. 
* [Just use `text` instead of `character varying (n)`.](https://stackoverflow.com/questions/8524873/change-postgresql-columns-used-in-views/8527792#8527792) * Think twice before you define a fk constraint to a look-up table with `ON DELETE CASCADE` Typically you do *not* want to delete all beverages automatically if you delete a category (by mistake). * Consider a plain `integer` column instead of `NUMERIC(5, 2)` (with the number of Cent instead of € / $). Smaller, faster, simpler. Format on output when needed. More advice and links in this closely related answer: [How to implement a many-to-many relationship in PostgreSQL?](https://stackoverflow.com/questions/9789736/how-to-implement-a-many-to-many-relationship-in-postgresql/9790225#9790225) ### Query Adapted to new schema and some general advice. ``` SELECT b.*, string_agg(a.additive, ', ' ORDER BY a.additive) AS additives -- order by optional for sorted list FROM beverage b JOIN category c USING (category_id) LEFT JOIN bev_add ba USING (beverage_id) -- simpler now LEFT JOIN additive a USING (additive_id) GROUP BY b.beverage_id, c.category_id ORDER BY c.category; ``` * You don't need a column alias if the column name is the same as the alias. * With the suggested naming convention you can conveniently use [`USING` in joins](http://www.postgresql.org/docs/current/interactive/queries-table-expressions.html#QUERIES-FROM). * You need to join to `category` and `GROUP BY category_id` or `category` in addition (drawback of suggested schema). * The query will still be faster for big tables, because tables are smaller and indexes are smaller and faster and fewer pages have to be read.
PostgreSQL - How to JOIN M:M table the right way?
[ "", "sql", "postgresql", "join", "database-design", "left-join", "" ]
I have a query that returns data containing customer id numbers, marketing channel, timestamp, and purchase date. So, the results might look something like this: ``` id marketingChannel TimeStamp Transaction_date 1 SEO 5/18 23:11:43 5/18 1 SEO 5/18 24:12:43 5/18 1 Paid 5/18 24:13:43 5/18 2 Paid 5/18 24:12:43 5/18 2 Paid 5/18 24:14:43 5/18 2 Affiliate 5/18 24:20:43 5/18 2 Paid 5/18 24:22:43 5/18 3 SEO 5/18 24:10:43 5/18 3 Affiliate 5/18 24:11:43 5/18 ``` I'm wondering if there is a query to aggregate this information in a fashion that shows the count of marketing paths. For example: ``` Marketing Path Count SEO > SEO > Paid 1 Paid > Paid > Affiliate > Paid 1 SEO > Affiliate 1 ``` I'm thinking about writing a Python script to get this information, but am wondering if there is a simple solution in SQL - as I'm not as familiar with SQL.
Some years ago I needed a similar result and I tested different ways to get a concatenated string in Teradata. Btw, all might fail if the number of rows is too high and the concatenated string exceeds 64000 chars. The most efficient was a User Defined Function (written in C): ``` SELECT PATH ,COUNT(*) FROM ( SELECT DelimitedBuildSorted(MARKETINGCHANNEL ,CAST(CAST(ts AS FORMAT 'yyyymmddhhmiss') AS VARCHAR(14)) ,'>') AS PATH FROM t GROUP BY id ) AS dt GROUP BY 1; ``` If you need to run that query frequently and/or on a large table you might talk to your DBA if a UDF is possible (most DBAs don't like them as they're written in a language they don't know, C). Recursion might be ok if the average number of rows per id is low. Joseph B's version can be a bit simplified, but the most important thing is to create a temporary table instead of using a View or Derived Table for the ROW\_NUMBER calculation. This results in a better plan (in SQL Server, too): ``` CREATE VOLATILE TABLE vt AS ( SELECT id ,MarketingChannel ,ROW_NUMBER() OVER (PARTITION BY id ORDER BY TS DESC) AS rn ,COUNT(*) OVER (PARTITION BY id) AS max_rn FROM t ) WITH DATA PRIMARY INDEX (id) ON COMMIT PRESERVE ROWS; WITH RECURSIVE cte(id, path, rn) AS ( SELECT id, -- modify VARCHAR size to fit your maximum number of rows, that's better than VARCHAR(64000) CAST(MarketingChannel AS VARCHAR(10000)) AS PATH, rn FROM vt WHERE rn = max_rn UNION ALL SELECT cte.ID, cte.PATH || '>' || vt.MarketingChannel, cte.rn-1 FROM vt JOIN cte ON vt.id = cte.id AND vt.rn = cte.rn - 1 ) SELECT PATH, COUNT(*) FROM cte WHERE rn = 1 GROUP BY path ORDER BY PATH ; ``` You might also try old school MAX(CASE): ``` SELECT PATH ,COUNT(*) FROM ( SELECT id ,MAX(CASE WHEN rnk = 0 THEN MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 1 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 2 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 3 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 4 THEN '>' 
|| MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 5 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 6 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 7 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 8 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 9 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 10 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 11 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 12 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 13 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 14 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 15 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 16 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 17 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 18 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 19 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 20 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 21 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 22 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 23 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 24 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 25 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 26 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 27 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 28 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 29 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 30 THEN '>' || MarketingChannel ELSE '' END) || MAX(CASE WHEN rnk = 31 THEN '>' || MarketingChannel ELSE '' END) AS PATH FROM ( SELECT id ,TRIM(MarketingChannel) AS MarketingChannel ,RANK() OVER (PARTITION BY id ORDER BY TS) -1 AS rnk FROM t ) dt 
GROUP BY 1 ) AS dt GROUP BY 1; ``` I had up to concat 2048 rows with 30 chars each :-) ``` SELECT PATH ,COUNT(*) FROM ( SELECT id ,MAX(CASE WHEN rnk MOD 16 = 0 THEN path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 1 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 2 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 3 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 4 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 5 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 6 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 7 THEN '>' || path ELSE '' END) AS PATH FROM ( SELECT id ,rnk / 16 AS rnk ,MAX(CASE WHEN rnk MOD 16 = 0 THEN path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 1 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 2 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 3 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 4 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 5 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 6 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 7 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 8 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 9 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 10 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 11 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 12 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 13 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 14 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 15 THEN '>' || path ELSE '' END) AS path FROM ( SELECT id ,rnk / 16 AS rnk ,MAX(CASE WHEN rnk MOD 16 = 0 THEN path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 1 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 2 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 3 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 4 THEN '>' || path ELSE '' END) || 
MAX(CASE WHEN rnk MOD 16 = 5 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 6 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 7 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 8 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 9 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 10 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 11 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 12 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 13 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 14 THEN '>' || path ELSE '' END) || MAX(CASE WHEN rnk MOD 16 = 15 THEN '>' || path ELSE '' END) AS path FROM ( SELECT id ,TRIM(MarketingChannel) AS PATH ,RANK() OVER (PARTITION BY id ORDER BY TS) -1 AS rnk FROM t ) dt GROUP BY 1,2 ) dt GROUP BY 1,2 ) dt GROUP BY 1 ) dt GROUP BY 1 ```
Here is a query which has been tested with SQL Server. The same syntax should work with Teradata as well: **EDIT**: Converted multiple CTEs to a single CTE: ``` WITH RECURSIVE Single_Path (CURRENT_ID, CURRENT_PATH, CURRENT_TS, rn) AS ( SELECT ID CURRENT_ID, CAST(MARKETINGCHANNEL AS VARCHAR(MAX)) CURRENT_PATH, TIMESTAMP CURRENT_TS, 1 RN FROM ( SELECT id, marketingChannel, TimeStamp, ROW_NUMBER() OVER (PARTITION BY id ORDER BY TimeStamp DESC) rn FROM T ) Ordered_Data WHERE RN = 1 UNION ALL SELECT ID, CAST(MARKETINGCHANNEL + ' > ' + CURRENT_PATH AS VARCHAR(MAX)), TIMESTAMP, sp.rn+1 FROM ( SELECT id, marketingChannel, TimeStamp, ROW_NUMBER() OVER (PARTITION BY id ORDER BY TimeStamp DESC) rn FROM T ) ORDERED_DATA od, Single_Path sp WHERE od.id = sp.Current_id AND od.rn = sp.rn + 1 ) SELECT sp2.CURRENT_PATH MARKETING_PATH, COUNT(*) COUNT FROM Single_Path sp2 INNER JOIN ( SELECT ID, MAX(rn) max_rn FROM Ordered_Data GROUP BY ID ) MR ON SP2.CURRENT_ID = MR.ID AND SP2.RN = MR.MAX_RN GROUP BY SP2.CURRENT_PATH ORDER BY sp2.CURRENT_PATH; ``` `SQL Fiddle demo` **References**: [Fun with Recursive SQL (Part 1) on Sharpening Stones blog](http://walkingoncoals.blogspot.com/2009/12/fun-with-recursive-sql-part-1.html)
Aggregation by timestamp
[ "", "sql", "aggregate-functions", "teradata", "" ]
I have a request which I can accomplish in code, but am wondering if it is at all possible to do in SQL alone. I have a products table that has a Category column and a Price column. What I want to achieve is all of the products grouped together by Category, and then ordered from cheapest to most expensive both within each category and across all the categories combined. So for example: ``` Category | Price --------------|--------------------- Basin | 500 Basin | 700 Basin | 750 Accessories | 550 Accessories | 700 Accessories | 1000 Bath | 700 ``` As you can see, the cheapest item is a basin for 500, then an Accessory for 550, then a bath for 700. So I need the categories of products to be sorted by their cheapest item, and then each category itself in turn to be sorted cheapest to most expensive. I have tried partitioning, grouping sets (which I know nothing about) but still no luck, so eventually I resorted to my strength (C#) but would prefer to do it straight in SQL if possible. One last side note: This query is hit quite often so performance is key, so if possible I would like to avoid temp tables / cursors etc.
I think using `MIN()` with a window ([`OVER`](http://technet.microsoft.com/en-us/library/ms189461.aspx)) makes it *clearest* what the intent is: ``` declare @t table (Category varchar(19) not null,Price int not null) insert into @t (Category,Price) values ('Basin',500), ('Basin',700), ('Basin',750), ('Accessories',550), ('Accessories',700), ('Accessories',1000), ('Bath',700) ;With FindLowest as ( select *, MIN(Price) OVER (PARTITION BY Category) as Lowest from @t ) select * from FindLowest order by Lowest,Category,Price ``` If two categories share the same lowest price, this will still keep the two categories separate and sort them alphabetically.
``` Select... Order by category, price desc ```
Query to order data while maintaining grouping?
[ "", "sql", "sql-server-2008", "group-by", "sql-order-by", "" ]
I would like to fetch the highest value (from the column named value) for the past 7 days. I have tried with this SQL: ``` SELECT MAX(value) as value_of_week FROM events WHERE event_date > UNIX_TIMESTAMP() -(7 * 86400); ``` But it gives me 86.1, which is older than 7 days from today's date. Given the rows below, I should get 55.2 with date 2014-05-16 07:07:00. ``` id value event_date 1 28. 2014-04-18 08:23:00 2 23.6 2014-04-22 06:43:00 3 86.1 2014-04-29 05:32:00 4 43.3 2014-05-03 08:12:00 5 55.2 2014-05-16 07:07:00 6 25.6 2014-05-19 06:11:00 ```
You are comparing unix time stamps to date. How about this? ``` SELECT MAX(value) as value_of_week FROM events WHERE event_date > date_add(now(), interval -7 day); ```
I'm guessing this is MySQL, and in that case you could do this: ``` select max(value) as value_of_week from events where event_date between date_sub(now(),INTERVAL 1 WEEK) and now(); ```
Get the highest value of the last 7 days with SQL
[ "", "sql", "" ]
Here is my table: ``` Start Time Stop time extension ---------------------------------------------------------- 2014-03-03 10:00:00 2014-03-03 11:00:00 100 2014-03-03 10:00:00 2014-03-03 12:00:00 100 2014-03-05 10:00:00 2014-03-05 11:00:00 200 2014-03-03 10:00:00 2014-03-03 13:00:00 100 2014-03-05 10:00:00 2014-03-05 12:00:00 200 2014-03-05 10:00:00 2014-03-05 13:00:00 200 ``` I want to get the smallest time interval for each extension: ``` Start Time Stop time Extension ------------------------------------------------------------- 2014-03-03 10:00:00 2014-03-03 11:00:00 100 2014-03-05 10:00:00 2014-03-05 11:00:00 200 ``` How can I write the sql?
To get the row (including all original columns) with the smallest time interval for each `extension` (according to your *updated* question) the Postgres specific `DISTINCT ON` should be most convenient: ``` SELECT DISTINCT ON (extension) start_time, stop_time, extension FROM tbl ORDER BY extension, (stop_time - start_time); ``` Details in this related answer: [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
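Procedurally, the `DISTINCT ON ... ORDER BY` pick is just a keep-the-minimum scan per group. Here is a small Python sketch of the same logic, using made-up rows mirroring the question (this is an illustration, not part of the original answer):

```python
from datetime import datetime

# (start_time, stop_time, extension) rows, as in the question
rows = [
    (datetime(2014, 3, 3, 10), datetime(2014, 3, 3, 11), 100),
    (datetime(2014, 3, 3, 10), datetime(2014, 3, 3, 12), 100),
    (datetime(2014, 3, 5, 10), datetime(2014, 3, 5, 11), 200),
    (datetime(2014, 3, 3, 10), datetime(2014, 3, 3, 13), 100),
    (datetime(2014, 3, 5, 10), datetime(2014, 3, 5, 12), 200),
    (datetime(2014, 3, 5, 10), datetime(2014, 3, 5, 13), 200),
]

# Keep, per extension, the row with the smallest (stop - start) interval,
# which is what DISTINCT ON (extension) ... ORDER BY extension, (stop - start) returns.
best = {}
for start, stop, ext in rows:
    if ext not in best or (stop - start) < (best[ext][1] - best[ext][0]):
        best[ext] = (start, stop)

print(sorted(best.items()))
```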
Not sure what exactly you are after, but the "smallest interval" would be ``` select min(stop_time - start_time) from the_table ``` If you also need the two columns with that: ``` select start_time, stop_time, duration from ( select start_time, stop_time, stop_time - start_time as duration, min(stop_time - start_time) over () as min_duration from the_table ) t where duration = min_duration; ``` The above would yield more than one row if multiple rows have the same duration. If you don't want that you can use: ``` select start_time, stop_time, duration from ( select start_time, stop_time, stop_time - start_time as duration, row_number() over (order by stop_time - start_time) as rn from the_table ) t where rn = 1; ```
Get row with the smallest time interval per value in a table
[ "", "sql", "postgresql", "aggregate-functions", "greatest-n-per-group", "" ]
I have an application that deals with a file and fragments it into multiple segments, then saves the result into a SQL Server database. There are many duplicated files (maybe with different file paths), so first I go through all these files, compute the MD5 hash for each file, and mark duplicated files using the [Duplicated] column. Then every day, I'll run this application and save the results into the [Result] table. The db schema is as below: ``` CREATE TABLE [dbo].[FilePath] ( [FilePath] NVARCHAR(256) NOT NULL PRIMARY KEY, [FileMd5Hash] binary(16) NOT NULL, [Duplicated] BIT NOT NULL DEFAULT 0, [LastRunBuild] NVARCHAR(30) NOT NULL DEFAULT 0 ) CREATE TABLE [dbo].[Result] ( [Build] NVARCHAR(30) NOT NULL, [FileMd5Hash] binary(16) NOT NULL , [SegmentId] INT NOT NULL, [SegmentContent] text NOT NULL PRIMARY KEY ([FileMd5Hash], [Build], [SegmentId]) ) ``` And I have a requirement to join these 2 tables on FileMd5Hash. Since the number of rows of [Result] is very large, I'd like to add an int Identity column to join these two tables, as below: ``` CREATE TABLE [dbo].[FilePath] ( [FilePath] NVARCHAR(256) NOT NULL PRIMARY KEY, [FileMd5Hash] binary(16) NOT NULL, **[Id] INT NOT NULL IDENTITY,** [Duplicated] BIT NOT NULL DEFAULT 0, [LastRunBuild] NVARCHAR(30) NOT NULL DEFAULT 0 ) CREATE TABLE [dbo].[Result] ( [Build] NVARCHAR(30) NOT NULL, **[Id] INT NOT NULL,** [SegmentId] INT NOT NULL, [SegmentContent] text NOT NULL PRIMARY KEY ([Id], [Build], [SegmentId]) ) ``` **So what are the pros and cons of these 2 ways?**
An int key is simpler to implement and easier to use and understand. It's also smaller (4 bytes vs 16 bytes), so indexes will fit about double the number of entries per IO page, meaning better performance. The table rows too will be smaller (OK, not much smaller), so again you'll fit more rows per page = less IO. A hash can always produce collisions. Although exceedingly rare, as the [birthday problem](http://en.wikipedia.org/wiki/Birthday_problem) shows, collisions become more and more likely as the record count increases. The number of items needed for a 50% chance of a collision with various bit-length hashes is as follows: ``` Hash length (bits) Item count for 50% chance of collision 32 77000 64 5.1 billion 128 22 billion billion 256 400 billion billion billion billion ``` There's also the issue of having to pass around non-ASCII bytes - harder to debug, send over the wire, etc. Use `int` sequential primary keys for your tables. Everybody else does.
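The 50%-collision counts in the table above follow from the birthday-bound approximation n ≈ √(2 · 2^bits · ln 2), which is easy to check directly (a quick sketch added for illustration, not part of the original answer):

```python
import math

def items_for_half_collision(bits: int) -> float:
    # Birthday bound: among n uniformly random b-bit values, the chance of
    # at least one collision reaches ~50% when n ~ sqrt(2 * 2^b * ln 2).
    return math.sqrt(2 * (2.0 ** bits) * math.log(2))

for bits in (32, 64, 128):
    print(bits, f"{items_for_half_collision(bits):.3g}")
# 32 bits gives ~7.72e+04 (the "77000" row); 64 bits gives ~5.06e+09 (~5.1 billion)
```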
Use ints for primary keys, not hashes. Everyone warns about hash collisions, but in practice they are not a big problem; it's easy to check for collisions and re-hash. Sequential IDs can collide as well if you merge databases. The big problem with hashes as keys is that you cannot change your data. If you try, your hash will change and all foreign keys become invalid. You have to create a “no, this is the real hash” column in your database and your old hash just becomes a big nonsequential integer. I bet your business analyst will say “we implement WORM so our records will never change”. They will be proven wrong.
Pros and cons of using an MD5 hash as the primary key vs. an int identity as the primary key in SQL Server
[ "", "sql", "sql-server", "database", "hash", "" ]
I'm trying to do a dynamic table, in which, number of columns depends on range of dates. So, I'm trying to use a pivot table. Every time I run the query I've got this error: `Msg 241, Level 16, State 1, Line 18` `Conversion failed when converting date and/or time from character string.` This is the query (MSSQL): ``` DECLARE @StartDate AS DATETIME DECLARE @EndDate AS DATETIME DECLARE @Query NVARCHAR(MAX) DECLARE @Str_Dates NVARCHAR(MAX) SET @StartDate = '2014-05-01' SET @EndDate = '2014-05-16' SELECT @Str_Dates = STUFF(( SELECT DISTINCT '],[' + CONVERT(VARCHAR(10),CreateDate,111) FROM myDB.dbo.SaleTransaction WHERE CreateDate BETWEEN @StartDate AND @EndDate ORDER BY 1 FOR XML PATH('') ), 1, 2, '') + ']' SET @Query = 'SELECT * FROM ( SELECT CreateDate AS [DATE], ItemID, Description, SUM(Quantity) AS [QTY] FROM myDB.dbo.SaleTransactionDetails WHERE CreateDate BETWEEN '+@StartDate+' AND '+@EndDate+' GROUP BY CreateDate, ItemID, Description ) tpvt PIVOT (SUM(tpvt.QDE) FOR tpvt.DATE IN ('+@Str_Dates+')) AS pvt' EXECUTE (@Query) ``` If I remove `WHERE CreateDate BETWEEN '+@StartDate+' AND '+@EndDate+'` the query runs without problems. So, I try use `CONVERT` function in several ways to convert the variables into Dates but without success. Any idea what I can do to use this variables and don't have that error?
`WHERE CreateDate BETWEEN '+@StartDate+'` You cannot concatenate (+) a string to a datetime. Convert it to a **quoted** string in your dynamic SQL: ``` 'CreateDate BETWEEN ''' + CONVERT(VARCHAR(8), @StartDate, 112) + ''' AND ... ```
try this: ``` SET @StartDate = convert(datetime,'2014-05-01') SET @EndDate = convert(datetime,'2014-05-16') ```
Error in conversion date from character string
[ "", "sql", "sql-server", "pivot-table", "" ]
Suppose I know the `DAYOFWEEK()` value, and I know the `WEEK()` and `YEAR()` numbers. Is it possible to construct a date out of these values in *MySQL*?
Here you go: ``` SELECT STR_TO_DATE('2014-20-2','%Y-%U-%w')-INTERVAL 1 DAY n; +------------+ | n | +------------+ | 2014-05-19 | +------------+ ``` The INTERVAL bit is to account for the fact that %w interprets days of the week as 0 (Sunday) to 6, whereas DAYOFWEEK goes from 1(Sunday) to 7 - go figure!!! It's possible that %U also works slightly differently from WEEK(); the above appears to give the right answer so I haven't looked into it further.
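Python's `strptime` understands the same `%Y`/`%U`/`%w` trio, which makes it handy for cross-checking what a year plus Sunday-based week number plus weekday should resolve to (a sketch added for illustration; Python's `%U` may not match MySQL's `%U` in every edge case, so verify against your own data):

```python
from datetime import date, datetime, timedelta

# Week 20 of 2014, weekday 2 (0 = Sunday, so 2 = Tuesday)
d = datetime.strptime("2014 20 2", "%Y %U %w").date()
print(d)                      # 2014-05-20, the Tuesday of week 20
print(d - timedelta(days=1))  # 2014-05-19, mirroring the answer's "- INTERVAL 1 DAY" fix-up
```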
``` SELECT DATE_FORMAT(FROM_UNIXTIME(1400463204), '%Y-%m-%d 00:00:00') AS date, STR_TO_DATE(DATE_FORMAT(FROM_UNIXTIME(1400463204), CONCAT(YEAR(FROM_UNIXTIME(1400463204)-INTERVAL 1 YEAR),'-','%U-%w')), '%Y-%U-%w %H:%i:%s') AS samedaylastyear, DAYOFWEEK(DATE_FORMAT(FROM_UNIXTIME(1400463204), '%Y-%m-%d 00:00:00')) AS check1, DAYOFWEEK(STR_TO_DATE(DATE_FORMAT(FROM_UNIXTIME(1400463204), CONCAT(YEAR(FROM_UNIXTIME(1400463204)-INTERVAL 1 YEAR),'-','%U-%w')), '%Y-%U-%w %H:%i:%s')) AS check2 ```
Find date based on week + dayofweek mysql
[ "", "mysql", "sql", "date", "dayofweek", "" ]
Hoping someone here can be of some help. I'm running a query that returns something like this: <https://i.stack.imgur.com/tyQxg.png> This is my current query: ``` SELECT i.prtnum, i.lodnum, i.lotnum, i.untqty, i.ftpcod, i.invsts FROM inventory_view i, locmst m WHERE i.stoloc = m.stoloc AND m.arecod = 'PART-HSY' ORDER BY i.prtnum ``` If you're looking at the picture, I need the query to exclude rows like the 3rd one (00005-86666-000).
You didn't give much reasoning on why to exclude that row but you can exclude by `prtnum` like you've requested: ``` SELECT i.prtnum, i.lodnum, i.lotnum, i.untqty, i.ftpcod, i.invsts FROM inventory_view i, locmst m WHERE i.stoloc = m.stoloc AND m.arecod = 'PART-HSY' AND i.prtnum NOT IN(SELECT i2.prtnum FROM inventory_view i2, locmst m2 WHERE i2.stoloc = m2.stoloc AND m2.arecod = 'PART-HSY' GROUP BY i2.prtnum HAVING COUNT(*) = 1) ORDER BY i.prtnum ```
How about something like: ``` SELECT i.prtnum, i.lodnum, i.lotnum, i.untqty, i.ftpcod, i.invsts FROM inventory_view i, locmst m WHERE i.stoloc = m.stoloc AND m.arecod = 'PART-HSY' AND i.prtnum IN ( SELECT prtnum FROM inventory_view j GROUP BY prtnum HAVING count(prtnum) > 1 ) ```
SQL - How to exclude UNIQUE returned rows
[ "", "mysql", "sql", "unique", "rows", "" ]
I've got a query in SQL Server (2008) and I can't understand why it takes so much longer to evaluate if I include a clause in a WHERE statement that shouldn't affect the result. Here is an example of the query: ``` declare @includeAll bit = 0; SELECT Id ,Name ,Total FROM MyTable WHERE @includeAll = 1 OR Id = 3926 ``` Obviously, in this case, the @includeAll = 1 will evaluate false; however, including that increases the time of the query as if it were always true. The result I get is correct with or without that clause: I only get the 1 entry with Id = 3926, but (in my real-world query) including that line increases the query time from < 1 second to about 7 minutes...so it seems it's running the query as if the statement were true, even though it's not, but still returning the correct results. Any light that can be shed on why would be helpful. Also, if you have a suggestion on working around it I'd take it. I want to have a clause such as this one so that I can include a parameter in a stored procedure that will make it disregard the Id that it has and return all results if set to true, but I can't allow that to affect the performance when simply trying to get one record.
You'd need to look at the query plan to be sure, but using OR will often make it scan like this in some DBMS. Also, read @Bogdan Sahlean's response for some great details as why this happening. This may not work, but you can try something like if you need to stick with straight SQL: ``` SELECT Id ,Name ,Total FROM MyTable WHERE Id = 3926 UNION ALL SELECT Id ,Name ,Total FROM MyTable WHERE Id <> 3926 AND @includeAll = 1 ``` If you are using a stored procedure, you could conditionally run the SQL either way instead which is probably more efficient. Something like: ``` if @includeAll = 0 then SELECT Id ,Name ,Total FROM MyTable WHERE Id = 3926 else SELECT Id ,Name ,Total FROM MyTable ```
> Obviously, in this case, the @includeAll = 1 will evaluate false; > however, including that increases the time of the query as if it were > always true. This happens because those two predicates force SQL Server to choose an `Index|Table Scan` operator. Why? The execution plan is generated for all possible values of the `@includeAll` variable / parameter. So, the same execution plan is used when `@includeAll = 0` and when `@includeAll = 1`. If `@includeAll = 0` is true and if there is an index on the `Id` column then SQL Server *could use* an `Index Seek` or `Index Seek` + `Key|RID Lookup` to find the rows. But if `@includeAll = 1` is true the optimal data access operator is an `Index|Table Scan`. So if the execution plan must be *usable* for all values of the `@includeAll` variable, what is the data access operator used by SQL Server: Seek or Scan? The answer is below, where you can find a similar query: ``` DECLARE @includeAll BIT = 0; -- Initial solution SELECT p.ProductID, p.Name, p.Color FROM Production.Product p WHERE @includeAll = 1 OR p.ProductID = 345 -- My solution DECLARE @SqlStatement NVARCHAR(MAX); SET @SqlStatement = N' SELECT p.ProductID, p.Name, p.Color FROM Production.Product p ' + CASE WHEN @includeAll = 1 THEN '' ELSE 'WHERE p.ProductID = @ProductID' END; EXEC sp_executesql @SqlStatement, N'@ProductID INT', @ProductID = 345; ``` These queries have the following execution plans: ![enter image description here](https://i.stack.imgur.com/Ysz8x.png) As you can see, the first execution plan includes a `Clustered Index Scan` with two `not optimized` predicates.
My solution is based on dynamic queries and it generates two different queries depending on the value of the `@includeAll` variable, thus: **[ 1 ]** When `@includeAll = 0` the generated query (`@SqlStatement`) is ``` SELECT p.ProductID, p.Name, p.Color FROM Production.Product p WHERE p.ProductID = @ProductID ``` and the execution plan includes an `Index Seek` (as you can see in the image above) and **[ 2 ]** When `@includeAll = 1` the generated query (`@SqlStatement`) is ``` SELECT p.ProductID, p.Name, p.Color FROM Production.Product p ``` and the execution plan includes a `Clustered Index Scan`. These two generated queries have different optimal execution plans. Note: I've used the [Adventure Works 2012](http://msftdbprodsamples.codeplex.com/downloads/get/165399) sample database
Why is SQL evaluating a WHERE clause that is False?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I would like to ask you guys how would you do a query to show the data of this table: ``` week name total ==== ====== ===== 1 jon 15.2 1 jon 10 1 susan 10 1 howard 9 1 ben 10 3 ben 30 3 susan 10 3 mary 10 5 jon 10 6 howard 12 7 tony 25.1 8 tony 7 8 howard 10 9 susan 6.2 9 howard 9 9 ben 10 11 howard 10 11 howard 10 ``` like this: ``` week name total ==== ====== ===== 1 ben 10 1 howard 9 1 jon 25.2 1 mary 0 1 susan 10 1 tony 0 3 ben 30 3 howard 0 3 jon 0 3 mary 10 3 susan 10 3 tony 0 5 ben 0 5 howard 0 5 jon 10 5 mary 0 5 susan 0 5 tony 0 6 ben 0 6 howard 12 6 jon 0 6 mary 0 6 susan 0 6 tony 0 7 ben 0 7 howard 0 7 jon 0 7 mary 0 7 susan 0 7 tony 25.1 8 ben 0 8 howard 10 8 jon 0 8 mary 0 8 susan 0 8 tony 7 9 ben 10 9 howard 9 9 jon 0 9 mary 0 9 susan 6.2 9 tony 0 11 ben 0 11 howard 20 11 jon 0 11 mary 0 11 susan 0 11 tony 0 ``` I tried something like: ``` select t1.week_id , t2.name , sum(t1.total) from xpto as t1 , xpto as t2 where t1.week_id = t2.week_id group by t1.week_id, t2.name order by t1.week_id, t2.name ``` But I'm failing to understand the "sum" part and I can't figure out why... Any help would be very appreciated. Thanks in advance, and sorry for my english.
you might try something like the following: ``` select week = w.week , name = n.name , sum_total = coalesce( sum( d.total ) , 0 ) from ( select distinct week from my_table ) w cross join ( select distinct name from my_table ) n left join my_table d on d.week = w.week and d.name = n.name group by w.week , n.name order by 1,2 ``` The `cross join` of first two derived tables constructs their *cartesian product*: all week and all names from the table, regardless of whether or not a particular week/name combination exists. We then take that, join it against the actual data rows and summarize it, using `coalesce()` to collapse any `null` results from the aggregate function `sum()` to 0.
I've edited my answer because I now understand what you want to do a bit better. I prefer doing things in several steps rather than trying to do several transformations of data with a single join or subquery. So I would approach this like this: ``` ; with Weeks as ( select distinct Week_id from xpto ) , Names as ( select distinct Name from xpto ) , Scores as ( select week_id , name , total = sum(t1.total) from xpto t1 group by t1.week_id , t1.name ) , WeeksAndNames as ( select week_id , name from Weeks cross join Names ) -- The final query! select wan.week_id , wan.name , total = COALESCE(s.total,0) from WeeksAndNames wan left join Scores s on wan.week_id = s.week_id and wan.name = s.name order by wan.week_id , wan.name ``` Lengthy, I'll grant you, and you can probably condense it. But this shows each step you need to go through to transform your data into the list you want.
SQL Server: how to get this result from this table (example inside)
[ "", "sql", "sql-server-2008", "" ]
I am trying to detach a local database, but getting errors because there are existing connections to it. How do I close existing connections on my local database? PS. Running on SQL Server 2008 Express
Take it offline first. The dialog for that allows a force option. Then you can detach it safely.
**Disconnect All the Other Users to Your Database** ``` ALTER DATABASE [YourDbName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; ``` **Disconnect your SSMS Connections to Database** If you are the only User connected to a database and want to disconnect all the connection made by you in SSMS you can goto any of the `Query windows` open on SSMS and Right Click and see below: ![enter image description here](https://i.stack.imgur.com/KJAe2.png)
How do you close all connections to a local database in SQL Server Management Studio?
[ "", "sql", "sql-server", "database", "" ]
I need to query my DB for the latest UUID (meaning the one that was inserted last). This value is generated by the application. So if I simply do... ``` select uuid from Run where <some condition> ``` then it returns multiple UUIDs. How do I get the latest one? There is an auto-increment primary surrogate ID column on this table, as well as Create Date, so I could just do... ``` select max(id),uuid from Run where ``` But this forces me to include that ID column in my result set, which I guess is not too bad, but I'm just wondering if there is an elegant way to return just the UUID in the result set and still get the latest. I am using MySQL. Thanks.
Just sort yourself and limit the output. ``` SELECT uuid FROM Run WHERE <some_condition> ORDER BY id DESC LIMIT 1; ```
The clearest/most explicit way is to write: ``` SELECT uuid FROM Run WHERE id = ( SELECT MAX(id) FROM Run WHERE <some condition> ) ; ``` Also, please be aware that you **cannot** write what you suggested: ``` select max(id),uuid from Run where -- Bad! Will not work! ``` because this will arbitrarily select a `uuid` from a record that matches your condition — it will *not*, in general, select the `uuid` that actually corresponds to the `max(id)`. (This is explained, -ish, at [in §12.17.3 "MySQL Extensions to `GROUP BY`" of the *MySQL 5.7 Reference Manual*](http://dev.mysql.com/doc/refman/5.7/en/group-by-extensions.html), though that page makes it sound like you can only get this problem if you have a `GROUP BY` clause.)
SQL : How to get the latest value of an unordered column
[ "", "mysql", "sql", "" ]
course has\_many tags by has\_and\_belongs\_to, now given two id of tags, [1, 2], how to find all courses that have those both two tags `Course.joins(:tags).where("tags.id IN (?)" [1, 2])` will return record that have one of tags, not what I wanted ``` # app/models/course.rb has_and_belongs_to_many :tags # app/models/tag.rb has_and_belongs_to_many :courses ```
This is not a single request, but might still be as quick as other solutions, and can work for any arbitrary number of tags. ``` tag_ids = [123,456,789,876] #this will probably come from params @tags = Tags.find(tag_ids) course_ids = @tags.inject{|tag, next_tag| tag.course_ids & next_tag.course_ids} @courses = Course.find(course_ids) ```
Since you're working with PostgreSQL, instead of using the IN operator you can use the ALL operator, like so: ``` Course.joins(:tags).where("tags.id = ALL (?)", [1, 2]) ``` this should match all ids with an AND instead of an OR.
Rails how to find record by association's ids contain array
[ "", "sql", "ruby-on-rails", "" ]
I have a table like below: ``` ------------- ID | NAME ------------- 1001 | A,B,C 1002 | D,E,F 1003 | C,E,G ------------- ``` I want these values to be displayed as: ``` ------------- ID | NAME ------------- 1001 | A 1001 | B 1001 | C 1002 | D 1002 | E 1002 | F 1003 | C 1003 | E 1003 | G ------------- ``` I tried doing: ``` select split('A,B,C,D,E,F', ',') from dual; -- WILL RETURN COLLECTION select column_value from table (select split('A,B,C,D,E,F', ',') from dual); -- RETURN COLUMN_VALUE ```
Try using below query: ``` WITH T AS (SELECT 'A,B,C,D,E,F' STR FROM DUAL) SELECT REGEXP_SUBSTR (STR, '[^,]+', 1, LEVEL) SPLIT_VALUES FROM T CONNECT BY LEVEL <= (SELECT LENGTH (REPLACE (STR, ',', NULL)) FROM T) ``` Below Query with ID: ``` WITH TAB AS (SELECT '1001' ID, 'A,B,C,D,E,F' STR FROM DUAL ) SELECT ID, REGEXP_SUBSTR (STR, '[^,]+', 1, LEVEL) SPLIT_VALUES FROM TAB CONNECT BY LEVEL <= (SELECT LENGTH (REPLACE (STR, ',', NULL)) FROM TAB); ``` **EDIT:** Try using below query for multiple IDs and multiple separation: ``` WITH TAB AS (SELECT '1001' ID, 'A,B,C,D,E,F' STR FROM DUAL UNION SELECT '1002' ID, 'D,E,F' STR FROM DUAL UNION SELECT '1003' ID, 'C,E,G' STR FROM DUAL ) select id, substr(STR, instr(STR, ',', 1, lvl) + 1, instr(STR, ',', 1, lvl + 1) - instr(STR, ',', 1, lvl) - 1) name from ( select ',' || STR || ',' as STR, id from TAB ), ( select level as lvl from dual connect by level <= 100 ) where lvl <= length(STR) - length(replace(STR, ',')) - 1 order by ID, NAME ```
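For reference, the row-expansion these queries perform is straightforward to express outside the database; here is a small Python sketch (with made-up rows matching the question) of what the output should look like:

```python
rows = [(1001, "A,B,C"), (1002, "D,E,F"), (1003, "C,E,G")]

# Expand each comma-separated NAME into one (id, value) row,
# the same shape the CONNECT BY / REGEXP_SUBSTR queries return.
split_rows = [(rid, value) for rid, csv in rows for value in csv.split(",")]
for rid, value in split_rows:
    print(rid, value)
```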
There are multiple options. See [**Split comma delimited strings in a table in Oracle**](https://lalitkumarb.wordpress.com/2015/03/04/split-comma-delimited-strings-in-a-table-in-oracle/). Using **REGEXP\_SUBSTR:** ``` SQL> WITH sample_data AS( 2 SELECT 10001 ID, 'A,B,C' str FROM dual UNION ALL 3 SELECT 10002 ID, 'D,E,F' str FROM dual UNION ALL 4 SELECT 10003 ID, 'C,E,G' str FROM dual 5 ) 6 -- end of sample_data mimicking real table 7 SELECT distinct id, trim(regexp_substr(str, '[^,]+', 1, LEVEL)) str 8 FROM sample_data 9 CONNECT BY LEVEL <= regexp_count(str, ',')+1 10 ORDER BY ID 11 / ID STR ---------- ----- 10001 A 10001 B 10001 C 10002 D 10002 E 10002 F 10003 C 10003 E 10003 G 9 rows selected. SQL> ``` Using **XMLTABLE:** ``` SQL> WITH sample_data AS( 2 SELECT 10001 ID, 'A,B,C' str FROM dual UNION ALL 3 SELECT 10002 ID, 'D,E,F' str FROM dual UNION ALL 4 SELECT 10003 ID, 'C,E,G' str FROM dual 5 ) 6 -- end of sample_data mimicking real table 7 SELECT id, 8 trim(COLUMN_VALUE) str 9 FROM sample_data, 10 xmltable(('"' 11 || REPLACE(str, ',', '","') 12 || '"')) 13 / ID STR ---------- --- 10001 A 10001 B 10001 C 10002 D 10002 E 10002 F 10003 C 10003 E 10003 G 9 rows selected. ```
Split comma separated values of a column in row, through Oracle SQL query
[ "sql", "oracle", "split" ]
I came across a scenario, I will explain it with some dummy data. See the table below ``` Select * from LUEmployee empId name joiningDate 1049 Jithin 3/9/2009 1017 Surya 1/2/2008 1089 Bineesh 8/24/2009 1090 Bless 7/15/2009 1014 Dennis 1/5/2008 1086 Sus 9/10/2009 ``` **I need to increment the year column by 1, only if the months are Jan, Mar, July or Dec.** ``` empId name joiningDate derived Year 1049 Jithin 3/9/2009 2010 1017 Surya 1/2/2008 2009 1089 Bineesh 8/24/2009 2009 1090 Bless 7/15/2009 2010 1014 Dennis 1/5/2008 2009 1086 Sus 9/10/2009 2009 ``` derived Year is the required column. We were able to achieve this easily with a case statement like the one below ``` Select *, YEAR(joiningDate) + CASE WHEN MONTH(joiningDate) in (1,3,7,12) THEN 1 ELSE 0 END from LUEmployee ``` But then came an added condition from the onsite PM: don't use a CASE statement, CASE is inefficient. In search of a solution, we arrived at the following approach, using a binary K-map, as follows --- If the numbers 1 to 12 represent the months from Jan to Dec, see the binary result ![enter image description here](https://i.stack.imgur.com/jD8op.png) the Karnaugh map way of expressing it is given below. ![enter image description here](https://i.stack.imgur.com/RI1Ww.png) the result will be ![enter image description here](https://i.stack.imgur.com/nTtQq.png) We need to realize the expression with SQL Server binary operations ``` eg: binary of 12 = 1100 in the k-map, a = 1, b = 1, c = 0, d = 0 Similarly, binary of 7 = 0111 in the k-map, a = 0, b = 1, c = 1, d = 1 ``` To get the leftmost bit (a), we have to shift the bits right by 3 positions and then mask all the bits except the LSB.
``` eg: ((MONTH(joiningDate)/8)&1) ``` Similarly, for the second bit from the left (b), we shift the bits right by 2 positions and then mask all the bits except the LSB ``` eg: ((MONTH(joiningDate)/4)&1) ``` Finally, each bit can be represented as ``` so a = ((MONTH(joiningDate)/8)&1) b = ((MONTH(joiningDate)/4)&1) c = ((MONTH(joiningDate)/2)&1) d = (MONTH(joiningDate)&1) a inverse = (((MONTH(joiningDate)/8)&1)^1) b inverse = (((MONTH(joiningDate)/4)&1)^1) c inverse = (((MONTH(joiningDate)/2)&1)^1) d inverse = ((MONTH(joiningDate)&1)^1) ``` The final code will be ``` SELECT *, YEAR(joiningDate) + CAST( ((MONTH(joiningDate)/8)&1)*((MONTH(joiningDate)/4)&1)*(((MONTH(joiningDate)/2)&1)^1)*((MONTH(joiningDate)&1)^1) | (((MONTH(joiningDate)/8)&1)^1)*(((MONTH(joiningDate)/4)&1)^1)*(MONTH(joiningDate)&1) | (((MONTH(joiningDate)/8)&1)^1)*((MONTH(joiningDate)/2)&1)*(MONTH(joiningDate)&1) AS INT) [derivedYear] FROM LUEmployee ``` The result will be ![enter image description here](https://i.stack.imgur.com/50JYa.png) --- **Question: There may be simpler and less complex ideas, please share them.** I would like to find a simpler one, as well as share the idea. Here the possible conditions are 12 (12 months). We can use a K-map for even bigger numbers of conditions. It feels like a K-map is convenient for up to 64 conditions.
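The K-map expression in the question can be sanity-checked outside SQL Server. The sketch below re-implements the three product terms in Python, where integer `//` and `&` stand in for T-SQL's `/` and `&`, and confirms they agree with the plain membership test for all twelve months.

```python
def increment_flag(month: int) -> int:
    """Evaluate the K-map expression for one month number (1..12)."""
    a = (month // 8) & 1   # bit 3 (leftmost)
    b = (month // 4) & 1   # bit 2
    c = (month // 2) & 1   # bit 1
    d = month & 1          # bit 0 (LSB)
    # Terms from the K-map: a.b.c'.d'  |  a'.b'.d  |  a'.c.d
    return (a * b * (c ^ 1) * (d ^ 1)) \
         | ((a ^ 1) * (b ^ 1) * d) \
         | ((a ^ 1) * c * d)

checks = {m: increment_flag(m) for m in range(1, 13)}
```

Running this confirms the expression returns 1 exactly for months 1, 3, 7 and 12, matching `MONTH(joiningDate) in (1,3,7,12)`.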
My first reaction would be to defend the use of the `case` clause in this case. But if you are absolutely not allowed to use it, maybe you could simply add a table with the month and *increment* values: ``` LUMonthIncrement Month Increment 1 1 2 0 3 1 4 0 5 0 6 0 7 1 8 0 9 0 10 0 11 0 12 1 ``` Then you can join in that table and just add the increment: ``` Select LUEmployee.*, YEAR(joiningDate) + LUMonthIncrement.Increment as derivedYear from LUEmployee join LUMonthIncrement on MONTH(LUEmployee.joiningDate) = LUMonthIncrement.Month ``` This is unlikely to be much more performant though, because in order to join to `LUMonthIncrement` the `MONTH(LUEmployee.joiningDate)` expression must be evaluated for each row in the `LUEmployee` table.
In this specific case you could do a `UNION` as you got 2 distinct subsets of your input set that don't depend on each other and the split criteria are well defined. So you could do something like: ``` Select *, YEAR(joiningDate) + 1 as derived_year from LUEmployee WHERE MONTH(joiningDate) = 1 OR MONTH(joiningDate) = 3 OR MONTH(joiningDate) = 7 OR MONTH(joiningDate) = 12 UNION Select *, YEAR(joiningDate) as derived_year from LUEmployee WHERE NOT (MONTH(joiningDate) = 1 OR MONTH(joiningDate) = 3 OR MONTH(joiningDate) = 7 OR MONTH(joiningDate) = 12) ```
Use Boolean algebra in tsql to avoid CASE statement or deal complex WHERE conditions
[ "sql", "sql-server", "t-sql", "case", "boolean-operations" ]
I have a similar table structure to the one shown below ![enter image description here](https://i.stack.imgur.com/R2Mb6.png) Now I want to write a **query** (*without using an inner query*) to find all those numbers that satisfy both (value 1 = A and value 2 = B) and (value 1 = B and value 2 = A), i.e. a vice-versa case where a number has value1 and value2 holding A and B in both orders. Thus for the given case my **query** output would be 1 and 4.
If you don't want to repeat the duplicate rows, following should work: ``` mysql> select a.* from tbl_so_q23676640 a -> join tbl_so_q23676640 b -> on a.v1 = b.v2 and a.v2 = b.v1 -> where a.n=b.n -> group by a.n, a.v1, a.v2 -> ; +------+------+------+ | n | v1 | v2 | +------+------+------+ | 1 | a | b | | 1 | b | a | | 4 | a | b | | 4 | b | a | +------+------+------+ 4 rows in set (0.00 sec) ```
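The self-join technique both answers use can be checked outside MySQL. The sketch below uses Python's stdlib `sqlite3` with invented table and column names; the join condition `a.v1 = b.v2 and a.v2 = b.v1` pairs each row with its reversed twin, exactly as in the answers.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pairs (n INT, v1 TEXT, v2 TEXT)")
con.executemany(
    "INSERT INTO pairs VALUES (?, ?, ?)",
    [(1, "A", "B"), (1, "B", "A"), (2, "A", "C"),
     (3, "B", "C"), (4, "A", "B"), (4, "B", "A")],
)

# A row survives the join only when the same number also has the
# reversed (v2, v1) row.
numbers = sorted({row[0] for row in con.execute(
    """SELECT a.n
       FROM pairs a
       JOIN pairs b ON a.n = b.n
                   AND a.v1 = b.v2
                   AND a.v2 = b.v1"""
)})
```

Only numbers 1 and 4 have both orderings, matching the expected output in the question.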
Please try this query: ``` select t1.NUMBER from mytable t1 join mytable t2 on t1.NUMBER = t2.NUMBER where t1.Value1 = 'A' and t1.Value2 = 'B' and t2.Value1 = 'B' and t2.Value2 = 'A' ``` You can check the result: <http://sqlfiddle.com/#!2/a5e7ae/1>
SQL query for vice versa case
[ "mysql", "sql", "join" ]
I use the following: ``` DECLARE @ConstraintName varchar(255); SELECT @ConstraintName = CONSTRAINT_NAME FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE WHERE TABLE_NAME = 'TheTable' AND COLUMN_NAME = 'TheColumn'; alter table TheTable drop constraint @ConstraintName; ``` But, it has incorrect syntax near @ConstraintName. It might be a small thing but I can't figure it out. What should I change so the constraint will be dropped? SQLFiddle: <http://sqlfiddle.com/#!2/6709e/3>
``` DECLARE @constraintName VARCHAR(50); DECLARE @runString VARCHAR(2000); SELECT CONSTRAINT_NAME INTO @constraintName FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE WHERE TABLE_NAME = 'TheTable' AND COLUMN_NAME = 'TheColumn'; SET @runString = CONCAT('ALTER TABLE TheTable DROP CONSTRAINT ', @constraintName); PREPARE stmt1 FROM @runString; EXECUTE stmt1; DEALLOCATE PREPARE stmt1; ```
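The key idea in the PREPARE/EXECUTE answer is that an identifier (a constraint name) cannot be bound as a query parameter, so the DDL text must be assembled as a string and then executed. The same pattern can be sketched with Python's stdlib `sqlite3` (SQLite has no named foreign-key constraints to drop, so a looked-up index name stands in here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER)")
con.execute("CREATE INDEX idx_t_id ON t (id)")

# Identifiers cannot be bound as ? parameters, so -- as with MySQL's
# PREPARE/EXECUTE -- the DDL is built as a string from the catalog lookup.
(name,) = con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = 't'"
).fetchone()
con.execute(f'DROP INDEX "{name}"')

remaining = con.execute(
    "SELECT count(*) FROM sqlite_master WHERE type = 'index'"
).fetchone()[0]
```

The quoting of the looked-up name matters in real code, since it is being spliced into SQL text rather than bound safely.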
Try this : ``` DECLARE @ConstraintName VARCHAR(255); SELECT CONSTRAINT_NAME INTO @ConstraintName FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE WHERE TABLE_NAME = 'TheTable' AND COLUMN_NAME = 'TheColumn'; alter table TheTable drop constraint @ConstraintName; ``` Or : ``` DECLARE @ConstraintName VARCHAR(255); SELECT @ConstraintName:=CONSTRAINT_NAME FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE WHERE TABLE_NAME = 'TheTable' AND COLUMN_NAME = 'TheColumn'; alter table TheTable drop constraint @ConstraintName; ```
Dropping foreign key using variable as name
[ "mysql", "sql" ]
``` declare @A varchar(5000) declare @B varchar(5000) select @A=value from drp.Parameter where parameteridname='IV01' --result of this query ('DIM003966','DIM000736','DIM025297', -- 'DIM025302','DIM027583') select [InventLocationId],[WMSLocationId] from ItemKit.InventDim where inventdimid in (@A) ``` I am not getting any result out of the second query, but if I run them individually and use the result of the 1st query in the second, I do get results. Is there any way around this so they can run together?
That is because it evaluates `@A` as is (as a single text value, not a list of values). You should build the SQL statement on the fly and execute it: ``` declare @sqlStatement nvarchar(4000) set @sqlStatement = 'select [InventLocationId],[WMSLocationId] from ItemKit.InventDim where inventdimid in (' + @A + ')' execute sp_executesql @sqlStatement ``` But as said by others, don't use dynamic SQL unless really necessary.
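The pitfall the answer describes - a comma-separated string bound to one placeholder is compared as a single literal, not as a list - is easy to demonstrate with Python's stdlib `sqlite3` (table and values invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id TEXT)")
con.executemany("INSERT INTO items VALUES (?)", [("A",), ("B",), ("C",)])

# Binding the whole list as one string compares against the literal
# text 'A,B' -- nothing matches, just like IN (@A) in the question.
as_text = con.execute(
    "SELECT count(*) FROM items WHERE id IN (?)", ("A,B",)
).fetchone()[0]

# Expanding to one placeholder per value behaves like a real IN list
# (a safer alternative to splicing the string into dynamic SQL).
wanted = ["A", "B"]
marks = ",".join("?" * len(wanted))
as_list = con.execute(
    f"SELECT count(*) FROM items WHERE id IN ({marks})", wanted
).fetchone()[0]
```

The first query matches zero rows; the expanded form matches both values.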
You should use this code: ``` select [InventLocationId],[WMSLocationId] from ItemKit.InventDim where inventdimid in (select value from drp.Parameter where parameteridname='IV01') ```
using user variable in 'IN' clause of sql
[ "sql", "sql-server" ]
I need a SQL query which would do something as follows: ``` SELECT col1, col2, cond4 as cond4_is_exclusive FROM table WHERE (cond1 AND cond2 AND cond3) OR cond4 ``` Where cond4 = (col3 IN (...)) I NEED cond4\_is\_exclusive to be TRUE only if the condition is met by cond4 itself and not by cond1 AND cond2 AND cond3, since there is an OR between them. Is there such a solution? I need an elegant and optimal solution, since the real query is much larger, with more and more complicated conditions. My current work is like: ``` SELECT col1, col2, (cond1 AND cond2 AND cond3) as c1, cond4 c2 FROM table WHERE (cond1 AND cond2 AND cond3) OR cond4 ``` And later c1 and c2 are checked with PHP, which is not so elegant.
The following should work on Postgres (not sure about MySQL though - I hardly ever use it) ``` select * from ( SELECT col1, col2, (cond1 AND cond2 AND cond3) as c1, cond4 c2 FROM table ) t WHERE c1 OR c2 ```
A solution that works in both databases is: ``` SELECT col1, col2, cond4 as cond4_is_exclusive FROM table WHERE (cond1 AND cond2 AND cond3) and (not cond4) OR (not (cond1 AND cond2 AND cond3) and cond4); ``` You could also express this with a `case` so you only have to repeat the first conditions once: ``` WHERE 1 = (case when (cond1 AND cond2 AND cond3) then (case when cond4 then 1 else 0 end) when cond4 then 1 else 0 end) ```
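The rewritten WHERE clause in the portable answer is the exclusive-or of the two condition groups. Collapsing `cond1 AND cond2 AND cond3` to a single flag `g`, a quick truth-table check confirms the clause admits a row exactly when one group matches and the other does not:

```python
from itertools import product

def where_clause(g: bool, c4: bool) -> bool:
    # (g AND NOT c4) OR (NOT g AND c4) -- the clause from the answer,
    # with g standing in for (cond1 AND cond2 AND cond3).
    return (g and not c4) or (not g and c4)

table = {(g, c4): where_clause(g, c4)
         for g, c4 in product([False, True], repeat=2)}
```

Rows matched by both groups (or neither) are excluded, which is exactly the "cond4 by itself" semantics the question asks for.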
SQL detect elements in OR clause
[ "mysql", "sql", "postgresql" ]
I have a table that contains a number of records. The columns are `Directory, Full Name, Server Name`. I would like a query to return a list of records that are sorted by the number of duplicate values of the Directory column. **For Example:** ``` Directory Full Name Server Name oneill Liam O'Neill server1 oneill Liam O'Neill server6 oneill Liam O'Neill server12 oneill Liam O'Neill server2 brown Joe Brown server4 brown Joe Brown server5 brown Joe Brown server2 walsh Sarah Walsh server1 ```
``` SELECT full_name, server_name, COUNT(directory) FROM some_table GROUP BY full_name, server_name ORDER BY 3 DESC ```
You can use it directly in the `ORDER BY`: ``` SELECT * FROM dbo.Directory d ORDER BY (SELECT COUNT(*) FROM dbo.Directory d2 WHERE d.Directory = d2.Directory) DESC ``` `Demo` Change `DESC` to `ASC` if you want to sort ascending, so that the directories with fewer duplicates come first. Your sample isn't ideal for illustration, since it is already sorted by duplicates (desc).
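The join-to-counts variant of this sort (an alternative to the correlated subquery in the answer) can be demonstrated with Python's stdlib `sqlite3`, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dirs (directory TEXT, server TEXT)")
con.executemany("INSERT INTO dirs VALUES (?, ?)", [
    ("oneill", "server1"), ("oneill", "server6"), ("oneill", "server12"),
    ("oneill", "server2"), ("brown", "server4"), ("brown", "server5"),
    ("brown", "server2"), ("walsh", "server1"),
])

# Pre-aggregate the duplicate counts, join them back, and sort on them.
ordered = [row[0] for row in con.execute(
    """SELECT d.directory
       FROM dirs d
       JOIN (SELECT directory, count(*) AS dup_count
             FROM dirs GROUP BY directory) c
         ON d.directory = c.directory
       ORDER BY c.dup_count DESC"""
)]
```

All four `oneill` rows sort first, then the three `brown` rows, then `walsh`.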
Order By Count (someRecord)
[ "sql", "sql-server-2008" ]
I am trying to implement a table which will be used for an apply operation. Customers apply for a loan and employees will approve them. The corresponding sql is: ``` create table approve( employee_ID char(8), customer_ID char(8), loanID char(8) primary key, foreign key(employee_ID) references employee(ID), foreign key(customer_ID) references customer(ID) ) ``` However, the problem is that when I try to insert an element it asks me to enter the loanID. Loan IDs should instead be displayed, and the employee should decide based on this loanID. Isn't that logical? If it is, how can I modify the code?
If `loanID` comes from another table, you should also address it as foreign key, just like you do with `employee` and `customer`: ``` create table approve( employee_ID char(8), customer_ID char(8), loan_ID char(8), foreign key(loan_ID) references loan(ID), foreign key(employee_ID) references employee(ID), foreign key(customer_ID) references customer(ID) ) ``` If you want to prevent the same loan to have multiple entries in the table `approve`, create an index: ``` create unique index approve_loan_ID on approve (loan_ID) ```
If it is a one-to-one relationship then the structure is fine. Loan and approve can share a primary key. Creating an approval record requires a loan record, and a loan can only be approved once. All you need to do is make the primary key of approve a foreign key to loan: ``` foreign key(loanID) references loan(ID) ```
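The two constraints both answers rely on - a foreign key so an approval must reference an existing loan, and uniqueness so a loan is approved at most once - can be exercised with Python's stdlib `sqlite3` (schema simplified and invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked
con.execute("CREATE TABLE loan (id TEXT PRIMARY KEY)")
con.execute("CREATE TABLE approve (loan_id TEXT UNIQUE REFERENCES loan(id))")
con.execute("INSERT INTO loan VALUES ('L1')")
con.execute("INSERT INTO approve VALUES ('L1')")

fk_rejected = False
try:
    con.execute("INSERT INTO approve VALUES ('L999')")  # no such loan
except sqlite3.IntegrityError:
    fk_rejected = True

dup_rejected = False
try:
    con.execute("INSERT INTO approve VALUES ('L1')")    # approved twice
except sqlite3.IntegrityError:
    dup_rejected = True
```

Both bad inserts are rejected by the database itself, so the application never has to ask the employee to invent a loanID.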
Restructuring the sql
[ "sql" ]
I have 2 tables in SQL Server 2008: `Address`: ``` nameid | e-mail --------------- 1 | xyz@abc.com 2 | fgh@asdf.com 3 | 123@doremi.com ``` `Member`: ``` nameid | memberid --------------- 1 | 456 2 | 457 3 | 458 ``` I need to set e-mail to `[memberid]@test.com` keeping in mind that e-mail is `varchar` and `memberid` is `int`.
``` SELECT A.nameid ,CAST(M.memberid AS NVARCHAR(20)) + RIGHT([e_mail], LEN([e_mail]) - CHARINDEX('@', [e_mail])+1) AS New_Column FROM [address] A INNER JOIN [member] M ON A.nameid = M.nameid ``` Hardcoding `@test.com` after their MemberID is also straightforward: ``` SELECT A.nameid ,CAST(M.memberid AS VARCHAR(20)) + '@test.com' AS New_Column FROM [address] A INNER JOIN [member] M ON A.nameid = M.nameid ```
Is this what you are looking for? ``` update a set email = cast(m.memberid as varchar(255)) + '@test.com' from address a join member m on a.nameid = m.nameid; ```
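The core of both answers is casting the `int` member ID to text before concatenating. With Python's stdlib `sqlite3` standing in for SQL Server (and `||` for T-SQL's `+`), the pattern looks like this, using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE member (nameid INT, memberid INT)")
con.executemany("INSERT INTO member VALUES (?, ?)",
                [(1, 456), (2, 457), (3, 458)])

# CAST the int to text first, as in the T-SQL answers; without the
# cast, SQL Server's + would attempt numeric addition instead.
emails = [row[0] for row in con.execute(
    "SELECT CAST(memberid AS TEXT) || '@test.com' FROM member ORDER BY nameid"
)]
```

The result is one `<memberid>@test.com` address per member.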
Set column to a concatenated value from another table
[ "sql", "sql-server", "sql-server-2008", "join", "concatenation" ]
SQL Server 2008: Supposing a table of customers, and a column called "Shipping\_State". I want to split the $10,000 spent on shipping costs equally amongst all customers who have Shipping\_State = Ohio value, so if there's 2 in Ohio 1 month, it'll be 5,000 a piece, if there's 100 the next month, it'll be 100 a piece. I have a blank column in the table named Cost for that calculated value. Cost is a decimal(18,4) data type. I'd like to be able to use the query for any data types (usually nchar). How would I accomplish this? My incorrect code in SQL Server Mgmt Studio returns the message: > Msg 157, Level 15, State 1, Line 1 An aggregate may not appear in the > set list of an UPDATE statement. ``` UPDATE CustomerTable SET Cost = (10000 / COUNT(CustomerTable.Shipping_State)) WHERE CustomerTable.Shipping_State = 'Ohio'; ```
Use nested `SELECT`. ``` UPDATE CustomerTable SET Cost = (SELECT 10000.0 / count(*) FROM CustomerTable WHERE CustomerTable.Shipping_state = 'Ohio') WHERE CustomerTable.Shipping_State = 'Ohio'; ```
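The `10000.0` literal in the answer matters: integer divided by integer truncates in most SQL dialects, so `10000 / 3` silently discards the fraction. A quick check with Python's stdlib `sqlite3`, used here only as a convenient stand-in for SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Integer / integer truncates; widening either operand keeps the fraction.
int_div = con.execute("SELECT 10000 / 3").fetchone()[0]
dec_div = con.execute("SELECT 10000.0 / 3").fetchone()[0]
```

For a `decimal(18,4)` Cost column, the truncated form would store 3333.0000 instead of 3333.3333.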
You would need to do a sub-query to get the count, and then update based on this value. Note the `10000.0` literal, so the division isn't truncated to an integer. Something like this should work: ``` UPDATE CT SET Cost = (10000.0 / CTCount.Shipping_State_Count) FROM CustomerTable CT INNER JOIN ( SELECT Shipping_State, COUNT(Shipping_State) AS Shipping_State_Count FROM CustomerTable GROUP BY Shipping_State) CTCount ON CT.Shipping_State = CTCount.Shipping_State WHERE CT.Shipping_State = 'Ohio'; ```
SQL UPDATE: integer divided by count of a specified value
[ "sql", "sql-server", "sql-server-2008" ]
I am trying to create a view but get the following error: > **View's SELECT contains a subquery in the FROM clause** I am running the following command. I can't seem to figure out how to substitute the nested selects with joins. Any help would be much appreciated! ``` create view student_fee_basic as select fsbc.*, ffp.name, ffp.amount 'fee' from (select sbc.*, ffc.name 'fname', ffc.id 'fid' from (select s.admission_no, s.first_name, bc.id 'bid', bc.code, bc.name from (select b.id, b.name, c.code from batches b, courses c where b.name = '2014-2015' and b.course_id = c.id) bc left join students s on bc.id = s.batch_id) sbc left join finance_fee_categories ffc on ffc.batch_id = sbc.bid where ffc.name = 'Basic Monthly') fsbc left join finance_fee_particulars ffp on ffp.finance_fee_category_id = fsbc.fid; ```
MySQL does not support subqueries in views: > Subqueries cannot be used in the FROM clause of a view. The documentation is [here](http://dev.mysql.com/doc/refman/5.7/en/view-restrictions.html). The easiest fix is to use a series of different views for each level. You can probably rewrite this query to remove the subqueries. However, I find it very hard to help without explicit joins.
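The "series of different views for each level" workaround means each nested subquery becomes its own named view, and the outer view selects from that view instead of from an inline derived table. Python's stdlib `sqlite3` (which, unlike old MySQL, happily accepts layered views) can illustrate the shape, with table and view names invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE batches (id INT, name TEXT)")
con.executemany("INSERT INTO batches VALUES (?, ?)",
                [(1, "2014-2015"), (2, "2013-2014")])

# Innermost query becomes its own view...
con.execute("CREATE VIEW current_batches AS "
            "SELECT id FROM batches WHERE name = '2014-2015'")
# ...and the outer view selects FROM that view, not from a subquery.
con.execute("CREATE VIEW current_batch_ids AS "
            "SELECT id FROM current_batches")

ids = [r[0] for r in con.execute("SELECT id FROM current_batch_ids")]
```

Each level of nesting in the original query would get one such view, eliminating every `FROM (SELECT ...)` the old MySQL view engine rejects.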
MySQL 5.7 supports it, so one way to fix it is to migrate your database to a newer version.
MySQL Error Code: 1349. View's SELECT contains a subquery in the FROM clause
[ "mysql", "sql", "database" ]
As a disclaimer, I'm no professional powershell scripter, but more of a 'script-kiddie' level scripter who was assigned something far outside the scope of his job, tech support. :P I've been working on a migration script to bring some really old data into a new software app. As a part of this, I'm using the following code to 'Join' two objects (which were imported from csv files). Because of the way the data is presented to me, I'm having to run this join script several times, against rather large data sets, and it's taking a very long time to complete. The code I'm using is: ``` Import-Csv extract-accounts.csv | ForEach-Object -Begin { $Jobs = @{} } -Process { $Jobs.Add($_.NAME,$_.ID) } Import-Csv Job.csv | ForEach-Object { $_ | Add-Member -MemberType NoteProperty -Name ContactID -Value $Jobs."$($_.DisplayName)" -PassThru } | Export-Csv -NoTypeInformation Joined-AccountJob.csv ``` What I'm wondering is, can I use SQL-like commands on objects in powershell to simplify and speed-up this process? Or, would I be better to take the CSV files, push them into an SQL database (I have no knowledge of this process), perform my joins, then export them back out to csv files? Thanks!
SQL Commands, no not exactly. Although I did find a site that described a way to do it here. <http://social.technet.microsoft.com/Forums/windowsserver/en-US/e242e1e9-6a8f-4a46-a884-8bd6811b5e35/combining-multiple-csv-files-with-powershell> Basically what they describe, and what sounds like the right way to do it to me, is to grab each column out of the CSVs you want into a variable then dump them into a text file with each variable separated by a comma. If that doesn't work for you and you want to get them into SQL anyway this article describes the process of getting CSVs into SQL through several methods. <http://blogs.technet.com/b/heyscriptingguy/archive/2011/11/28/four-easy-ways-to-import-csv-files-to-sql-server-with-powershell.aspx>
After looking at the options, I didn't see a way to simplify exactly what I was doing, however I did discover where-object, which helped in other areas of my project. Here's an example: ``` Import-Csv JobContact.csv | Select-Object JobContactRole,JobExternalReference,ContactExternalReference | Where-Object {$_.JobContactRole -ne "Policyholder" -and $_.JobContactrole -ne "Broker"} | Export-Csv Cleaned-JobContact.csv -NoTypeInformation ``` I hope this helps someone!
Can I use SQL commands (such as join) on objects in powershell, without any SQL server/database involved?
[ "sql", "powershell", "csv" ]
I feel like this is a fairly common scenario, though I haven't been able to find a solution that directly addresses this issue. Consider the table below: ``` Username HatSize Tim 4 Julie 3 Mark 3 Susan 4 ``` Let's say that I entered the values in to the "Hat Size" column incorrectly. I want to update the table so that a HatSize of 4 becomes 3 and a HatSize of 3 becomes 4 (effectively swapping the values of 3 and 4), like in the table below: ``` Username HatSize Tim 3 Julie 4 Mark 4 Susan 3 ``` If I were to run a simple update query: ``` UPDATE table SET HatSize = '3' WHERE HatSize = '4' UPDATE table SET HatSize = '4' WHERE HatSize = '3' ``` It would just make all the values in the HatSize column 4. I considered running it as a transaction, but I can't seems to find anything that suggests that running concurrent update queries like the ones above would work correctly. I realize that I could use an intermediary value, but is there a more elegant way to achieve something like this?
``` UPDATE table SET HatSize = CASE WHEN HatSize = '3' THEN '4' WHEN HatSize = '4' THEN '3' END ``` ## [`SQL FIDDLE`](http://sqlfiddle.com/#!3/f16b7/1)
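The reason the single `CASE` update works where two sequential updates fail is that each row is read once and written once, so neither branch can clobber the other's input. The same swap, run through Python's stdlib `sqlite3` with the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hats (username TEXT, hatsize INT)")
con.executemany("INSERT INTO hats VALUES (?, ?)",
                [("Tim", 4), ("Julie", 3), ("Mark", 3), ("Susan", 4)])

# One pass: every row sees the ORIGINAL value, so 3->4 and 4->3
# happen simultaneously instead of the second update undoing the first.
con.execute("""UPDATE hats
               SET hatsize = CASE WHEN hatsize = 3 THEN 4
                                  WHEN hatsize = 4 THEN 3
                                  ELSE hatsize END""")
sizes = dict(con.execute("SELECT username, hatsize FROM hats"))
```

The result matches the expected table: Tim 3, Julie 4, Mark 4, Susan 3.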
I know one technique for dealing with conflicting changes called optimistic concurrency. You can create a Timestamp column in your table that changes to the current time automatically every time a row is updated. Suppose you have a client that pulled in information from a row of this table, changed some of it, and tried writing the row values back to the database. In the update query's "WHERE" clause, update only rows where the timestamp is equal to the timestamp that the client originally pulled in. The client's timestamp value should be the same as the value currently in the row. If it isn't, the update query will affect 0 rows. This is when a concurrency exception should be thrown. Then you'd catch the exception, update the values for the client that is trying to make changes, and inform them that someone else updated that record.
Simultaneous conflicting update queries
[ "sql", "sql-server" ]
I am trying to fetch records 100 at a time, so the first batch will be 1 to 100, the second will be 101 to 200, etc. I am also using joins to combine two tables. When I execute the query I get an error saying: `The column 'ID' was specified multiple times for 'cte'.` This is my query: ``` WITH cte AS ( SELECT ROW_NUMBER() OVER ( ORDER BY TableOne.Name ) AS ROW , * FROM DatabaseNameOne FULL JOIN DatabaseNameOne ON DatabaseNameTwo.ID= DatabaseNameOne.ID WHERE CONVERT(DATE,DatabaseNameOne.dateone) BETWEEN '2013-12-01' AND '2014-05-20' ) SELECT * FROM cte WHERE ROW BETWEEN '1' AND '100' ``` Can someone tell me what I am doing wrong here? I am using SQL Server 2008. I can select records 1 to 100 without a join, but as soon as I join a table I get this error.
I am going to assume you meant to join to DatabaseNameTwo and that each table has three fields. Here is what your current cte is doing if you expand the select \*: ``` WITH cte AS ( SELECT ROW_NUMBER() OVER ( ORDER BY TableOne.Name ) AS ROW , DatabaseNameOne.id, DatabaseNameOne.SomeOtherField, DatabaseNameOne.YetAnotherField, DatabaseNameTwo.id, DatabaseNameTwo.AnotherField, DatabaseNameTwo.HowManyMoreFieldsAreThere FROM DatabaseNameOne FULL JOIN DatabaseNameTwo ON DatabaseNameTwo.ID = DatabaseNameOne.ID WHERE CONVERT(DATE,DatabaseNameOne.dateone) BETWEEN '2013-12-01' AND '2014-05-20' ) ``` As you can see there are two columns with the name ID. That is what is causing the error. Since both IDs are the same, you only need to list one (one of the many reasons why you should never use select \* is the repetition of data, which is wasteful of precious database and network resources when you have joins). So this should work (after you change to the real column names and table names, and fix the select \* in the final query, which I was too lazy to do): ``` WITH cte AS ( SELECT ROW_NUMBER() OVER ( ORDER BY TableOne.Name ) AS ROW , DatabaseNameOne.id, DatabaseNameOne.SomeOtherField, DatabaseNameOne.YetAnotherField, DatabaseNameTwo.AnotherField, DatabaseNameTwo.HowManyMoreFieldsAreThere FROM DatabaseNameOne FULL JOIN DatabaseNameTwo ON DatabaseNameTwo.ID = DatabaseNameOne.ID WHERE CONVERT(DATE,DatabaseNameOne.dateone) BETWEEN '2013-12-01' AND '2014-05-20' ) SELECT <list the fields here> FROM cte WHERE ROW BETWEEN '1' AND '100' ```
This is your cte: ``` WITH cte AS ( SELECT ROW_NUMBER() OVER(ORDER BY TableOne.Name) AS ROW, * FROM DatabaseNameOne FULL JOIN DatabaseNameOne ON DatabaseNameTwo.ID = DatabaseNameOne.ID WHERE convert(date, DatabaseNameOne.dateone) between '2013-12-01' and '2014-05-20' ) ``` You are joining a table to itself and using `*` -- all the columns are duplicated. Even if the tables are different, the `join` is on an `ID` field, so that column will be duplicated. A CTE/subquery/table cannot have duplicate column names; how would SQL Server know which column you are referring to in a reference? Instead, you need to list the specific columns you want from the `join`.
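Once the duplicate-column error is fixed, the 100-at-a-time batching itself is straightforward. The sketch below uses Python's stdlib `sqlite3` with `LIMIT`/`OFFSET`, SQLite's simplest analogue of filtering on `ROW_NUMBER` ranges (data invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT)")
con.executemany("INSERT INTO people VALUES (?)",
                [(f"name{i:03d}",) for i in range(250)])

def batch(page: int, size: int = 100) -> list:
    """page 1 -> rows 1..100, page 2 -> rows 101..200, ..."""
    return [r[0] for r in con.execute(
        "SELECT name FROM people ORDER BY name LIMIT ? OFFSET ?",
        (size, (page - 1) * size),
    )]

pages = [len(batch(p)) for p in (1, 2, 3)]
```

A stable `ORDER BY` is essential here, just as it is for `ROW_NUMBER() OVER (ORDER BY ...)` - without it, rows can shift between batches.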
How to load specific rows 100 at a time
[ "sql", "sql-server", "sql-server-2008" ]
This is how I retrieve data ``` select myPlan from myTable ``` this is the result: ``` myPlan ===================== True , False , True False , True , False False , True , True ``` In each row, I have three Boolean values separated by commas. The first Boolean value represents Breakfast, the second is Lunch and the last is Dinner. I just want to change the result to ``` myPlan ===================== BreakFast , Dinner Lunch Lunch , Dinner ``` Can I make this replacement using only a `sql query`? I'm using `SQL server 2008 R2`!
Since you have just 3 booleans you could use a case statement for all 8 possibilities. ``` SELECT (CASE WHEN myPlan = 'True , True , True' THEN 'Breakfast, Lunch, Dinner' WHEN myPlan = 'True , True , False' THEN 'Breakfast, Lunch' WHEN myPlan = 'True , False , False' THEN 'Breakfast' WHEN myPlan = 'True , False , True' THEN 'Breakfast, Dinner' WHEN myPlan = 'False , False , True' THEN 'Dinner' WHEN myPlan = 'False , True , True' THEN 'Lunch, Dinner' WHEN myPlan = 'False , True , False' THEN 'Lunch' ELSE '' END) myPlan FROM myTable ```
You can use string functions as follows. ``` DECLARE @TEMP AS TABLE(MyPlan VARCHAR(100)) INSERT INTO @TEMP(MyPlan) VALUES ('True , False , True') INSERT INTO @TEMP(MyPlan) VALUES ('False , True , False') INSERT INTO @TEMP(MyPlan) VALUES ('False , True , True') SELECT CASE WHEN RTRIM(SUBSTRING(MyPlan,1,5)) = 'TRUE' THEN 'BREAKFAST' ELSE '' END +CASE WHEN RTRIM(SUBSTRING(MyPlan,9,5)) = 'TRUE' THEN ',LUNCH' ELSE '' END +CASE WHEN RTRIM(SUBSTRING(MyPlan,17,5)) = 'TRUE' THEN ',DINNER' ELSE '' END FROM @TEMP ```
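Both answers decode the same mapping: split the stored string into three flags and emit the meal names whose flag is `True`. Expressed outside SQL for clarity (the `BreakFast` spelling follows the question's expected output):

```python
MEALS = ("BreakFast", "Lunch", "Dinner")

def plan_to_meals(my_plan: str) -> str:
    """Translate 'True , False , True' into 'BreakFast , Dinner'."""
    flags = [part.strip() == "True" for part in my_plan.split(",")]
    return " , ".join(meal for meal, on in zip(MEALS, flags) if on)

converted = [plan_to_meals(p) for p in
             ("True , False , True",
              "False , True , False",
              "False , True , True")]
```

Storing three separate bit columns instead of one delimited string would make both the SQL answers unnecessary, which is arguably the real fix.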
Replacing string in sql query
[ "sql", "string", "sql-server-2008", "replace" ]
I'm working on following query: ``` INSERT INTO a (id, value) VALUES (_id, (SELECT value FROM b WHERE b.id = _id)); ``` Table a: id, value (has a default value) Table b: id, value Table b does not contain all requested values. So the SELECT query sometimes returns 0 rows. In this case it should use the default value of a.value. Is this somehow possible? **Edit:** Solution for empty columns in comments of the post marked as solved.
The following query would work. First the max(value) is looked up from table b for \_id. It would be either NULL or equal to b.value. If it is NULL (checked using the COALESCE function), then the default value of the value column of table a is set as the value. The default value of the value column of table a is accessed using the DEFAULT function (please refer Reference 1). ``` INSERT INTO a SELECT _id, COALESCE(max(value), (SELECT DEFAULT(value) FROM a LIMIT 1)) value FROM b WHERE id = _id; ``` `SQL Fiddle demo` **Reference**: 1. [How to SELECT DEFAULT value of a field on SO](https://stackoverflow.com/questions/8266771/how-to-select-default-value-of-a-field)
you can wrap the value in `coalesce(max(value), default_value)` ``` INSERT INTO a (id, value) VALUES (_id, (SELECT coalesce(max(value), default_value)) FROM b WHERE b.id = _id)); ```
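The trick both answers share is that an aggregate query always returns exactly one row, so `COALESCE(max(value), <default>)` yields the looked-up value when the id exists and the fallback when it doesn't. A demonstration with Python's stdlib `sqlite3`, where a literal `'fallback'` stands in for MySQL's `DEFAULT(value)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INT, value TEXT DEFAULT 'fallback')")
con.execute("CREATE TABLE b (id INT, value TEXT)")
con.execute("INSERT INTO b VALUES (1, 'from_b')")

# id 1 exists in b; id 2 does not. The aggregate subquery still
# produces one row for id 2 (with a NULL max), so COALESCE kicks in.
for _id in (1, 2):
    con.execute(
        "INSERT INTO a SELECT ?, COALESCE(max(value), 'fallback') "
        "FROM b WHERE id = ?", (_id, _id))

values = dict(con.execute("SELECT id, value FROM a"))
```

Without the `max()` (or another aggregate), the empty result set would insert nothing at all rather than the default.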
MySQL INSERT INTO ... SELECT or default value
[ "mysql", "sql", "select", "insert" ]
I have a column in SQL Server 2012 which contains white spaces. I want to replace these empty spaces with NULL. I have written the following query but it's not working. ``` SELECT replace(COLUMN1, '',NULL) FROM Orders; ``` How can I achieve this? Thanks in advance.
Use [`nullif`](http://technet.microsoft.com/en-us/library/ms177562.aspx) ``` select nullif(Column1, '') from Orders ```
If you want to handle the situation where the value consists of spaces or has zero length, then use `ltrim()` or `like`: ``` select (case when value like '%[^ ]%' then value end) ``` Similarly, ``` select (case when ltrim(value) <> '' then value end) ```
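Combining the two answers - trim first, then `NULLIF` - handles both zero-length strings and strings that are only spaces. Python's stdlib `sqlite3` supports the same standard functions and shows the effect (table name from the question, data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (column1 TEXT)")
con.executemany("INSERT INTO orders VALUES (?)",
                [("abc",), ("",), ("  ",)])

# TRIM collapses space-only values to '', and NULLIF turns '' into NULL.
results = [row[0] for row in con.execute(
    "SELECT NULLIF(TRIM(column1), '') FROM orders"
)]
```

Only the genuinely non-blank value survives; both blank variants come back as NULL.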
How to replace empty spaces with NULL
[ "sql", "sql-server" ]
I found something really weird today while converting a datetime to text in Excel and using the number it generated to convert back to a datetime in SQL Server. What is weird about it? Different results. Two days' difference, to be precise. I used today's date *(20/05/2014 dd/MM/yyyy)* in Excel and got 41779 as the result in text. ![Datetime input](https://i.stack.imgur.com/DJkX5.png) ![Convert to text](https://i.stack.imgur.com/nezBv.png) I took the text value and used SQL CONVERT to `datetime` to retrieve the value as a date, and I did not get the result I wanted. ![SQL Convert to datetime](https://i.stack.imgur.com/07WmI.png) I even tested with `datetime2` but I learned that I can't convert `int` to `datetime2` ![enter image description here](https://i.stack.imgur.com/Ce3O1.png) I'm not an MS Excel expert nor a SQL Server expert, but what is going on? I can make it work by taking the number generated by MS Excel and subtracting 2, but it still doesn't make sense to me.
hehe ;) one day ages ago I wondered the same thing... do a simple exercise: compare `Select Cast(0 as DateTime)` vs. `=DATEVALUE("1900-01-01")` *which explains 1 day of the difference* and find the one extra leap year by reading [Joel Spolsky's explanation](http://www.joelonsoftware.com/items/2006/06/16.html) tl;dr check out the difference - *which explains the 2nd day* `=DateValue("1900-02-28")` and `=DateValue("1900-03-01")`
Concentrating specifically on DATETIME, where the casts from int are allowed, there are two reasons for the discrepancy. 1. Excel uses a base of 1 for dates, SQL Server uses 0, i.e. `01/01/1900` when converted to a number in Excel is 1; however, in SQL it is 0: `SELECT CAST(CAST('19000101' AS DATETIME) AS INT);` will give 0. 2. There is a deliberate error in Excel to allow portability from Lotus, where the bug was not deliberate\*. Excel considers 29th February 1900 a valid date, but 1900 was not a leap year. SQL does not have this issue, so this means there is an extra day in the Excel calendar. \*(*further reading on this suggests it might have been deliberate, or considered inconsequential)* --- **ADDENDUM** There is a [Microsoft Support Item](http://support.microsoft.com/kb/214326) that states: > When Lotus 1-2-3 was first released, the program assumed that the year 1900 was a leap year, even though it actually was not a leap year. This made it easier for the program to handle leap years and caused no harm to almost all date calculations in Lotus 1-2-3. > > When Microsoft Multiplan and Microsoft Excel were released, they also assumed that 1900 was a leap year. This assumption allowed Microsoft Multiplan and Microsoft Excel to use the same serial date system used by Lotus 1-2-3 and provide greater compatibility with Lotus 1-2-3. Treating 1900 as a leap year also made it easier for users to move worksheets from one program to the other.
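The two quirks combine neatly: for serials after February 1900, Excel's effective day 0 is 1899-12-30 (the 1-based epoch plus the phantom 1900-02-29 folded together), while SQL Server's DATETIME day 0 is 1900-01-01 - two days apart. The arithmetic from the question checks out in Python:

```python
from datetime import date, timedelta

EXCEL_EPOCH = date(1899, 12, 30)       # valid for serials >= 61
SQL_SERVER_EPOCH = date(1900, 1, 1)    # SELECT CAST(0 AS DATETIME)

def excel_serial_to_date(serial: int) -> date:
    return EXCEL_EPOCH + timedelta(days=serial)

def sql_server_int_to_date(n: int) -> date:
    return SQL_SERVER_EPOCH + timedelta(days=n)

excel_date = excel_serial_to_date(41779)        # what Excel meant
naive_sql = sql_server_int_to_date(41779)       # what the question observed
adjusted = sql_server_int_to_date(41779 - 2)    # the "subtract 2" fix
```

41779 decodes to 2014-05-20 under Excel's scheme but to 2014-05-22 under SQL Server's, which is exactly why subtracting 2 made the question's conversion work.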
Difference between datetime converts in MSExcel and SQL Server
[ "sql", "sql-server", "excel", "datetime" ]
I am considering replacing select statements with stored procedures or table valued functions. Currently, the data source has huge select statements. I think that using stored procedures or table valued functions could bring the following benefits: 1. Easy to maintain & manage code. 2. Code can be shared among teams (developers/testers/analysts) 3. Performance could benefit from stored execution plans. 4. Packages could be maintained without opening BIDS/data tools. When is it a good practice to replace **select** queries with stored procedures or table valued functions? In general, the SELECT queries are 100+ rows long, using CAST, ISNULL, CASE, REPLACE, COALESCE and joining 4 other tables.
It is usually considered good practice to use a stored procedure in place of a lengthy SELECT statement in the OLEDB data source. There are a few other disadvantages to inline SQL as well: 1. The SQL query editor in the OLEDB source has a limitation in parsing the query if it is too long or complex with many variables. 2. Limitation on the number of variables (especially on large queries). For example: a 200+ line query gets parsed properly with 2 variables, but if you introduce one more variable - it stops working. I had the Connect bug references somewhere, but they weren't fixed, and the workaround is to use a stored procedure or table valued function. In short: **if the SELECT statement is big/complex/changeable - use a stored procedure** -- While the question is valid in nature, this type of open question is usually discouraged in the Stack Overflow community.
Why hasn't anyone mentioned a `VIEW` yet? Just push your select statements into a database view. Then when there is a change you don't have to open up your package you can just change your view. Only use stored procedures or table valued functions if you have some procedural logic required.
SQL Server Integration Services (SSIS) - Replacing Select Statements with Stored Procedures or Table Valued Functions
[ "", "sql", "sql-server", "stored-procedures", "ssis", "" ]
I have the following query: ``` with abby as (SELECT u.Name as 'UserId1' , count(distinct b.id) as 'Total Count' , '' as 'ediCount' FROM abprot.[FC10y].[dbo].[Batch] b with(nolock) inner join abprot.[FC10y].[dbo].[Principal] u with(nolock) on u.Id = b.CreatorId where b.CreationDate >= getdate() - 7 and u.name <> 'abbyyservice' group by u.Name) , edimon as (select userId , '' as 'Total Count' , count(*) as 'esubCount' from ESubmitTrackingTBL where DateCopied >= getdate() - 7 and userid <> abbyyservice group by UserId ) select * from abby union all select * from edimon ``` I need to sum the totals from each cte into another field by user. I have tried to include another cte but get a warning that 'UserID' and 'Total Count' are specified more than once. If I just run the query as unions, the result is shown below: ![enter image description here](https://i.stack.imgur.com/60Hpc.png) End result should look like the following: ![enter image description here](https://i.stack.imgur.com/kmQcW.png) If I have left anything out that may help, my apologies -- please let me know what I can add to clarify this if need be.
I'd dispense with the CTEs and just use a derived table.

To avoid problems when query or stored procedure executions cross date boundaries, I prefer to use T-SQL variables and establish a consistent sense of *Now-ness* for the entire run. Crossing a midnight boundary and having "today" change in mid-run can cause ... subtle problems, if you're not careful about things. Don't ask me how I know this. So...

```
declare @now datetime = current_timestamp -- current instant in time declare
@today date = @now -- today's date
@cutoff_date date = dateadd(day,-7,@today) -- 1 week ago
```

Your `cutoff_date` value might vary depending on whether your spec requires looking at the previous 7 *calendar days* or the previous 168 hours (7\*24) relative to the current moment in time.

So my query would then look something like this:

```
declare @now datetime = current_timestamp -- current instant in time
declare @today date = @now -- today's date
declare @cutoff_date date = dateadd(day,-7,@today) -- 1 week ago

select user_id        = x.user_id
     , total_count    = sum( x.total_count )
     , esub_count     = sum( x.esub_count )
     , grand_total    = sum( x.total_count ) + sum( x.esub_count )
     , esub_count_pct = 100.0 * sum( x.esub_count ) / ( sum( x.total_count ) + sum( x.esub_count ) )
from ( select user_id     = u.Name
            , total_count = count( distinct b.id )
            , esub_count  = 0
       from abprot.FC10y.dbo.Batch b
       join abprot.FC10y.dbo.Principal u on u.Id = b.CreatorId
                                        and u.name <> 'abbyyservice'
       where b.CreationDate >= @cutoff_date
       group by u.Name
       UNION ALL
       select user_id     = t.userId
            , total_count = 0
            , esub_count  = 1
       from dbo.ESubmitTrackingTBL t
       where t.DateCopied >= @cutoff_date
     ) x
group by x.user_id
```
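The core trick here (each UNION ALL branch fills its own counter and zeroes the other, so the outer SUMs line up per user) can be sketched with Python's built-in sqlite3 as a portable stand-in; the user names and counts below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE counts (user_id TEXT, total_count INTEGER, esub_count INTEGER)")
# Two UNION ALL branches: each branch fills its own counter and
# zeroes the other, so the outer SUMs combine cleanly per user.
con.executemany("INSERT INTO counts VALUES (?, ?, ?)", [
    ("alice", 3, 0), ("bob", 5, 0),   # rows from the first source
    ("alice", 0, 2), ("bob", 0, 1),   # rows from the second source
])
result = {user: (total, esub, grand) for user, total, esub, grand in con.execute("""
    SELECT user_id,
           SUM(total_count),
           SUM(esub_count),
           SUM(total_count) + SUM(esub_count)
    FROM counts
    GROUP BY user_id
""")}
```

Each user ends up on one row with both counters and their grand total, which is exactly the shape the question's screenshot asks for.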
```
with abby as (
    SELECT u.Name as UserId, count(distinct b.id) as TotalCount, 0 as esubCount
    FROM abprot.[FC10y].[dbo].[Batch] b with(nolock)
    inner join abprot.[FC10y].[dbo].[Principal] u with(nolock) on u.Id = b.CreatorId
    where b.CreationDate >= getdate() - 7
      and u.name <> 'abbyyservice'
    group by u.Name
), edimon as (
    select userId as UserId, 0 as TotalCount, count(*) as esubCount
    from ESubmitTrackingTBL
    where DateCopied >= getdate() - 7
      and userid <> 'abbyyservice'
    group by UserId
)
select UserId,
       sum(TotalCount) as TotalCount,
       sum(esubCount) as esubCount,
       convert(decimal, sum(esubCount))/(convert(decimal, sum(TotalCount)) + convert(decimal, sum(esubCount))) as percentesubCount
from (select * from abby
      union all
      select * from edimon) x
group by UserId
```
adding two columns together to get a sum from a union query
[ "", "sql", "sql-server", "t-sql", "" ]
I want to create an SQL view based on the sum of some columns in another table. This is quite easy, however, I want the new view to contain *multiple* different sums based on the data in the other table: ``` Table1: ID: Integer (PK, Autoenumerated) CompanyID: Integer (Not the PK!) Amount: Integer ``` Each CompanyID can have multiple different Amounts in the table, i want to sum all of these amounts. ``` View1: TotalAmount: Integer CompanyID: Integer ``` The CompanyID in the view should match the CompanyID as in Table1, but the TotalAmount should be the sum of all of the amounts found for that CompanyID in Table1. Also any advice for simply getting better at queries? This one seems kind of simple now that I see it, but of course I could not envision it originally.
This query would sum per company. Is this what you want in your view?

```
CREATE VIEW SumsView
AS
SELECT CompanyID
     , SUM(Amount) TotalAmount
FROM Table1
GROUP BY CompanyID
```

In the question you mention that you want the `ID` column to be included in the view, but how do you relate the ID to the aggregated sum? The only way I can think of is that you care to duplicate the sums, like this:

```
CREATE VIEW SumsView
AS
SELECT T.ID
     , T.CompanyID
     , A.TotalAmount
FROM Table1 T
INNER JOIN (SELECT CompanyID
                 , SUM(Amount) TotalAmount
            FROM Table1
            GROUP BY CompanyID) A ON T.CompanyID = A.CompanyID
```
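The first view is straightforward to verify. A minimal runnable check using Python's sqlite3 (sample amounts are invented; SQLite supports the same `CREATE VIEW ... GROUP BY` shape):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (ID INTEGER PRIMARY KEY AUTOINCREMENT, CompanyID INTEGER, Amount INTEGER);
    INSERT INTO Table1 (CompanyID, Amount) VALUES (1, 10), (1, 15), (2, 7);
    -- the view aggregates one row per company
    CREATE VIEW SumsView AS
        SELECT CompanyID, SUM(Amount) AS TotalAmount
        FROM Table1
        GROUP BY CompanyID;
""")
totals = dict(con.execute("SELECT CompanyID, TotalAmount FROM SumsView"))
```

Querying the view gives one row per `CompanyID` with the summed amount, independent of how many detail rows the base table holds.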
``` create view V_AmountByCompany as select CompanyID, sum(Amount) as sumAmount from YourTable group by CompanyID ``` It wouldn't make much sense to get the Id by the way, as you have many Id by companyID : so which one would be kept ? If you really want it, you could choose `MIN(ID)` or `MAX(ID)` but once again, what for ?
Creating an SQL View from the sum of certain values in another table
[ "", "sql", "view", "" ]
Is it possible to populate a second table when I insert into the first table? Insert a post into table1 -> a table2 column receives the table1 post's unique id. Here is what I have so far; am I on the right track?

```
CONSTRAINT [FK_dbo.Statistics_dbo.News_News_NewsID] FOREIGN KEY ([News_NewsID]) REFERENCES [dbo].[News] ([NewsID])
```
Lots of ways: 1. an `insert` trigger 2. read `SCOPE_IDENTITY()` after the first `insert`, and use it to do a second 3. use the `output` clause to do an insert Examples: 1: ``` create trigger Foo_Insert on Foo after insert as begin set nocount on insert Bar(fooid) select id from inserted end go insert Foo (Name) values ('abc'); ``` 2: ``` insert Foo (Name) values ('abc'); declare @id int = SCOPE_IDENTITY(); insert Bar(fooid) select @id ``` 3: ``` insert Bar(fooid) select id from ( insert Foo (Name) output inserted.id values ('abc')) x ```
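Options 1 and 2 can be sketched end to end with Python's sqlite3 as a stand-in (table names follow the examples above; SQLite's trigger syntax uses `NEW` where T-SQL uses the `inserted` pseudo-table, and `Cursor.lastrowid` plays the role of `SCOPE_IDENTITY()`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Foo (id INTEGER PRIMARY KEY AUTOINCREMENT, Name TEXT);
    CREATE TABLE Bar (fooid INTEGER REFERENCES Foo(id));

    -- option 1: an AFTER INSERT trigger (SQLite spelling; T-SQL reads
    -- the new rows from the 'inserted' pseudo-table instead of NEW)
    CREATE TRIGGER Foo_Insert AFTER INSERT ON Foo
    BEGIN
        INSERT INTO Bar (fooid) VALUES (NEW.id);
    END;
""")
con.execute("INSERT INTO Foo (Name) VALUES ('abc')")   # trigger copies id 1 into Bar
con.execute("DROP TRIGGER Foo_Insert")                 # so the next demo stands alone

# option 2: read the generated key back yourself and do a second insert
cur = con.execute("INSERT INTO Foo (Name) VALUES ('def')")
con.execute("INSERT INTO Bar (fooid) VALUES (?)", (cur.lastrowid,))

bar_rows = [r[0] for r in con.execute("SELECT fooid FROM Bar ORDER BY fooid")]
```

Either way, `Bar` ends up holding the generated keys of the rows inserted into `Foo`.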
The only thing I can think of is that you can use a trigger to accomplish this. There is nothing "built in" to SQL Server that would do it. Why not just do it from your .NET code?
Inserting to one table, insert the ID to second table
[ "", "sql", "sql-server", "" ]
Say I have the following:

```
Select OrderID = Case OrderID
When 1 Then 'Customer1' 
When 2 Then 'Customer2' 
When 3 Then 'Customer2' 
Else 'Unknown Customer' 
End
From OrdersPlaced 
```

Is it possible to add an OR and do something along the lines of:

```
Select OrderID = Case OrderID
When 1 Then 'Customer1' 
When 2 Or 3 Then 'Customer2' 
Else 'Unknown Customer' 
End
From OrdersPlaced 
```
There are two forms of [`CASE`](http://msdn.microsoft.com/en-us/library/ms181765.aspx) expression, 'searched' and 'simple'. You can't use an `OR` with a 'simple' `CASE` expression, but you can with the 'searched' form: ``` Case When OrderID = 1 Then 'Customer1' When OrderID = 2 Or OrderID = 3 Then 'Customer2' Else 'Unknown Customer' End ``` Or even ``` Case When OrderID = 1 Then 'Customer1' When OrderID IN (2, 3) Then 'Customer2' Else 'Unknown Customer' End ```
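The searched form is portable, so it can be demonstrated with Python's sqlite3 (sample `OrderID` values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE OrdersPlaced (OrderID INTEGER)")
con.executemany("INSERT INTO OrdersPlaced VALUES (?)", [(1,), (2,), (3,), (9,)])

# searched CASE: each WHEN carries its own full boolean condition,
# so IN (or OR) is allowed
labels = [row[0] for row in con.execute("""
    SELECT CASE
               WHEN OrderID = 1       THEN 'Customer1'
               WHEN OrderID IN (2, 3) THEN 'Customer2'
               ELSE 'Unknown Customer'
           END
    FROM OrdersPlaced
    ORDER BY OrderID
""")]
```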
You can use alternative form of CASE ``` Select OrderID = Case When OrderID = 1 Then 'Customer1' When OrderID = 2 Or OrderID = 3 Then 'Customer2' Else 'Unknown Customer' End From OrdersPlaced ```
SQL CASE statement with OR?
[ "", "sql", "sql-server-2008", "" ]
When I run the following SQL statement in SQL Server Management Studio it returns a count of 2 ``` SELECT COUNT(*) FROM Daisy_Copy2 WHERE ChargeCode = '1'; ``` But for some reason when I run the following VB.net code the `result` variable returns a 0 and doesn't identify that duplicate codes exist. ``` Dim result As Integer Using cmdb = New SqlCommand("SELECT COUNT(*) FROM Daisy_Copy2 WHERE ChargeCode = '1'", conn) Int(result = cmdb.ExecuteScalar()) If result > 1 Then MessageBox.Show("Duplicate Codes Exist!", "Billing", _ MessageBoxButtons.OK, MessageBoxIcon.Information) Else MsgBox(result) End If End Using ``` Can anyone help me understand why? Any help greatly appreciated.
`Int(result = cmdb.ExecuteScalar())` never assigns anything to `result`: inside the `Int(...)` call, `result = cmdb.ExecuteScalar()` is evaluated as an equality *comparison* (a `Boolean`), so `result` keeps its default value of 0. Assign the return value of [ExecuteScalar](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executescalar%28v=vs.110%29.aspx) directly instead:

```
Dim result As Integer = CInt(cmdb.ExecuteScalar())
```
[ExecuteNonQuery](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executenonquery%28v=vs.110%29.aspx) is normally used for updates or inserts that don't leave a result, so it returns an integer telling you how many rows were affected, not the result itself. What you most likely are meaning to use is [ExecuteScalar](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executescalar%28v=vs.110%29.aspx) which returns the first column of the first row in the result set returned by the query, in this case the integer containing your count.
SQL variable returning 0
[ "", "sql", "sql-server", "vb.net", "" ]
I have three tables, user/team/user\_team(many-to-many) ``` user ----------- id name 1 Tom 2 Jerry 3 John team ------------ id name 1 t1 2 t2 3 t3 user_team --------------------- userid teamid isdeleted 1 t1 0 <----(0 means not deleted record, which can be searched out) 2 t2 1 <----(1 means deleted record, which can not be searched out) ``` I want to get all team records information with associated user information like below ``` -------------- tid tname username 1 t1 Tom 2 t2 3 t3 ``` Can you tell me how to write the sql statement? Sorry for my mistake. I've updated my question by adding one more record t3 in team table.
Try this:

```
select a.id as tid
     , a.name as tname
     , b.name as username
from team a
LEFT JOIN user_team c on a.name = c.teamid and c.isdeleted = 0
LEFT JOIN user b on b.id = c.userid
```
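The key point is that `isdeleted = 0` lives in the `ON` clause rather than `WHERE`, so every team row survives and only the user match is suppressed for deleted links. A runnable check with Python's sqlite3, using the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE user (id INTEGER, name TEXT);
    CREATE TABLE team (id INTEGER, name TEXT);
    CREATE TABLE user_team (userid INTEGER, teamid TEXT, isdeleted INTEGER);
    INSERT INTO user VALUES (1, 'Tom'), (2, 'Jerry'), (3, 'John');
    INSERT INTO team VALUES (1, 't1'), (2, 't2'), (3, 't3');
    INSERT INTO user_team VALUES (1, 't1', 0), (2, 't2', 1);
""")
# filter in ON: deleted links lose their user, teams are all kept
rows = list(con.execute("""
    SELECT t.id, t.name, u.name
    FROM team t
    LEFT JOIN user_team ut ON t.name = ut.teamid AND ut.isdeleted = 0
    LEFT JOIN user u ON u.id = ut.userid
    ORDER BY t.id
"""))
```

Moving `ut.isdeleted = 0` into `WHERE` would instead drop the t2 and t3 rows entirely.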
Give it a try. See a demo fiddle here <http://sqlfiddle.com/#!2/d918a/2> ``` select t.id as tid, t.name as tname, case when ut.isdeleted = 0 then u.name else '' end as username from team t left join user_team ut on t.name = ut.teamid left join user u on ut.userid = u.id; ``` Which will result in ![enter image description here](https://i.stack.imgur.com/NFIIG.png)
How to get another table field information
[ "", "mysql", "sql", "" ]
I need to make a query but get the value in every field empty. Gordon Linoff gave me the clue to this need here: [SQL Empty query results](https://stackoverflow.com/questions/23740036/sql-empty-query-results/23740215#23740215) which is:

```
select t.*
from (select 1 as val
     ) v left outer join
     table t
     on 1 = 0;
```

This query works perfectly on PostgreSQL but gets an error when trying to execute it in Microsoft Access; it says that the 1 = 0 expression is not admitted. How could it be fixed to work on Microsoft Access?

Regards,
If the table has a numeric primary key column whose values are non-negative then the following query will work in Access. The primary key field is [ID]. ``` SELECT t2.* FROM myTable AS t2 RIGHT JOIN ( SELECT TOP 1 (ID * -1) AS badID FROM myTable AS t1 ) AS rowStubs ON t2.ID = rowStubs.badID ``` This was tested with Access 2010.
I am offering this answer here, even though you didn't think it worked in my edit to your original question. What is the problem?

```
select t.*
from (select max(col) as maxval
      from table as t
     ) as v left join
     table as t
     on v.maxval < t.col;
```
SELECT query to return a row from a table with all values set to Null
[ "", "sql", "ms-access", "expression", "left-join", "" ]
Using Oracle SQL, how can I replace certain numbers with an 'X'. For example, if I have a random mobile number: `0400 100 200 or 0400100200` I would like to mask out the number to: ``` 0400 XXX XXX and 0400XXXXXX ``` I have tried using TRANSLATE but unsure if this is the best approach.
You can use [REGEXP\_REPLACE](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions130.htm), e.g.

```
SELECT REGEXP_REPLACE(SUBSTR(PhoneNumber, 5), '[0-9]', 'X')
```

will replace all digits after the 4th character with `X`, so a full example would be:

```
SELECT SUBSTR(PhoneNumber, 1, 4) || REGEXP_REPLACE(SUBSTR(PhoneNumber, 5), '[0-9]', 'X') AS Masked
FROM T;
```

**[Example on SQL Fiddle](http://sqlfiddle.com/#!4/3bbde/3)**

As has been pointed out in a comment, you can also use `TRANSLATE` as follows:

```
SELECT SUBSTR(PhoneNumber, 1, 4) || TRANSLATE(SUBSTR(PhoneNumber, 5), '0123456789', 'XXXXXXXXXX') AS Masked
FROM T;
```

I have very little practical experience with Oracle so can't even guess at which one would perform better; the commenter (Alex Poole) does however have a score of 2.3k in Oracle compared to my 53, so if he is suggesting `TRANSLATE` will run faster, I would not argue. I'd suggest trying both and picking the one that works fastest on your set of data.
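The same masking logic, sketched in Python with `re.sub` playing the role of `REGEXP_REPLACE` (sample numbers are the ones from the question; digits are replaced, separators like spaces pass through untouched):

```python
import re

def mask_after_prefix(phone, keep=4):
    # keep the first `keep` characters; replace every later digit
    # with 'X', leaving spaces and other separators intact
    return phone[:keep] + re.sub(r"[0-9]", "X", phone[keep:])

masked_spaced = mask_after_prefix("0400 100 200")
masked_plain = mask_after_prefix("0400100200")
```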
Sample mobile number: > 1234567890 My output: > 12xxxxxx90 ``` SELECT CONVERT(VARCHAR(20), SUBSTRING(CONVERT(VARCHAR(20),x.MobileNo),1,2)) + 'xxxxx' + CONVERT(VARCHAR(20), SUBSTRING(CONVERT(VARCHAR(20),x.MobileNo),LEN(x.MobileNo) - 1, LEN(x.MobileNo))) AS MobileNo FROM TABLENAME x ```
How to mask a mobile phone number portion with 'X'
[ "", "sql", "oracle11g", "" ]
I want to merge adjacent repeated rows into one , for example , I have a table demo with two columns , ``` data | order ------------- A | 1 A | 2 B | 3 B | 4 A | 5 ``` I want the result to be : ``` A B A ``` How to achieve this by one select SQL query in oracle ?
please, try something like this ``` select * from table t1 where not exists(select * from table t2 where t2.order = t1.order - 1 and t1.data = t2.data) ```
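The approach keeps a row only when the row at `order - 1` does not repeat its value, which assumes the `order` column is gapless and consecutive, as in the question's sample. A runnable check with Python's sqlite3 (note `order` must be quoted, since it is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE demo (data TEXT, "order" INTEGER)')
con.executemany("INSERT INTO demo VALUES (?, ?)",
                [("A", 1), ("A", 2), ("B", 3), ("B", 4), ("A", 5)])
# drop a row when its immediate predecessor carries the same value
kept = [row[0] for row in con.execute("""
    SELECT t1.data
    FROM demo t1
    WHERE NOT EXISTS (
        SELECT 1 FROM demo t2
        WHERE t2."order" = t1."order" - 1
          AND t2.data = t1.data
    )
    ORDER BY t1."order"
""")]
```

This collapses the adjacent repeats while still keeping the second run of A, exactly as asked.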
The answer suggested by Dmitry above works in SQL Server; to make it work in Oracle you need to make some modifications. `order` is a reserved keyword, so you need to escape it as follows.

```
select * from Table1 t1
where not exists(
  select * from Table1 t2 
  where t2."order" = t1."order" - 1 and t1."data" = t2."data"
)
order by "order"
```

Working Fiddle at <http://sqlfiddle.com/#!4/cc816/3>
merge adjacent repeated rows into one
[ "", "sql", "oracle", "" ]
The answer escapes me...maybe because it is not possible... Example that works... ``` SELECT * FROM TABLEA WHERE FIELD1 IN ('aaa','bbb','ccc') ``` Example that does not work... Attempt to leverage variable so that I can define the values once in a string of statements ``` DECLARE @ListValues VARCHAR(50) SET @ListValues = '''aaa'',''bbb'',''ccc''' SELECT * FROM TABLEA WHERE FIELD1 IN (@ListValues) ``` This is is obviously only a small part of the equation and for other reasons... I cannot leverage a table for the values and change this to a true sub-query The closest question I could find was this one... but does not cover my requirements obviously... [Storing single quotes in varchar variable SQL Server 2008](https://stackoverflow.com/questions/7745645/storing-single-quotes-in-varchar-variable-sql-server-2008) Thanks in advance.
It doesn't work because the `IN` operator expects a **list** of items - here strings. What you're supplying with your `@ListValues` variable however is a **single** string - not a list of strings. What you could do is use a table variable and store your values in it: ``` DECLARE @ListOfValues TABLE (ItemName VARCHAR(50)) INSERT INTO @ListOfValues(ItemName) VALUES('aaa'), ('bbb'), ('ccc') SELECT * FROM TABLEA WHERE FIELD1 IN (SELECT ItemName FROM @ListOfValues) ```
You can do this using dynamic SQL: ``` DECLARE @ListValues VARCHAR(MAX) ,@SQL VARCHAR(MAX) SELECT @ListValues = '''aaa'',''bbb'',''ccc''' ,@SQL = 'SELECT * FROM TABLEA WHERE FIELD1 IN ('+@ListValues+')' EXEC (@SQL) ```
Using a string of quoted values in a variable for a SQL WHERE CLAUSE
[ "", "sql", "sql-server", "variables", "where-clause", "quotes", "" ]
I have a table of cars where each car belongs to a company. In another table I have a list of company locations by city. I want to select all cars from the cars table whose company has locations in all cities passed into the stored procedure, and otherwise exclude those cars altogether, even if they fall short by only one city. So, I've tried something like:

```
select id, cartype from cars 
where companyid in 
( 
    select id from locations where cityid in 
    ( 
        select id from cities 
    ) 
)
```

This doesn't work, as it obviously satisfies the condition if ANY of the cities are in the list, not all of them. It sounds like a group by count, but I can't make it work with what I tried. I'm using MS SQL 2005.
One example: ``` select id, cartype from cars c where ( select count(1) from cities where id in (...)) = ( select count(distinct cityid) from locations where c.companyid = locations.id and cityid in (...) ) ```
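This "all or nothing" check (relational division) compares the count of the company's matching cities against the count of requested cities. A minimal runnable sketch with Python's sqlite3, using invented companies and city ids:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cars (id INTEGER, cartype TEXT, companyid INTEGER);
    CREATE TABLE locations (companyid INTEGER, cityid INTEGER);
    INSERT INTO cars VALUES (1, 'sedan', 10), (2, 'truck', 20);
    INSERT INTO locations VALUES (10, 100), (10, 200), (20, 100);
""")
wanted = (100, 200)   # company 10 covers both cities, company 20 only one
matching = [r[0] for r in con.execute("""
    SELECT c.id
    FROM cars c
    WHERE (SELECT COUNT(DISTINCT l.cityid)
           FROM locations l
           WHERE l.companyid = c.companyid
             AND l.cityid IN (?, ?)) = ?
""", (*wanted, len(wanted)))]
```

Only the car whose company covers every requested city survives; a company that misses even one city is excluded entirely.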
Maybe try counting all the cities, and then select the car if the company has the same number of distinct location cities as there are total cities.

```
SELECT id, cartype 
FROM cars 
WHERE 
    --Subquery to find the number of cities with locations belonging to the car's company
    (SELECT count(distinct cities.id) 
     FROM cities 
     INNER JOIN locations on locations.cityid = cities.id
     WHERE locations.companyId = cars.companyId)
    =
    --Subquery to find the total number of cities
    (SELECT count(distinct cities.id) 
     FROM cities)
```

I haven't tested this, and it may not be the most efficient query, but I think this might work.
sql query to select matching rows for all or nothing criteria
[ "", "sql", "sql-server-2005", "" ]
Here is a rough schema: ``` create table images ( image_id serial primary key, user_id int references users(user_id), date_created timestamp with time zone ); create table images_tags ( images_tag_id serial primary key, image_id int references images(image_id), tag_id int references tags(tag_id) ); ``` The output should look like this: ``` {"images":[ {"image_id":1, "tag_ids":[1, 2, 3]}, .... ]} ``` The user is allowed to filter images based on user ID, tags, and offset `image_id`. For instance, someone can say `"user_id":1, "tags":[1, 2], "offset_image_id":500`, which will give them all images that are from `user_id` 1, have both tags 1 AND 2, and an `image_id` of 500 or less. The tricky part is the "have both tags 1 AND 2". It is more straight-forward (and faster) to return all images that have either 1, 2, or both. I don't see any way around this other than aggregating, but it is much slower. Any help doing this quickly? Here is the current query I am using which is pretty slow: ``` select * from ( select i.*,u.handle,array_agg(t.tag_id) as tag_ids, array_agg(tag.name) as tag_names from ( select i.image_id, i.user_id, i.description, i.url, i.date_created from images i where (?=-1 or i.user_id=?) and (?=-1 or i.image_id <= ?) and exists( select 1 from image_tags t where t.image_id=i.image_id and (?=-1 or user_id=?) and (?=-1 or t.tag_id in (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)) ) order by i.image_id desc ) i left join image_tags t on t.image_id=i.image_id left join tag using (tag_id) --not totally necessary left join users u on i.user_id=u.user_id --not totally necessary group by i.image_id,i.user_id,i.description,i.url,i.date_created,u.handle) sub where (?=-1 or sub.tag_ids @> ?) limit 100; ```
When the execution plan of this statement is determined, at prepare time, the PostgresSQL planner doesn't know which of these `?=-1` conditions will be true or not. So it has to produce a plan to maybe filter on a specific `user_id`, or maybe not, and maybe filter on a range on `image_id` or maybe not, and maybe filter on a specific set of `tag_id`, or maybe not. It's likely to be a dumb, unoptimized plan, that can't take advantage of indexes. While your current strategy of a big generic query that covers all cases is OK for correctness, for performance you might need to abandon it in favor or generating the minimal query given the parametrized conditions that are actually filled in. In such a generated query, the `?=-1 or ...` will disappear, only the joins that are actually needed will be present, and the dubious `t.tag_id in (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)` will go or be reduced to what's strictly necessary. If it's still slow given certain sets of parameters, then you'll have a much easier starting point to optimize on. --- As for the gist of the question, testing the exact match on all tags, you might want to try the idiomatic form in an inner subquery: ``` SELECT image_id FROM image_tags WHERE tag_id in (?,?,...) GROUP BY image_id HAVING count(*)=? ``` where the last `?` is the number of tags passed as parameters. (and completely remove `sub.tag_ids @> ?` as an outer condition).
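The `GROUP BY ... HAVING COUNT(*) = n` idiom at the end can be checked with Python's sqlite3 (made-up image and tag ids; the `COUNT(*)` trick assumes `(image_id, tag_id)` pairs are unique, otherwise use `COUNT(DISTINCT tag_id)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE image_tags (image_id INTEGER, tag_id INTEGER)")
con.executemany("INSERT INTO image_tags VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 1), (3, 1), (3, 2), (3, 3)])
wanted = (1, 2)
# an image qualifies when it carries every wanted tag: the IN filter
# keeps only wanted tags, and HAVING requires all of them to be present
matching = [r[0] for r in con.execute("""
    SELECT image_id
    FROM image_tags
    WHERE tag_id IN (?, ?)
    GROUP BY image_id
    HAVING COUNT(*) = ?
    ORDER BY image_id
""", (*wanted, len(wanted)))]
```

Image 2 (only one of the wanted tags) is excluded; image 3 qualifies even though it carries an extra tag, matching the "at least all of these tags" semantics.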
Among other things, your `GROUP BY` clause is likely wider than any of your indices (and/or includes columns in unlikely combinations). I'd probably re-write your query as follows (turning @Daniel's subquery for the tags into a CTE): ``` WITH Tagged_Images AS (SELECT Image_Tags.image_id, ARRAY_AGG(Tag.tag_id) as tag_ids, ARRAY_AGG(Tag.name) as tag_names FROM Image_Tags JOIN Tag ON Tag.tag_id = Image_Tags.tag_id WHERE tag_id IN (?, ?) GROUP BY image_id HAVING COUNT(*) = ?) SELECT Images.image_id, Images.user_id, Images.description, Images.url, Images.date_created, Tagged_Images.tag_ids, Tagged_Images.tag_names, Users.handle FROM Images JOIN Tagged_Images ON Tagged_Images.image_id = Images.image_id LEFT JOIN Users ON Users.user_id = Images.user_id WHERE Images.user_id = ? AND Images.date_created < ? ORDER BY Images.date_created, Images.image_id LIMIT 100 ``` (Untested - no provided dataset. Note that I'm assuming you're building the criteria dynamically, to avoid condition flags) Here's some other stuff: * Note that `Tagged_Images` will have *at minimum* the indicated tags, but might have more. If you want images with **only** those tags (exactly 2, no more, no less), an additional level needs to be added to the CTE. * There are a number of examples floating around of stored procs that turn comma-separated lists into virtual tables (heck, I've done it with recursive CTEs), which you could use for the `IN()` clause. It doesn't matter that much here, though, due to needing dynamic SQL anyways... * Assuming that `Images.image_id` is auto-generated, doing range searches or ordering by it is largely pointless. There are relatively few cases where **humans** care about the value held here. Except in cases where you're searching for one specific row (for updating/deleting/whatever), *conceptual data sets* don't really care either; the value itself is largely meaningless. What does `image_id < 500` actually tell me? Nothing - just that a given number was assigned to it. 
Are you using it to restrict based on "early" versus "late" images? Then use the proper data for that, which would be `date_created`. For pagination? Well, you have to do that *after* all the other conditions, or you get weird page lengths (like `0` in some cases). Generated keys should be relied on for one property only: uniqueness. This is the reason I stuck it at the end of the `ORDER BY` - to ensure a consistent ordering. Assuming that `date_created` has a high enough resolution as a timestamp, even this is unnecessary. * I'm fairly certain your `LEFT JOIN` to `Users` should probably be a regular `(INNER) JOIN`, but you didn't provide enough information for me to be sure.
Only return rows that match all criteria
[ "", "sql", "postgresql", "" ]
I have MySQL `5.6.17_1` and have a query that was used on `5.1.xx`:

```
select schtermid,idfptnsubid,d.idfptnid,idflinksetid, cast(count(distinct cthr) as real)/cast(count(distinct s.schid) as real)*100 as ctr, cast(sum(status) as real)/cast(count(*) as real)*100 as pfiled, count(distinct s.schid) as schcount from source.kdm_session as ``` This complains: > You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ``` 'real)/cast(count(distinct s.schid) as real)*100 as ctr, cast(sum(status) as' at line 3 ``` Which part is wrong? What should I change to get the same thing to happen?
As [per the docs](http://dev.mysql.com/doc/refman/5.0/en/cast-functions.html#function_cast), `real` is not a valid type: > The type for the result can be one of the following values: > > ``` > BINARY[(N)] > CHAR[(N)] > DATE > DATETIME > DECIMAL[(M[,D])] > SIGNED [INTEGER] > TIME > UNSIGNED [INTEGER] > ```
Try this for your calcs: ``` (count(distinct cthr) * 1.0)/(count(distinct s.schid) * 1.0)*100 as ctr, (sum(status) * 1.0)/(count(*) * 1.0)*100 as pfiled, ```
MySQL Syntax error for 5.6.17_1
[ "", "mysql", "sql", "" ]
I think I will be asking a very confusing question so I hope I can make this very clear for you guys, so, thanks in advance, and I'm sorry if this is a bit long because I want to make sure that I am being clear enough. I have two tables (in SQL 2005) which are kind of dependent to each other (let's name them TableA and TableB.) They have all the same columns, meaning, all TableA's columns are also TableB's columns, except that TableA has two more columns than TableB (it will be explained later why). In the 'saving' process of my application (created using VB.Net), those records that has values for column1 and column2 gets inserted into TableA. But if there is no any input for column1 and column2, the record gets saved in Table2 (this is why TableA has two more columns than TableB: the records with column1 and column2 gets into TableA while those that don't have gets into TableB.) Both tables have an identity primary key (auto-increment +1, starts at 1000), which we'll call KeyID\_A for TableA and KeyID\_B for TableB, and if you can already imagine the scenario, their key IDs have the same values (TableA has a row where the KeyID\_A is '1000', and TableB also has a row (a different one) where KeyID\_B is '1000'). In the application, the form (where the records being inserted in TableA/TableB are inputted) is auto-numbered (I set this using `SELECT nextID=MAX(KeyID_A)+1 FROM TableA`, yes, only from TableA, and that leads to my problem). If I saved a record with values for column1 and column2, the record gets inserted in TableA, with KeyID\_A's value as the key. But what if I don't have values for column1 and column2, and the record gets saved in TableB with KeyID\_A's value as the key, how am I going to get the next ID (note: `SELECT nextID=MAX(KeyID_A)+1 FROM TableA`) if the supposed to be next ID for TableA is already used in TableB? 
I would want to get the next ID based on the existing records from both TableA and TableB, and not only based on TableA (because I thought, what if I have already used TableA's next ID for TableB (let's say '1001'), and I want to save another record into TableB, but the auto-numbering that appears is still '1001' because it is based only on TableA's KeyID\_A). I am really confused on how to do this, if this is even possible. I hope I made my question clear enough.
Normally you should not do it the way you do. Either make it right, which is:

Solution 1:

```
TableA
Id : int PK (identity)
col1
col2
...

TableB
Id : int PK, FK on TableA.Id
col8
col9
```

and create a view to make things simpler for you.

Solution 2: create table TableB with nullable col8 and col9, and a view called TableA if needed.

Or the wrong way:

```
TableIdForAandB
Id : int PK identity

TableA
Id : int PK, FK on TableIdForAandB.Id
col1
col2
...

TableB
Id : int PK, FK on TableIdForAandB.Id
col1
col2
...
col8
col9
```

in which case you have to insert a row into table TableIdForAandB and then insert a row into TableA or TableB, setting the Id to the new Id from TableIdForAandB.
Think about using ONE sequencer for both tables or if you don't have one think about using always ``` nextID = MAX(MAX(KeyID_A), MAX(KeyID_B)) + 1 ``` in addition your IDs will be unique (very much better :-)
Set auto-increment of primary ID manually or depending on another table
[ "", "sql", "vb.net", "" ]
I want to get every record from my MySQL database which is greater than today. Sample: ``` "Go to Lunch","2014-05-08 12-00-00" "Go to Bed","2014-05-08 23-00-00" ``` Output should only: ``` "Go to Bed","2014-05-08 23-00-00" ``` I use the DateTime for the Date Column Already searched: * [MySQL Where date is greater than one month?](https://stackoverflow.com/questions/4237594/mysql-where-date-is-greater-than-one-month) * [Datetime equal or greater than today in MySQL](https://stackoverflow.com/questions/5182275/datetime-equal-or-greater-than-today-in-mysql) But this does not work for me. QUERY(FOR PHP): ``` SELECT `name`,`date` FROM `tasks` WHERE `tasks`.`datum` >= DATE(NOW()) ``` OR (FOR PhpMyAdmin) ``` SELECT `name`,`date` FROM `tasks` WHERE `tasks`.`datum` >= 2014-05-18 15-00-00; ``` How can I write the working query?
Remove the `date()` part ``` SELECT name, datum FROM tasks WHERE datum >= NOW() ``` and if you use a specific date, don't forget the quotes around it and use the proper format with `:` ``` SELECT name, datum FROM tasks WHERE datum >= '2014-05-18 15:00:00' ```
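A runnable version of the idea, using Python's sqlite3 as a stand-in (the rows mirror the question's sample; ISO-8601 text compares in chronological order, which is how this sketch emulates MySQL's `datum >= NOW()`, and the cutoff instant is pinned instead of taken from the clock so the result is reproducible):

```python
import sqlite3
from datetime import datetime, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tasks (name TEXT, datum TEXT)")
now = datetime(2014, 5, 18, 15, 0, 0)   # stand-in for NOW()
con.executemany("INSERT INTO tasks VALUES (?, ?)", [
    ("Go to Lunch", (now - timedelta(hours=3)).isoformat(" ")),
    ("Go to Bed",   (now + timedelta(hours=8)).isoformat(" ")),
])
# keep only rows at or after the cutoff instant
upcoming = [r[0] for r in con.execute(
    "SELECT name FROM tasks WHERE datum >= ?", (now.isoformat(" "),))]
```

Only the future row survives the filter, matching the expected output in the question.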
I guess you are looking for `CURDATE()` or `NOW()`:

```
SELECT name, datum FROM tasks 
WHERE datum >= CURDATE() 
```

Look at the results of `NOW()` and `CURDATE()`:

```
NOW()                   CURDATE()
2008-11-11 12:45:34     2008-11-11
```
MySQL Where DateTime is greater than today
[ "", "mysql", "sql", "date", "" ]
What is best way to check if value is null or empty string in Postgres sql statements? Value can be long expression so it is preferable that it is written only once in check. Currently I'm using: ``` coalesce( trim(stringexpression),'')='' ``` But it looks a bit ugly. `stringexpression` may be `char(n)` column or expression containing `char(n)` columns with trailing spaces. What is best way?
The expression `stringexpression = ''` yields: `true`   .. for `''` (or for *any* string consisting of only spaces with the data type `char(n)`) `null`   .. for `null` `false` .. for anything else ### *"`stringexpression` is either null or empty"* To check for this, use: ``` (stringexpression = '') IS NOT FALSE ``` Or the reverse approach (may be easier to read): ``` (stringexpression <> '') IS NOT TRUE ``` Works for any [character type](https://www.postgresql.org/docs/current/datatype-character.html) including `char(n)`. [The manual about comparison operators.](https://www.postgresql.org/docs/current/functions-comparison.html) **Or** use your original expression without [`trim()`](https://www.postgresql.org/docs/current/functions-string.html), which would be costly noise for `char(n)` (see below), or incorrect for other character types: strings consisting of only spaces would pass as empty string. ``` coalesce(stringexpression, '') = '' ``` But the expressions at the top are faster. ### *"`stringexpression` is neither null nor empty"* Asserting the opposite is simpler: ``` stringexpression <> '' ``` Either way, document your exact intention in an added comment if there is room for ambiguity. ## About [`char(n)`](https://www.postgresql.org/docs/current/datatype-character.html) The data type `char(n)` is short for `character(n)`. `char` / `character` are short for `char(1)` / `character(1)`. `bpchar` is an internal alias of `character`. (Think "**b**lank-**p**added **char**acter".) This data type is supported for historical reasons and for compatibility with the SQL standard, but its use is [**discouraged in Postgres**](https://www.postgresql.org/docs/current/datatype-character.html): > In most situations `text` or `character varying` should be used instead. Do not confuse `char(n)` with other, useful, character types [`varchar(n)`, `varchar`, `text` or `"char"`](https://www.postgresql.org/docs/current/datatype-character.html) (with double-quotes). 
In `char(n)` an *empty string* is not different from any other string consisting of only spaces. All of these are folded to *n* spaces in `char(n)` per definition of the type. It follows logically that the above expressions work for `char(n)` as well - just as much as these (which wouldn't work for other character types):

```
coalesce(stringexpression, ' ') = ' '
coalesce(stringexpression, '') = ' '
```

### Demo

Empty string equals any string of spaces when cast to `char(n)`:

```
SELECT ''::char(5) = ''::char(5)   AS eq1
     , ''::char(5) = ' '::char(5)  AS eq2
     , ''::char(5) = '  '::char(5) AS eq3;
```

Result:

```
eq1 | eq2 | eq3
----+-----+----
t   | t   | t
```

Test for "null or empty string" with `char(n)`:

```
SELECT stringexpression
     , stringexpression = ''                 AS base_test
     , (stringexpression = '')  IS NOT FALSE AS test1
     , (stringexpression <> '') IS NOT TRUE  AS test2
     , coalesce(stringexpression, '') = ''   AS coalesce1
     , coalesce(stringexpression, ' ') = ' ' AS coalesce2
     , coalesce(stringexpression, '') = ' '  AS coalesce3
FROM  (
   VALUES
     ('foo'::char(5))
   , ('')
   , ('   ')  -- not different from '' in char(n)
   , (null)
   ) sub(stringexpression);
```

Result:

```
 stringexpression | base_test | test1 | test2 | coalesce1 | coalesce2 | coalesce3
------------------+-----------+-------+-------+-----------+-----------+-----------
 foo              | f         | f     | f     | f         | f         | f
                  | t         | t     | t     | t         | t         | t
                  | t         | t     | t     | t         | t         | t
 null             | null      | t     | t     | t         | t         | t
```

Test for "null or empty string" with `text`:

```
SELECT stringexpression
     , stringexpression = ''                 AS base_test
     , (stringexpression = '')  IS NOT FALSE AS test1
     , (stringexpression <> '') IS NOT TRUE  AS test2
     , coalesce(stringexpression, '') = ''   AS coalesce1
     , coalesce(stringexpression, ' ') = ' ' AS coalesce2
     , coalesce(stringexpression, '') = ' '  AS coalesce3
FROM  (
   VALUES
     ('foo'::text)
   , ('')
   , ('   ')  -- different from '' in sane character types
   , (null)
   ) sub(stringexpression);
```

Result:

```
 stringexpression | base_test | test1 | test2 | coalesce1 | coalesce2 | coalesce3
------------------+-----------+-------+-------+-----------+-----------+-----------
 foo              | f         | f     | f     | f         | f         | f
                  | t         | t     | t     | t         | f         | f
                  | f         | f     | f     | f         | f         | f
 null             | null      | t     | t     | t         | t         | f
```

*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=c1a384467b20eb691906c83a6c41a5bc)*
Old [sqlfiddle](http://sqlfiddle.com/#!15/d41d8/2066)

Related:

* [Any downsides of using data type "text" for storing strings?](https://stackoverflow.com/questions/20326892/any-downsides-of-using-data-type-text-for-storing-strings/20334221#20334221)
To check for null and empty:

```
coalesce(string, '') = ''
```

To check for null, empty and spaces (trim the string):

```
coalesce(TRIM(string), '') = ''
```
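Both `coalesce` checks above are easy to sanity-check outside a real server. A minimal sketch using Python's bundled `sqlite3` module (SQLite standing in for Postgres here; it has no `char(n)` blank padding, so this only covers the plain text/varchar case, and the table and column names are made up):

```python
import sqlite3

# The coalesce(...) = '' test on non-empty, empty, and NULL values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("foo",), ("",), (None,)])

rows = conn.execute(
    "SELECT s, coalesce(s, '') = '' FROM t ORDER BY rowid").fetchall()
print(rows)  # [('foo', 0), ('', 1), (None, 1)]
```

Only the empty string and the NULL pass the test, which is the intended "null or empty" semantics.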
Best way to check for "empty or null value"
[ "", "sql", "database", "postgresql", "null", "coalesce", "" ]
I'm stuck with creating a MySQL query. Below is my database structure. `authors` (author\_id and author\_name) `books` (book\_id and book\_title) `books_authors` is the link table (book\_id and author\_id) Result of all books and authors: ![img](https://i.stack.imgur.com/VLzXn.png) I need to get all the books for certain author, but if a book has 2 authors the second one must be displayed also. For example the book "Good Omens" with book\_id=2 has two authors. When I run the query I get the books for the author\_id=1 but I can not include the second author - "Neil Gaiman" in the result. The query is: ``` SELECT * FROM books LEFT JOIN books_authors ON books.book_id=books_authors.book_id LEFT JOIN authors ON books_authors.author_id=authors.author_id WHERE books_authors.author_id=1 ``` And below is the result: ![](https://i.stack.imgur.com/PypqZ.png)
You don't need a subquery for this:

```
SELECT *
FROM books_authors ba JOIN
     books b
     ON b.book_id = ba.book_id JOIN
     books_authors ba2
     ON ba2.book_id = b.book_id JOIN
     authors a
     ON a.author_id = ba2.author_id
WHERE ba.author_id = 1
```
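Here is that double join run on invented sample data modeled on the question (SQLite via Python's `sqlite3` standing in for MySQL; the book and author names are hypothetical):

```python
import sqlite3

# Join books_authors twice: once to filter by the wanted author,
# once more to fan back out to *all* authors of the matched books.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (author_id INTEGER, author_name TEXT);
CREATE TABLE books (book_id INTEGER, book_title TEXT);
CREATE TABLE books_authors (book_id INTEGER, author_id INTEGER);
INSERT INTO authors VALUES (1, 'Terry Pratchett'), (2, 'Neil Gaiman');
INSERT INTO books VALUES (1, 'Mort'), (2, 'Good Omens');
INSERT INTO books_authors VALUES (1, 1), (2, 1), (2, 2);
""")

rows = conn.execute("""
    SELECT b.book_title, a.author_name
    FROM books_authors ba
    JOIN books b ON b.book_id = ba.book_id
    JOIN books_authors ba2 ON ba2.book_id = b.book_id
    JOIN authors a ON a.author_id = ba2.author_id
    WHERE ba.author_id = 1
    ORDER BY b.book_id, a.author_id
""").fetchall()
print(rows)  # both authors of the co-authored book appear
```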
You need to change the WHERE clause to execute a subselect like this: ``` SELECT b.*, a.* FROM books b LEFT JOIN books_authors ba ON ba.book_id = b.book_id LEFT JOIN authors a ON a.author_id = ba.author_id WHERE b.book_id IN ( SELECT book_id FROM books_authors WHERE author_id=1) ``` The problem with your query is that the WHERE clause is not only filtering the books you are getting in the result set, but also the book-author associations. With this subquery you first use the author id to filter books, and then you use those book ids to fetch all the associated authors. As an aside, I do think that the suggestion to substitute the OUTER JOINs with INNER JOINs in this specific case should apply. The first LEFT OUTER JOIN on books\_authors is certainly useless because the WHERE clause guarantees that at least one row exists in that table for each selected book\_id. The second LEFT OUTER JOIN is *probably* useless as I expect the author\_id to be primary key of the authors table, and I expect the books\_authors table to have a foreign key and a NOT NULL constraint on author\_id... which all means you should not have a books\_authors row that does not reference a specific authors row. If this is true and confirmed, then the query should be: ``` SELECT b.*, a.* FROM books b JOIN books_authors ba ON ba.book_id = b.book_id JOIN authors a ON a.author_id = ba.author_id WHERE b.book_id IN ( SELECT book_id FROM books_authors WHERE author_id=1) ``` Notice that INNER JOINs may very well be more efficient than OUTER JOINs in most cases (they give the engine more choice on how to execute the stament and fetch the result). So you should avoid OUTER JOINs if not strictly necessary. I added aliases and removed the redundant columns from the result set.
Can not determine what the WHERE clause should be
[ "", "mysql", "sql", "database", "join", "" ]
So I have a date field in my database. It's of the type "Date". The value of the field is `2014-05-04` (yyyy-dd-mm). How do I check, in SQL, if the date is before or after today? So far I have tried:

```
SELECT * FROM table WHERE theDate > GetDate()

SELECT * FROM table WHERE CAST(theDate as Date) > CAST(GetDate() as Date)
```

How do you check if "2014-05-04" is before today?
The pointy bit of less than / greater than points to the lesser side. So you want ``` SELECT * FROM table WHERE theDate < convert(date, GetDate()) ```
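The same "before today" comparison can be sketched in SQLite via Python's `sqlite3`, where `date('now')` plays the role of `convert(date, GetDate())`. (Note `date('now')` is UTC, so the hypothetical sample rows are kept two days away from today to avoid any midnight/timezone edge case.)

```python
import sqlite3
from datetime import date, timedelta

# One date safely in the past, one safely in the future.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (theDate TEXT)")
past = (date.today() - timedelta(days=2)).isoformat()
future = (date.today() + timedelta(days=2)).isoformat()
conn.executemany("INSERT INTO t VALUES (?)", [(past,), (future,)])

# theDate < today's date: only the past row should qualify.
rows = conn.execute(
    "SELECT theDate FROM t WHERE theDate < date('now')").fetchall()
```

ISO-formatted date strings compare correctly with `<`, which is why this works without any casting in SQLite.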
```
SELECT * FROM table WHERE CONVERT(DATE, colDateTime) > CAST(GETDATE() AS DATE)
```

This should work
compare dates in mssql
[ "", "sql", "sql-server", "date", "" ]
I would like to write a SQL query that would list all teachers that have more than three (3) students in their class (`Mrs. Smith` in this case). I originally thought that the `HAVING` clause would be the correct way to accomplish this but I am not coming up with `Mrs. Smith` as expected. ``` Teacher Student ------------------------- Mrs. Smith Danny Mrs. Smith Emily Mrs. Smith Todd Mrs. Smith Paul Mr. French Sam Mr. French Carol Mr. French Patty SELECT DISTINCT Teacher FROM Students HAVING (COUNT(Teacher) > 3) GROUP BY Teacher, Student ```
Using HAVING is correct, you just need to use it correctly ``` SELECT Teacher FROM Students GROUP BY Teacher HAVING COUNT(Student) > 3 ``` Basically you're grouping *Teacher* records together while counting how many *Students* each teacher has. And filtering on that count.
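The corrected query can be verified against the question's exact data, here in SQLite via Python's `sqlite3` (standing in for SQL Server; `GROUP BY` / `HAVING` behave the same for this case):

```python
import sqlite3

# Group rows per Teacher, then filter the groups by their student count.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (Teacher TEXT, Student TEXT)")
conn.executemany("INSERT INTO Students VALUES (?, ?)", [
    ("Mrs. Smith", "Danny"), ("Mrs. Smith", "Emily"),
    ("Mrs. Smith", "Todd"), ("Mrs. Smith", "Paul"),
    ("Mr. French", "Sam"), ("Mr. French", "Carol"),
    ("Mr. French", "Patty"),
])

rows = conn.execute("""
    SELECT Teacher
    FROM Students
    GROUP BY Teacher
    HAVING COUNT(Student) > 3
""").fetchall()
print(rows)  # [('Mrs. Smith',)]
```

Mr. French has only three students, so his group is filtered out by `HAVING`.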
Yes, HAVING is correct. Try: ``` SELECT Teacher FROM Students GROUP BY Teacher HAVING (COUNT(1) > 3) ```
Using HAVING with COUNT to restrict results
[ "", "sql", "sql-server", "" ]
This is my query:

```
SELECT DR.name as tradername, 
       convert(varchar,DR.Receiveddate,103) as recdate, 
       DR.Recamount, DR.Chequeno, DR.Remark, DR.Updatedby 
FROM K_HM_ChicksaleDueReport DR 
where DR.Receiveddate between @fromdate and @todate and DR.name=@name 
union all 
Select CS.name as tradername, 
       convert(varchar,CS.chicksplaceddate,103) as recdate, 
       CS.Recamount, '' as chequeno, 'Direct' as Remark, CS.Updatedby 
from K_HM_ChickSales CS 
where CS.Recamount>0 and CS.chicksplaceddate between @fromdate and @todate and CS.name=@name 
order by recdate desc
```

With this I get the report in descending order by day only. What I want is descending order by the full **dd/mm/yyyy** date. Something like:

```
22/05/2014
21/05/2014
10/04/2014
5/03/2014
```
Keep `recdate` as a real date inside a derived table, order by that date, and only convert to the `dd/mm/yyyy` format in the outer select:

```
select tradername, convert(varchar, recdate, 103) as recdate, Recamount, chequeno, remark, updatedby
from (
    SELECT name as tradername, Receiveddate as recdate, Recamount, Chequeno, Remark, Updatedby
    FROM K_HM_ChicksaleDueReport
    where Receiveddate between @fromdate and @todate and name = @name
    union all
    Select name as tradername, chicksplaceddate as recdate, Recamount, '' as chequeno, 'Direct' as Remark, Updatedby
    from K_HM_ChickSales
    where Recamount > 0 and chicksplaceddate between @fromdate and @todate and name = @name
) a
order by a.recdate desc
```

If you also need a fuzzy name match, swap `name = @name` for `name like '%' + @name + '%'` in both branches.
Because `recdate` is a VARCHAR it orders by individual characters. Try casting / converting `recdate` back to `DATE` first, so like this: ``` ORDER BY CAST(recdate AS DATETIME) DESC ```
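The lexicographic-vs-date ordering problem is easy to demonstrate. A sketch in SQLite via Python's `sqlite3` (dates stored as ISO strings, with `strftime('%d/%m/%Y', ...)` standing in for `convert(varchar, ..., 103)`); the hypothetical sample dates are chosen so the two orders actually differ:

```python
import sqlite3

# Store real (ISO) dates, sort on them, format only for display.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE r (recdate TEXT)")  # ISO yyyy-mm-dd
conn.executemany("INSERT INTO r VALUES (?)",
                 [("2014-06-01",), ("2014-05-22",), ("2014-03-05",)])

# Ordering the formatted dd/mm/yyyy string sorts character by character...
bad = [r[0] for r in conn.execute(
    "SELECT strftime('%d/%m/%Y', recdate) AS d FROM r ORDER BY d DESC")]

# ...ordering by the underlying date, then formatting, gives the wanted order.
good = [r[0] for r in conn.execute(
    "SELECT strftime('%d/%m/%Y', recdate) FROM r ORDER BY recdate DESC")]

print(bad)   # day-first strings sorted as text: June lands last
print(good)  # newest date first, as intended
```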
Report order by date column descending in sql server?
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a Customer table which has an ID. Each Customer entry has a Design which is stored in a Design table (it contains the CustomerID to reference). In my scenario, a Customer can have several Designs and sometimes no Designs. How could I select Customers that only have Designs? I've tried doing an Inner Join like this but I still get too many records since a Customer can have many Designs: ``` Select * from Customer Inner Join Design On Design.CustomerID = Customer.ID Where Design.CustomerID is not null ```
`*` selects all columns of all joined tables. Use `tablename.*` to select only the columns of a specific table.

```
Select Customer.* from Customer Inner Join Design On Design.CustomerID = Customer.ID
```

But actually you are always better off explicitly defining which columns you need, and grouping by them to collapse the duplicate rows the join produces:

```
Select Customer.ID, Customer.Col2, Customer.Col3 
from Customer Inner Join Design On Design.CustomerID = Customer.ID
group by Customer.ID, Customer.Col2, Customer.Col3
```

And when you use an `inner join`, only the records that actually have a link to the joined table will be returned - so your `where` clause is obsolete.
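An alternative to grouping is `SELECT DISTINCT` on the customer columns, which likewise collapses the duplicates an inner join produces. A sketch in SQLite via Python's `sqlite3`, with invented column names:

```python
import sqlite3

# Inner join keeps only customers that have at least one design;
# DISTINCT collapses the one-row-per-design duplication.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (ID INTEGER, Name TEXT);
CREATE TABLE Design (DesignID INTEGER, CustomerID INTEGER);
INSERT INTO Customer VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
INSERT INTO Design VALUES (10, 1), (11, 1), (12, 3);  -- Bob has no designs
""")

rows = conn.execute("""
    SELECT DISTINCT Customer.ID, Customer.Name
    FROM Customer
    INNER JOIN Design ON Design.CustomerID = Customer.ID
    ORDER BY Customer.ID
""").fetchall()
print(rows)  # Ann appears once despite two designs; Bob is excluded
```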
I guess you are storing empty strings in your database. Try this:

```
Select * from Customer 
Inner Join Design On Design.CustomerID = Customer.ID 
Where Design.CustomerID is not null AND Design.CustomerID != ''
GROUP BY Customer.ID
```
How can I select items from one sql table that don't appear in another table
[ "", "mysql", "sql", "" ]
I have a function that takes any date and translates it to another date within a different year (the point of it is to preserve the weekday, week number and month, but thats not really relevant). I use it to translate a list of dates and create a temporary table that maps the original date to the mapped date using the function. The query looks like this: ``` Select InputDates.Date as InputDate, dbo.GetFutureDate(InputDates.Date,2012) as PastDate INTO #DateMap FROM InputDates ``` InputDates is the list of dates that I need to translate. dbo.GetFutureDate is the translation function. As you can see the year is hardcoded, which is what I am trying to change. I have a list of years in another table. I want to create a dynamic sql statement with a series of SELECT statements like the one above, changing the year based on the list of years I have and then combine them together using Union All. What's the best way for me to do this?
Hopefully I am understanding you correctly, but why don't you create a cross join with the year list you have and pass the function the year column from the year list. ``` Select InputDates.Date as InputDate, dbo.GetFutureDate(InputDates.Date,YearInYearList) as PastDate INTO #DateMap FROM InputDates, YearList ``` YearList is your table name with the years in them, YearInYearList is the column name from the table. This should produce exactly what you want without the UNION overhead.
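The comma (cross) join produces one row per (date, year) combination, which is exactly one function call per pair. A sketch of just the pairing in SQLite via Python's `sqlite3`; the `dbo.GetFutureDate` call is omitted since that function lives in the asker's database, and the table/column names follow the answer:

```python
import sqlite3

# Every row of InputDates is paired with every row of YearList.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE InputDates (Date TEXT);
CREATE TABLE YearList (YearInYearList INTEGER);
INSERT INTO InputDates VALUES ('2014-05-20'), ('2014-05-21');
INSERT INTO YearList VALUES (2012), (2013);
""")

rows = conn.execute("""
    SELECT InputDates.Date, YearList.YearInYearList
    FROM InputDates, YearList
    ORDER BY Date, YearInYearList
""").fetchall()
print(len(rows))  # 4: 2 dates x 2 years
```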
You can use the APPLY operator in SQL Server to achieve what you want. Basically, the APPLY operator allows you to reference a column from the outer query and it converts to a join or outer join. Ex:

```
select y.year, t.*
into #DateMap
from years as y
cross apply (
    Select InputDates.Date as InputDate,
           dbo.GetFutureDate(InputDates.Date, y.year) as PastDate
    FROM InputDates
) as t
```

See Books Online for more details on the APPLY operator.
Create dynamic sql statement based on the output of a select in sql server
[ "", "sql", "sql-server", "sql-server-2008", "date", "select", "" ]
I have a query which writes to an Excel file. The query looks like this:

```
EXEC sp_makewebtask 
    @outputfile = 'e:\Testing.xls', 
    @query = ' SELECT top 10 * FROM  [myDB].[dbo].[TOTALS_DAY] WHERE [START_DATETIME] < '20140501'', 
    @FixedFont=0, 
    @lastupdated=0, 
    @resultstitle = 'Testing details' 
```

If I comment out the where clause the file is generated. However, I have to use a date, and when I specify the date it throws this error:

> Msg 102, Level 15, State 1, Line 5 
> Incorrect syntax near '20140501'.

How do I fix this problem?
You have written two `'` in a row, which ends the `@query` string early. Escape the single quotes around the date by doubling them inside the string:

```
EXEC sp_makewebtask 
    @outputfile = 'e:\Testing.xls', 
    @query = ' SELECT top 10 * FROM [myDB].[dbo].[TOTALS_DAY] where [START_DATETIME] < ''20140501'' ', 
    @FixedFont=0, 
    @lastupdated=0, 
    @resultstitle='Testing details'
```

i hope this helps
It looks like you have unintentionally matched quotes. Since you are surrounding you SQL statement in single quotes, then query is interpreted as ``` ' SELECT top 10 * FROM [myDB].[dbo].[TOTALS_DAY] where [START_DATETIME] < ' ``` Which means nothing. Since the date is supposed to be in single quotes, is it possible to put your value for `@query` in double quotes? Like this: ``` EXEC sp_makewebtask @outputfile = 'e:\Testing.xls', @query = " SELECT top 10 * FROM [myDB].[dbo].[TOTALS_DAY] where [START_DATETIME] < '20140501'", @FixedFont=0, @lastupdated=0, @resultstitle='Testing details' ```
Incorrect syntax in query which is loading data into excel file
[ "", "sql", "excel", "sql-server-2008-r2", "" ]
I have two tables with two different select statements. These tables contain only one column. I would like to subtract the rows of `table2` from the rows in `table1`, but only once. In other words: I would like to remove only one occurrence, not all.

table1:

```
apple
apple
orange
```

table2:

```
apple
pear
```

result:

```
apple
orange
```
FYI: if `A = {A,A,O}` and `B = {A,P}`, then `A - B` is logically

```
select * from t1
except
select * from t2
```

try this:

```
create table #t(id varchar(10))
create table #t1(id1 varchar(10))

insert into #t values('apple'),('apple'),('orange')
insert into #t1 values('apple'),('pear')

select * from 
(
  select *,rn=row_number()over(partition by id order by id) from #t
  except
  select *,rn1=row_number()over(partition by id1 order by id1) from #t1
)x
```

# [SEE DEMO](http://www.sqlfiddle.com/#!6/d41d8/17594)
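The row-numbering trick ports to any engine with window functions. Here it is in SQLite (3.25+ required for `row_number()`) via Python's `sqlite3`, using the same column name on both sides for simplicity:

```python
import sqlite3

# Numbering duplicates makes each (value, n) pair unique, so EXCEPT
# removes exactly one occurrence per matching row in t2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (fruit TEXT);
CREATE TABLE t2 (fruit TEXT);
INSERT INTO t1 VALUES ('apple'), ('apple'), ('orange');
INSERT INTO t2 VALUES ('apple'), ('pear');
""")

rows = conn.execute("""
    SELECT fruit FROM (
        SELECT fruit,
               row_number() OVER (PARTITION BY fruit ORDER BY fruit) AS rn
        FROM t1
        EXCEPT
        SELECT fruit,
               row_number() OVER (PARTITION BY fruit ORDER BY fruit)
        FROM t2
    ) AS x
    ORDER BY fruit
""").fetchall()
print(rows)  # one 'apple' removed, 'orange' kept, 'pear' had no match
```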
Here is an answer for an Oracle dbms. The trick is to number records per fruit, so to get apple 1, apple 2, etc. Then subtract the sets to stay with apple 2 whereas apple 1 was removed for instance. (The row\_number function needs a sort order which is not important for us, but we must specify it for syntax reasons.) ``` select fruit from ( select fruit, row_number() over (partition by fruit order by fruit) from table1 minus select fruit, row_number() over (partition by fruit order by fruit) from table2 ); ```
Subtract one table from another
[ "", "sql", "database", "" ]
Assume I have a Table like ``` Room Day People Theme A 14/05/2014 12 Water A 12/05/2014 245 Mathematics A 05/04/2014 215 Nature B 10/09/2013 252 Water B 10/05/2012 221 Cinema B 05/10/2011 215 Cinema C 10/10/2013 224 Mathematics C 02/06/2013 245 Cooking C 05/03/2013 15 Cooking ``` and want to obtain a Table with a row for each Room A, B and C, together with columns stating the last date it was used, the people attending and the Theme of the conference. Result will be a Table like that: ``` Room Day People Theme A 14/05/2014 12 Water B 10/09/2013 252 Water C 10/10/2013 224 Mathematics ``` This is just an example, the real table contains a lot of rows. Is there a quick SQL code for obtaining that without stating all the rooms?
You can try this query: ``` SELECT ROOM, MAX(DAY), PEOPLE, THEME FROM TABLE_NAME GROUP BY ROOM, PEOPLE, THEME; ```
Get the maximum date per room, then join your table back to it to get the full records:

```
select rooms.*
from rooms
join
(
  select room, max(day) as max_day
  from rooms
  group by room
) last_used on last_used.room = rooms.room
           and last_used.max_day = rooms.day;
```
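Here is that join-on-the-max approach run on the question's data (SQLite via Python's `sqlite3`; `Day` stored as ISO strings so `max()` compares correctly):

```python
import sqlite3

# Derived table: latest Day per Room; join back to recover the full row.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE rooms (Room TEXT, Day TEXT, People INTEGER, Theme TEXT)")
conn.executemany("INSERT INTO rooms VALUES (?, ?, ?, ?)", [
    ("A", "2014-05-14", 12, "Water"), ("A", "2014-05-12", 245, "Mathematics"),
    ("A", "2014-04-05", 215, "Nature"), ("B", "2013-09-10", 252, "Water"),
    ("B", "2012-05-10", 221, "Cinema"), ("C", "2013-10-10", 224, "Mathematics"),
    ("C", "2013-06-02", 245, "Cooking"),
])

rows = conn.execute("""
    SELECT rooms.*
    FROM rooms
    JOIN (SELECT Room, max(Day) AS max_day FROM rooms GROUP BY Room) last_used
      ON last_used.Room = rooms.Room AND last_used.max_day = rooms.Day
    ORDER BY rooms.Room
""").fetchall()
print(rows[0])  # the most recent row for room A
```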
SQL query help needed - Table with max date value for each row group
[ "", "sql", "database", "" ]
I have a MySQL table called `key_uniqueword`. This is how it looks:

![enter image description here](https://i.stack.imgur.com/TlWGk.png)

This is how the table is created:

```
CREATE TABLE `key_uniqueword` ( `p_id` int(11) NOT NULL, `word` varchar(45) NOT NULL, `uniqueWordCount` int(11) NOT NULL, KEY `conv4` (`uniqueWordCount`,`p_id`,`word`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1
```

I can get the list of each unique individual record in the `word` column with the following command:

```
SELECT DISTINCT `word` from `key_uniqueword`
```

I can check how many times a particular individual element in the `word` column appears in the `key_uniqueword` table by using the command below. For example, let's check for the word "about":

```
SELECT COUNT(`word`) FROM `key_uniqueword` WHERE `word`="about"
```

Just like this, I need to know how many times **each** unique element in the `word` column appears in the `key_uniqueword` table. I am expecting the output to look like the table structure below. It has 2 columns, **word** and **count**:

```
word | count _ _ _ _ _ _ _ _ _ about | 99,450 a | 1000 talk | 7980 men | 20,915
```

How can I do this in a MySQL query?

**Important**

Please note that `p_id` is not the same for every row.
The correct query is:

```
SELECT `word`, COUNT(*) AS count
FROM `key_uniqueword`
GROUP BY `word`
```
``` select word, count(*) as count from key_uniqueword group by word ```
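The `GROUP BY` count is easy to verify on a few invented sample rows, here in SQLite via Python's `sqlite3` standing in for MySQL (backticks dropped since SQLite doesn't need them):

```python
import sqlite3

# One output row per distinct word, with how often it occurs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE key_uniqueword (p_id INTEGER, word TEXT)")
conn.executemany(
    "INSERT INTO key_uniqueword (p_id, word) VALUES (?, ?)",
    [(1, "about"), (2, "about"), (1, "a"), (3, "talk"), (2, "about")])

rows = conn.execute("""
    SELECT word, count(*) AS count
    FROM key_uniqueword
    GROUP BY word
    ORDER BY word
""").fetchall()
print(rows)  # each distinct word with its occurrence count
```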
Counting how many times each unique element appeared in the table
[ "", "mysql", "sql", "" ]
I need to create reference numbers that are easily communicated verbally. They should not be assigned sequentially nor duplicated. (by this I mean they will be assigned randomly within the range) I will not be using them for computations. They are strictly for usage as identifiers. I am thinking a lookup table with all possible 8 digit integers will be the most straightforward way to accomplish this. E.g. from 10000000 to 99999999. This would allow assigning randomly from the remaining pool instead of randomly generating in batches to get around collisions. Would it be more efficient to store them as a char(8) or 32bit int primary key? Would the answer be different if I used 9 digit or 10 digit values instead? I have considered using alphanumeric identifiers as well, but I think it would be less error prone to rely on numeric digits as verbal communication is a priority. -- edited to clarify the assignment needs to appear random --
(It's not clear that eight-digit numbers are especially significant here.) A potentially big problem with using a randomized table is that it can quickly become a hot spot. Applications must a) select the next usable value from that table, b) update a column in that table (to make sure that value doesn't get selected again), c) insert that value into your main table. If I were using a dbms that supported sequences, I might try something like this. * Use whatever integer data type gives you the required range. * Use a sequence generator with the right range to generate sequential integers in the target range. * Use the [multiplicative inverse](http://ericlippert.com/2013/11/14/a-practical-use-of-multiplicative-inverses/) to obfuscate the integers. Here's an example using PostgreSQL. (Not rigorously tested.) ``` create sequence wibble_seq increment by 1 minvalue 10000000 maxvalue 99999999 start with 10000000 owned by none ; ``` You *could* just as easily start with 1. These numbers aren't as important as their multiplicative inverses. Unless, that is, you're doing something like obfuscating invoice numbers, in which case you want the sequence to generate internal invoice numbers, and the multiplicative inverse to generate external invoice numbers. ``` create table wibble ( reference_num integer primary key default (nextval('wibble_seq') * 387420489 % 1000000000), added_time timestamp default current_timestamp ); ``` After inserting three rows, this is what I get. ``` select *, reference_num::bigint * 513180409 % 1000000000 as original_num from wibble; ``` ``` reference_num added_time original_num -- 890000000 2014-05-19 22:25:43.912445 10000000 277420489 2014-05-19 22:26:18.791284 10000001 664840978 2014-05-19 22:26:23.342876 10000002 ```
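The multiplicative-inverse arithmetic from the answer can be checked in plain Python. The two constants below are the ones used above (387420489 = 3^18, and 513180409 is its multiplicative inverse mod 10^9), so multiplying by one and later by the other is a lossless round trip:

```python
# Obfuscate sequential ids by multiplying modulo 10**9; recover them
# by multiplying with the modular inverse.
M = 10**9
K = 387420489      # multiplier applied when storing (3**18)
K_INV = 513180409  # its multiplicative inverse mod 10**9

def obfuscate(n: int) -> int:
    return n * K % M

def recover(n: int) -> int:
    return n * K_INV % M

print(obfuscate(10000000))  # 890000000, matching the answer's sample output
print(recover(890000000))   # 10000000
```

Because `K * K_INV % M == 1`, `recover(obfuscate(n)) == n` for every `n` below 10^9.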
`int`: faster lookups, fewer bytes.

For 10 digits, a 32-bit int doesn't cut it, since its maximum is only around 4.29 billion (unsigned), so you'd be forced to use a wider character type, though I'd probably just use a `bigint` datatype.

Edit: a 10-digit value wouldn't fit in `char(8)` either unless you're using scientific notation; you would need to widen both data types to a bigger format.
Would it be better to use char(8) or int column type to represent all integers from 10000000 to 99999999
[ "", "sql", "" ]
I have a postgres database with several tables that I want to watch for updates on, and if there's any updates, I want to fire a "hey, something changed" update. This works in the basic case, but now it's time to improve things. ``` CREATE FUNCTION notify_update() RETURNS trigger AS $notifyfunction$ BEGIN PERFORM pg_notify('update_watchers', $${"event":"update", "type": "$$ || TG_TABLE_NAME || $$", "payload": {"id": $$ || new.id || $$}}$$); RETURN new; END; $notifyfunction$ LANGUAGE plpgsql; ``` works just fine. I attach it to the table like so: ``` CREATE TRIGGER document_update_body AFTER UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE notify_update(); ``` (As a side-question: if there's any better / easier way to json.stringify my trigger result than the mess'o'$$ in the trigger function, please let me know. Balancing quotation marks isn't fun). What I want to do is attach to the pg\_notify call a list of the columns that have changed. It doesn't *seem* like there's any simple way to do this other than iterating over the columns in the table and checking if NEW.col is distinct from OLD.col. The bad way to do this would be to hard-code the column names in my notify procedure (fragile, another thing to update if I change my schema, etc). I'm also out of my depth on writing plpgsql, really, so I'm not sure where to look for help. Ideally, (if there's no updated\_columns block variable that I didn't see in the documentation) there'd be a way to get the table's schema inside the notification block without causing too serious of a performance overhead (as these tables will get updated a fair bit).
Read up on the [hstore](http://www.postgresql.org/docs/current/static/hstore.html) extension. In particular you can create a hstore from a row, which means you can do something like:

```
changes := hstore(NEW) - hstore(OLD);

...pg_notify(... changes::text ...)
```

That's slightly more information than you wanted (includes new values). You can use `akeys(changes)` if you just want the keys.
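The effect of `hstore(NEW) - hstore(OLD)` can be modeled with plain Python dicts: "subtracting" keeps only the keys whose values actually changed, and the keys of the result correspond to `akeys(changes)` in the trigger:

```python
# Dict-based model of hstore subtraction between the OLD and NEW rows.
def changed_columns(old: dict, new: dict) -> dict:
    # Keep key/value pairs from new whose value differs from old.
    return {k: v for k, v in new.items() if old.get(k) != v}

old = {"id": 7, "title": "draft", "body": "hello"}
new = {"id": 7, "title": "final", "body": "hello"}

print(changed_columns(old, new))        # {'title': 'final'}
print(list(changed_columns(old, new)))  # ['title'], like akeys(...)
```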
<http://www.postgresql.org/docs/9.3/static/plpython-trigger.html> ``` TD["table_name"] ``` I do exactly the same type of notify, I loop through all of the columns like this: ``` for k in TD["new"]: if TD["old"][k] != TD["new"][k]: changed.append(k) ``` changed.append(k) builds my notification string. Somewhere else I do a listen, then broadcast the results out pub/sub to web socket clients. -g
Detecting column changes in a postgres update trigger
[ "", "sql", "postgresql", "plpgsql", "" ]
I have a column inside my SQL Server 2012 table which contains following Json data. ``` [{"bvin":"145a7170ec1247cfa077257e236fad69","id":"b06f6aa5ecd84be3aab27559daffc3a4"}] ``` Now I want to use this column data in my query like ``` select * from tb1 left join tb2 on tb1.(this bvin inside my column) = tb2.bvin. ``` Is there a way to query JSON data in SQL Server 2012?
Honestly, this is a terrible architecture for storing the data, and can result in some serious performance issues. If you truly don't have control to change the database, you *can* accomplish this by parsing out the value with `SUBSTRING` like below, but it's leading down a very unhappy path: ``` SELECT * FROM tb1 JOIN tb2 on tb2.bvin = SUBSTRING( tb1.json ,CHARINDEX('"bvin":"', tb1.json) + LEN('"bvin":"') ,CHARINDEX('"', tb1.json, CHARINDEX('"bvin":"', tb1.json) + LEN('"bvin":"')) - CHARINDEX('"bvin":"', tb1.json) - LEN('"bvin":"') ) ``` And sadly, that's as easy as it can be.
Another solution is [JSON Select](https://jsonselect.joshuahealy.net/) which providers a `JsonNVarChar450()` function. Your example would be solved like so: ``` select * from tb1 left join tb2 on dbo.JsonNVarChar450(tb1.YourColumnName, 'bvin') = tb2.bvin ``` as someone mentioned, this could be a bit slow, however you could add an index using the JSON Select function like so: ``` alter table tb2 add bvin as dbo.JsonNVarChar450(YourColumnName, 'bvin') persisted go create index IX_tb2_bvin on tb2(bvin) ``` And from then on you can query using the index over the computed column bvin, like so: ``` select * from tb1 left join tb2 on tb1.bvin = tb2.bvin ``` DISCLOSURE: I am the author of JSON Select, and as such have an interest in you using it :)
Query JSON inside SQL Server 2012 column
[ "", "sql", "json", "sql-server-2012", "" ]
I need to update values in a table by removing their last character if they end with a `+`.

**Example:** `John+Doe` and `John+Doe+` should both become `John+Doe`.

What's the best way to achieve this?
``` UPDATE table SET field = SUBSTRING(field, 1, CHAR_LENGTH(field) - 1) WHERE field LIKE '%+' ```
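The accepted UPDATE translates almost verbatim to other engines. A sketch in SQLite via Python's `sqlite3` (`substr`/`length` instead of MySQL's `SUBSTRING`/`CHAR_LENGTH`; the table and column names are invented):

```python
import sqlite3

# Strip the trailing '+' only from rows that actually end with one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("John+Doe",), ("John+Doe+",)])

conn.execute("""
    UPDATE t
    SET field = substr(field, 1, length(field) - 1)
    WHERE field LIKE '%+'
""")
rows = [r[0] for r in conn.execute("SELECT field FROM t ORDER BY rowid")]
print(rows)  # both values end up as 'John+Doe'
```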
If you are trying to display the field instead of update the table, then you can use a `CASE` statement: ``` select case when right(yourfield,1) = '+' then left(yourfield,length(yourfield)-1) else yourfield end from yourtable ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!2/71533/3)
Remove last char if it's a specific character
[ "", "mysql", "sql", "phpmyadmin", "" ]
``` select (age('2012-11-30 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp)), (age('2012-12-31 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp)), (age('2013-01-31 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp)), (age('2013-02-28 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp)) ``` which gives the followings: ``` 0 years 0 mons 30 days 0 hours 0 mins 0.00 secs 0 years 2 mons 0 days 0 hours 0 mins 0.00 secs 0 years 3 mons 0 days 0 hours 0 mins 0.00 secs 0 years 3 mons 28 days 0 hours 0 mins 0.00 secs ``` But I want to have the following month definition , how can I do it? ``` 0 years 1 mons 0 days 0 hours 0 mins 0.00 secs 0 years 2 mons 0 days 0 hours 0 mins 0.00 secs 0 years 3 mons 0 days 0 hours 0 mins 0.00 secs 0 years 4 mons 0 days 0 hours 0 mins 0.00 secs ```
The expression ``` age('2012-11-30 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp) ``` gives `30 days`. We are expecting `1 month` as both values point to last days of month. If we add 1 day to the values we shall get first days of next month and ``` age('2012-12-01 00:00:00'::timestamp, '2012-11-01 00:00:00'::timestamp) ``` will give us 1 month as expected. So let us check if we have two last days of month and in this case return age interval of the next days. In other cases we shall return age interval of original values: ``` create or replace function age_m (t1 timestamp, t2 timestamp) returns interval language plpgsql immutable as $$ declare _t1 timestamp = t1+ interval '1 day'; _t2 timestamp = t2+ interval '1 day'; begin if extract(day from _t1) = 1 and extract(day from _t2) = 1 then return age(_t1, _t2); else return age(t1, t2); end if; end $$; ``` Some examples: ``` with my_table(date1, date2) as ( values ('2012-11-30 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp), ('2012-12-31 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp), ('2013-01-31 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp), ('2013-02-28 00:00:00'::timestamp, '2012-10-31 00:00:00'::timestamp) ) select *, age(date1, date2), age_m(date1, date2) from my_table date1 | date2 | age | age_m ---------------------+---------------------+----------------+-------- 2012-11-30 00:00:00 | 2012-10-31 00:00:00 | 30 days | 1 mon 2012-12-31 00:00:00 | 2012-10-31 00:00:00 | 2 mons | 2 mons 2013-01-31 00:00:00 | 2012-10-31 00:00:00 | 3 mons | 3 mons 2013-02-28 00:00:00 | 2012-10-31 00:00:00 | 3 mons 28 days | 4 mons (4 rows) ```
It *seems* like you always use the last day of the month. What you are trying to do works flawlessly with the **first day of the month**. So use that instead. You can always subtract a single day to get the last day of the previous month. [@klin's function is based on that.](https://stackoverflow.com/a/23727917/939860) For dates (instead of timestamps), simplify: ``` _t1 date = t1 + 1; _t2 date = t2 + 1; ``` One can just add / subtract integer values from dates (but not timestamps). If you want to **add "a month"**, don't just increase the month field, since this can fail like you have experienced. And there is also the wrap around at the end of the year. Add an `interval '1 month'` instead. ``` SELECT (mydate + interval '1 month')::date AS mydate_next_month; ``` I cast back to date because the result of `date + interval` is a `timestamp`. This "rounds down" automatically, if the last day of the next month is *before* the day in the original date. Note that it does *not* "round up" in the opposite case. If you want that, operate with the first of the month instead as explained above. [SQL Fiddle.](http://sqlfiddle.com/#!15/d41d8/2015)
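The "operate on the first day of the month" trick can be sketched in plain Python with the standard library: add one day to both last-of-month dates (landing on the 1st of the next month), then count whole calendar months. This naive month count matches the answer's behaviour for last-day-of-month inputs like the question's, which is the case it illustrates:

```python
from datetime import date, timedelta

# Shift both end-of-month dates forward one day, then diff by calendar month.
def month_diff(d1: date, d2: date) -> int:
    d1, d2 = d1 + timedelta(days=1), d2 + timedelta(days=1)
    return (d1.year - d2.year) * 12 + (d1.month - d2.month)

print(month_diff(date(2012, 11, 30), date(2012, 10, 31)))  # 1
print(month_diff(date(2013, 1, 31), date(2012, 10, 31)))   # 3
print(month_diff(date(2013, 2, 28), date(2012, 10, 31)))   # 4
```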
Find out number of months between 2 dates
[ "", "sql", "postgresql", "" ]
Following is my query: ``` CREATE TABLE report_invoice( 'ID' INT(10) NOT NULL AUTO_INCREMENT, 'Invoice_No' VARCHAR(30) NOT NULL, 'Invoice_Type' INT(10) NOT NULL, 'Operator' VARCHAR(50) NOT NULL, 'Customer' VARCHAR(50) NOT NULL, 'Invoice_Date' DATE NOT NULL, 'Total' DECIMAL('10,2'), PRIMARY KEY ('ID')); ``` I keep getting this error: ``` ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''ID' INT(10) NOT NULL AUTO_INCREMENT, 'Invoice_No' VARCHAR(30) NOT NULL, 'Invoic' at line 2 ```
You're using single quotes around your field names, use backticks instead: ``` CREATE TABLE report_invoice( `ID` INT(10) NOT NULL AUTO_INCREMENT, `Invoice_No` VARCHAR(30) NOT NULL, `Invoice_Type` INT(10) NOT NULL, `Operator` VARCHAR(50) NOT NULL, `Customer` VARCHAR(50) NOT NULL, `Invoice_Date` DATE NOT NULL, `Total` DECIMAL(10,2), PRIMARY KEY (`ID`)); ```
Don't use simple quotes.

You may replace them with backticks, or suppress them. They would be useful if you wanted to use reserved keywords as column names (a bad idea anyway), completely numeric column names, or special characters in column names. So not in your case.

Don't put quotes around the decimal scale and precision either.

So this will do the job:

```
CREATE TABLE report_invoice( ID INT(10) NOT NULL AUTO_INCREMENT, Invoice_No VARCHAR(30) NOT NULL, Invoice_Type INT(10) NOT NULL, Operator VARCHAR(50) NOT NULL, Customer VARCHAR(50) NOT NULL, Invoice_Date DATE NOT NULL, Total DECIMAL(10,2), PRIMARY KEY (ID));
```
What's wrong with my query
[ "", "mysql", "sql", "" ]
I found how to remove everything after a certain character, but how can I remove everything after a certain word in SQL DEV 2012? Say I have a table called MyTable with a column MyTextColumn which contains 'My Name is Peter the Developer this part needs to be removed'. So when it finds 'Peter the Developer', remove 'Peter the Developer' and everything that follows it, so the only thing that remains is 'My Name is'. Thanks ``` UPDATE MyTable SET MyText = LEFT(MyText, CHARINDEX(';', MyText) - 1) WHERE CHARINDEX(';', MyText) > 0 ``` Update: Since my question was to remove (meaning update), here is what I created from the responses below to make it work. ``` update myTable set myTextColumn = SUBSTRING(myTextColumn , 0 , CHARINDEX('Peter the developer', myTextColumn, 0)) where ID = 2 ```
Not exactly sure what the expected result is. If you expect to remove everything and just keep "My Name is", you can do as below ``` declare @Word varchar(100)='My Name is Peter the Developer this part needs to be removed' declare @Match varchar(100)='Peter the Developer' SELECT SUBSTRING(@Word , 0 , CHARINDEX(@Match, @Word, 0)) ``` If you need to keep "My Name is Peter the Developer" ``` declare @Word varchar(100)='My Name is Peter the Developer this part needs to be removed' declare @Match varchar(100)='Peter the Developer' declare @len int set @len = LEN(@Match) SELECT SUBSTRING(@Word , 0 , CHARINDEX(@Match, @Word, 0)+@len) ```
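For comparison only, here is the same slicing logic in Python (variable names are mine); it mirrors the two `SUBSTRING`/`CHARINDEX` variants above:

```python
# Hypothetical sample values, mirroring @Word and @Match above.
text = 'My Name is Peter the Developer this part needs to be removed'
match = 'Peter the Developer'

pos = text.find(match)                   # like CHARINDEX, but 0-based
removed_after = text[:pos]               # drop the match and what follows
kept_through = text[:pos + len(match)]   # keep the match itself

print(repr(removed_after))   # 'My Name is '
print(repr(kept_through))    # 'My Name is Peter the Developer'
```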
Example ``` declare @Seed nvarchar(32) = 'is ' declare @String nvarchar(128) = 'My Name is Peter' select Left(@String,PATINDEX('%'+@Seed+'%',@String)+len(@seed)) ``` Your case: ``` declare @Seed nvarchar(32) = ';' UPDATE MyTable SET MyText = Left(MyText,PATINDEX('%'+@Seed+'%',MyText)+len(@seed)) WHERE PATINDEX('%'+@Seed+'%',MyText) > 0 ```
SQL Remove everything after certain varchar value or word
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "sql-server-2012", "" ]
My table looks like this: ``` Supplier ReferenceID Description Total ------------------------------------------------------------ smiths BP657869510L Order 67543 42 smiths BP657869510L Order 67543B 42 smiths BP654669517L No. 5621 13 smiths BP654669517L No. 56211 13 corrigan 15:51 Order 23542 23 corrigan 15:51 Order 235422 23 williams 14015 Block B 19 williams 14015 Block B2 19 ``` I would like to write a T-SQL query to return the list of transactions with each supplier, eliminating duplicate entries based on the `ReferenceID` column. As you can see from the table, the `Description` value may be different in two columns with the same `ReferenceID` (due to data entry error). In this case, if possible, I would like to return one of these `Description` values (I don't care which one). So the results I would want to return based on the data above would be (I picked the `Description` values randomly - I don't have a preference as to which one is returned as long as it is tied to the `ReferenceID` in the original table.) ``` Supplier ReferenceID Description Total -------------------------------------------------------- smiths BP657869510L Order 67543 42 smiths BP654669517L No. 5621 13 corrigan 15:51 Order 23542 23 williams 14015 Block B 19 ``` I realise this is quite complex but any suggestions appreciated!
You just need to `GROUP BY ReferenceID`; if the rest is arbitrary/equal you can use `MAX` or `MIN`: ``` SELECT Supplier = MIN(Supplier), ReferenceID, Description = MIN(Description), Total = MIN(Total) FROM dbo.Tablename GROUP BY ReferenceID ``` `Demo`
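To illustrate that the grouping really collapses the duplicates, here is a runnable sketch using SQLite through Python's `sqlite3` module (purely for illustration; the `Alias = expr` syntax above is T-SQL-specific, so standard `MIN(...)` expressions are used instead):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE t (Supplier TEXT, ReferenceID TEXT, Description TEXT, Total INT);
INSERT INTO t VALUES
  ('smiths',   'BP657869510L', 'Order 67543',  42),
  ('smiths',   'BP657869510L', 'Order 67543B', 42),
  ('corrigan', '15:51',        'Order 23542',  23),
  ('corrigan', '15:51',        'Order 235422', 23);
""")
rows = conn.execute("""
    SELECT MIN(Supplier), ReferenceID, MIN(Description), MIN(Total)
    FROM t
    GROUP BY ReferenceID
    ORDER BY ReferenceID
""").fetchall()
for r in rows:
    print(r)  # one row per ReferenceID, duplicates collapsed
```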
``` SELECT Supplier ,ReferenceID ,MAX(Description) [Description] ,SUM(TOTAL) [TOTAL] --or MIN/MAX/AVG etc... FROM TABLE GROUP BY Supplier ,ReferenceID ```
SQL Server query to eliminate duplicates based on single column where another column may differ
[ "", "sql", "sql-server", "" ]
Let's say I have three tables A, B, and C. Each has two columns: a primary key and some other piece of data. They each have the same number of rows. If I `JOIN` A and B on the primary key, I should end up with the same number of rows as are in either of them (as opposed to A.rows \* B.rows). Now, if I `JOIN` `A JOIN B` with `C`, why do I end up with duplicate rows? I have run into this problem on several occasions and I do not understand it. It seems like it should produce the same result as `JOIN`ing `A` and `B` since it has the same number of rows but, instead, duplicates are produced. Queries that produce results like this are of the format ``` SELECT * FROM M INNER JOIN S on M.mIndex = S.mIndex INNER JOIN D ON M.platformId LIKE '%' + D.version + '%' INNER JOIN H ON D.Name = H.Name AND D.revision = H.revision ``` Here are schemas for the tables. H is a historic table containing everything that was ever in D. There are many M rows for each D and one S for each M. Table M ``` [mIndex] [int] NOT NULL PRIMARY KEY, [platformId] [nvarchar](256) NULL, [ip] [nvarchar](64) NULL, [complete] [bit] NOT NULL, [date] [datetime] NOT NULL, [DeployId] [int] NOT NULL PRIMARY KEY REFERENCES D.DeployId, [source] [nvarchar](64) NOT NULL PRIMARY KEY ``` Table S ``` [order] [int] NOT NULL PRIMARY KEY, [name] [nvarchar](64) NOT NULL, [parameters] [nvarchar](256) NOT NULL, [Finished] [bit] NOT NULL, [mIndex] [int] NOT NULL PRIMARY KEY, [mDeployId] [int] NOT NULL PRIMARY KEY, [Date] [datetime] NULL, [status] [nvarchar](10) NULL, [output] [nvarchar](max) NULL, [config] [nvarchar](64) NOT NULL PRIMARY KEY ``` Table D ``` [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY, [branch] [nvarchar](64) NOT NULL, [revision] [int] NOT NULL, [version] [nvarchar](64) NOT NULL, [path] [nvarchar](256) NOT NULL ``` Table H ``` [IdDeploy] [int] IDENTITY(1,1) NOT NULL, [name] [nvarchar](64) NOT NULL, [version] [nvarchar](64) NOT NULL, [path] [nvarchar](max) NOT NULL, [StartDate] [datetime] NOT NULL, 
[EndDate] [datetime] NULL, [Revision] [nvarchar](64) NULL, ``` I didn't post the tables and query initially because I am more interested in understanding this problem for myself and avoiding it in the future.
If one of the tables `M`, `S`, `D`, or `H` has more than one row for a given `Id` (i.e. if the `Id` column alone is not the primary key), then the query would result in "duplicate" rows. If you have more than one row for an `Id` in a table, then the other columns, which would uniquely identify a row, must also be included in the JOIN condition(s). **References**: [Related Question on MSDN Forum](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/48bb4a3b-5c71-4e33-913b-7f577754f870/when-join-two-tables-with-composite-primary-keys-what-will-be-best-performance-use-some-fields-or?forum=transactsql)
When you have related tables you often have one-to-many or many-to-many relationships. So when you join to TableB, each record in TableA may have multiple records in TableB. This is normal and expected. Now, at times you only need certain columns, and those are all the same for all the records; then you would need to do some sort of group by or distinct to remove the duplicates. Let's look at an example: ``` TableA Id Field1 1 test 2 another test TableB ID Field2 field3 1 Test1 something 1 test1 More something 2 Test2 Anything ``` So when you join them and select all the fields you get: ``` select * from tableA a join tableb b on a.id = b.id a.Id a.Field1 b.id b.field2 b.field3 1 test 1 Test1 something 1 test 1 Test1 More something 2 another test 2 Test2 Anything ``` These are not duplicates because the values of Field3 are different even though there are repeated values in the earlier fields. Now when you only select certain columns, the same number of records are being joined together, but since the columns with the differing information are not being displayed they look like duplicates. ``` select a.Id, a.Field1, b.field2 from tableA a join tableb b on a.id = b.id a.Id a.Field1 b.field2 1 test Test1 1 test Test1 2 another test Test2 ``` These appear to be duplicates, but they are not, because of the multiple records in TableB. You normally fix this by using aggregates and group by, by using distinct, or by filtering in the where clause to remove duplicates. How you solve this depends on exactly what your business rule is, how your database is designed, and what kind of data is in there.
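The fan-out described above can be reproduced with a tiny SQLite example via Python's `sqlite3` (table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE a (id INT, f1 TEXT);
CREATE TABLE b (id INT, f2 TEXT, f3 TEXT);
INSERT INTO a VALUES (1, 'test'), (2, 'another test');
INSERT INTO b VALUES (1, 'Test1', 'something'),
                     (1, 'Test1', 'More something'),
                     (2, 'Test2', 'Anything');
""")
# Selecting only the repeated columns makes the fan-out look like duplicates.
dup = conn.execute(
    "SELECT a.id, a.f1, b.f2 FROM a JOIN b ON a.id = b.id").fetchall()
print(len(dup))    # 3 rows: id 1 appears twice
dedup = conn.execute(
    "SELECT DISTINCT a.id, a.f1, b.f2 FROM a JOIN b ON a.id = b.id").fetchall()
print(len(dedup))  # 2 rows once DISTINCT removes the repeats
```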
Why do multiple-table joins produce duplicate rows?
[ "", "sql", "join", "" ]
I'm not sure if I'm doing the right thing with this field: **calendar.type\_event = NULL**. It currently works for me because I need to differentiate the second SELECT as a third type. I want to do something like **calendar.type\_event = 'shared\_event'** to differentiate it, but that returns a 0, while with NULL it returns nothing. That is not the main problem, though; I just need to know if I can assign my own value: 'shared\_event'. Thanks a lot. The entire query is: ``` (SELECT id, type_event FROM calendar WHERE user_id = '.$user_id.' ) UNION (SELECT calendar.id, calendar.type_event = NULL FROM calendar RIGHT JOIN avisos ON avisos.app_id = calendar.id WHERE avisos.user_destiny_id = '.$user_id.' ) ORDER BY fecha_evento ASC ```
> *I need to know if I can assign my own value: `'shared_event'`.* Yes, you can. In place of column `calendar.type_event` just use literal `'shared_event'` **Example**: ``` SELECT calendar.id, 'shared_event' FROM calendar RIGHT JOIN avisos ON avisos.app_id = calendar.id WHERE avisos.user_destiny_id = '.$user_id.' ```
You should be able to do something like this: ``` (SELECT id, type_event FROM calendar WHERE user_id = '.$user_id.' ) UNION (SELECT calendar.id, "shared_event" FROM calendar RIGHT JOIN avisos ON avisos.app_id = calendar.id WHERE avisos.user_destiny_id = '.$user_id.' ) ORDER BY fecha_evento ASC ```
Assign a value to a field on SELECT statement
[ "", "mysql", "sql", "" ]
I am writing a stored procedure to create a player: ``` CREATE PROCEDURE `create_player` ( firstName TEXT CHARACTER SET utf8, lastName TEXT CHARACTER SET utf8, username TEXT CHARACTER SET utf8, password TEXT CHARACTER SET utf8, email TEXT CHARACTER SET utf8, gender ENUM('m','f'), avatar INTEGER, OUT result INTEGER) BEGIN DECLARE insertVal INTEGER DEFAULT 0; INSERT INTO `Players` (`PlayerFirstName`, `PlayerLastName`, `PlayerName`, `PlayerPassword`, `PlayerEmail`, `PlayerGender`, `PlayerAvatar`, `PlayerJoinDate`) VALUES ( firstName,lastName,username,player_hash_password(password),email,gender,avatar,NOW()); END ``` Say I wanted to let the user know if this insert succeeded or not, how can I find out if it succeeded or not (number of rows affected). I tried to set an integer variable = to the insert statement but that did not work. What is the typical way of error checking for stored procedures?
As per MySQL documentation on [*`ROW_COUNT()`*](http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_row-count), > *`ROW_COUNT()` returns the number of rows changed, deleted, or inserted by the last statement if it was an `UPDATE`, `DELETE`, or `INSERT`. For other statements, the value may not be meaningful* After `insert` statement you can read `row_count()` into `OUT` parameter `result`. Use the same `result` value in the calling program to let the user know the number of rows affected. **Change your procedure as follows**: ``` DROP PROCEDURE IF EXISTS `create_player`; delimiter // CREATE PROCEDURE `create_player` ( firstName TEXT CHARACTER SET utf8, lastName TEXT CHARACTER SET utf8, username TEXT CHARACTER SET utf8, password TEXT CHARACTER SET utf8, email TEXT CHARACTER SET utf8, gender ENUM('m','f'), avatar INTEGER, OUT result INTEGER ) BEGIN DECLARE insertVal INTEGER DEFAULT 0; INSERT INTO `Players`( `PlayerFirstName`, `PlayerLastName`, `PlayerName`, `PlayerPassword`, `PlayerEmail`, `PlayerGender`, `PlayerAvatar`, `PlayerJoinDate`) VALUES ( firstName, lastName, username, player_hash_password( password ), email, gender, avatar, NOW() ); SELECT ROW_COUNT() INTO result; END; // delimiter ; ```
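The rows-affected idea is not specific to MySQL stored procedures. As a rough illustration only, Python's `sqlite3` DB-API exposes the same information through `cursor.rowcount` after an INSERT (table and values here are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE players (name TEXT)")
cur = conn.execute("INSERT INTO players VALUES ('alice')")
print(cur.rowcount)  # 1, the number of rows affected by the INSERT
```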
When you insert data through the `ExecuteNonQuery()` method, it returns the number of rows affected in the database as an integer. For example: ``` int i = cmd.ExecuteNonQuery(); if (i > 0) { msg = "Inserted Successfully"; } else { msg = "Not Inserted"; } ```
How to know if insert query succeeded in stored proceedure?
[ "", "mysql", "sql", "stored-procedures", "" ]
I'm trying to get a report of Work Items which their Due Date is postponed. I have access to Tfs\_Warehouse database which has a table named DimWorkItem. In this table TFS keeps all the history of every item. So my data is like this; ``` System_Id | DueDate ------------------------------- 6130 | 2014-02-13 00:00:00.000 6130 | 2014-02-13 00:00:00.000 6130 | 2014-03-06 00:00:00.000 6130 | 2014-03-11 00:00:00.000 6130 | 2014-03-11 00:00:00.000 6130 | 2014-03-21 00:00:00.000 6131 | 2014-03-11 00:00:00.000 6131 | 2014-03-11 00:00:00.000 6131 | 2014-03-11 00:00:00.000 ``` I need to write such a query that, query should return the System\_Id's which their `DueDate` has been postponed to further date. So query should return only System\_Id = 6130 Thank you in advance,
Use this: ``` SELECT DISTINCT System_Id FROM ( SELECT System_Id, DueDate, RANK() OVER (PARTITION BY System_Id ORDER BY DueDate) AS Ranking FROM TFS_Warehouse ) Z WHERE Ranking > 1 ``` Results: ``` 6130 ```
If I'm understanding correctly, another way of stating the problem is finding the System\_ids that have at least two different DueDates associated with them. If that's the case, try this: ``` select w.System_id, count(w.DueDate) DateCount from Tfs_Warehouse w group by w.System_id having COUNT(distinct w.DueDate) > 1 ```
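A runnable sketch of this `HAVING COUNT(DISTINCT ...)` approach, using SQLite via Python's `sqlite3` purely for illustration (dates stored as text, sample rows trimmed from the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE w (System_Id INT, DueDate TEXT);
INSERT INTO w VALUES
  (6130, '2014-02-13'), (6130, '2014-02-13'), (6130, '2014-03-06'),
  (6131, '2014-03-11'), (6131, '2014-03-11');
""")
ids = conn.execute("""
    SELECT System_Id
    FROM w
    GROUP BY System_Id
    HAVING COUNT(DISTINCT DueDate) > 1
""").fetchall()
print(ids)  # only 6130 has more than one distinct DueDate
```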
How to select changed modified date of an item in t-sql
[ "", "sql", "sql-server", "t-sql", "tfs", "" ]
I have two tables `th_Therapy_Note` and `th_Approved`. When a note in `th_Therapy_Note` gets approved, the application inserts a record to `th_Approved`. A note can get rejected for several reasons after it has been approved (don't ask me why, as I did not design this app, lol). So if a note is rejected after being approved, another entry to `th_Approved` is inserted. `th_Approved.th_approved_isApproved` is a boolean (bit) column, so depending on the status, the note's entry in this table for this column is true or false. So multiple lines for the same note can exist in `th_Approved` with different `th_Approved.th_approved_isApproved` status, the last entry being the most recent one and correct status. The main purpose of the below query is to select notes that are ready to be 'finalized'. The issue with the below query is in the last `inner join` filter '`AND th_Approved.th_approved_isApproved = 1`' This is selecting notes that effectively have been approved, meaning they should have an entry in `th_Approved` and `th_Approved.th_approved_isApproved` is true. This works perfectly for notes with single entries in `th_Approved`, but notes with multiple entries in `th_Approved` (as explained above) represent an issue if the last entry for that particular note is false. The query will still pick it up because there is at least one entry with `th_Approved.th_approved_isApproved` as true, even when the last (correct) status is false. I need to only look at this last entry to be able to determine the correct status for a note and select it or not depending on the status. The last part of the query (`and th_Therapy_Note.th_note_id=16239`) is just for my testing as this note has multiple entries, but the final query will not have this. How can I solve my issue? 
I have been looking at several strategies with no luck.....Hopefully I made sense :) thanks ``` SELECT Distinct Convert(varchar,th_Therapy_Note.th_note_id) as NOTEID, '054' as PROGCODE, Rtrim(ch.child_caseNumber) as CASEID, Case th_TherapyType.shortname when 'ST' then 'SP' else rtrim(th_TherapyType.shortname) end as SERVTYPE, Convert(varchar,th_Therapy_Note.th_note_dateofservice,101) as DELSERVDATE, Cast(((Select sum(th_TherapyServiceProvided.units) From th_TherapyServiceProvided where th_DirectServices.th_ds_id = th_TherapyServiceProvided.th_ds_id)/60) as varchar) as SERVHRS, Cast(((Select sum(th_TherapyServiceProvided.units) From th_TherapyServiceProvided where th_DirectServices.th_ds_id = th_TherapyServiceProvided.th_ds_id)%60) as varchar) as SERVMIN, '1' as METHOD, isnull(th_Users.trad_id, ' ') as SPROVNUM, th_Users.th_user_lname, '' as COVISIT FROM th_Therapy_Note INNER JOIN child_tbl AS ch ON th_Therapy_Note.child_id = ch.child_recordId INNER JOIN th_DirectServices ON th_Therapy_Note.th_note_id = th_DirectServices.th_note_id INNER JOIN LookUp_contactType ON th_Therapy_Note.contact_type_id = LookUp_contactType.recId INNER JOIN th_Users ON th_Therapy_Note.service_coordinator = th_Users.th_user_email INNER JOIN th_TherapyType ON th_Therapy_Note.therapy_type = th_TherapyType.id INNER JOIN th_Approved ON th_Therapy_Note.th_note_id = th_Approved.th_note_id AND th_Approved.th_approved_isApproved = 1 WHERE (ch.child_recordId = (SELECT MAX(child_recordId) AS Expr1 FROM child_tbl WHERE (child_caseNumber = ch.child_caseNumber))) and th_Therapy_Note.th_note_dateofservice > '4/22/2014' and th_Therapy_Note.th_note_id=16239 ```
Since you have a sequential ID on th\_Approved, then I'd use that. Integer comparison on id is perfect. Date/Datetime comparison can sometimes add problems. So I'd try this: ``` SELECT Distinct Convert(varchar,th_Therapy_Note.th_note_id) as NOTEID, '054' as PROGCODE, Rtrim(ch.child_caseNumber) as CASEID, Case th_TherapyType.shortname when 'ST' then 'SP' else rtrim(th_TherapyType.shortname) end as SERVTYPE, Convert(varchar,th_Therapy_Note.th_note_dateofservice,101) as DELSERVDATE, Cast((( Select sum(th_TherapyServiceProvided.units) From th_TherapyServiceProvided where th_DirectServices.th_ds_id = th_TherapyServiceProvided.th_ds_id)/60) as varchar ) as SERVHRS, Cast((( Select sum(th_TherapyServiceProvided.units) From th_TherapyServiceProvided where th_DirectServices.th_ds_id = th_TherapyServiceProvided.th_ds_id)%60) as varchar ) as SERVMIN, '1' as METHOD, isnull(th_Users.trad_id, ' ') as SPROVNUM, th_Users.th_user_lname, '' as COVISIT FROM th_Therapy_Note INNER JOIN child_tbl AS ch ON th_Therapy_Note.child_id = ch.child_recordId INNER JOIN th_DirectServices ON th_Therapy_Note.th_note_id = th_DirectServices.th_note_id INNER JOIN LookUp_contactType ON th_Therapy_Note.contact_type_id = LookUp_contactType.recId INNER JOIN th_Users ON th_Therapy_Note.service_coordinator = th_Users.th_user_email INNER JOIN th_TherapyType ON th_Therapy_Note.therapy_type = th_TherapyType.id INNER JOIN th_Approved ON th_Approved.th_approved_id=( SELECT MAX(th_approved_id) FROM th_Approved WHERE th_Therapy_Note.th_note_id = th_Approved.th_note_id) WHERE ch.child_recordId = ( SELECT MAX(child_recordId) FROM child_tbl WHERE child_caseNumber = ch.child_caseNumber) AND th_Therapy_Note.th_note_dateofservice > '4/22/2014' AND th_Approved.th_approved_isApproved = 1 AND th_Therapy_Note.th_note_id=16239 ```
You can use a "MAX" trick (or "MIN" or similar). On a date or unique column is typical. Here is a generic Northwind example that uses the MAX(OrderDate) (where a customer has more than one order). The logic below falls apart if there are 2 orders with the same order-date, and those dates are the "max" date. So a unique identifier that is orderable is preferred) ``` Use Northwind GO Select cust.* , ords.* from dbo.Customers cust LEFT OUTER JOIN dbo.Orders ords ON ( ords.CustomerID = cust.CustomerID AND ords.OrderDate = (SELECT MAX(OrderDate) FROM dbo.Orders innerords WHERE innerords.CustomerID = cust.CustomerID ) ) where cust.CustomerID = 'ALFKI' ```
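The MAX trick can also be demonstrated on a toy version of the approval table from the question, using SQLite via Python's `sqlite3` (schema simplified, column names are mine):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE approved (id INTEGER PRIMARY KEY, note_id INT, is_approved INT);
INSERT INTO approved (note_id, is_approved) VALUES
  (16239, 1),  -- approved first...
  (16239, 0);  -- ...then rejected: the later row is the true status
""")
rows = conn.execute("""
    SELECT a.note_id, a.is_approved
    FROM approved a
    WHERE a.id = (SELECT MAX(id) FROM approved WHERE note_id = a.note_id)
""").fetchall()
print(rows)  # latest status per note: [(16239, 0)]
```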
How to select a single row where multiple rows exist from a table
[ "", "sql", "sql-server", "t-sql", "select", "" ]
``` Stocks Id size qty 100 90 80 123 180 100 100 90 100 100 180 10 Prices Id size price priceDt 100 90 100 2014-05-10 19:00:00 123 180 150 2014-05-22 19:10:19 100 180 180 2014-05-20 19:10:19 100 90 120 2014-05-22 19:10:19 Sales Id size qtySold 100 90 15 100 90 5 123 180 10 ``` Now I need to retrieve the LATEST price and the quantity available (Sum(Stocks.qty) - Sum(Sales.qtySold)) per Id & size. So for Id = '100' & size = '90' the current price is 120 and the available quantity is 160.
Sub query to get the latest price date, and join to prices:- ``` SELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - sales.qtySold FROM stocks INNER JOIN ( SELECT id, size, MAX(priceDT) AS MaxPriceDate FROM prices GROUP BY id, size ) Sub1 ON stocks.id = Sub1.id AND stocks.size = Sub1.size INNER JOIN prices ON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT INNER JOIN sales ON stocks.id = sales.id AND stocks.size = sales.size GROUP BY stocks.id, stocks.size ``` My concern is that sales has multiple rows for each id / size EDIT - to cope with multiple rows on sales for an id / size using an additional subquery:- ``` SELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - Sub2.tot_qtySold FROM stocks INNER JOIN ( SELECT id, size, MAX(priceDT) AS MaxPriceDate FROM prices GROUP BY id, size ) Sub1 ON stocks.id = Sub1.id AND stocks.size = Sub1.size INNER JOIN prices ON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT INNER JOIN ( SELECT id, size, SUM(qtySold) AS tot_qtySold FROM sales GROUP BY id, size ) Sub2 ON stocks.id = Sub2.id AND stocks.size = Sub2.size GROUP BY stocks.id, stocks.size ``` ON sqlfiddle:- <http://www.sqlfiddle.com/#!2/f7d37/2> EDIT - in answer to a question posted in the comment:- The reason for this is that there are 2 matching records on the stocks table. So for brandid 100 and size of 90 there are these 2 records from stocks:- ``` brandId size qtyArr (100 , 90 , 10), (100 , 90 , 100), ``` and this one from sales:- ``` brandId size qtySold (100, 90, 35), ``` So MySQL will initially build up a table containing a set of 2 rows. The first row will contain the first row from stocks and the only matching row from sales. The 2nd row will have the 2nd row from stocks and (again) the matching row from sales. 
``` brandId size qtyArr brandId size qtySold (100, 90, 10, 100, 90, 35), (100, 90, 100, 100, 90, 35), ``` It then performs the SUM of qtySold, but the quantities are counted twice (ie, once for each match records on stocks). To get around this will likely need a sub query to get the total qtysold for each brand / size, then join the results of that sub query against the stocks table ``` SELECT SUM(s.qtyArr), SUM(l.qtySold) FROM stocks s INNER join ( SELECT brandId, size, sum(l.qtySold) FROM sales GROUP BY brandId, size ) l ON l.brandId = s.brandId AND l.size = s.size WHERE s.brandId='100' AND s.size='90'; ```
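The double-counting effect and the pre-aggregation fix can be reproduced with a small SQLite example through Python's `sqlite3` (column names simplified; numbers taken from the question's Id 100 / size 90 case):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE stocks (id INT, size INT, qty INT);
CREATE TABLE sales  (id INT, size INT, qty_sold INT);
INSERT INTO stocks VALUES (100, 90, 80), (100, 90, 100);
INSERT INTO sales  VALUES (100, 90, 15), (100, 90, 5);
""")
# Naive join: 2 stock rows x 2 sale rows = 4 joined rows, so both sums inflate.
naive = conn.execute("""
    SELECT SUM(st.qty), SUM(sa.qty_sold)
    FROM stocks st JOIN sales sa ON st.id = sa.id AND st.size = sa.size
""").fetchone()
print(naive)  # (360, 40) instead of (180, 20)
# Pre-aggregating sales in a subquery joins exactly one row per (id, size).
fixed = conn.execute("""
    SELECT SUM(st.qty), MAX(s.tot)
    FROM stocks st
    JOIN (SELECT id, size, SUM(qty_sold) AS tot
          FROM sales GROUP BY id, size) s
      ON st.id = s.id AND st.size = s.size
""").fetchone()
print(fixed, 'available:', fixed[0] - fixed[1])  # (180, 20) available: 160
```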
I think this is what you require ``` SELECT b.id, sum(b.qty), sum(b.qty)-s.qtysold, max(a.price ) FROM stocks b INNER JOIN (select * from prices where pricedt in( select max(priceDT) as priceDt from prices group by id, size))a ON (a.Id = b.Id and b.size=a.size) inner join sales s on (b.id=s.id and b.size=s.size) group by b.id, b.size ORDER BY a.priceDt ``` [Fiddle](http://sqlfiddle.com/#!2/ee892/14)
SQL selecting a column, SUM and ORDER BY using three tables
[ "", "mysql", "sql", "" ]
I have a primary table FXTB which contains columns; currency, rate, date(today's date only). I want to generate ONE! SQL to select from table FXTB but if I provide it an older date (which would return nothing from FXTB, it will select from another table called FXTBHistory with the same column names but containing data for different dates. Please let me know if it is not clear enough.
To complement the approaches using `union`, in this constellation a `LEFT OUTER JOIN` is also possible. The `ifnull` will return the `FXTB` values, and only if they are `null`, the `FXTBHistory` values will be returned: ``` SELECT ifnull(FXTB.currency, FXTBHistory.currency) as currency, ifnull(FXTB.rate, FXTBHistory.rate) as rate, ifnull(FXTB.date, FXTBHistory.date) as date FROM FXTBHistory LEFT OUTER JOIN FXTB ON FXTBHistory.currency = FXTB.currency AND FXTBHistory.date = FXTB.date WHERE ifnull(FXTB.date, FXTBHistory.date) = @date UNION SELECT ifnull(FXTB.currency, FXTBHistory.currency) as currency, ifnull(FXTB.rate, FXTBHistory.rate) as rate, ifnull(FXTB.date, FXTBHistory.date) as date FROM FXTBHistory RIGHT OUTER JOIN FXTB ON FXTBHistory.currency = FXTB.currency AND FXTBHistory.date = FXTB.date WHERE ifnull(FXTB.date, FXTBHistory.date) = @date; ``` See <http://sqlfiddle.com/#!2/c97e23/6> to see it in action.
Do a UNION ``` SELECT currency, rate, date FROM FXTB WHERE date = @date UNION SELECT currency, rate, date FROM FXTBHistory WHERE date = @date ``` If the first result is empty - the second will provide the result. If for some reason there are duplicate entries between the 2 results - UNION will eliminate the duplicates. In case you receive not-quite duplicates (different rate for the same date, according to your comments), but you're interested in getting one record only - you can try something like this (not tested) ``` WITH FXTBAll AS (SELECT currency, rate, date, ROW_NUMBER() OVER (PARTITION BY currency, date ORDER BY currency, date) RN FROM (SELECT currency, rate, date FROM FXTB WHERE date = @date UNION SELECT currency, rate, date FROM FXTBHistory WHERE date = @date) T ) SELECT currency, rate, date FROM FXTBAll WHERE RN = 1 ``` It still uses the same UNION, but builds a CTE on top of it and, using ROW_NUMBER() for partitioning, selects the 1st record for every group
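A quick runnable check of the UNION fallback, using SQLite via Python's `sqlite3` (parameter placeholders stand in for `@date`; table and sample values are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE fxtb        (currency TEXT, rate REAL, dt TEXT);
CREATE TABLE fxtbhistory (currency TEXT, rate REAL, dt TEXT);
INSERT INTO fxtbhistory VALUES ('EUR', 1.30, '2014-05-01');
""")
q = """
SELECT currency, rate, dt FROM fxtb        WHERE dt = ?
UNION
SELECT currency, rate, dt FROM fxtbhistory WHERE dt = ?
"""
rows = conn.execute(q, ('2014-05-01', '2014-05-01')).fetchall()
print(rows)  # fxtb has no row for that date, so the history row is returned
```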
SQL - SELECT from table B only if table A is empty
[ "", "sql", "sql-server", "select", "" ]
If i have 2 tables say TABLE\_1 ``` EMP_ID EMP_NAME EMP_COUNTRY 100 John Russia 101 Mitchell UK 102 Sarah Japan ``` TABLE\_2 ``` EMP_ID EMP_NAME EMP_COUNTRY 200 Sunil India 201 Clanton Germany 202 XYZ Australia ``` I want to check whether EMP\_ID exists in [table\_1 OR table\_2] if it exists in one of the tables then based on that set some flag, How to check this.
``` select count(*) from (select emp_id from table_1 union select emp_id from table_2) t where t.emp_id = <id_value> ```
You can also try: ``` SELECT DECODE(( SELECT SUM(CNT) FROM (SELECT COUNT(1) CNT FROM TABLE1 WHERE EMP_ID = yr_emp_id UNION SELECT COUNT(1) CNT FROM TABLE2 WHERE EMP_ID = yr_emp_id)), 0,'FALSE','TRUE') FROM DUAL; ```
How to check if a value exists in multiple tables
[ "", "sql", "database", "oracle", "" ]
I need to get the latest price of an item (as part of a larger select statement) and I can't quite figure it out. Table: ``` ITEMID DATE SALEPRICE 1 1/1/2014 10 1 2/2/2014 20 2 3/3/2014 15 2 4/4/2014 13 ``` I need the output of the select to be '20' when looking for item 1 and '13' when looking for item 2 as per the above example. I am using Oracle SQL
The most readable/understandable SQL (in my opinion) would be this: ``` select salesprice from `table` t where t.date = ( select max(date) from `table` t2 where t2.itemid = t.itemid ) and t.itemid = 1 -- change item id here; ``` assuming your table's name is `table` and you only have one price per day and item (else the where condition would match more than one row per item). Alternatively, the subselect could be written as a self-join (should not make a difference in performance). I'm not sure about the `OVER`/`PARTITION` used by the other answers. Maybe they could be optimized to better performance depending on the DBMS.
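A runnable version of this correlated-subquery approach, using SQLite via Python's `sqlite3` and the sample data from the question (dates stored as ISO strings so `MAX` compares them correctly; the table is renamed `prices` since `table` is a keyword):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE prices (itemid INT, dt TEXT, saleprice INT);
INSERT INTO prices VALUES
  (1, '2014-01-01', 10), (1, '2014-02-02', 20),
  (2, '2014-03-03', 15), (2, '2014-04-04', 13);
""")
q = """
SELECT saleprice FROM prices t
WHERE t.dt = (SELECT MAX(dt) FROM prices t2 WHERE t2.itemid = t.itemid)
  AND t.itemid = ?
"""
print(conn.execute(q, (1,)).fetchone()[0])  # 20
print(conn.execute(q, (2,)).fetchone()[0])  # 13
```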
Try this! It works in `sql-server` and may also work in `Oracle sql` ``` select * from ( select *, rn = row_number() over (partition by ITEMID order by DATE desc) from table ) x where x.rn = 1 ``` You need Row\_number() to allocate a `number` to all records, partitioned by ITEMID so each group gets its own RN; ordering by `date` desc puts the latest record first. # [SEE DEMO](http://www.sqlfiddle.com/#!6/d41d8/17723)
Get a single value where the latest date
[ "", "sql", "oracle", "" ]
I am getting always 0 when using `@@ROWCOUNT` after IF EXISTS, **why**? Here is my code: ``` IF EXISTS (SELECT TOP 1 1 FROM MyTable) --Returns one row. SELECT @@ROWCOUNT; --Returns always 0 ``` I know that the statement doesn't logical right now, because I expect it to print always 1, so I can also `SELECT 1`, but it is just for demonstration of the problem. The problem it doesn't print 1, it always prints 0.
``` SELECT TOP 1 1 FROM Provider.Site SELECT @@ROWCOUNT; --Returns 1 IF EXISTS (SELECT TOP 1 1 FROM Provider.Site) --Returns one row. SELECT @@ROWCOUNT; --Returns always 0 ``` Here the first `@@ROWCOUNT` returns 1 and the second returns 0. EXISTS returns only `true` or `false`; no rows are affected because it doesn't actually select any records, it only checks for existence. Since `@@ROWCOUNT` returns the number of rows affected by the last statement, in the second case that will be 0.
@@ROWCOUNT defaults to 0, so SELECT @@ROWCOUNT on its own always returns 0. A SELECT inside the IF block would change @@ROWCOUNT to 1, but the IF EXISTS condition itself resets @@ROWCOUNT to 0 again, and that is the reason you always get 0: the effect of the inner SELECT is no longer in scope.
Getting always 0 rows when using @@ROWCOUNT before IF EXISTS statement
[ "", "sql", "sql-server", "" ]
This is working fine.. ``` SELECT CM.CMN_CODE,CM.CMN_NAME --(SELECT CMPI_PRCINX FROM CMD_MTRL_PRICE_INF WHERE ROWNUM = 1 ORDER BY CMPI_PRCINX DESC) as k FROM CMN_MST CM LEFT JOIN CMD_MTRL_INF CMI ON CM.CMN_CODE=CMI.CMN_CDTY_MTRL LEFT JOIN CMD_MTRL_PRICE_INF CMPI ON CMI.CMI_CODE = CMPI.CMI_CODE WHERE CM.CTYP_CODE IN (SELECT CTYP_CODE FROM CTYP_MST WHERE CTYP_NAME = 'Common Material'); ``` But this piece is causing a problem for me. ``` SELECT CM.CMN_CODE,CM.CMN_NAME, (SELECT CMPI_PRCINX FROM CMD_MTRL_PRICE_INF WHERE ROWNUM = 1 ORDER BY CMPI_PRCINX DESC) AS K FROM CMN_MST CM LEFT JOIN CMD_MTRL_INF CMI ON CM.CMN_CODE=CMI.CMN_CDTY_MTRL LEFT JOIN CMD_MTRL_PRICE_INF CMPI ON CMI.CMI_CODE = CMPI.CMI_CODE WHERE CM.CTYP_CODE IN (SELECT CTYP_CODE FROM CTYP_MST WHERE CTYP_NAME = 'Common Material'); ``` `SQL Error: ORA-00907: missing right parenthesis 00907. 00000 - "missing right parenthesis"` I also tried ``` DECLARE PCMPI_PRCINX NUMERIC; BEGIN SELECT CMPI_PRCINX INTO PCMPI_PRCINX FROM CMD_MTRL_PRICE_INF WHERE ROWNUM = 1 ORDER BY CMPI_PRCINX DESC; --SELECT CMPI_PRCINX FROM CMD_MTRL_PRICE_INF WHERE ROWNUM = 1 ORDER BY CMPI_PRCINX DESC; SELECT CM.CMN_CODE,CM.CMN_NAME --(SELECT CMPI_PRCINX FROM CMD_MTRL_PRICE_INF WHERE ROWNUM = 1 ORDER BY CMPI_PRCINX DESC) AS K, PCMPI_PRCINX AS PREVPRICE, CMPI.CMPI_UPDATE_STS FROM CMN_MST CM LEFT JOIN CMD_MTRL_INF CMI ON CM.CMN_CODE=CMI.CMN_CDTY_MTRL LEFT JOIN CMD_MTRL_PRICE_INF CMPI ON CMI.CMI_CODE = CMPI.CMI_CODE WHERE CM.CTYP_CODE IN (SELECT CTYP_CODE FROM CTYP_MST WHERE CTYP_NAME = 'Common Material'); END; / ```
I think you need to rethink this part of your query: ``` ,(select cmpi_prcinx from cmd_mtrl_price_inf where rownum = 1 order by cmpi_prcinx desc) as k ``` This will return the first row from cmd_mtrl_price_inf and only then apply the order. So it will not return the largest value of cmpi_prcinx as you may expect. You can try this: ``` ,(select max(cmpi_prcinx) from cmd_mtrl_price_inf) as k ```
Try ``` SELECT CM.CMN_CODE,CM.CMN_NAME, (SELECT CMPI_PRCINX FROM CMD_MTRL_PRICE_INF WHERE ROWNUM = '1' ORDER BY CMPI_PRCINX DESC) AS K FROM CMN_MST CM LEFT JOIN CMD_MTRL_INF CMI ON CM.CMN_CODE=CMI.CMN_CDTY_MTRL LEFT JOIN CMD_MTRL_PRICE_INF CMPI ON CMI.CMI_CODE = CMPI.CMI_CODE WHERE CM.CTYP_CODE IN (SELECT CTYP_CODE FROM CTYP_MST WHERE CTYP_NAME = 'Common Material'); ``` Where ROWNUM = 1 has been changed to ROWNUM = '1'
select in select showing error - missing right parenthesis
[ "", "sql", "oracle", "" ]
Just a simple question really .. Imagine that you have a table with data and you need to update something in it once a month and then save it. After that your DB should forbid ANY additional changes to the affected rows but keep them. Presumably forever and without deleting those rows. How would I implement this in a good way ? Thanks !
One more solution: [read only filegroups](http://msdn.microsoft.com/en-us/library/bb522469.aspx): ``` ALTER DATABASE Test MODIFY FILEGROUP [FG2] READ_ONLY; ``` Example: ``` CREATE DATABASE Test; GO ALTER DATABASE Test ADD FILEGROUP FG2 GO ALTER DATABASE Test ADD FILE ( NAME = 'SecondFile', FILENAME = 'D:\BD\SecondFile.ndf' ) TO FILEGROUP FG2; GO CREATE TABLE dbo.MyTable2 (Col1 INT) ON FG2; INSERT dbo.MyTable2 VALUES (11); GO ALTER DATABASE Test MODIFY FILEGROUP [FG2] READ_ONLY; GO INSERT dbo.MyTable2 VALUES (22); GO /* Msg 652, Level 16, State 1, Line 1 The index "" for table "dbo.MyTable2" (RowsetId 72057594060210176) resides on a read-only filegroup ("FG2"), which cannot be modified. */ ```
revoke update/insert/delete on the table from all normal users
Make rows unchangeable in sql server
[ "", "sql", "sql-server-2008", "rows", "" ]
Below is my current SELECT CASE statement: ``` SELECT CASE WHEN edition = 'STAN' AND has9 = 1 THEN '9' WHEN edition = 'STAN' AND has8 = 1 THEN '8' WHEN edition = 'STAN' AND has7 = 1 THEN '7' WHEN edition = 'STAN' AND hasOLD = 1 THEN 'OLD' WHEN edition = 'SUI' AND has_s9 = 1 THEN 'S9' WHEN edition = 'SUI' AND has_s8 = 1 THEN 'S8' ELSE 'S7' END AS version ``` I do not want to keep repeating the edition = 'xxx' condition, such as ``` CASE WHEN edition = 'STAN' AND has9 = 1 THEN '9' ELSE WHEN has8 = 1 THEN '8' ELSE WHEN has7 = '7' ELSE WHEN edition 'SUI' AND has_s9 = 1 THEN 'S9' ELSE ... ``` In Excel this is fairly easy, but how can I write that in PostgreSQL?
Try this ``` SELECT CASE WHEN edition = 'STAN' THEN CASE WHEN has9 = 1 THEN '9' WHEN has8 = 1 THEN '8' WHEN has7 = 1 THEN '7' WHEN hasOLD = 1 THEN 'OLD' END WHEN edition = 'SUI' THEN CASE WHEN has_s9 = 1 THEN 'S9' WHEN has_s8 = 1 THEN 'S8' END ELSE 'S7' END AS version ``` One caveat: a row that falls through an inner `CASE` (for example edition = 'SUI' with neither flag set) yields `NULL` rather than 'S7', which differs slightly from your flat version where the final `ELSE` caught those rows.
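The nested-CASE approach can be sanity-checked with a minimal sqlite3 sketch (the flag columns are the question's, the data is invented); note how a row that falls through an inner CASE yields NULL instead of 'S7':

```python
import sqlite3

# Tiny table mirroring the question's flag columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ed (edition TEXT, has9 INT, has8 INT, has7 INT,
                                 hasOLD INT, has_s9 INT, has_s8 INT)""")
rows = [
    ("STAN", 0, 1, 0, 0, 0, 0),  # expect '8'
    ("STAN", 0, 0, 0, 1, 0, 0),  # expect 'OLD'
    ("SUI",  0, 0, 0, 0, 1, 0),  # expect 'S9'
    ("SUI",  0, 0, 0, 0, 0, 0),  # inner CASE has no match: NULL
]
conn.executemany("INSERT INTO ed VALUES (?,?,?,?,?,?,?)", rows)
versions = [r[0] for r in conn.execute("""
    SELECT CASE edition
             WHEN 'STAN' THEN CASE WHEN has9 = 1 THEN '9'
                                   WHEN has8 = 1 THEN '8'
                                   WHEN has7 = 1 THEN '7'
                                   WHEN hasOLD = 1 THEN 'OLD' END
             WHEN 'SUI'  THEN CASE WHEN has_s9 = 1 THEN 'S9'
                                   WHEN has_s8 = 1 THEN 'S8' END
             ELSE 'S7'
           END AS version
    FROM ed""")]
```

`versions` ends up as `['8', 'OLD', 'S9', None]`; the same CASE syntax works in PostgreSQL.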
You can nest your case when. By the way, when you make a case on a single field, you can do ``` case <field> when <value> when <otherValue> ``` rather then ``` case when <field> = <value> when <field> = <otherValue> ``` So ``` case edition when 'STAN' case when has9 = 1 then '9' when has8 = 1 then '8' when has7 = 1 then '7' when hasOLD = 1 then 'OLD' end when 'SUI' case when has_s9 = 1 then 'S9' when has_s8 = 1 then 'S8' end else 'S7' end as version ```
Simplify nested case when statement
[ "", "sql", "postgresql", "case-when", "subquery", "multiple-conditions", "" ]
In SSMS, when I run ``` select 1 as A, 2 as B ``` I get a table with one row and two cells. When I run ``` select avg(column1) from table1 where company like '%3m%'and Column2!=0 ``` I get a number x, and when I run ``` select avg(column3) from table1 where company like '%3m%'and Column4!=0 ``` I get another number y. I wonder how I can use a single query to put x and y together in a one-row table? Thanks for any advice!
``` SELECT (SELECT AVG(column1) FROM table1 WHERE company LIKE '%3m%'and Column2!=0) AS A, (SELECT AVG(column3) FROM table1 WHERE company LIKE '%3m%'and Column4!=0) AS B ```
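The two-scalar-subqueries pattern above can be demonstrated with a small sqlite3 sketch (invented data; the same shape works in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (company TEXT, c1 REAL, c2 INT, c3 REAL, c4 INT)")
conn.executemany("INSERT INTO t1 VALUES (?,?,?,?,?)", [
    ("3m corp", 10, 1, 100, 1),
    ("3m corp", 20, 0, 200, 1),  # c2 = 0, so excluded from column A
    ("other",   99, 1, 999, 1),  # wrong company, excluded from both
])
# Each scalar subquery carries its own WHERE clause and
# lands in its own output column of a single one-row result.
row = conn.execute("""
    SELECT (SELECT AVG(c1) FROM t1 WHERE company LIKE '%3m%' AND c2 != 0) AS A,
           (SELECT AVG(c3) FROM t1 WHERE company LIKE '%3m%' AND c4 != 0) AS B
""").fetchone()
```

With this data `row` is `(10.0, 150.0)`: one row, two independently filtered averages.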
Try this: ``` SELECT AVG(CASE WHEN Column2 !=0 THEN Column1 END) AS A, AVG(CASE WHEN Column4 !=0 THEN Column3 END) AS B FROM Table1 WHERE Company LIKE '%3m%' AND (Column2!=0 OR Column4 !=0) ```
sql join two cells without criteria
[ "", "sql", "" ]
I have a database table containing the start and expire date of employee certifications. I need to query the table to get employees who were certified during a date range. Example table: ``` EmpID | FromDate | ExpireDate ---------+-------------+-------------- 1 | 2/1/2011 | 3/1/2015 2 | 10/1/2010 | 2/1/2013 3 | 3/1/2013 | 5/30/2013 4 | 11/1/2000 | 3/1/2012 5 | 5/6/2013 | 5/30/2017 ``` If a user wants to find employees that were certified between 5/1/2013 and 5/30/2013 they should get back employee ids 1, 3, and 5. Note that employee id 5 was certified during the date range though the FromDate is after the start date of the query (5/1/2013). Hope that makes sense. I'm having a hard time wrapping my head around how to write the query.
Assuming that your dates are really stored as dates, the following should work: ``` select t.* from t where FromDate <= '5/30/2013' and ExpireDate >= '5/1/2013'; ``` The logic is that `FromDate` is on or before the end of the period and `ExpireDate` is on or after the beginning. That will get any overlap at all with the period.
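The overlap predicate can be checked against the question's own sample rows with a sqlite3 sketch (ISO date strings compare correctly as text, standing in for real date columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cert (emp_id INT, from_date TEXT, expire_date TEXT)")
conn.executemany("INSERT INTO cert VALUES (?,?,?)", [
    (1, "2011-02-01", "2015-03-01"),
    (2, "2010-10-01", "2013-02-01"),
    (3, "2013-03-01", "2013-05-30"),
    (4, "2000-11-01", "2012-03-01"),
    (5, "2013-05-06", "2017-05-30"),
])
# Overlap test: cert started on/before the range end AND
# expired on/after the range start.
ids = [r[0] for r in conn.execute("""
    SELECT emp_id FROM cert
    WHERE from_date <= '2013-05-30' AND expire_date >= '2013-05-01'
    ORDER BY emp_id""")]
```

`ids` comes back as `[1, 3, 5]`, matching the expected result in the question.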
The query will be: ``` declare @FromDate datetime declare @ToDate datetime set @FromDate = '1-May-2013' set @ToDate = '30-May-2013' select * from emp where (@FromDate <= FromDate And @ToDate > FromDate And @ToDate < ExpireDate) Or (@FromDate >= FromDate And @ToDate between FromDate and ExpireDate) ``` Here both the conditions are separately mentioned and added using or operator. It will help you to write down your query in more logical manner.
Query to find records that were active within a range of dates
[ "", "sql", "" ]
I have the following tables Business ``` +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | b_id | bigint(20) | NO | PRI | NULL | auto_increment | | b_name | varchar(255) | NO | | NULL | | +-------------+--------------+------+-----+---------+----------------+ ``` Locations ``` +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | l_id | bigint(20) | NO | PRI | NULL | auto_increment | | l_name | varchar(255) | NO | | NULL | | | b_id | big(20) | NO | | NULL | | +-------------+--------------+------+-----+---------+----------------+ ``` Jobs ``` +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | j_id | bigint(20) | NO | PRI | NULL | auto_increment | | j_name | varchar(255) | NO | | NULL | | | b_id | bigint(20) | NO | | NULL | | | l_id | bigint(20) | NO | | NULL | | +-------------+--------------+------+-----+---------+----------------+ ``` People ``` +-------------+---------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------------+------+-----+---------+----------------+ | u_id | bigint(20) | NO | PRI | NULL | auto_increment | | salutation | varchar(10) | NO | | NULL | | | first_name | varchar(25) | NO | | NULL | | | last_name | varchar(25) | NO | | NULL | | +-------------+---------------+------+-----+---------+----------------+ ``` People's Jobs ``` +-------------+------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+------------+------+-----+---------+----------------+ | pj_id | bigint(20) | NO | PRI | NULL | auto_increment | 
| u_id | bigint(20) | NO | | NULL | | | j_id | bigint(20) | NO | | NULL | | | l_id | bigint(20) | NO | MUL | NULL | | +-------------+------------+------+-----+---------+----------------+ ``` I need to produce a table that shows ``` +----------+-------------------------+------------+------------+------------+ | b_id | b_name | Locations | Jobs | People | +----------+-------------------------+------------+------------+------------+ | 21 | Widgets Inc | 0 | x | 0 | | 24 | Prince Privates | 0 | 0 | 0 | | 23 | Halon plc | x | 0 | 0 | | 18 | Stinky Hotels | x | x | x | | 20 | Pylon Catering Corps | x | x | x | | 22 | Skytrain Biscuits | 0 | 0 | 0 | +----------+-------------------------+------------+------------+------------+ ``` I can achieve a correct count of matching locations for each business with: ``` SELECT b.b_id, b.b_name, count(l.l_id) AS locations FROM business AS b LEFT JOIN locations AS l ON b.b_id=l.b_id GROUP BY b.b_id ORDER BY b_name ``` If I extend it to include a count of the jobs at each business and then the count of people at each business it all goes pear shaped. I know that the following is inherently wrong with regards to getting the count of people (as people can hold more than 1 job). I don't know if I need to use sub selects or COALESCE? ``` SELECT b.b_id, b.b_name, count(l.l_id) AS locations, count(j.j_id) AS jobs, count(p.u_id) AS people FROM business AS b LEFT JOIN locations AS l ON b.b_id=l.b_id LEFT JOIN job AS j ON b.b_id=j.b_id LEFT JOIN people_jobs AS p ON l.l_id=p.l_id GROUP BY b.b_id ORDER BY b_name ```
I think you can do a quick-and-dirty fix of your query by using `count(distinct)`: ``` SELECT b.b_id, b.b_name, count(distinct l.l_id) AS locations, count(distinct j.j_id) AS jobs, count(distinct p.u_id) AS people FROM business b LEFT JOIN locations l ON b.b_id = l.b_id LEFT JOIN job j ON b.b_id = j.b_id LEFT JOIN people_jobs p ON l.l_id = p.l_id GROUP BY b.b_id ORDER BY b_name ; ``` It is also possible that the problem is simply that the `join` to `people_jobs` needs more conditions: ``` people_jobs p ON l.l_id = p.l_id and j.j_id = p.j_id ``` And maybe a condition on `u`. Your problem is that you are trying to do aggregation across multiple dimensions and getting a cartesian product for each business. An alternative that is sometimes necessary is to do the counts in subqueries.
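The over-counting that `count(distinct)` fixes comes from the cartesian product of the two one-to-many joins; a reduced sqlite3 sketch (one business, made-up ids) makes it visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE business (b_id INT, b_name TEXT);
CREATE TABLE locations (l_id INT, b_id INT);
CREATE TABLE job (j_id INT, b_id INT);
INSERT INTO business VALUES (1, 'Stinky Hotels');
INSERT INTO locations VALUES (10, 1), (11, 1);
INSERT INTO job VALUES (100, 1), (101, 1), (102, 1);
""")
# 2 locations x 3 jobs = 6 joined rows, so a plain COUNT
# over-counts while COUNT(DISTINCT) recovers the true figure.
naive, dedup = conn.execute("""
    SELECT COUNT(l.l_id), COUNT(DISTINCT l.l_id)
    FROM business b
    LEFT JOIN locations l ON b.b_id = l.b_id
    LEFT JOIN job j ON b.b_id = j.b_id
    GROUP BY b.b_id""").fetchone()
```

Here `naive` is 6 and `dedup` is 2, which is exactly why the question's original counts went "pear shaped".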
This query should do what you need: ``` SELECT b.b_id, b.b_name, (SELECT COALESCE(COUNT(l_id ),0) FROM locations WHERE b_id=b.b_id) AS locations, (SELECT COALESCE(COUNT(j_id ),0) FROM jobs WHERE b_id=b.b_id) AS jobs, (SELECT COALESCE(COUNT(DISTINCT u_id),0) FROM jobs j JOIN people_jobs pj ON pj.j_id=j.j_id WHERE j.b_id=b.b_id ) AS people FROM business as b ORDER BY b_name ``` You don't need the GROUP BY if you use subSELECTs, as the outer query will return 1 row per b\_id, no more. If instead you do JOIN the 4 tables at the main query level, like you were doing, you have two difficulties: * number of rows increases (avoidable with GROUP BY) * a simple COUNT does not work properly (avoidable with COUNT(DISTINCT ...)) *(as shown in Gordon's answer)*
MYSQL Counting matching results across multiple tables
[ "", "mysql", "sql", "count", "" ]
I have two MySQL queries: First: ``` SELECT DISTINCT (broker.broker_id),company_id ,broker_name,company_name,mobile1_no,email,pan_card_num,broker_id,broker_id,company_id FROM broker_firm AS broker_firm LEFT JOIN broker ON broker_firm.company_id = broker.firm_id AND broker_firm.is_active =1 AND broker.is_active =1 ``` This query is generating 331 results. Second: ``` SELECT COUNT( broker.broker_id ) AS tot FROM broker_firm AS broker_firm LEFT JOIN broker AS broker ON broker_firm.company_id = broker.firm_id AND broker_firm.is_active =1 AND broker.is_active =1 ``` This query returns a count of 289. Can anyone please tell me the reason why? I expected both results to be the same, or maybe the COUNT(*) result to be greater. Thanks in advance
When you do a `left join`, the logic is simple: keep all the rows in the first table, regardless of whether the condition in the `on` clause is true. If the condition is false, then all the columns from the second table get a value of `NULL`. When you do an `inner join`, only the rows for which the condition is true are kept. In both of your queries the extra conditions sit in the `on` clause, so every row of `broker_firm` is kept (and don't forget that the join itself may produce duplicate rows when a firm has several brokers). Be careful, though: had a condition such as `broker.is_active = 1` been written in a `where` clause instead, it would fail when `is_active` is `NULL` -- which is what happens when the records don't match -- and would silently turn the `left join` into an `inner join`. EDIT: The actual difference lies in what is being counted. The second query counts the matching records: `count(broker.broker_id)` only counts the non-`NULL` values of that column, which gives the same result as an inner join would. The first query returns all records: `select distinct` selects distinct values of *all* the columns. Your syntax is a bit confusing, because it suggests that you just want the distinct values of one column, but that is not how it works. Because you have columns from both tables in the `select`, the non-matching firms still produce rows (with `NULL` broker columns), and those rows are distinct from all the others.
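The COUNT(column)-versus-row-count distinction explained above can be reproduced in a few lines of sqlite3 (hypothetical firm/broker data, not the poster's tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE firm (company_id INT);
CREATE TABLE broker (broker_id INT, firm_id INT);
INSERT INTO firm VALUES (1), (2), (3);
INSERT INTO broker VALUES (10, 1), (11, 1);
""")
# Firm 1 matches two brokers (2 rows); firms 2 and 3 match none,
# so each contributes one row with a NULL broker_id.
total_rows, non_null = conn.execute("""
    SELECT COUNT(*), COUNT(b.broker_id)
    FROM firm f LEFT JOIN broker b ON f.company_id = b.firm_id
""").fetchone()
```

`COUNT(*)` sees all 4 joined rows, while `COUNT(b.broker_id)` skips the NULLs and reports 2: the same gap as the 331 versus 289 in the question, in miniature.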
First query counts all firms, you have 42 firms with no broker. try ``` select count(broker_firm.company_id) FROM broker_firm where broker_firm.company_id not in (select firm_id from broker) ```
MySQL SELECT Count statement issue
[ "", "mysql", "sql", "" ]
I have the following query and I want to display the results where masini > 2, but when I run it, it says that 'masini' is not an existing column, even though it's the name of a custom column I defined in the first line. I am new to MySQL. Can anyone point me in the right direction? This is my query: ``` SELECT pers.serie_buletin AS persoana, COUNT(prop.serie_buletin) AS masini FROM persoana pers JOIN proprietate prop ON pers.id_persoana = prop.serie_buletin WHERE masini > 2 GROUP BY persoana ; ``` I defined the column on this line, in the part "COUNT(prop.serie_buletin) AS masini", but it says "Error Code: 1054. Unknown column 'masini' in 'where clause'". What am I missing?
Change `WHERE` to `HAVING`. ``` GROUP BY persoana HAVING masini > 2; ``` > The MySQL HAVING clause is used in the SELECT statement to specify > filter conditions for group of rows or aggregates. > > The MySQL HAVING clause is often used with the GROUP BY clause. When > using with the GROUP BY clause, you can apply a filter condition to > the columns that appear in the GROUP BY clause. If the GROUP BY clause > is omitted, the MySQL HAVING clause behaves like the WHERE clause. > Notice that the MySQL HAVING clause applies the condition to each > group of rows, while the WHERE clause applies the condition to each > individual row. [source](http://www.mysqltutorial.org/mysql-having.aspx)
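The WHERE-to-HAVING fix can be exercised with a small sqlite3 sketch (SQLite, like MySQL, resolves the output alias inside HAVING; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE proprietate (serie_buletin INT)")
# Person 1 owns 3 cars, person 2 owns 1, person 3 owns 2.
conn.executemany("INSERT INTO proprietate VALUES (?)",
                 [(1,), (1,), (1,), (2,), (3,), (3,)])
# HAVING filters after grouping, so the aggregate alias is visible here,
# whereas a WHERE clause would be evaluated before 'masini' exists.
rows = conn.execute("""
    SELECT serie_buletin AS persoana, COUNT(*) AS masini
    FROM proprietate
    GROUP BY persoana
    HAVING masini > 2""").fetchall()
```

Only the group with more than 2 rows survives, so `rows` is `[(1, 3)]`.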
The where clause is evaluated first, so MySQL doesn't know what masini is at that point. Here are some similar questions. [Getting unknown column error when using 'as' in mysql statement](https://stackoverflow.com/questions/18158605/getting-unknown-column-error-when-using-as-in-mysql-statement) [Unknown Column In Where Clause](https://stackoverflow.com/questions/153598/unknown-column-in-where-clause) As explained in the questions above and in the other answers here, you can only use an alias from subqueries, or in clauses that are evaluated after the alias is assigned, such as ORDER BY, GROUP BY or HAVING; in your case you can use the HAVING clause.
MySQL. Queries. Unknown column
[ "", "mysql", "sql", "database", "" ]
I searched everywhere but cannot find any solution. I need to list a specific city, like 'New York', first in the following SQL statement. I tried to place `ORDER BY CASE City.name_en WHEN 'New York' THEN 1 ELSE 2 END, City.name_en` in the inner SQL statement but it did not work. Is there any workaround to solve this? ``` SELECT * FROM ( SELECT RTRIM(Organization.name_en) as '@name','flag.png' AS '@flag', RTRIM(Country.name_en) as 'country',RTRIM(city.name_en) as 'city',Organization.id as 'OrganizationID', ROW_NUMBER() OVER (ORDER BY sequence_num) AS RowNum FROM Organization LEFT JOIN City ON City.id = city_id LEFT JOIN Country ON country.id = Organization.country_id LEFT JOIN Industry ON Industry.id = Industry_id WHERE (industry_id =@industry_id OR Coalesce(@industry_id,'') = '') AND (Organization.name_en LIKE '%' + @OrganizationName + '%' OR Coalesce(@OrganizationName,'') = '') ) AS SOD WHERE SOD.RowNum BETWEEN ((@PageNumber-1)*@RowsPerPage)+1 AND @RowsPerPage*(@PageNumber) FOR XML PATH('organization'), ROOT('organizations') ```
Actually I solved it like this: ``` ROW_NUMBER() OVER (ORDER BY CASE city.name_en WHEN @cityName THEN 1 ELSE 2 END ,city.name_en) AS RowNum ```
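The ORDER BY CASE trick used in that solution can be demonstrated on its own with sqlite3 (three made-up city names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT)")
conn.executemany("INSERT INTO city VALUES (?)",
                 [("Boston",), ("New York",), ("Austin",)])
# The CASE maps the pinned value to sort key 1 and everything
# else to 2, then the second key sorts the rest alphabetically.
names = [r[0] for r in conn.execute("""
    SELECT name FROM city
    ORDER BY CASE name WHEN 'New York' THEN 1 ELSE 2 END, name""")]
```

`names` comes back as `['New York', 'Austin', 'Boston']`; the same expression works inside a `ROW_NUMBER() OVER (ORDER BY ...)` in SQL Server.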
Use a `UNION`. Example: ``` Select country_code, country_name from country where country_name = 'New York' union all Select country_code, country_name from country where country_name != 'New York' order by country_name ```
List specific value first
[ "", "sql", "sql-server", "database", "sql-server-2005", "" ]
The [documentation](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select) of `HIVE` notes that the `LIMIT` clause `returns rows chosen at random`. I have been running a `SELECT` query on a table with more than `800,000` records with `LIMIT 1`, but it always returns the same record. I'm using the `Shark` distribution, and I am wondering whether this has got anything to do with this unexpected behavior? Any thoughts would be appreciated. Thanks, Visakh
Even though the documentation states it returns rows at random, it's not actually true. It returns "chosen rows at random" as they appear in the database without any where/order by clause. This means that it's not really random (or randomly chosen) as you would think, just that the order the rows are returned in can't be determined. As soon as you slap an `order by x DESC limit 5` on there, it returns the last 5 rows of whatever you're selecting from. To get rows returned at random, you would need to use something like: `order by rand() LIMIT 1` However it can have a speed impact if your indexes aren't set up properly. Usually I do a min/max to get the IDs on the table, then generate a random number between them, then select those records (in your case, that would be just 1 record), which tends to be faster than having the database do the work, especially on a large dataset
To be safe you want to use ``` select * from table distribute by rand() sort by rand() limit 10000; ```
Is LIMIT clause in HIVE really random?
[ "", "sql", "hive", "hiveql", "shark-sql", "" ]
I have a table "user_order" with columns "user_id", "code" (character varying) and "created_date_key" (integer). I am trying to write a query which displays all records for code 26 with a date greater than '12-5-2013 23:59:59'. ``` Select * from user_order where code like 26 ::text and to_date(created_date_key ::text, 'YYYY-MM-DD') > to_date ('12-5-2013 23:59:59' ::text, 'YYYY-MM-DD') ``` ERROR: date format not recognized.
``` select * from user_order where code like 26 ::text and to_date(created_date_key ::text, 'YYYY-MM-DD') > '12-5-2013 23:59:59' ```
created\_date\_key should be timestamp and not integer.
POSTGRESQL: Error: date format not recognized
[ "", "sql", "postgresql", "" ]
Tables: 1.) Test 2.) Position First table ``` //TEST A# --------------- 1 2 3 ``` Second table: ``` //Position A# POSITION ------------------ 1 GM 1 DIRECTOR 2 DOCTOR 3 HELLO 3 GM ``` When I use the following PL/SQL in SQL*Plus ``` DECLARE c_a# test.A#%TYPE; c_pos position.position%TYPE; CURSOR c_app IS SELECT t.a#,p.position from test t INNER JOIN position p ON t.a#=p.p#; BEGIN OPEN c_app LOOP FETCH c_app into c_a# , c_pos; DBMS_OUTPUT.PUT_LINE( c_a# || ':' || c_pos ); END LOOP; CLOSE c_app; END; / ``` here is the output: ``` 1:GM 1:Director 2:Doctor ... ... ``` Expected output: ``` 1:GM,Director 2:Doctor 3:HELLO,GM ``` Is there anything wrong with my looping?
I'm not sure which version you're using, because Oracle has different string aggregation functions for the 10G and 11G releases. For 10G you should consider using the `WM_CONCAT` function. Below is sample code for what you're trying to achieve through a cursor: ``` DECLARE CURSOR C_APP IS SELECT T.A#, WM_CONCAT (P.POSITION) FROM TEST T INNER JOIN POSITION P ON T.A# = P.P# GROUP BY T.A#; C_A# TEST.A#%TYPE; C_POS POSITION.POSITION%TYPE; BEGIN OPEN C_APP; LOOP FETCH C_APP INTO C_A#, C_POS; EXIT WHEN C_APP%NOTFOUND; DBMS_OUTPUT.PUT_LINE (C_A# || ':' || C_POS); END LOOP; CLOSE C_APP; END; ``` For `11G` you can use the `listagg` function. Below is the sample code ``` DECLARE CURSOR C_APP IS SELECT T.A#, LISTAGG(P.POSITION,',') WITHIN GROUP (ORDER BY P.POSITION) FROM TEST T INNER JOIN POSITION P ON T.A# = P.P# GROUP BY T.A#; C_A# TEST.A#%TYPE; C_POS POSITION.POSITION%TYPE; BEGIN OPEN C_APP; LOOP FETCH C_APP INTO C_A#, C_POS; EXIT WHEN C_APP%NOTFOUND; DBMS_OUTPUT.PUT_LINE (C_A# || ':' || C_POS); END LOOP; CLOSE C_APP; END; ``` Make sure you have `set serveroutput on` in order to display the result.
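The same one-row-per-key aggregation can be sketched in sqlite3, whose `GROUP_CONCAT` plays the role of Oracle's `WM_CONCAT`/`LISTAGG` (simplified column names; the data is the question's, but GROUP_CONCAT's ordering within a group is not guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (a INT);
CREATE TABLE position (a INT, pos TEXT);
INSERT INTO test VALUES (1), (2), (3);
INSERT INTO position VALUES (1,'GM'), (1,'DIRECTOR'), (2,'DOCTOR'),
                            (3,'HELLO'), (3,'GM');
""")
# One output line per id, with the positions collapsed into a CSV list.
out = [f"{a}:{p}" for a, p in conn.execute("""
    SELECT t.a, GROUP_CONCAT(p.pos, ',')
    FROM test t JOIN position p ON t.a = p.a
    GROUP BY t.a ORDER BY t.a""")]
```

This yields three lines of the `id:pos1,pos2` shape the question asked for, instead of one line per join row.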
You can try one thing. use collect function. It will fetch the details as well as print it in the needed format.
PL/SQL using fetching data in LOOPING
[ "", "sql", "oracle", "plsql", "" ]
I have a table in MySQL that has two columns, id and number. First I list all rows from 4 to 13 with `SELECT * FROM table WHERE id BETWEEN 4 AND 13`; now I have a list of all rows from 4 to 13 (green rows). I want to search in this list and find the first row where number is <= 10, the last row where the number is <= 1000, and all rows in between. I.e. I need the orange rows shown in the image below: ![enter image description here](https://i.stack.imgur.com/BnEut.png) So, I need something like this: ``` SELECT * FROM TABLE WHERE id BETWEEN 4 AND 13 AND number START FROM <= 10 AND END WITH <= 1000 ``` But obviously the above isn't correct. How can I obtain the orange rows?
This should work for you: ``` SELECT T.ID, T.Number FROM T INNER JOIN ( SELECT MIN(CASE WHEN Number < 10 THEN ID END) AS FirstID, MAX(CASE WHEN Number < 1000 THEN ID END) AS LastID FROM T WHERE ID BETWEEN 4 AND 13 ) AS ID ON ID.FirstID <= T.ID AND ID.LastID >= T.ID ``` The key is the subquery - ``` SELECT MIN(CASE WHEN Number < 10 THEN ID END) AS FirstID, MAX(CASE WHEN Number < 1000 THEN ID END) AS LastID FROM T WHERE ID BETWEEN 4 AND 13 ``` Which gets the first ID less than 10 in the given range, and the last ID less than 1000 in the given range. These ID's are then used to filter the results. **[Example on SQL Fiddle](http://sqlfiddle.com/#!2/50988/1)**
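The MIN/MAX-over-CASE boundary trick can be run end to end in sqlite3 on hypothetical data (here using the question's `<=` comparisons; the accepted answer wrote `<`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, number INT)")
conn.executemany("INSERT INTO t VALUES (?,?)", [
    (4, 5000), (5, 8), (6, 250), (7, 40), (8, 900),
    (9, 70000), (10, 3), (11, 600), (12, 2000), (13, 99999),
])
# The subquery finds the first id whose number <= 10 (here 5) and the
# last id whose number <= 1000 (here 11); the join then keeps every
# row between those two ids, including id 9 despite its huge number.
ids = [r[0] for r in conn.execute("""
    SELECT t.id FROM t
    JOIN (SELECT MIN(CASE WHEN number <= 10 THEN id END) AS first_id,
                 MAX(CASE WHEN number <= 1000 THEN id END) AS last_id
          FROM t WHERE id BETWEEN 4 AND 13) b
      ON t.id BETWEEN b.first_id AND b.last_id
    ORDER BY t.id""")]
```

`ids` comes back as `[5, 6, 7, 8, 9, 10, 11]`, i.e. the "orange" band with its in-between rows.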
Something like this: ``` SELECT MyTable.* FROM (SELECT * FROM T WHERE id BETWEEN 4 AND 13) as MyTable JOIN ( SELECT ID, NUMBER FROM T WHERE id BETWEEN 4 AND 13 AND NUMBER <= 10 LIMIT 1 ) as StartRow JOIN ( SELECT ID, NUMBER FROM T WHERE id BETWEEN 4 AND 13 AND NUMBER <= 1000 ORDER BY ID DESC LIMIT 1 ) as EndingRow WHERE MyTable.ID BETWEEN StartRow.ID and EndingRow.ID ``` See the [sqlFiddle Demo working here](http://sqlfiddle.com/#!2/50988/13) You select over your subquery, the green rows that represent the ids 4 to 13. Then in the main select, use `joins` to get your borders of your orange rows. Let's walk thru this step by step. First, you select your rows between the id 4 and 13 (*green rows*) and name the result MyTable. ``` SELECT * FROM (SELECT * FROM T WHERE id BETWEEN 4 AND 13) as MyTable ``` Now you need to know the first element in this subset that is below or equal to 10 (`<=10`). In my Approach i `join` the result again and add this clause to it while also shrinki8ng the resultset to 1 using `Limit`. ``` JOIN ( SELECT ID, NUMBER FROM T WHERE id BETWEEN 4 AND 13 AND NUMBER <= 10 LIMIT 1 ) as StartRow ``` So now for every row in your `4 to 13` Result, you have the field StartRow.ID as well. The same works for the end of your orange list. The last one of the `4 to 13` that is <=1000. ``` JOIN ( SELECT ID, NUMBER FROM T WHERE id BETWEEN 4 AND 13 AND NUMBER <= 1000 ORDER BY ID DESC LIMIT 1 ) as EndingRow ``` By adding `DESC` to the `ORDER` Statement, i get the last element that matches my query and not the first. Now we just have to put that into the capsulating Query's Where clause: ``` WHERE MyTable.ID BETWEEN StartRow.ID and EndingRow.ID ```
How to select rows with specific condition within specific ranage in MySQL?
[ "", "mysql", "sql", "" ]