I have been struggling with the concept of combining results of two queries together via a join on a common table and I was hoping I could gain some assistance. The following is a rough guide to the tables: ``` dbo.Asset (not returned in the SELECT statement, used for joins only) - dbo.Asset.AssetID - dbo.Asset.CatalogueID dbo.WorkOrder - WorkOrderID - AssetID - WorkOrderNumber dbo.WorkOrderSpare - WorkOrderID - WorkOrderSpareID - WorkOrderSpareDescription - ActualQuantity - CreatedDateTime dbo.Catalogue - CatalogueID (PK) - CatalogueNumber - CatalogueDescription - CatalogueGroupID dbo.CatalogueGroup - CatalogueGroupID - CatalogueGroupNumber ``` First Query: ``` SELECT CatalogueGroup.CatalogueGroupName, Catalogue.CatalogueNumber, Catalogue.CatalogueDescription, Catalogue.CatalogueID, Asset.IsActive FROM CatalogueGroup INNER JOIN Catalogue ON CatalogueGroup.CatalogueGroupID = Catalogue.CatalogueGroupID INNER JOIN Asset ON Catalogue.CatalogueID = Asset.CatalogueID ``` Second Query: ``` SELECT WorkOrder.WorkOrderNumber, WorkOrderSpare.WorkOrderSpareDescription, WorkOrderSpare.ActualQuantity, WorkOrderSpare.CreatedDateTime FROM WorkOrder INNER JOIN WorkOrderSpare ON WorkOrder.WorkOrderID = WorkOrderSpare.WorkOrderID ``` Now I can do the above easily enough in Access (WYSIWYG) by joining Asset/Catalogue/CatalogueGroup together and then joining Asset.AssetID onto WorkOrder.AssetID. I can't seem to get anything similar to work via raw code, I think I have my logic correct for the joins (INNER JOIN on the results of both) but then I am new to this. Any assistance would be greatly appreciated, any pointers on where I can read further into problems like this would be great. 
EDIT: This is what I was trying to use to no avail, I should also mention I am trying to do this in ms-sql, not Access (trying to move away from drag and drop): ``` SELECT CatalogueGroup.CatalogueGroupName, Catalogue.CatalogueNumber, Catalogue.CatalogueDescription, Catalogue.CatalogueID, Asset.IsActive, WorkOrderSpare.WorkOrderSpareDescription, WorkOrderSpare.ActualQuantity, WorkOrderSpare.CreatedDateTime, WorkOrder.WorkOrderNumber FROM (((CatalogueGroup INNER JOIN Catalogue ON CatalogueGroup.CatalogueGroupID = Catalogue.CatalogueGroupID) INNER JOIN Asset ON Catalogue.CatalogueID = Asset.CatalogueID) INNER JOIN WorkOrderSpare ON WorkOrder.WorkOrderID = WorkOrderSpare.WorkOrderID) INNER JOIN WorkOrder ON Asset.AssetID = WorkOrder.AssetID ```
Think I see the issue. Assuming that the joins themselves are correct (ie your columns do relate to each other), the order of your joins is a little off - when you join WorkOrder to WorkOrderSpare, neither of those two tables relate back to any table you've identified up until that point in the query. Think of it as joining two tables separately from the chain of joins you have going so far - it's almost like doing two separate join queries. If you switch the last two it should work, that way WorkOrder will join to Asset (which you've already defined) and then you can join WorkOrderSpare to WorkOrder. I've also taken the liberty of removing parentheses on the joins, that's an Access thing. ``` SELECT CatalogueGroup.CatalogueGroupName, Catalogue.CatalogueNumber, Catalogue.CatalogueDescription, Catalogue.CatalogueID, Asset.IsActive, WorkOrderSpare.WorkOrderSpareDescription, WorkOrderSpare.ActualQuantity, WorkOrderSpare.CreatedDateTime, WorkOrder.WorkOrderNumber FROM CatalogueGroup INNER JOIN Catalogue ON CatalogueGroup.CatalogueGroupID = Catalogue.CatalogueGroupID INNER JOIN Asset ON Catalogue.CatalogueID = Asset.CatalogueID INNER JOIN WorkOrder ON Asset.AssetID = WorkOrder.AssetID INNER JOIN WorkOrderSpare ON WorkOrder.WorkOrderID = WorkOrderSpare.WorkOrderID ```
I think you were close. As a slightly different approach to joining, try this: ``` SELECT CatalogueGroup.CatalogueGroupName, Catalogue.CatalogueNumber, Catalogue.CatalogueDescription, Catalogue.CatalogueID, Asset.IsActive, WorkOrderSpare.WorkOrderSpareDescription, WorkOrderSpare.ActualQuantity, WorkOrderSpare.CreatedDateTime, WorkOrder.WorkOrderNumber FROM CatalogueGroup, Catalogue, Asset, WorkOrder, WorkOrderSpare WHERE CatalogueGroup.CatalogueGroupID = Catalogue.CatalogueGroupID and Catalogue.CatalogueID = Asset.CatalogueID and Asset.AssetID = WorkOrder.AssetID and WorkOrder.WorkOrderID = WorkOrderSpare.WorkOrderID ``` It looks like it should work, but not having data, hard to know if simple joins all the way through is what you want. (It's a matter of personal preference whether to use the `and` clauses rather than the inner join syntax. While on style preferences, I like table aliases if supported, so `FROM CatalogueGroup cg` for example, so that you can refer to `cg.CatalogueGroupID` etc, rather than writing out the full table name every time.)
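To see the corrected join chain run end to end, here is a minimal sketch using SQLite in place of MS SQL (the join syntax shown in the answer is the same); the table and column names follow the question, but the sample rows are invented for illustration:

```python
import sqlite3

# In-memory database with a stripped-down version of the schema from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CatalogueGroup (CatalogueGroupID INTEGER, CatalogueGroupName TEXT);
CREATE TABLE Catalogue (CatalogueID INTEGER, CatalogueNumber TEXT, CatalogueGroupID INTEGER);
CREATE TABLE Asset (AssetID INTEGER, CatalogueID INTEGER, IsActive INTEGER);
CREATE TABLE WorkOrder (WorkOrderID INTEGER, AssetID INTEGER, WorkOrderNumber TEXT);
CREATE TABLE WorkOrderSpare (WorkOrderID INTEGER, WorkOrderSpareDescription TEXT, ActualQuantity INTEGER);
INSERT INTO CatalogueGroup VALUES (1, 'Pumps');
INSERT INTO Catalogue VALUES (10, 'C-10', 1);
INSERT INTO Asset VALUES (100, 10, 1);
INSERT INTO WorkOrder VALUES (1000, 100, 'WO-1');
INSERT INTO WorkOrderSpare VALUES (1000, 'Seal kit', 2);
""")

# Each join only references tables already introduced earlier in the chain.
rows = con.execute("""
SELECT cg.CatalogueGroupName, c.CatalogueNumber, a.IsActive,
       wo.WorkOrderNumber, wos.WorkOrderSpareDescription, wos.ActualQuantity
FROM CatalogueGroup cg
INNER JOIN Catalogue c        ON cg.CatalogueGroupID = c.CatalogueGroupID
INNER JOIN Asset a            ON c.CatalogueID = a.CatalogueID
INNER JOIN WorkOrder wo       ON a.AssetID = wo.AssetID
INNER JOIN WorkOrderSpare wos ON wo.WorkOrderID = wos.WorkOrderID
""").fetchall()
```

Reordering the last two joins (as suggested above) is what makes this valid: `WorkOrder` is joined to `Asset` first, so `WorkOrderSpare` has something to join onto.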
Joining Two Queries onto a Shared Table
[ "", "sql", "sql-server", "" ]
If I have to write this query in SQL Server: ``` SELECT * FROM Table1 WHERE data BETWEEN interval1 AND interval2 ``` * `interval1` is `fld1` if `fld1` >= '2014-1-1' otherwise it's '2014-1-1' * `interval2` is `fld2` if `fld2` <= '2014-12-31' otherwise it's '2014-12-31' How can I write it without using variables? I tried ``` Ksdt_TP022_Data BETWEEN (SELECT CASE WHEN sdt_TP019_InizioVal < '2014-1-1' THEN '2014-1-1' ELSE sdt_TP019_InizioVal END as b from TP022_Quarto) AND (SELECT CASE WHEN sdt_TP019_Fineval > '2014-12-31' THEN '2014-12-31' WHEN sdt_TP019_Fineval IS NULL THEN '2014-12-31' ELSE sdt_TP019_Fineval END as a from TP022_Quarto) ``` but I get a compiler error
``` SELECT * FROM Table1 WHERE data BETWEEN fld1 AND coalesce(fld2, '2014-12-31') and data between '2014-1-1' and '2014-12-31' ``` Combining the two `BETWEEN` conditions clamps the range: `data >= fld1 AND data >= '2014-1-1'` is equivalent to `data >= GREATEST(fld1, '2014-1-1')`, and likewise for the upper bound.
Try this ``` select * from Table1 WHERE data between case when fld1 < '20140101' then '20140101' else fld1 end and case when fld2 > '20141231' then '20141231' else fld2 end ```
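The `CASE`-based clamping can be checked quickly; here is a minimal sketch using SQLite and invented sample data (ISO-formatted date strings so lexical comparison works, and with a `fld2 IS NULL` branch added per the question's attempt):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (data TEXT, fld1 TEXT, fld2 TEXT)")
con.executemany("INSERT INTO Table1 VALUES (?,?,?)", [
    ("2014-06-01", "2013-01-01", "2015-06-01"),  # both bounds clamped into 2014
    ("2013-06-01", "2013-01-01", "2015-06-01"),  # data falls before the clamped lower bound
    ("2014-06-01", "2014-03-01", None),          # NULL upper bound -> '2014-12-31'
])

# Clamp the interval to [2014-01-01, 2014-12-31] inline, without variables.
rows = con.execute("""
SELECT data FROM Table1
WHERE data BETWEEN
      CASE WHEN fld1 < '2014-01-01' THEN '2014-01-01' ELSE fld1 END
  AND CASE WHEN fld2 > '2014-12-31' OR fld2 IS NULL THEN '2014-12-31' ELSE fld2 END
""").fetchall()
```

Only the first and third rows survive: the second row's `data` predates the clamped lower bound.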
SQL between min of two values
[ "sql", "sql-server", "between" ]
I need to find the missing id's from the table `#a` below: ``` id |SEQ|Text 1 |1 |AA 1 |3 |CC 1 |4 |DD 1 |5 |EE 1 |6 |FF 1 |7 |GG 1 |8 |HH 1 |10 |JJ 2 |1 |KK 2 |2 |LL 2 |3 |MM 2 |4 |NN 2 |6 |PP 2 |7 |QQ 3 |1 |TT 3 |4 |ZZ 3 |5 |XX ``` The max and min SEQ of the table `#a` is stored in another table `#b`: ``` id| mn| mx 1 | 1 | 12 2 | 1 | 9 3 | 1 | 5 ``` My query below is giving the correct output but the execution is expensive. Is there another way to solve this? ``` with cte as ( select id, mn, mx from #b union all select id, mn, mx -1 from cte where mx-1 > 0 ) select cte.id, cte.mx from cte left join #a on cte.id = #a.id and cte.mx = #a.seq where #a.seq is null order by cte.id, cte.mx ``` There are mainly 2 problems in this query: 1. The query is running very slow. The above records are just example. In real database I have 50,000 rows. [![enter image description here](https://i.stack.imgur.com/rV5Pt.jpg)](https://i.stack.imgur.com/rV5Pt.jpg) 2. I tried to understand the execution plan to detect the hiccups. However I could not understand some part of it, which I have highlighted in Red. It would be great if someone could help me here. I am stuck.
You use a recursive CTE to [generate a set of numbers](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1). That is quite an inefficient way to do it (see the charts for generating 50K numbers [here](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2)). I'd recommend having a persisted table of numbers in the database. Personally, I have a table `Numbers` with 100K rows and a single primary-key column `Number` holding the integers from 1 to 100,000. Once you have such a table, your query simplifies to this: ``` SELECT #b.id, Numbers.Number FROM #b INNER JOIN Numbers ON #b.mx >= Numbers.Number AND #b.mn <= Numbers.Number -- use this clause if mn can be more than 1 LEFT JOIN #a ON #a.id = #b.id AND #a.seq = Numbers.Number WHERE #a.seq IS NULL ORDER BY #b.id, Numbers.Number ``` Also, it goes without saying that you have to make sure you have an index on `#b` on `id`, plus an index on `#a` on `(id, seq)`.
Two things that come to my mind are: * Use a numbers / tally table, either by creating a normal table or by creating a virtual one with a CTE. Use that to find the numbers that don't exist. * If there aren't a lot of missing numbers, you can use a trick with row_number() to find the ranges of numbers that have no gaps, with something like this: ``` select id, min(seq), max(seq) from ( select id, seq, seq - row_number () over (partition by id order by SEQ asc) GRP from table1 ) X group by id, GRP order by 1 ``` This will of course need more handling after you have found the ranges of numbers that exist.
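The numbers-table approach can be sketched end to end; here is a minimal example using SQLite with invented sample data (plain tables `a` and `b` stand in for the temp tables `#a` and `#b`, and `Numbers` is populated client-side):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (id INT, seq INT);
CREATE TABLE b (id INT, mn INT, mx INT);
INSERT INTO a VALUES (1,1),(1,3),(1,4),(3,1),(3,4),(3,5);
INSERT INTO b VALUES (1,1,4),(3,1,5);
CREATE TABLE Numbers (Number INTEGER PRIMARY KEY);
""")
# Persisted numbers table (1..100); in a real database this would be created once.
con.executemany("INSERT INTO Numbers VALUES (?)", [(n,) for n in range(1, 101)])

# Join each id's [mn, mx] range against Numbers; anti-join to a finds the gaps.
missing = con.execute("""
SELECT b.id, Numbers.Number
FROM b
JOIN Numbers ON Numbers.Number BETWEEN b.mn AND b.mx
LEFT JOIN a ON a.id = b.id AND a.seq = Numbers.Number
WHERE a.seq IS NULL
ORDER BY b.id, Numbers.Number
""").fetchall()
```

With these rows, id 1 is missing seq 2 and id 3 is missing seqs 2 and 3.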
I need to find the missing Id's from the table after checking the min and max id's from another table
[ "sql", "sql-server", "sql-server-2008" ]
I have table t1 ``` ID NAME AGE GENDER BALANCE ----- ----- --------- ----- --------- 1001 John 10 M 10 1002 Meena 5 F 0 1003 Nikh 11 M 0 1004 divs 7 F 0 1005 neha 4 F 0 ``` from second row, if Gender is M then Balance (2nd row) should be age(2)+balance(1) else Balance(1)-age(2) Final structure should be like ``` ID NAME AGE GENDER BALANCE ----- ----- --------- ----- --------- 1001 John 10 M 10 1002 Meena 5 F 5 1003 Nikh 11 M 16 1004 divs 7 F 9 1005 neha 4 F 5 ``` Please help me with query /procedure
How about something like this? ``` with sample_data as (select 1001 id, 'John' name, 10 age, 'M' gender, 10 balance from dual union all select 1002 id, 'Meena' name, 5 age, 'F' gender, 0 balance from dual union all select 1003 id, 'Nikh' name, 11 age, 'M' gender, 0 balance from dual union all select 1004 id, 'divs' name, 7 age, 'F' gender, 0 balance from dual union all select 1005 id, 'neha' name, 4 age, 'F' gender, 0 balance from dual) select id, name, age, gender, sum(case when gender = 'F' then -1 * age else age end) over (order by id) balance from sample_data; ID NAME AGE GENDER BALANCE ---------- ----- ---------- ------ ---------- 1001 John 10 M 10 1002 Meena 5 F 5 1003 Nikh 11 M 16 1004 divs 7 F 9 1005 neha 4 F 5 ``` I'm guessing that the balance of the first row (I'm assuming an order of id, here) is 10 because that's John's age and he's male, rather than it being some arbitrary number. --- ETA: here are the alternatives to the above solution. I **HIGHLY** recommend you test everything against a production-like volume of data (where I've used the with clause to mimic a table called sample\_data with 5 rows, you would just need to use your table). That way, you can get timings that should highlight the most performant method for your scenario; hopefully your manager won't be blind to facts (if he is, run. Run fast!) 
1) SQL statement with no analytic functions: ``` with sample_data as (select 1001 id, 'John' name, 10 age, 'M' gender, 10 balance from dual union all select 1002 id, 'Meena' name, 5 age, 'F' gender, 0 balance from dual union all select 1003 id, 'Nikh' name, 11 age, 'M' gender, 0 balance from dual union all select 1004 id, 'divs' name, 7 age, 'F' gender, 0 balance from dual union all select 1005 id, 'neha' name, 4 age, 'F' gender, 0 balance from dual) select sd1.id, sd1.name, sd1.age, sd1.gender, sum(case when sd2.gender = 'F' then -1 * sd2.age else sd2.age end) balance from sample_data sd1 inner join sample_data sd2 on (sd1.id >= sd2.id) group by sd1.id, sd1.name, sd1.age, sd1.gender order by id; ID NAME AGE GENDER BALANCE ---------- ----- ---------- ------ ---------- 1001 John 10 M 10 1002 Meena 5 F 5 1003 Nikh 11 M 16 1004 divs 7 F 9 1005 neha 4 F 5 ``` 2) Procedural (slow-by-slow row-by-row {yawn}) method (NOT recommended): ``` create or replace procedure calc_balance1 as v_balance number := 0; cursor cur is with sample_data as (select 1001 id, 'John' name, 10 age, 'M' gender, 10 balance from dual union all select 1002 id, 'Meena' name, 5 age, 'F' gender, 0 balance from dual union all select 1003 id, 'Nikh' name, 11 age, 'M' gender, 0 balance from dual union all select 1004 id, 'divs' name, 7 age, 'F' gender, 0 balance from dual union all select 1005 id, 'neha' name, 4 age, 'F' gender, 0 balance from dual) select id, name, age, gender, balance from sample_data; begin for rec in cur loop v_balance := v_balance + case when rec.gender = 'F' then -1 * rec.age else rec.age end; dbms_output.put_line('id = '||rec.id||', name = '||rec.name||', age = '||rec.age||', gender = '||rec.gender||', balance = '||v_balance); end loop; end calc_balance1; / begin calc_balance1; end; / id = 1001, name = John, age = 10, gender = M, balance = 10 id = 1002, name = Meena, age = 5, gender = F, balance = 5 id = 1003, name = Nikh, age = 11, gender = M, balance = 16 id = 1004, name = divs, 
age = 7, gender = F, balance = 9 id = 1005, name = neha, age = 4, gender = F, balance = 5 ``` --- However, if you had to come up with a procedure for this, I'd use the query with an analytic function and just stick it in a ref cursor, eg: ``` create or replace procedure calc_balance2 (p_refcur out sys_refcursor) as begin open p_refcur for with sample_data as (select 1001 id, 'John' name, 10 age, 'M' gender, 10 balance from dual union all select 1002 id, 'Meena' name, 5 age, 'F' gender, 0 balance from dual union all select 1003 id, 'Nikh' name, 11 age, 'M' gender, 0 balance from dual union all select 1004 id, 'divs' name, 7 age, 'F' gender, 0 balance from dual union all select 1005 id, 'neha' name, 4 age, 'F' gender, 0 balance from dual) select id, name, age, gender, sum(case when gender = 'F' then -1 * age else age end) over (order by id) balance from sample_data order by id; end calc_balance2; / ``` ## ------------------ I see the procedure you wrote; here's how I would do it instead: ``` -- mimicking your test_divs table: create table test_divs as select 1001 id, 'John' name, 10 age, 'M' gender, 10 balance from dual union all select 1002 id, 'Meena' name, 5 age, 'F' gender, 0 balance from dual union all select 1003 id, 'Nikh' name, 11 age, 'M' gender, 0 balance from dual union all select 1004 id, 'divs' name, 7 age, 'F' gender, 0 balance from dual union all select 1005 id, 'neha' name, 4 age, 'F' gender, 0 balance from dual; create or replace procedure t1_d12_v2 as begin merge into test_divs tgt using (select id, name, age, gender, sum(case when gender = 'F' then -1 * age else age end) over (order by id) balance from test_divs) src on (tgt.id = src.id) when matched then update set tgt.balance = src.balance; end t1_d12_v2; / select * from test_divs; ID NAME AGE GENDER BALANCE ---------- ----- ---------- ------ ---------- 1001 John 10 M 10 1002 Meena 5 F 0 1003 Nikh 11 M 0 1004 divs 7 F 0 1005 neha 4 F 0 begin t1_d12_v2; commit; end; / select * from test_divs; ID 
NAME AGE GENDER BALANCE ---------- ----- ---------- ------ ---------- 1001 John 10 M 10 1002 Meena 5 F 5 1003 Nikh 11 M 16 1004 divs 7 F 9 1005 neha 4 F 5 ``` I *HIGHLY* suggest you test both methods with your production-like data and see which one performs better. (If your manager is really dead set against the analytic function, I would swap the "src" subquery to be the other sql statement that I came up with - the one with the join and group by.) Doing updates row-by-row like you're doing is going to be doing context switches between sql and pl/sql twice for every pass through the loop. Why bother, when you could do the whole thing in a single sql statement. Seriously.
``` select ID, NAME, AGE, GENDER, @b:=if(GENDER='M', @b+AGE, @b-AGE) BALANCE from t1, (select @b:=0) b ``` [**Demo on sqlfiddle.com**](http://sqlfiddle.com/#!9/cc5ec/1)
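For comparison, the non-analytic self-join version of the running balance (answer 1 above) can be reproduced in any engine; here is a minimal sketch using SQLite with the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INT, name TEXT, age INT, gender TEXT)")
con.executemany("INSERT INTO t1 VALUES (?,?,?,?)", [
    (1001, 'John', 10, 'M'), (1002, 'Meena', 5, 'F'),
    (1003, 'Nikh', 11, 'M'), (1004, 'divs', 7, 'F'),
    (1005, 'neha', 4, 'F'),
])

# For each row, sum the signed ages of all rows up to and including it:
# males add their age, females subtract it.
rows = con.execute("""
SELECT a.id, a.name,
       SUM(CASE WHEN b.gender = 'F' THEN -b.age ELSE b.age END) AS balance
FROM t1 a JOIN t1 b ON b.id <= a.id
GROUP BY a.id, a.name
ORDER BY a.id
""").fetchall()
```

This matches the expected balances 10, 5, 16, 9, 5, and needs neither analytic functions nor session variables; the trade-off is that the self-join is O(n^2) where a windowed `SUM(...) OVER (ORDER BY id)` is linear.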
Calculation in two columns of different row in sql
[ "sql", "oracle" ]
Refering to this: [Is there any difference between GROUP BY and DISTINCT](https://stackoverflow.com/questions/164319/is-there-any-difference-between-group-by-and-distinct) ``` Given a table that looks like this: name ------ barry dave bill dave dave barry john This query: SELECT name, count(*) AS count FROM table GROUP BY name; Will produce output like this: name count ------------- barry 2 dave 3 bill 1 john 1 ``` What is the correct Rails convention here for ActiveModel to perform a GROUP BY with COUNT?
`Distinct` and `Group By` are going to give you different results. To get the results you expect you'll want to use ``` Person.group(:name).count (1.2ms) SELECT COUNT(*) AS count_all, name AS name FROM "people" GROUP BY "people"."name" => {"Dan"=>3, "Dave"=>2, "Vic"=>1} ``` Seen above, group will return things as a hash. While distinct just returns the number of people in total, seen below. ``` Person.distinct(:name).count (0.4ms) SELECT DISTINCT COUNT(DISTINCT "people"."id") FROM "people" => 6 ```
Note that the accepted answer will return a hash: ``` Tagging.joins(:tag).group(:name).size (0.4ms) SELECT COUNT(*) AS count_all, `name` AS name FROM `taggings` INNER JOIN `tags` ON `tags`.`id` = `taggings`.`tag_id` GROUP BY `name` => {"applesauce"=>1, "oranges"=>2} ``` But what if you want to return the count plus some columns from different tables in a join. Then you also need to use the select ActiveRecord query: ``` collection = Tagging.select('COUNT(*) as total, taggings.taggable_id, taggings.taggable_type').joins(:tag).where(taggable_type: 'LineItem', taggable_id: ['5cad0dcc3ed1496086000031', '5cad0dcd3ed1496086000081'] ).group(:name) collection.each do |item| puts item.taggable_id puts item.total end 5cad0dcc3ed1496086000031 1 5cad0dcc3ed1496086000031 2 ``` With this second approach, you can fetch additional details about a join relationship without any additional queries or loop constructs.
GROUP BY and COUNT using ActiveRecord
[ "sql", "ruby-on-rails", "ruby", "postgresql", "activerecord" ]
I have this table: ``` table1 +------+----------+-----------+----------+ | id | org1 | org2 | org3 | +------+----------+-----------+----------+ | 1 | HR | (NULL) | Staff | +------+----------+-----------+----------+ | 2 | (NULL) | IT | Dev | +------+----------+-----------+----------+ | 3 | (NULL) | (NULL) | Finance | +------+----------+-----------+----------+ ``` I want to shift all values to the left so the end result would be: ``` table1 +------+----------+-----------+----------+ | id | org1 | org2 | org3 | +------+----------+-----------+----------+ | 1 | HR | Staff | (NULL) | +------+----------+-----------+----------+ | 2 | IT | Dev | (NULL) | +------+----------+-----------+----------+ | 3 | Finance | (NULL) | (NULL) | +------+----------+-----------+----------+ ``` Is there any elegant way of doing it?
**UPDATE:** Based on **cars10**'s answer: we need to switch the order of `COALESCE(org2,org3)` and take into account the case when all 3 columns are `NOT NULL` ``` SELECT id, o1, CASE WHEN o2!=o1 THEN o2 END o2, CASE WHEN o3!=o2 THEN o3 END o3 FROM ( SELECT id ,COALESCE(org1,org2,org3) o1 ,IF((org1 IS NOT NULL) AND (org2 IS NOT NULL) AND (org3 IS NOT NULL), org2, COALESCE(org3,org2) ) o2 ,org3 o3 FROM table1 ) t ``` Adding the case mentioned by cars10: ``` DROP TABLE IF EXISTS table1; CREATE TABLE `table1` ( `id` INT(11) NOT NULL AUTO_INCREMENT, `org1` VARCHAR(255) DEFAULT NULL, `org2` VARCHAR(255) DEFAULT NULL, `org3` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`id`) ); INSERT INTO `table1` VALUES ('1', NULL, 'IT', 'DEV'); INSERT INTO `table1` VALUES ('2', 'HR',NULL,'Staff'); INSERT INTO `table1` VALUES ('3', 'ID','Building',NULL); INSERT INTO `table1` VALUES ('4', 'Support','Business','1st line'); INSERT INTO `table1` VALUES ('5','Finance', NULL, NULL); INSERT INTO `table1` VALUES ('6', NULL, 'Finance', NULL ); INSERT INTO `table1` VALUES ('7', NULL, NULL, 'Finance'); INSERT INTO `table1` VALUES ('8', NULL, NULL, NULL); ``` <http://www.sqlfiddle.com/#!9/cd969/1> As **Thorsten Kettner** mentioned, there is no elegant way of doing this. I find the above to be the shortest working solution.
Use `coalesce()` and a subquery ``` select id, o1, CASE WHEN o2!=o1 THEN o2 END o2, CASE WHEN o3!=o2 THEN o3 END o3 FROM ( select id, coalesce(org1,org2,org3) o1, coalesce(org2,org3) o2, org3 o3 from tbl ) t ``` **UPDATE** The previous answer was not sufficient, as R2D2 found out quite rightly. Unfortunately you cannot do CTEs in mysql so I created a view instead (I extended the example by another column `org4`): ``` CREATE VIEW vert AS select id i,1 n, org1 org FROM tbl where org1>'' UNION ALL select id,2, org2 FROM tbl where org2>'' UNION ALL select id,3, org3 FROM tbl where org3>'' UNION ALL select id,4, org4 FROM tbl where org4>''; ``` With this view it is now possible to do the following: ``` SELECT id, (select org from vert where i=id order by n limit 1) org1, (select org from vert where i=id order by n limit 1,1) org2, (select org from vert where i=id order by n limit 2,1) org3, (select org from vert where i=id order by n limit 3,1) org4 FROM tbl ``` Not beautiful, but it gets the job done, see here: [SQLfiddle](http://www.sqlfiddle.com/#!9/da08f/7) input: ``` | id | org1 | org2 | org3 | org4 | |----|--------|--------|---------|--------| | 1 | HR | (null) | Staff | IT | | 2 | (null) | IT | Dev | (null) | | 3 | (null) | (null) | Finance | HR | ``` output: ``` | id | org1 | org2 | org3 | org4 | |----|---------|-------|--------|--------| | 1 | HR | Staff | IT | (null) | | 2 | IT | Dev | (null) | (null) | | 3 | Finance | HR | (null) | (null) | ```
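If the shift can be done outside pure SQL, a client-side pass is often the simplest and handles any number of columns without the coalesce edge cases; here is a minimal sketch using Python with SQLite and the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, org1 TEXT, org2 TEXT, org3 TEXT)")
con.executemany("INSERT INTO table1 VALUES (?,?,?,?)", [
    (1, 'HR', None, 'Staff'),
    (2, None, 'IT', 'Dev'),
    (3, None, None, 'Finance'),
])

# Compact each row's non-NULL values to the left, padding the rest with NULLs.
for rid, *orgs in con.execute("SELECT id, org1, org2, org3 FROM table1").fetchall():
    packed = [o for o in orgs if o is not None]
    packed += [None] * (len(orgs) - len(packed))
    con.execute("UPDATE table1 SET org1=?, org2=?, org3=? WHERE id=?", (*packed, rid))

rows = con.execute("SELECT * FROM table1 ORDER BY id").fetchall()
```

Unlike the nested-coalesce query, this stays readable when a fourth or fifth `org` column is added: only the column list changes.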
How to shift column values in MySQL?
[ "mysql", "sql" ]
This is the function that I am trying to create ``` CREATE FUNCTION fnGetValue ( @WhereClause VARCHAR(256) ) RETURNS TABLE AS RETURN SELECT A.Name, B.Value FROM A INNER JOIN B ON A.Akey = B.AKey WHERE + @WhereClause + GO ``` The parameter `@WhereClause` is built from my PHP, something like ``` A.Akey IN (2,3) ``` But when I try this I get this error > An expression of non-boolean type specified in a context where a condition is expected, near 'AND'. I want to let everyone know, I know query is wrong because SQL is expecting an expression that can be evaluated to boolean. But my question is how to achieve the thing I am trying for. Please help me.
I think that you would be better off building the SQL query in PHP and calling that. Using your approach it would be very hard to guarantee what went into the query, and it's just asking for an SQL injection attack. Using something similar to (but not necessarily) [Laravel](http://laravel.com/docs/4.2/queries "Laravel") would help with building the query, and it would also address some of the security concerns associated with dynamically creating queries. If you are determined to do it this way, you could try what is suggested in this [question](https://stackoverflow.com/questions/11329823/add-where-clauses-to-sql-dynamically-programmatically).
Here the `where clause` is sent in dynamically, hence you have to use `dynamic sql`. One limitation: you cannot use `dynamic sql` inside a `function`. > Only functions and some extended stored procedures can be executed > from within a function. see ==> [Getting an error when executing a dynamic sql within a function (SQL Server)?](https://stackoverflow.com/questions/15180561/getting-an-error-when-executing-a-dynamic-sql-within-a-function-sql-server) A procedure can be used here instead, like ``` CREATE procedure fnGetValue ( @WhereClause VARCHAR(256) ) AS declare @sql NVARCHAR(1000) = 'SELECT A.Name, B.Value FROM A INNER JOIN B ON A.Akey = B.AKey WHERE ' + @WhereClause EXEC sp_executesql @sql return GO ``` Then, ``` exec fnGetValue 'any condition' ```
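On the injection concern raised above: rather than concatenating a caller-supplied `WHERE` clause, the application layer can build a placeholder list and bind the values. Here is a minimal sketch in Python with SQLite (the same pattern works with PHP's PDO against SQL Server); the schema and values are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (AKey INT, Name TEXT);
CREATE TABLE B (AKey INT, Value INT);
INSERT INTO A VALUES (1,'x'),(2,'y'),(3,'z');
INSERT INTO B VALUES (1,10),(2,20),(3,30);
""")

# Build "IN (?,?)" dynamically, but bind the values as parameters:
# the keys never become part of the SQL text, so they cannot inject.
keys = [2, 3]
placeholders = ",".join("?" * len(keys))
sql = f"""SELECT A.Name, B.Value
          FROM A INNER JOIN B ON A.AKey = B.AKey
          WHERE A.AKey IN ({placeholders})
          ORDER BY A.AKey"""
rows = con.execute(sql, keys).fetchall()
```

Only the shape of the query (the number of placeholders) is built dynamically; the data itself is always parameterized.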
How to pass the actual where clause to SQL function?
[ "sql", "sql-server" ]
I have some data coming from a source in my Oracle database. If a particular Office\_ID has been deactivated and it has all three clients (A,B,C) for a particular day, then we have to check whether all clients have gone. If yes, then we need to check whether timeframe for all clients is within 10 Minutes. If this repeats three times in a day for a particular office we declare the office as closed. Here is some sample data: ``` +-----------+-----------+--------------+--------+ | OFFICE_ID | FAIL_TIME | ACTIVITY_DAY | CLIENT | | 1002 | 5:39:00 | 23/01/2015 | A | | 1002 | 17:49:00 | 23/12/2014 | A | | 1002 | 18:41:57 | 1/5/2014 | B | | 1002 | 10:32:00 | 1/7/2014 | A | | 1002 | 10:34:23 | 1/7/2014 | B | | 1002 | 10:35:03 | 1/7/2014 | C | | 1002 | 12:08:52 | 1/7/2014 | B | | 1002 | 12:09:00 | 1/7/2014 | A | | 1002 | 12:26:10 | 1/7/2014 | B | | 1002 | 13:31:32 | 1/7/2014 | B | | 1002 | 15:24:06 | 1/7/2014 | B | | 1002 | 15:55:06 | 1/7/2014 | C | +-----------+-----------+--------------+--------+ ``` The result should be like this: ``` 1002 10:32:00 A 1002 10:34:23 B 1002 10:35:03 C ``` Any help would be appreciated. I am looking for a SQL query or a PL/SQL procedure.
A solution using the [`COUNT`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions039.htm#i82697) [analytic function](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm) with a `RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING` that avoids self-joins: [SQL Fiddle](http://sqlfiddle.com/#!4/5327d/8) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Test ( OFFICE_ID, FAIL_TIME, ACTIVITY_DAY, CLIENT ) AS SELECT 1002, '5:39:00', '23/01/2015', 'A' FROM DUAL UNION ALL SELECT 1002, '17:49:00', '23/12/2014', 'A' FROM DUAL UNION ALL SELECT 1002, '18:41:57', '1/5/2014', 'B' FROM DUAL UNION ALL SELECT 1002, '10:32:00', '1/7/2014', 'A' FROM DUAL UNION ALL SELECT 1002, '10:34:23', '1/7/2014', 'B' FROM DUAL UNION ALL SELECT 1002, '10:35:03', '1/7/2014', 'C' FROM DUAL UNION ALL SELECT 1002, '12:08:52', '1/7/2014', 'B' FROM DUAL UNION ALL SELECT 1002, '12:09:00', '1/7/2014', 'A' FROM DUAL UNION ALL SELECT 1002, '12:26:10', '1/7/2014', 'B' FROM DUAL UNION ALL SELECT 1002, '13:31:32', '1/7/2014', 'B' FROM DUAL UNION ALL SELECT 1002, '15:24:06', '1/7/2014', 'B' FROM DUAL UNION ALL SELECT 1002, '15:55:06', '1/7/2014', 'C' FROM DUAL ``` **Query 1**: ``` WITH Times AS ( SELECT OFFICE_ID, TO_DATE( ACTIVITY_DAY || ' ' || FAIL_TIME, 'DD/MM/YYYY HH24/MI/SS' ) AS FAIL_DATETIME, CLIENT FROM Test ), Next_Times As ( SELECT OFFICE_ID, FAIL_DATETIME, COUNT( CASE CLIENT WHEN 'A' THEN 1 END ) OVER ( PARTITION BY OFFICE_ID ORDER BY FAIL_DATETIME RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING ) AS COUNT_A, COUNT( CASE CLIENT WHEN 'B' THEN 1 END ) OVER ( PARTITION BY OFFICE_ID ORDER BY FAIL_DATETIME RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING ) AS COUNT_B, COUNT( CASE CLIENT WHEN 'C' THEN 1 END ) OVER ( PARTITION BY OFFICE_ID ORDER BY FAIL_DATETIME RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING ) AS COUNT_C FROM Times ) SELECT OFFICE_ID, 
TO_CHAR( FAIL_DATETIME, 'HH24:MI:SS' ) AS FAIL_TIME, TO_CHAR( FAIL_DATETIME, 'DD/MM/YYYY' ) AS ACTIVITY_DAY FROM Next_Times WHERE COUNT_A > 0 AND COUNT_B > 0 AND COUNT_C > 0 ORDER BY FAIL_DATETIME ``` **[Results](http://sqlfiddle.com/#!4/5327d/8/0)**: ``` | OFFICE_ID | FAIL_TIME | ACTIVITY_DAY | |-----------|-----------|--------------| | 1002 | 10:32:00 | 01/07/2014 | | 1002 | 10:34:23 | 01/07/2014 | | 1002 | 10:35:03 | 01/07/2014 | ```
To identify records you can join table to it self three times like this: ``` SELECT a.*, b.*, c.* FROM FailLog a INNER JOIN FailLog b ON b.OFFICE_ID = A.OFFICE_ID AND a.CLIENT = 'A' AND b.CLIENT = 'B' AND b.ACTIVITY_DAY = a.ACTIVITY_DAY INNER JOIN FailLog c ON c.OFFICE_ID = A.OFFICE_ID AND c.CLIENT = 'C' AND c.ACTIVITY_DAY = a.ACTIVITY_DAY AND -- need to calculate difference in min here GREATEST (a.FAIL_TIME, b. FAIL_TIME, c. FAIL_TIME) - LEAST (a.FAIL_TIME, b. FAIL_TIME, c. FAIL_TIME) <= 10 ``` The output will give you one row instead of three as requested in the question, but that will be the right level for the fault data, as all three clients should have it.
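The triple self-join with a "difference in minutes" check can be sketched concretely; here is a minimal example using SQLite (with `julianday` doing the minute math that the placeholder comment above alludes to) on a combined timestamp column and invented rows based on the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FailLog (office_id INT, fail_ts TEXT, client TEXT)")
con.executemany("INSERT INTO FailLog VALUES (?,?,?)", [
    (1002, '2014-07-01 10:32:00', 'A'),
    (1002, '2014-07-01 10:34:23', 'B'),
    (1002, '2014-07-01 10:35:03', 'C'),
    (1002, '2014-07-01 12:09:00', 'A'),
    (1002, '2014-07-01 12:26:10', 'B'),
    (1002, '2014-07-01 15:55:06', 'C'),
])

# One A/B/C triple per office where the spread of the three
# timestamps is at most 10 minutes (julianday returns days).
rows = con.execute("""
SELECT a.office_id, a.fail_ts, b.fail_ts, c.fail_ts
FROM FailLog a
JOIN FailLog b ON b.office_id = a.office_id AND b.client = 'B'
JOIN FailLog c ON c.office_id = a.office_id AND c.client = 'C'
WHERE a.client = 'A'
  AND (julianday(MAX(a.fail_ts, b.fail_ts, c.fail_ts))
     - julianday(MIN(a.fail_ts, b.fail_ts, c.fail_ts))) * 24 * 60 <= 10
""").fetchall()
```

Only the 10:32:00 / 10:34:23 / 10:35:03 triple qualifies; the later failures are spread too far apart.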
Select records all within 10 minutes from each other
[ "sql", "oracle", "plsql", "oracle10g" ]
I have the following situation: 1 table for a Warehouse, with: the ProductID, the qty and the address; 1 table for the Stores, with: the ProductID and qty; 1 table for the Stores' storage addresses. What I need to know is whether a product is "in Stock" or "out of Stock", using the criterion that a product is considered "out of stock" when its qty is 0 or > 20k in both tables (stores and warehouse). What I have tried is the following: ``` SELECT Product_Cod, qty_wh, QtyStore, CASE WHEN sum(QtyStore) BETWEEN 1 AND 20000 THEN CASE WHEN qty_wh > 0 THEN 'In Stock' WHEN qty_wh = 0 THEN 'In Stock' END WHEN sum(QtyStore) = 0 OR sum(QtyStore) > 20000 THEN CASE WHEN qty_wh <= 0 THEN 'Out of Stock' WHEN qty_wh > 0 THEN 'in Stock' END END AS [Result] FROM tb_Products, tb_qtyStock, tb_StoreProdAddress WHERE Product_Cod = wh_CodProduct AND Product_Cod = Store_CodProduct GROUP BY Product_Cod, qty_wh, QtyStore ORDER BY Product_Cod ``` Which outputs: ``` ProdCod qty_wh QtyStore Result 10026 26 0 In Stock 10026 26 1 In Stock 10026 26 2 In Stock 10070 25 0 In Stock 10070 25 2 In Stock 10070 25 3 In Stock 10071 20 0 In Stock 10071 20 1 In Stock 10071 20 29991 In Stock 10072 32 0 In Stock 10072 32 1 In Stock 10072 32 29978 In Stock 10204 0 0 Out of Stock 10204 0 1 In Stock 10204 0 4 In Stock 10204 0 29996 Out of Stock ``` But I can't make it work like I want: the QtyStore column doesn't SUM(), it repeats, because tb_StoreProdAddress is used by several stores, so multiple stores can have the same product (each store may have a different address for that product, while the warehouse address for a given product is always the same). Is there a way to SUM() the QtyStore column and group only by qty_wh?
Simply remove the QtyStore column from your `GROUP BY` clause, and put `SUM(QtyStore)` in the `SELECT`-part of your query. ``` SELECT Product_Cod, qty_wh, SUM(QtyStore) AS TotalQtyStore, CASE WHEN sum(QtyStore) BETWEEN 1 AND 20000 THEN CASE WHEN qty_wh > 0 THEN 'In Stock' WHEN qty_wh = 0 THEN 'In Stock' END WHEN sum(QtyStore) = 0 OR sum(QtyStore) > 20000 THEN CASE WHEN qty_wh <= 0 THEN 'Out of Stock' WHEN qty_wh > 0 THEN 'in Stock' END END AS [Result] FROM tb_Products, tb_qtyStock, tb_StoreProdAddress WHERE Product_Cod = wh_CodProduct AND Product_Cod = Store_CodProduct GROUP BY Product_Cod, qty_wh ORDER BY Product_Cod ```
You could try maxing qty\_wth then summing QtyStore like so: ``` SELECT Product_Cod, max(qty_wh) as qty_wh, sum(QtyStore) as QtyStore, CASE WHEN sum(QtyStore) BETWEEN 1 AND 20000 THEN CASE WHEN qty_wh > 0 THEN 'In Stock' WHEN qty_wh = 0 THEN 'In Stock' END WHEN sum(QtyStore) = 0 OR sum(QtyStore) > 20000 THEN CASE WHEN qty_wh <= 0 THEN 'Out of Stock' WHEN qty_wh > 0 THEN 'in Stock' END END AS [Result] FROM tb_Products, tb_qtyStock, tb_StoreProdAddress WHERE Product_Cod = wh_CodProduct AND Product_Cod = Store_CodProduct GROUP BY Product_Cod ORDER BY Product_Cod ```
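The max-the-warehouse, sum-the-stores idea can be illustrated with a minimal SQLite sketch; the single `stock` table below stands in for the three joined tables, the rows come from the question's output, and the in/out-of-stock rule is simplified to "any quantity anywhere" (the 20k threshold from the question is omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stock (product_cod INT, qty_wh INT, qty_store INT)")
con.executemany("INSERT INTO stock VALUES (?,?,?)", [
    (10026, 26, 0), (10026, 26, 1), (10026, 26, 2),
    (10204, 0, 0), (10204, 0, 4),
])

# qty_wh repeats on every store row, so MAX() collapses it to one value
# per product while SUM() totals the per-store quantities.
rows = con.execute("""
SELECT product_cod, MAX(qty_wh) AS qty_wh, SUM(qty_store) AS qty_store,
       CASE WHEN SUM(qty_store) > 0 OR MAX(qty_wh) > 0
            THEN 'In Stock' ELSE 'Out of Stock' END AS result
FROM stock
GROUP BY product_cod
ORDER BY product_cod
""").fetchall()
```

Grouping only by `product_cod` is what stops the per-address rows from splitting the aggregate, which was the core problem in the question.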
SQL query using case and Group by aggregation
[ "sql", "sql-server" ]
I have a table that looks like this: ``` +----+-------+---------+--------+--------+ | id | meta1 | meta2 | value1 | value2 | +----+-------+---------+--------+--------+ | 1 | foo | bar | 0.1 | 0.01 | | 1 | baz | quux | 0.2 | 0.01 | | 1 | lorem | ipsum | 0.1 | 0.05 | | 2 | dolor | sit | 0.2 | 0.02 | | 2 | amet | eos | 0.3 | 0.02 | | 3 | clita | corpora | 0.5 | 0.03 | +----+-------+---------+--------+--------+ ``` I am trying to extract one (complete) row for each id with the lowest `value1` and in the case that there are equal `value1`s, falling back to the lowest value2. The query should result in a result set like this: ``` +----+-------+---------+--------+--------+ | id | meta1 | meta2 | value1 | VALUE2 | +----+-------+---------+--------+--------+ | 1 | foo | bar | 0.1 | 0.01 | | 2 | dolor | sit | 0.2 | 0.02 | | 3 | clita | corpora | 0.5 | 0.03 | +----+-------+---------+--------+--------+ ``` I started by attempting the following query: ``` SELECT t1.* FROM test t1 INNER JOIN (SELECT id, MIN(value1) minValue1 FROM test GROUP BY id) t2 ON t1.id = t2.id AND t1.value1 = t2.minValue1; ``` But this doesn't 'break the tie' for id '1' and I end up with two of those records. I have tried adding HAVING clauses and additional subqueries and am lost beyond this initial step. Help much appreciated.
You want all records for which no better record (ie. with lower value1 or same value1 and lower value2) exists: ``` select * from mytable where not exists ( select * from mytable better where better.id = mytable.id and ( better.value1 < mytable.value1 or (better.value1 = mytable.value1 and better.value2 < mytable.value2) ) ); ```
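For anyone who wants to verify the tie-breaking behaviour, here is a small illustrative sketch that runs the same `NOT EXISTS` query against the question's sample data in SQLite (the in-memory table and connection are assumptions for demonstration only):

```python
import sqlite3

# Hypothetical in-memory table mirroring the question's sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INT, meta1 TEXT, meta2 TEXT, value1 REAL, value2 REAL)")
conn.executemany("INSERT INTO mytable VALUES (?,?,?,?,?)", [
    (1, "foo", "bar", 0.1, 0.01),
    (1, "baz", "quux", 0.2, 0.01),
    (1, "lorem", "ipsum", 0.1, 0.05),
    (2, "dolor", "sit", 0.2, 0.02),
    (2, "amet", "eos", 0.3, 0.02),
    (3, "clita", "corpora", 0.5, 0.03),
])

# One row per id: keep a row only if no other row with the same id
# beats it on (value1, value2) ordering.
result = conn.execute("""
    SELECT * FROM mytable
    WHERE NOT EXISTS (
        SELECT 1 FROM mytable better
        WHERE better.id = mytable.id
          AND (better.value1 < mytable.value1
               OR (better.value1 = mytable.value1
                   AND better.value2 < mytable.value2))
    )
    ORDER BY id
""").fetchall()
print(result)
```

For id 1 the `(foo, bar)` row wins because `(lorem, ipsum)` shares `value1 = 0.1` but has the higher `value2`.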
You can use `not exists`: ``` SELECT t.* FROM test t WHERE NOT EXISTS (SELECT 1 FROM test t2 WHERE t2.id = t.id AND (t2.value1 < t.value1 OR (t2.value1 = t.value1 and t2.value2 < t.value2) ) ); ``` An alternative method is to use variables: ``` select t.* from (select t.*, (@rn := if(@i = id, @rn + 1, if(@i := id, 1, 1) ) ) as rn from test t cross join (select @rn := 0, @i := -1) params order by id, value1, value2 ) t where rn = 1; ```
Select a single result from groups by the min of two values
[ "mysql", "sql" ]
I have a table with the following values ``` id Level Threshold 1 1 5000 2 2 10000 3 3 15000 ``` What I need to achieve is that when I pass 6000, I should get Level 1; for 12000, Level 2; and for 16000, Level 3: ``` 6000 - Level 1 12000 - Level 2 16000 - Level 3 ``` Can someone let me know how this can be achieved?
What I understood from your question is that when the user passes 6000, we should find the thresholds below 6000: only 5000, whose level is 1. Likewise, for 12000 there are two matches, 5000 (level 1) and 10000 (level 2), but you need at most one, so it is 10000 (level 2). According to this understanding the query is: ``` select max(LEVEL) from Table where Threshold < 6000; ```
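The `MAX(level)` idea can be sanity-checked quickly; the SQLite/Python sketch below is illustrative (the table and the helper function names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE levels (id INT, level INT, threshold INT)")
conn.executemany("INSERT INTO levels VALUES (?,?,?)",
                 [(1, 1, 5000), (2, 2, 10000), (3, 3, 15000)])

def level_for(amount):
    # Highest level whose threshold lies below the given amount,
    # mirroring: select max(LEVEL) from Table where Threshold < :amount
    return conn.execute(
        "SELECT MAX(level) FROM levels WHERE threshold < ?", (amount,)
    ).fetchone()[0]

print(level_for(6000), level_for(12000), level_for(16000))  # 1 2 3
```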
[**Fiddle Demo**](http://sqlfiddle.com/#!3/c68682/1) ``` create table temp (id int, level int, Threshold int); insert into temp values (1,1, 5000); insert into temp values (2,2, 10000); insert into temp values (3,3, 15000); select max(LEVEL) from temp where Threshold<= 8000; (8000 or any other value) ```
Find the Range from a Table
[ "sql", "sql-server" ]
I know it's kind of a confusing question so let me first explain the scenario. The scenario is: there are 3 tables 1. `service_master (id, name, cost, duration, visit_count, remarks)` 2. `product_master (id, name, cost)` 3. `sell (id, item_id, item_type)` So, in the **`sell`** table, **`id`** is the `primary key`, **`item_id`** holds data from 2 tables (`product_master` and `service_master`), and **`item_type`** is used to recognize whether the item is a `service` or a `product` (`P, S`). The issue is: when I want to fetch all the sold items for the user, how will I get the names of the items instead of the `ID`s? Please suggest what I should do, and if I am doing it wrong then please correct me. Many thanks in advance!
If IDs are unique over both, the service and the product table, (perhaps by using a sequence) you could just: ``` SELECT sell.id sell_id , sell.item_type , item_detail.id , item_detail.name , item_detail.cost FROM sell JOIN (SELECT id, name, cost FROM service_master UNION ALL SELECT id, name, cost FROM product_master ) item_detail ON sell.item_id = item_detail.id ; ``` Otherwise, you have another option (in addition to the ones suggested by Thorsten Kettner and DhruvJoshi), which you might want to test for performance implications, if your tables are large - but you want to retrieve the data for a very small subset (e.g.: a single invoice): ``` SELECT id sell_id , CASE item_type WHEN 'P' THEN (SELECT name FROM product_master WHERE sell.item_id = id) WHEN 'S' THEN (SELECT name FROM service_master WHERE sell.item_id = id) END AS item_name FROM sell ; ``` See it in action: [SQL Fiddle](http://sqlfiddle.com/#!3/bb910/1) ### Update There is a way of simply joining the tables - which even works for overlapping IDs: ``` SELECT sell.id sell_id , sell.item_type , item_detail.id , item_detail.name , item_detail.cost FROM sell2 sell JOIN (SELECT 'S' flag, id, name, cost FROM service_master2 UNION ALL SELECT 'P', id, name, cost FROM product_master2 ) item_detail ON sell.item_type = item_detail.flag AND sell.item_id = item_detail.id ; ``` See this in comparison: [SQL Fiddle](http://sqlfiddle.com/#!3/3616ac/1). Please comment, if and as this requires adjustment / further detail.
*How* to select depends on *what* you want to select of course :-) Here is an example on how to select all data: ``` select s.*, sm.name, sm.cost, sm.duration from sell s join service_master sm on sm.id = s.item_id where s.item_type = 'SERVICE' union all select s.*, pm.name, pm.cost, null from sell s join product_master pm on pm.id = s.item_id where s.item_type = 'PRODUCT'; ```
how to fetch data when a single column contains values from 2 different tables?
[ "sql", "sql-server", "database", "sql-server-2008" ]
Hi, I have a database full of records of players and their match stats. Example: [![Database example](https://i.stack.imgur.com/LoCoL.png)](https://i.stack.imgur.com/LoCoL.png) (number means goals scored in a match) What I'm trying to do is select the top X players that scored the most goals in a season. I'm trying something like this, but I think there must be another select with `select count(number)` or something. ``` SELECT TOP 10 [playerID] ,[number] ,[league] ,[yearID] ,[YellowCard] ,[RedCard] FROM [ms4033].[dbo].[Shooter] where yearID=28 AND league=4 group by playerID ``` Thanks for your time :-)
You need to `SUM` the number field and order by it like this: ``` SELECT TOP 10 playerID, SUM([number]) as goals, [league], [yearID], SUM([YellowCard]) as YellowCards, SUM([RedCard]) as RedCards FROM [ms4033].[dbo].[Shooter] WHERE yearID=28 AND league=4 GROUP BY playerID, league, yearID ORDER BY goals DESC ```
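To see the aggregation work end to end, here is an illustrative SQLite version (SQLite has no `TOP`, so `LIMIT` stands in for it; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shooter (playerID INT, number INT, league INT, yearID INT)")
conn.executemany("INSERT INTO shooter VALUES (?,?,?,?)", [
    (1, 2, 4, 28), (1, 1, 4, 28),   # player 1: 3 goals in total
    (2, 1, 4, 28),                  # player 2: 1 goal
    (3, 4, 4, 28), (3, 1, 4, 28),   # player 3: 5 goals in total
])

top = conn.execute("""
    SELECT playerID, SUM(number) AS goals
    FROM shooter
    WHERE yearID = 28 AND league = 4
    GROUP BY playerID
    ORDER BY goals DESC
    LIMIT 2            -- SQLite's LIMIT stands in for SQL Server's TOP
""").fetchall()
print(top)  # [(3, 5), (1, 3)]
```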
``` select top 10 t.[playerID] ,t.totalgoals ,s.[league] ,s.[yearID] ,s.[YellowCard] ,s.[RedCard] from ( SELECT -- TOP 10 [playerID] ,sum([number]) as totalgoals ,[league] ,[yearID] -- ,[YellowCard] -- ,[RedCard] FROM [ms4033].[dbo].[Shooter] where yearID=28 AND league=4 group by playerID,[league],[yearID]) t join [ms4033].[dbo].[Shooter] s on t.playerid = s.playerid order by totalgoals desc ``` You can do the calculation for total goals in the inner query and order it to select the top 10 players.
SQL - select top 10 rows with the highest number (counted)
[ "sql", "sql-server" ]
This is a slightly strange use case admittedly, but how do you order the results of a simple union like this alphabetically? ``` select name from Reviewer union select model from Car; ```
Ironically, in many databases, your query *would* return the values in alphabetical order. The sorting is done during the duplicate removal for `union`. Of course, you can't depend on that. So, you should include an `order by` clause: ``` select name from Reviewer union select model from Car order by name; ``` The `order by` applies to the complete `union`. The column name comes from the first subquery.
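A minimal check of both points, that the `ORDER BY` applies to the whole `UNION` and that the column name comes from the first subquery, can be sketched like this (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviewer (name TEXT)")
conn.execute("CREATE TABLE car (model TEXT)")
conn.executemany("INSERT INTO reviewer VALUES (?)", [("Zoe",), ("Ann",)])
conn.executemany("INSERT INTO car VALUES (?)", [("Mustang",), ("Ann",)])

# ORDER BY name sorts the combined result; 'name' is taken from the
# first branch. UNION also removes the duplicate 'Ann'.
rows = conn.execute("""
    SELECT name FROM reviewer
    UNION
    SELECT model FROM car
    ORDER BY name
""").fetchall()
print(rows)  # [('Ann',), ('Mustang',), ('Zoe',)]
```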
``` select val from ( select name as val from Reviewer union select model as val from Car ) as x order by val ```
SQL: How to order a union alphabetically?
[ "sql" ]
In Table 1, I have customer ID and the list of Items the customer ID has purchased. **Table 1** ``` Customer ID | Item Code ------------------------ 1000 | 10 1000 | 20 1000 | 30 1000 | 40 2000 | 10 2000 | 60 2000 | 90 2000 | 100 ------------------------ ``` In another table I have a hierarchy as below **Table 2** ``` This has precedence | Over This --------------------------------- 30 | 10 20 | 40 60 | 90 100 | 60 --------------------------------- ``` I want to add a result column in `Table 1` which will, for example, for Customer 1000, eliminate 10 by 30 and 40 by 20 based on table 2. Case 1: The 10 should be replaced by 30 only for Customer 1000 as there is a 30 present in the column 'Item Code' for Customer 1000 and not for Customer 2000. Case 2: For Customer 2000, 60 should be replaced by 90 and 90 should be replaced by 100 for both rows. So ideally, the final result should be as follows:

```
Customer ID | Item Code | Final Code
1000        | 10        | 30
1000        | 20        | 20
1000        | 30        | 30
1000        | 40        | 20
2000        | 10        | 10   (This is not replaced!)
2000        | 60        | 100
2000        | 90        | 100
2000        | 100       | 100
```
Updating the query as OP got updated and shows recursion. If you want to **add a column** in Table 1 to show precedence data. Here how you can accomplish it. ``` ALTER TABLE Table1 ADD Result int GO ;WITH CTE([This has Precedence], [Over This]) AS ( --First find the top level members --These are the items which does't have any precedence SELECT T2A.[This has Precedence], T2A.[Over This] FROM dbo.Table2 T2A LEFT JOIN dbo.Table2 T2B ON T2A.[This has Precedence] = T2B.[Over This] WHERE T2B.[Over This] IS NULL UNION ALL --Now call the lower level members and join with the CTE SELECT CTE.[This has Precedence], T2.[OVER THIS] FROM dbo.Table2 T2 INNER JOIN CTE ON T2.[This has Precedence] = CTE.[Over This] ) --Now use the recursive CTE to find the correct precedence member. UPDATE T1 SET Result = ISNULL(T1x.[Item Code], T1.[Item Code]) FROM Table1 T1 LEFT JOIN CTE T2 ON T1.[Item Code] = T2.[Over This] LEFT JOIN Table1 T1x ON T2.[This has Precedence] = T1x.[Item Code] AND T1.CID = T1x.CID GO ```
Doing an update, assuming those are your real column names: ``` ALTER TABLE Table1 ADD result int UPDATE a SET result = [Item Code] - [This has precedence] from Table1 a inner join Table2 b on a.[Item Code] = b.[Over this] ```
How do I delete values from a column based on a Hierarchy in a given table?
[ "sql", "t-sql", "sql-server-2012" ]
I need to examine ACCT\_NUM values in TABLE\_1. If the ACCT\_NUM is prefixed by "GF0", then I need to disregard the "GF0" prefix and take the rightmost 7 characters of the remaining string. If this resulting value is not found in account\_x\_master or CW\_CLIENT\_STAGE, then the record is to be flagged as an error. The following seems to do the trick, but I have a concern... ``` UPDATE table_1 SET Error_Ind = 'GW001' WHERE LEFT(ACCT_NUM, 3) = 'GF0' AND RIGHT(SUBSTRING(ACCT_NUM, 4, LEN(ACCT_NUM) - 3), 7) NOT IN ( SELECT acct_num FROM account_x_master ) AND RIGHT(SUBSTRING(ACCT_NUM, 4, LEN(ACCT_NUM) - 3), 7) NOT IN ( SELECT CW_CLIENT_STAGE.AGS_NUM FROM dbo.CW_CLIENT_STAGE ) ``` My concern is that SQL Server may attempt to perform a SUBSTRING operation ``` SUBSTRING(ACCT_NUM, 4, LEN(ACCT_NUM) - 3) ``` that results in a computed negative length, causing the SQL to fail. Of course, this wouldn't fail if the SUBSTRING operation were only applied to those records that were at least 3 characters long, which would always be the case if the ``` LEFT(ACCT_NUM, 3) = 'GF0' ``` were applied first. If possible, I'd like to avoid adding new columns to the table. Bonus points for simplicity and less overhead :-) How can I rewrite this UPDATE SQL to protect against this?
As other people said, your concern is valid. I'd make two changes to your query. 1) To avoid having negative value in the `SUBSTRING` parameter we can rewrite it using `STUFF`: ``` SUBSTRING(ACCT_NUM, 4, LEN(ACCT_NUM) - 3) ``` is equivalent to: ``` STUFF(ACCT_NUM, 1, 3, '') ``` Instead of extracting a tail of a string we replace first three characters with empty string. If the string is shorter than 3 characters, result is empty string. By the way, if your `ACCT_NUM` may end with space(s), they will be trimmed by the `SUBSTRING` version, because `LEN` doesn't count trailing spaces. 2) Instead of ``` LEFT(ACCT_NUM, 3) = 'GF0' ``` use: ``` ACCT_NUM LIKE 'GF0%' ``` If you have an index on `ACCT_NUM` and only relatively small number of rows start with `GF0`, then index will be used. If you use a function, such as `LEFT`, index can't be used. So, the final query becomes: ``` UPDATE table_1 SET Error_Ind = 'GW001' WHERE ACCT_NUM LIKE 'GF0%' AND RIGHT(STUFF(ACCT_NUM, 1, 3, ''), 7) NOT IN ( SELECT acct_num FROM account_x_master ) AND RIGHT(STUFF(ACCT_NUM, 1, 3, ''), 7) NOT IN ( SELECT CW_CLIENT_STAGE.AGS_NUM FROM dbo.CW_CLIENT_STAGE ) ```
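The safety property claimed for `STUFF` can be modelled in plain Python; `stuff` below is a hypothetical, simplified stand-in for T-SQL's `STUFF` (1-based start, as in T-SQL), not a real library function:

```python
def stuff(s, start, length, replacement):
    # Simplified model of T-SQL STUFF(s, start, length, replacement):
    # delete `length` characters starting at 1-based `start`, then
    # insert `replacement` there.
    i = start - 1
    return s[:i] + replacement + s[i + length:]

# Stripping a three-character prefix works on long input...
assert stuff("GF01234567", 1, 3, "") == "1234567"
# ...and degrades to an empty string, instead of raising,
# when the input is shorter than three characters:
assert stuff("GF", 1, 3, "") == ""
# RIGHT(..., 7) is then just a tail slice:
assert stuff("GF01234567", 1, 3, "")[-7:] == "1234567"
print("ok")
```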
You have a very valid concern, because SQL Server will rearrange the order of evaluation of expressions in the `WHERE`. The only way to guarantee the order of operations in a SQL statement is to use `case`. I don't think there is a way to catch failing calls to `substring()` . . . there is no `try_substring()` analogous to `try_convert()`. So: ``` WHERE LEFT(ACCT_NUM, 3) = 'GF0' AND (CASE WHEN LEN(ACCT_NUM) > 3 THEN RIGHT(SUBSTRING(ACCT_NUM, 4, LEN(ACCT_NUM) - 3), 7) END) NOT IN (SELECT acct_num FROM account_x_master ) AND (CASE WHEN LEN(ACCT_NUM) > 3 THEN RIGHT(SUBSTRING(ACCT_NUM, 4, LEN(ACCT_NUM) - 3), 7) END) NOT IN (SELECT CW_CLIENT_STAGE.AGS_NUM FROM dbo.CW_CLIENT_STAGE ) ``` This is uglier. And, there may be ways around it, say by using `LIKE` with wildcards rather than string manipulation. But, the `case` will guarantee that the `SUBSTRING()` is only run on strings long enough so no error is generated.
Is this simple UPDATE SQL an error waiting to happen? How to rewrite it?
[ "sql", "sql-server" ]
I have a table like this: ``` +---------+------------+--------+--------------+ | Id | Name | Status | Content_type | +---------+------------+--------+--------------+ | 2960671 | PostJob | Error | general_url | | 2960670 | auto_index | Done | general_url | | 2960669 | auto_index | Done | document | | 2960668 | auto_index | Error | document | | 2960667 | auto_index | Error | document | +---------+------------+--------+--------------+ ``` And I want to count how many of each type have 'Error' as status, so the result would be `1x general_url` and `2x document`. I tried something like this: ``` SELECT COUNT(DISTINCT Content_type) from Indexing where Status = 'Error'; ``` But I could not figure out how to get the content\_type out of it.
You want this ``` select Content_type, count(Status) from Indexing where Status='Error' group by Content_type; ```
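Here is an illustrative end-to-end run of this query against the question's data, using SQLite from Python (the connection and the `ORDER BY` were added for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE indexing (id INT, name TEXT, status TEXT, content_type TEXT)")
conn.executemany("INSERT INTO indexing VALUES (?,?,?,?)", [
    (2960671, "PostJob", "Error", "general_url"),
    (2960670, "auto_index", "Done", "general_url"),
    (2960669, "auto_index", "Done", "document"),
    (2960668, "auto_index", "Error", "document"),
    (2960667, "auto_index", "Error", "document"),
])

# Group the Error rows by content_type and count each group.
counts = conn.execute("""
    SELECT content_type, COUNT(status)
    FROM indexing
    WHERE status = 'Error'
    GROUP BY content_type
    ORDER BY content_type
""").fetchall()
print(counts)  # [('document', 2), ('general_url', 1)]
```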
GROUP BY should do the job: ``` SELECT Content_type, COUNT(Id) from Indexing where Status = 'Error' GROUP BY Content_type; ``` **Explanation:** `COUNT (x)` counts the number of rows in the *group*, `COUNT (*)` would do the same. `COUNT (DISTINCT x)` counts the number of distinct values in the *group*. Without a `GROUP BY` clause the *group* is the whole set of records, so in your example you would have seen a single value (2) as your result; i.e. there are 2 distinct Content\_types in the set.
SQL select counts on 1 value
[ "sql" ]
I have the below query ``` SELECT FullUrl ,Title ,NextWebGroupId ,TimeCreated FROM AllWebsTest WHERE (Title LIKE '%' + @Title + '%') OR (FullUrl LIKE '%' + @FullUrl + '%') ``` Title and FullURL are parameters supplied through TextBoxes. What happens is: 1. If I put text in the Title textbox and search, it doesn't give any result; only if I also put some string, or even a blank space, in the FullURL textbox do I get results. 2. Vice versa: if Title is blank (no space either) and I put a search term in the FullURL textbox, I don't get any result. How can I handle a blank textbox, so that putting a search term in only one of the boxes still returns results? This works fine if I run the query in SQL Server directly, but through the ASP.NET web form it gives this problem. **Update 1:** I tried the solutions below; while logically they should work, I am still not getting results if one of the textboxes is untouched. Could there be an issue with a special character? Below is the exact tag ``` <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:testConnectionString %>" SelectCommand="SELECT FullUrl, Title, NextWebGroupId, TimeCreated FROM AllWebsTest WHERE (Title LIKE '%' + ISNULL(@Title, '') + '%') OR (FullUrl LIKE '%' + ISNULL(@FullUrl, '') + '%')"> <SelectParameters> <asp:ControlParameter ControlID="txtSearch" Name="Title" PropertyName="Text" Type="String" /> <asp:ControlParameter ControlID="Textsearch2" Name="FullUrl" PropertyName="Text" Type="String" /> </SelectParameters> </asp:SqlDataSource> ```
Try this query: ``` SELECT FullUrl ,Title ,NextWebGroupId ,TimeCreated FROM AllWebsTest WHERE (Title LIKE '%' + ISNULL(@Title, '') + '%') AND (FullUrl LIKE '%' + ISNULL(@FullUrl, '') + '%') ``` Take a look at this [SQLFiddle](http://sqlfiddle.com/#!3/ce9ad/7) and confirm that this behaviour is what you expected. This query will filter rows only when a value is passed to one of your variables, `@title` or `@fullURL` (and return those that match the condition). Otherwise it will return all results. **Update after your recent comments:** ``` SELECT FullUrl ,Title ,NextWebGroupId ,TimeCreated FROM AllWebsTest WHERE (Title LIKE '%' + ISNULL(LTRIM(@Title), '') + '%') AND (FullUrl LIKE '%' + ISNULL(LTRIM(@FullUrl), '') + '%') ``` Here is a [SQLFiddle](http://sqlfiddle.com/#!3/ce9ad/20) with how the code above works.
Most likely something along the way is using `NULL` instead of an empty string, resulting in a `WHERE Title LIKE NULL` which will fail. Try the following instead, explicitly checking for the null case: ``` SELECT FullUrl, Title, NextWebGroupId, TimeCreated FROM AllWebsTest WHERE (@Title IS NULL OR Title LIKE '%' + @Title + '%') OR (@FullUrl IS NULL OR FullUrl LIKE '%' + @FullUrl + '%') ```
SQL Where, LIKE , OR combination issue with Blank textBox
[ "sql", "asp.net", "sql-server" ]
I'd like to get a count of negative values and positive values for each id. [Sample Fiddle](http://sqlfiddle.com/#!4/db410) `ID=1 has 2 positive and 0 negative transactions. etc.` ``` with trans_detail as ( select 1 as trans_id, 100 as trans_amount from dual union all select 1 as trans_id, 200 as trans_amount from dual union all select 2 as trans_id, -100 as trans_amount from dual union all select 2 as trans_id, -300 as trans_amount from dual union all select 3 as trans_id, 400 as trans_amount from dual union all select 3 as trans_id, -500 as trans_amount from dual ) select trans_id, count(*) over (partition by trans_id) as pos_count, count(*) over (partition by trans_id) as neg_count from trans_detail where trans_amount > 0 UNION select trans_id, count(*) over (partition by trans_id) as pos_count, count(*) over (partition by trans_id) as neg_count from trans_detail where trans_amount < 0; ``` Desired Result: ``` ID POS_COUNT NEG_COUNT ---- ----------- ----------- 1 2 0 2 0 2 3 1 1 ```
Count 1 each time you see a positive or negative amount and sum that up. ``` select trans_id, sum(case when trans_amount >=0 then 1 else 0 end) as pos_amt, sum(case when trans_amount < 0 then 1 else 0 end) as neg_amt from trans_detail group by trans_id ``` <http://sqlfiddle.com/#!4/db410/12>
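This conditional-aggregation pattern is portable; the sketch below replays it on the question's rows in SQLite (the Python harness around the SQL is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans_detail (trans_id INT, trans_amount REAL)")
conn.executemany("INSERT INTO trans_detail VALUES (?,?)", [
    (1, 100), (1, 200), (2, -100), (2, -300), (3, 400), (3, -500),
])

# Each CASE contributes 1 or 0 per row, so SUM produces the counts.
rows = conn.execute("""
    SELECT trans_id,
           SUM(CASE WHEN trans_amount >= 0 THEN 1 ELSE 0 END) AS pos_amt,
           SUM(CASE WHEN trans_amount <  0 THEN 1 ELSE 0 END) AS neg_amt
    FROM trans_detail
    GROUP BY trans_id
    ORDER BY trans_id
""").fetchall()
print(rows)  # [(1, 2, 0), (2, 0, 2), (3, 1, 1)]
```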
``` select trans_id, nvl(sum(case when trans_amount < 0 then 1 end),0) as neg, nvl(sum(case when trans_amount > 0 then 1 end),0) as pos from trans_detail group by trans_id ``` SQL Fiddle: <http://sqlfiddle.com/#!4/db410/15>
How to get count of positive and negative numbers per ID?
[ "sql", "oracle", "count", "window-functions" ]
I'm trying to create a Update from Select where in a specific case (88) the previous value should be used to update the column field ``` UPDATE myTable SET MyDateCol = CASE MYSelect.Indicator WHEN 88 THEN @previosValue ELSE MYSelect.NewValue FROM myTable INNER JOIN ( ... ) AS MYSelect ON myTable.ID = MYSelect.ID ``` EXAMPLE: ``` MYSelect | myTable | Indicator NewValue | MyDateCol 0 1 | 1 1 2 | 2 88 3 | 2 <- 3 4 | 4 4 5 | 5 5 6 | 6 6 7 | 7 88 8 | 7 <- ``` Full Update ``` UPDATE [dbo].[BestellDetails] SET [Datum] = CASE MYTABLE.WochenTag WHEN 88 THEN lag(MyDate) over(order by MYTABLE.MyDate) ELSE MyDate END FROM [BestellDetails] INNER JOIN ( SELECT INNERTabelle.BestellDetailId,INNERTabelle.WochenTag, DATEADD (dd,INNERTabelle.WochenTag, INNERTabelle.NewDatum) AS MyDate FROM ( SELECT TOP 100 PERCENT BestellDetailId, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) AS NewDatum, (CASE BestellDetails.RefMenuId WHEN Speiseplandetails.RefMoId THEN 0 WHEN Speiseplandetails.RefDiId THEN 1 WHEN Speiseplandetails.RefMiId THEN 2 WHEN Speiseplandetails.RefDoId THEN 3 WHEN Speiseplandetails.RefFrId THEN 4 WHEN Speiseplandetails.RefSaId THEN 5 WHEN Speiseplandetails.RefSoId THEN 6 ELSE 88 END) AS WochenTag FROM dbo.Speiseplandetails RIGHT OUTER JOIN dbo.BestellDetails ON dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefMoId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefDiId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefMiId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefDoId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefFrId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefSaId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefSoId RIGHT OUTER JOIN dbo.Bestellung ON dbo.BestellDetails.RefBestellId = dbo.Bestellung.BestellId order by BestellDetailId ) as INNERTabelle ) AS MYTABLE ON [BestellDetails].BestellDetailId = MYTABLE.BestellDetailId ```
Please try the below query. It has optimized your query and also takes care of cases where there are repeated `88` values. In my case I've used `NULL` instead of `88`. > Can we have two rows in succession that both have an 88 indicator? – > Damien\_The\_Unbeliever ``` CREATE TABLE #temp( BestellDetailId int, row_num int,MyDate date ) INSERT INTO #temp SELECT TOP 100 PERCENT BestellDetailId, ROW_NUMBER() OVER (order by BestellDetailId) as row_num, ( CASE BestellDetails.RefMenuId WHEN Speiseplandetails.RefMoId THEN DATEADD (dd,0, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) WHEN Speiseplandetails.RefDiId THEN DATEADD (dd,1, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) WHEN Speiseplandetails.RefMiId THEN DATEADD (dd,2, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) WHEN Speiseplandetails.RefDoId THEN DATEADD (dd,3, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) WHEN Speiseplandetails.RefFrId THEN DATEADD (dd,4, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) WHEN Speiseplandetails.RefSaId THEN DATEADD (dd,5, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) WHEN Speiseplandetails.RefSoId THEN DATEADD (dd,6, dbo.FirstDateOfWeekISO8601(dbo.Bestellung.Jahr, dbo.Bestellung.Kalenderwoche) ) ELSE NULL END ) AS MyDate FROM dbo.Speiseplandetails RIGHT OUTER JOIN dbo.BestellDetails ON dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefMoId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefDiId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefMiId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefDoId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefFrId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefSaId OR dbo.BestellDetails.RefMenuId = dbo.Speiseplandetails.RefSoId RIGHT OUTER JOIN dbo.Bestellung ON 
dbo.BestellDetails.RefBestellId = dbo.Bestellung.BestellId order by BestellDetailId --updating the temp table with correct values update t1 set t1.myDate=t3.myDate from #temp t1 left join #temp t3 on t3.row_num in (select max(row_num) from #temp t2 where t2.myDate is not null and t2.row_num<t1.row_num) where t1.MyDate is NULL UPDATE [dbo].[BestellDetails] SET [Datum] = MYTABLE.myDate FROM [BestellDetails] INNER JOIN #temp AS MYTABLE ON [BestellDetails].BestellDetailId = MYTABLE.BestellDetailId ``` below is the sample fiddle for updating a table with last value <http://sqlfiddle.com/#!6/c78ae/8>
``` UPDATE myTable SET MyDateCol = CASE MYSelect.Indicator WHEN 88 THEN (select lag(newvalue) over(order by newvalue) from MySelect) -- @previosValue ELSE MYSelect.NewValue FROM myTable INNER JOIN ( ... ) AS MYSelect ON myTable.ID = MYSelect.ID ``` This is assuming the previous value has to be selected based on the ascending order in `newvalue` column.
sql case when newValue else previousValue
[ "sql", "sql-server", "select", "sql-update", "case" ]
I have the following stored procedure with four parameters. Stored procedure `spTest`: ``` CREATE PROCEDURE spTest @Name varchar(20) = '', @Address varchar(100) = '', @City varchar(50) = '', @Pin varchar(50) = '' AS DECLARE @DynamicWhere varchar(max) DECLARE @Query varchar(max) /* Here I want to prepare a dynamic where clause for all possibilities */ SET @Query = 'SELECT * FROM Test_Table '+ @DynamicWhere +''; EXECUTE(@Query); GO ``` Well I am preparing it like this: ``` IF @Name = '' AND @Address = '' AND @City = '' AND @Pin = '' BEGIN SET @DynamicWhere = ''; END ELSE IF @Name != '' AND @Address = '' AND @City = '' AND @Pin = '' BEGIN SET @DynamicWhere = 'Name ='''+@Name+''''; END ELSE IF @Name != '' AND @Address != '' AND @City = '' AND @Pin = '' BEGIN SET @DynamicWhere = 'Name ='''+@Name+''' AND Address ='''+@Address+''''; END ...... ...... Many possibilities ``` Is this a right way **OR** is there any better way to prepare the dynamic `WHERE` clause?
It's called catch-all queries and it basically goes like this: ``` CREATE PROCEDURE spTest @Name varchar(20) = '', @Address varchar(100) = '', @City varchar(50) = '', @Pin varchar(50) = '' AS SELECT * FROM Test_Table WHERE (@Name = '' OR Name = @Name) AND (@Address = '' OR Address = @Address) AND (@City = '' OR City = @City) AND (@Pin = '' OR Pin = @Pin); GO ``` You also might want to read [this article about catch all queries](http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/)
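To show the catch-all shape behaving with and without each filter, here is an illustrative SQLite sketch (table, columns, and sample data are invented; named `:param` placeholders stand in for the T-SQL variables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (name TEXT, city TEXT)")
conn.executemany("INSERT INTO test_table VALUES (?,?)",
                 [("Ann", "Pune"), ("Bob", "Delhi"), ("Cid", "Pune")])

# Each predicate collapses to TRUE when its parameter is empty,
# so unused filters fall away without building SQL dynamically.
CATCH_ALL = """
    SELECT name FROM test_table
    WHERE (:name = '' OR name = :name)
      AND (:city = '' OR city = :city)
    ORDER BY name
"""

def search(name="", city=""):
    return [r[0] for r in conn.execute(CATCH_ALL, {"name": name, "city": city})]

print(search())                      # ['Ann', 'Bob', 'Cid'] -- no filters
print(search(city="Pune"))           # ['Ann', 'Cid'] -- one filter active
print(search(name="Bob", city="Delhi"))  # ['Bob'] -- both filters active
```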
You can use ISNULL and NULLIF also in this case: below code should work : ``` CREATE PROCEDURE spTest @Name varchar(20) = '', @Address varchar(100) = '', @City varchar(50) = '', @Pin varchar(50) = '' AS SET @Name=NULLIF(@Name,'') SET @Address=NULLIF(@Address,'') SET @City=NULLIF(@City,'') SET @Pin=NULLIF(@Pin,'') SELECT * FROM Test_Table WHERE Name = ISNULL(@Name,Name) AND Address = ISNULL(@Address,Address) AND City = ISNULL(@City,City) AND Pin = ISNULL(@Pin,Pin) GO ```
SQL Server 2008 R2: Prepare Dynamic WHERE Clause
[ "sql", "sql-server", "sql-server-2008-r2" ]
We have a file that needs to be imported that has dates in it. The dates are in a format that I have not seen before; the day part can vary in length (but not the month or year, seemingly) and position based on whether the number is double digit or not, i.e. Dates: ``` 13082014 is 13th August 2014 9092013 is 9th September 2013 ``` The current script tries to substring the parts out, but fails on the second one as there is not enough data. I could write an if or case to check the length, but is there a SQL format that can be used to reliably import this data? To clarify, this is MSSQL and the date format is ddmmyyyy or dmmyyyy.
LPAD a zero when it is missing so to always get an eight character date string. Here is an example with Oracle, other DBMS may have other string and date functions to achieve the same. ``` select to_date(datestring, 'ddmmyyyy') from ( select lpad('13082014', 8, '0') as datestring from dual union all select lpad('9092013', 8, '0') as datestring from dual ); ``` Result: ``` 13.08.2014 09.09.2013 ```
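The same left-pad-then-parse idea, shown outside the database for clarity; this Python sketch is an illustrative analogue of the LPAD + TO_DATE pipeline, not the Oracle code itself:

```python
from datetime import datetime

def parse_dmy(raw):
    # zfill(8) mirrors LPAD(raw, 8, '0'): '9092013' -> '09092013',
    # giving a fixed ddmmyyyy string that a single format can parse.
    return datetime.strptime(raw.zfill(8), "%d%m%Y").date()

print(parse_dmy("13082014"))  # 2014-08-13
print(parse_dmy("9092013"))   # 2013-09-09
```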
One of the simple ways is to use STUFF. Example: ``` select STUFF(STUFF('13082014 ',3,0,'/'),6,0,'/'); //result: 13/08/2014 ``` Good luck.
Format date where the position of the parts is variable
[ "sql", "datetime" ]
I'm currently building a simple view and need to combine both the `First Name` and `Last Name` columns to create a new `Customer` column. If a `First Name` and `Last Name` are not provided I'd like to change this new combined value to 'Name Not Provided'. Currently I use a simple select statement: `LastName + ', ' + FirstName AS Customer` which appears to work fine for combining the data, but if the data doesn't exist, it will just return `', '`. How do I go about changing this so it returns 'Name Not Provided'?
``` SELECT Customer = CASE WHEN FirstName IS NULL AND LastName IS NULL THEN 'Name Not Provided' WHEN FirstName IS NULL AND LastName IS NOT NULL THEN LastName WHEN FirstName IS NOT NULL AND LastName IS NULL THEN FirstName ELSE LastName + ', ' + FirstName END FROM dbo.TableName ``` `Demo`
``` SET CONCAT_NULL_YIELDS_NULL ON SELECT ISNULL(LastName + ', ' + FirstName, 'Name Not Provided') AS Customer ```
SQL: Combine First Name and Last Name columns but if Null change to 'Name Not Provided'
[ "sql", "t-sql" ]
I have a column which represents values as seconds. The results in this column look like (465109, 352669, 351949, etc.). I need the results to always be decimals less than one, so the 465109 result should come out as 0.465109 and, similarly, 352669 should come out as 0.352669. Is there a way I can do that in SQL Server 2008? ``` DATEDIFF(SECOND,StartDate,GETDATE()) ```
Can you just use division? ``` select seconds / 1000000.0 ``` Note the `.0` so it does floating point division rather than integer division. If you want these as a decimal, then cast afterwards: ``` select cast(seconds / 1000000.0 as decimal(10, 6)) ```
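The integer-division pitfall that the `.0` avoids is easy to demonstrate outside SQL as well; this Python fragment is only an analogue of the SQL behaviour:

```python
seconds = 465109

# Integer division (analogous to INT / INT in SQL Server) loses everything:
print(seconds // 1000000)    # 0

# Dividing by a value written with a decimal point keeps the fraction,
# which is exactly why the SQL uses 1000000.0:
print(seconds / 1000000.0)   # 0.465109
```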
A trick is this: ``` CAST('.' + REPLACE(CAST(@value AS varchar(30)), '.', '') AS Numeric(28,10)) ``` And as a math solution: ``` CAST(@valueAS Numeric(28,10)) / (POWER(10, LEN(CAST(FLOOR(@value) AS varchar(20))))) ```
SQL Server: Converting a number to decimal
[ "sql", "sql-server", "sql-server-2008" ]
``` CREATE TABLE WRITTEN_BY ( Re_Id CHAR(15) NOT NULL, Pub_Number INT NOT NULL, PRIMARY KEY(Re_Id, Pub_Number), FOREIGN KEY(Re_Id) REFERENCES RESEARCHER(Re_Id), FOREIGN KEY(Pub_Number) REFERENCES PUBLICATION(Pub_Number)); CREATE TABLE WORKING_ON ( Re_Id CHAR(15) NOT NULL, Pro_Code CHAR(15) NOT NULL, PRIMARY KEY(Re_Id, Pro_Code, Subpro_Code), FOREIGN KEY(Re_Id) REFERENCES RESEARCHER(Re_Id)); ``` **Re\_Id** stands for *ID of a researcher* **Pub\_Number** stands for *ID of a publication* **Pro\_Code** stands for *ID of a project* **Written\_by** table stores information about a Publication's ID and its author **Working\_on** table stores information about a Project's ID and who is working on it Now, I have this query: ``` For each project, find the researcher who wrote the most number of publications . ``` This is what I've done so far: ``` SELECT Pro_Code,WORK.Re_Id FROM WORKING_ON AS WORK , WRITTEN_BY AS WRITE WHERE WORK.Re_Id = WRITE.Re_Id ``` so I got a table which contains the personal ID and project ID of each researcher who has at least 1 publication. But what's next? How do I solve this problem?
You haven't said which platform you're on but try this. It handles the case where there are ties as well. ``` select g.Pro_Code, g.Re_Id, g.numpublished from ( SELECT work.Pro_Code, WORK.Re_Id, count(WRITE.pub_number) as numpublished FROM WORKING_ON WORK JOIN WRITTEN_BY AS WRITE ON WORK.Re_Id = WRITE.Re_Id GROUP BY work.Pro_Code, WORK.Re_Id ) g inner join ( select Pro_code, max(numpublished) as maxpublished from ( SELECT work.Pro_Code, WORK.Re_Id, count(WRITE.pub_number) numpublished FROM WORKING_ON WORK JOIN WRITTEN_BY AS WRITE ON WORK.Re_Id = WRITE.Re_Id GROUP BY work.Pro_Code, WORK.Re_Id ) g2 group by Pro_code ) m on m.Pro_code = g.Pro_Code and m.maxpublished = g.numpublished ``` Some platforms will allow you to write it this way: ``` with g as ( SELECT work.Pro_Code, WORK.Re_Id, count(WRITE.pub_number) as numpublished FROM WORKING_ON WORK JOIN WRITTEN_BY AS WRITE ON WORK.Re_Id = WRITE.Re_Id GROUP BY work.Pro_Code, WORK.Re_Id ) select g.Pro_Code, g.Re_Id, g.numpublished from g inner join ( select Pro_code, max(numpublished) as maxpublished from g group by Pro_code ) m on m.Pro_code = g.Pro_Code and m.maxpublished = g.numpublished ```
If you are using MS SQL, this should work: ``` With cte as ( select a.Re_Id, Pub_Number,Pro_Code, COUNT(distinct Pub_Number) as pubs from WRITTEN_BY a inner join WORKING_ON b on a.Re_Id = b.Re_Id) SELECT Re_Id,pubs from cte HAVING pubs = MAX(pubs) GROUP BY Re_Id ```
SQL query : SELECT
[ "sql", "select" ]
I am trying to update the amount using Join but getting exception: ``` UPDATE tab1 SET tab1.total_adjusted_cost = tab1.total_adjusted_cost + t1.total FROM table1 tab1, (SELECT tab3.name, tab3.add, SUM(tab2.amount) AS total FROM table2 tab2, table3 tab3, table4 tab4 WHERE tab2.id = tab3.id AND tab3.id = tab4.id AND tab4.indicator = 'Y' GROUP BY tab3.name, tab3.add ) t1 WHERE tab1.id = t1.id; SQL Error: ORA-00933: SQL command not properly ended 00933. 00000 - "SQL command not properly ended" ```
Try to use merge ``` merge into table1 tab1 using ( SELECT tab3.name, tab3."add", SUM(tab2.amount) AS total FROM table2 tab2, table3 tab3 , table4 tab4 WHERE tab2.id = tab3.id AND tab3.id = tab4.id AND tab4.indicator ='Y' GROUP BY tab3.name, tab3."add" )t1 on(tab1.id = t1.id) when matched then update set tab1.total_adjusted_cost = tab1.total_adjusted_cost + t1.total ```
`group` is a keyword in sql. You have to escape it. ``` UPDATE tab1 SET tab1.total_adjusted_cost = tab1.total_adjusted_cost + t1.total FROM table1 tab1, (SELECT tab3.name, tab3."add", SUM(tab2.amount) AS total FROM table2 tab2, table3 tab3 , table4 tab4 WHERE tab2.id = tab3.id AND tab3.id = tab4.id AND tab4.indicator ='Y' GROUP BY tab3.name, tab3."add" ) t1 WHERE tab1.id = t1.id ```
Oracle Update Query using Join
[ "sql", "oracle", "sql-update", "ora-00933" ]
I want to write a select query that compares the two tables. The comparison should tell which records need to be updated, inserted, or deleted in table 2. Contents of the tables: ``` Table 1 ----- --------- id name ----- --------- 1 shubham 2 ravi 3 aman 4 vijay Table 2 ----- --------- id name ----- --------- 1 shubham 2 ravi 3 aman 4 vijay ``` Both tables start with the same data. Then the data in `table1` is changed, like so: ``` Table 1 ----- --------- id name ----- --------- 1 shubham 2 harish 3 aman 5 saurabh ``` Here in table1 the name for id 2 has been updated, id 4 has been deleted, and id 5 has been inserted. I want my query to select all these records from table 1 and also tell which operation needs to be done. I don't want to use set operators. Please help.
I would do it like here: ``` select coalesce(t1.id, t2.id) id, t1.name new_name, t2.name old_name, case when t1.id is null then 'row deleted' when t2.id is null then 'row inserted' when t1.name <> t2.name or t1.name is null and t2.name is not null or t1.name is not null and t2.name is null then 'data changed' end change from t1 full join t2 on t1.id = t2.id order by id ``` [SQLFiddle](http://sqlfiddle.com/#!4/baa614/1) And by the way- what's so hard in formatting data in your questions? Just add 4 spaces before something and this will be indicated as code. Or select some part and press [Ctrl-K]. Everything is rendered as you write it.
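The full-join diff above can be exercised end to end; since the SQLite engine used in this Python sketch may lack `FULL JOIN` (it arrived only in recent SQLite releases), the sketch builds the same result with the classic LEFT JOIN plus UNION ALL emulation. The rows are the sample data from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, name TEXT);
CREATE TABLE t2 (id INTEGER, name TEXT);
INSERT INTO t2 VALUES (1, 'shubham'), (2, 'ravi'), (3, 'aman'), (4, 'vijay');
INSERT INTO t1 VALUES (1, 'shubham'), (2, 'harish'), (3, 'aman'), (5, 'saurabh');
""")

# The t1 side catches inserts and changes; the UNION ALL branch adds the
# rows that exist only in t2 (the deletes).
rows = conn.execute("""
SELECT t1.id AS id,
       CASE WHEN t2.id IS NULL THEN 'row inserted'
            WHEN t1.name <> t2.name THEN 'data changed' END AS change
FROM t1 LEFT JOIN t2 ON t1.id = t2.id
UNION ALL
SELECT t2.id, 'row deleted'
FROM t2 LEFT JOIN t1 ON t1.id = t2.id
WHERE t1.id IS NULL
ORDER BY id
""").fetchall()
print(rows)
```

Unchanged rows come back with a NULL change marker, matching the CASE in the answer above.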
No set operators involved in this solution, but they are probably easier to use when there are more than just a couple of columns to compare: ``` select coalesce(s.id, d.id) id , coalesce(s.name, d.name) name , case when s.id is null then 'D' when d.id is null then 'C' when s.name != d.name then 'U' end CUD from table1 s full join table2 d on s.id = d.id where s.id is null or d.id is null or s.name != d.name ``` The `CUD` column just indicates the operation to carry out `C`reate, `U`pdate, or `D`elete. The more flexible set based solution would be something like this: ``` select 'CU' op, s.* from table1 s minus select 'CU' op, d.* from table2 d union select 'D' op, d.* from table2 d where d.id not in (select s.id from table1 s) ``` In this case you don't know if the table1 records left after the minus operation are new or changed so the op is either `C`reate or `U`pdate, but you still definitively know the `D`elete operations. Either one of these queries could be used in the `USING` clause of a [`MERGE`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9016.htm#SQLRF01606) statement to update `table2` to match `table1`
Compare two tables and select the records that are not in table 2
[ "sql", "oracle" ]
I'm using SQL Server 2012. I have created a trigger the code is below. I'm slightly confused though as running an update query on my table I get an error message 'Violation of PRIMARY KEY constraint'. However where I query the table the value I'm updating doesn't exist in the table? directory is the primary key column. ``` update myTable set directory = 'madeup2' where directory = 'madeup' ``` So the query above is what is causing the violation of primary key constraint. But I do the select query below and there is no directory in my table called madeup2 - no records are returned. ``` select * from myTable where directory = 'madeup2' ``` Is this something to do with my trigger? ``` create trigger trDefaultPath on mytable instead of insert, update, delete as begin declare @defCountIns int declare @retVal int declare @permission bit select @defCountIns = count(userName) from inserted where userName = 'Default' select @retVal = count(HostName) from UserHostName where HostName = HOST_NAME() -- workout permissions if @retVal = 0 and @defCountIns > 0 set @permission = 0 else set @permission = 1 begin if @permission = 1 begin update mytable set directory = inserted.directory, pathNumber = inserted.pathNumber, userName = inserted.userName from inserted; end else begin update mytable set directory = inserted.directory, pathNumber = inserted.pathNumber, userName = inserted.userName from inserted where inserted.username <> 'Default'; print 'Do not have permission to update directories' end end end ``` **Update** Thanks to the comments which pointed out that the update statements in my trigger are missing a where clause. One thing I don't understand though is I executed a update query with a where clause. In the inserted table though I will just have new 'directory' value so how do I create the where clause? Is it simply a case of using the deleted table - so delete the records in that table from mytable and then add the records from inserted table to mytable?
Your `UPDATE` statements will update the entire table - not just those rows that have been modified. That's *usually* not what you want. You'll need to add a `JOIN` between your `Inserted` pseudo table and the actual underlying data table that you want to update. ``` if @permission = 1 begin update mytable set directory = inserted.directory, pathNumber = inserted.pathNumber, userName = inserted.userName from inserted i inner join mytable on i.ID = mytable.ID; -- or whatever column is your PK end ```
In your trigger, your first UPDATE doesn't have a WHERE clause. And the second one has a WHERE clause that doesn't reference the table being updated. That means it's going to try to update every row in the table with the same values from the inserted table. Also your trigger isn't coded to handle inserts/updates that affect more than one row.
primary key violation even though record doesn't exist
[ "sql", "sql-server", "triggers", "sql-server-2012" ]
I'm trying to query the SQL database for all lines where the date is after a date given through user input. I've run into a variety of errors from "incorrect syntax near" when I surround my date with "#" to "arithmetic overflow error converting expression to". My current code looks like this: ``` inputdate = InputBox("Please enter the starting date (mm/dd/yyyy)") Debug.Print inputdate querydate = "(DTG > " & Format(inputdate, "MMDDYYYY") & ")" select StationID, DTG, CeilingFeet from SurfaceOb where " & querydate & " and StationID = 'NZWD'" ``` DTG is the column name for the date-time group in the SQL database. Any idea where I am going wrong? I've tried every solution I could find over the past few days without luck. Thank you in advance.
The primary issue is that dates must be enclosed in single-quotes. Here is a complete working example (minus a valid connection string) that should explain how to achieve your objective. Note that you will also want to switch to the ISO date format, in which the order is Year-Month-Day (`YYYY-MM-DD`). ``` Sub UpdateRecords() Dim connection As New ADODB.connection Dim recordset As New ADODB.recordset Dim connectionString As String Dim query As String Dim inputdate As Date Dim querydate As String inputdate = InputBox("Please enter the starting date (mm/dd/yyyy)") querydate = "(DTG > '" & Format(inputdate, "yyyy-mm-dd") & "')" query = "select StationID, DTG, CeilingFeet from SurfaceOb where " & querydate & " and StationID = 'NZWD'" connectionString = "..." connection.Open connectionString recordset.Open query, connection ActiveSheet.Range("A1").CopyFromRecordset recordset End Sub ```
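Beyond quoting the literal correctly, a parameterized query sidesteps the quoting-and-format problem altogether. A Python/SQLite sketch of the same shape (the station IDs and dates are made-up sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SurfaceOb (StationID TEXT, DTG TEXT, CeilingFeet INTEGER)")
conn.executemany("INSERT INTO SurfaceOb VALUES (?, ?, ?)",
                 [("NZWD", "2015-07-01", 900),
                  ("NZWD", "2015-08-15", 1200),
                  ("NZCM", "2015-08-20", 800)])

# ISO-format date strings compare correctly as text, and binding the
# date as a parameter removes the need to hand-quote the literal.
querydate = "2015-08-01"
rows = conn.execute(
    "SELECT StationID, DTG FROM SurfaceOb WHERE DTG > ? AND StationID = 'NZWD'",
    (querydate,)
).fetchall()
print(rows)
```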
Dates for SQL Server should be formatted as a date or date/time, qualified with single-quotes: **Date** in ISO unseparated date format ``` querydate = "(DTG > '" & Format(inputdate, "yyyymmdd") & "')" ``` **Date/Time** in ISO 8601 format ``` querydate = "(DTG > '" & Format(inputdate, "yyyy-mm-ddThh:mm:ss.000") & "')" ```
VBA variable in SQL date query
[ "sql", "vba", "date", "variables" ]
When I try to run below update query, It takes about 40 hours to complete. So I added a time limitation(Update query with time limitation). But still it takes nearly same time to complete.Is there any way to speed up this update? EDIT: What I really want to do is only get logs between some specific dates and run this update query on this records. ``` create table user (userid varchar(30)); create table logs ( log_time timestamp, log_detail varchar(100), userid varchar(30)); insert into user values('user1'); insert into user values('user2'); insert into user values('user3'); insert into user values(''); insert into logs values('no user mentioned','user3'); insert into logs values('inserted by user2','user2'); insert into logs values('inserted by user3',null); ``` **Table before Update** ``` log_time | log_detail | userid | .. |-------------------|--------| .. | no user mention | user3 | .. | inserted by user2 | user2 | .. | inserted by user3 | (null) | ``` **Update query** ``` update logs join user set logs.userid=user.userid where logs.log_detail LIKE concat("%",user.userID,"%") and user.userID != ""; ``` **Update query with time limitation** ``` update logs join user set logs.userid = IF (logs.log_time between '2015-08-11 00:39:41' AND '2015-08-01 17:39:44', user.userID, null) where logs.log_detail LIKE concat("%",user.userID,"%") and user.userID != ""; ``` **Table after update** ``` log_time | log_detail | userid | .. |-------------------|--------| .. | no user mentione | user3 | .. | inserted by user2 | user2 | .. | inserted by user3 | user3 | ``` **EDIT:** Original question [Sql update statement with variable](https://stackoverflow.com/q/31741652/1379734) .
Log tables can easily fill up with tons of rows of data each month and even the best indexing won't help, especially in the case of a `LIKE` operator. Your `log_detail` column is 100 characters long and your search query is `CONCAT("%",user.userID,"%")`. Using a function in a SQL command can slow things down because the function is doing extra computations. And what you're trying to search for is, if your userID is John, `%John%`. So your query will scan every row in that table because indexes will be semi-useless. If you didn't have the first `%`, then the query would be able to utilize its indexes efficiently. Your query would, in effect, do an `INDEX SCAN` as opposed to an `INDEX SEEK`. For more information on these concepts, see: [Index Seek VS Index Scan](http://blog.sqlauthority.com/2007/03/30/sql-server-index-seek-vs-index-scan-table-scan/) [Query tuning a LIKE operator](http://blogs.msdn.com/b/varund/archive/2009/11/30/index-usage-by-like-operator-query-tuning.aspx) Alright, what can you do about this? Two strategies. * Option 1 is to limit the number of rows that you're searching through. You had the right idea using time limitations to reduce the number of rows to search through. What I would suggest is to put the time limitations as the first expression in your `WHERE` clause. Most databases execute the first expression first. So when the second expression kicks in, it'll only scan through the rows returned by the first expression. ``` update logs join user set logs.userid=user.userid where logs.log_time between '2015-08-01' and '2015-08-11' and logs.log_detail LIKE concat('%',user.userID,'%') ``` * Option 2 depends on your control of the database. If you have total control (and you have the time and money, MySQL has a feature called `Auto-Sharding`. This is available in MySQL Cluster and MySQL Fabric. 
I won't go over those products in much detail as the links provided below can explain themselves much better than I could summarize, but the idea behind `Sharding` is to split the rows into horizontal tables, so to speak. The idea behind it is that you're not searching through a long database table, but instead across several sister tables at the same time. Searching through 10 tables of 10 million rows is faster than searching through 1 table of 100 million rows. [Database Sharding - Wikipedia](https://en.wikipedia.org/wiki/Shard_(database_architecture)) [MySQL Cluster](https://www.mysql.com/products/cluster/scalability.html) [MySQL Fabric](https://www.mysql.com/products/enterprise/fabric.html)
First, the right place to put the time limitation is in the `where` clause, not an `if`: ``` update logs l left join user u on l.log_detail LIKE concat("%", u.userID) set l.userid = u.userID where l.log_time between '2015-08-11 00:39:41' AND '2015-08-01 17:39:44'; ``` If you want to set the others to `NULL` do this before: ``` update logs l set l.userid = NULL where l.log_time not between '2015-08-11 00:39:41' AND '2015-08-01 17:39:44'; ``` But, if you really want this to be fast, you need to use an index for the join. It is possible that this will use an index on `users(userid)`: ``` update logs l left join user u on cast(substring_index(l.log_detail, ' ', -1) as signed) = u.userID set l.userid = u.userID where l.log_time between '2015-08-11 00:39:41' AND '2015-08-01 17:39:44'; ``` Look at the `explain` on the equivalent `select`. It is really important that the `cast()` be to the same type as the `UserId`.
How to increase query speed when using the LIKE command
[ "mysql", "sql" ]
``` Dim sel As String = "select * from personal_i where number= @number" Dim adapter As New SqlDataAdapter(sel, conn) adapter.SelectCommand.Parameters.AddWithValue("@number", Label61.Text) Dim dataset As New DataSet adapter.Fill(dataset) Label1.Text = dataset.Tables(0).Rows(0).Item(6).ToString ``` The result is : > 8/7/2015 12:00:00 AM **How do I display only the date, not with time?**
You are looking for the format function. ``` Label1.Text = Format(dataset.Tables(0).Rows(0).Item(6).ToString, “Short Date”) ```
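If the trimming has to happen on the client side, the same result can be had by parsing the value and re-emitting only the date. A Python sketch, using the string shown in the question:

```python
from datetime import datetime

raw = "8/7/2015 12:00:00 AM"   # the value the query returned

# Parse the full date/time, then emit only the date portion.
parsed = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p")
short_date = parsed.strftime("%m/%d/%Y")
print(short_date)
```

Note that `strftime` zero-pads the month and day, so the output is `08/07/2015` rather than `8/7/2015`.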
I would suggest something like : ``` Dim sel As String = "select * from personal_i where number= @number" Dim adapter As New SqlDataAdapter(sel, conn) adapter.SelectCommand.Parameters.AddWithValue("@number", Label61.Text) Dim dataset As New DataSet adapter.Fill(dataset) Label1.Text = Format(Trim(Split(dataset.Tables(0).Rows(0).Item(6).ToString," ")(0)),"m/d/yyyy") ```
How to display only the date from sql result
[ "sql", "asp.net", "vb.net" ]
One model is separated into records with dates and shared verification codes. I want to group matching records by verification code and select their date range. I wrote the following SQL, but it executes too slowly: ``` SELECT ( SELECT DATE FROM user_day_status WHERE VERIFICATION_TOKEN = uds.VERIFICATION_TOKEN ORDER BY DATE ASC limit 1) AS DATE_FROM, ( SELECT DATE FROM user_day_status WHERE VERIFICATION_TOKEN = uds.VERIFICATION_TOKEN ORDER BY DATE DESC limit 1) AS DATE_TO, uds.USER_ID,uds.VERIFICATION_TOKEN,uds.DESCRIPTION FROM user_day_status uds GROUP BY uds.VERIFICATION_TOKEN ```
You don't need to do a self join at all: Check here: <http://www.sqlfiddle.com/#!9/acf12/3> Basically: ``` SELECT MIN(uds1.DATE) AS DATE_FROM, MAX(uds1.DATE) AS DATE_TO, uds1.USER_ID, uds1.VERIFICATION_TOKEN, uds1.DESCRIPTION FROM user_day_status uds1 GROUP BY uds1.USER_ID, uds1.VERIFICATION_TOKEN, uds1.DESCRIPTION ```
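A quick check of the no-self-join version, run on SQLite through Python; the user IDs, tokens, and dates below are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_day_status
                (USER_ID INTEGER, VERIFICATION_TOKEN TEXT,
                 DESCRIPTION TEXT, DATE TEXT)""")
conn.executemany("INSERT INTO user_day_status VALUES (?, ?, ?, ?)",
                 [(1, "tokA", "sick", "2015-08-01"),
                  (1, "tokA", "sick", "2015-08-03"),
                  (2, "tokB", "leave", "2015-08-02")])

# MIN/MAX per group give the date range in a single pass over the table,
# with no correlated subqueries and no self-join.
rows = conn.execute("""
SELECT MIN(DATE), MAX(DATE), USER_ID, VERIFICATION_TOKEN, DESCRIPTION
FROM user_day_status
GROUP BY USER_ID, VERIFICATION_TOKEN, DESCRIPTION
ORDER BY USER_ID
""").fetchall()
print(rows)
```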
Does this work for you : ``` SELECT MIN(uds1.DATE) AS DATE_FROM, MAX(uds1.DATE) AS DATE_TO, uds1.USER_ID,uds1.VERIFICATION_TOKEN,uds1.DESCRIPTION FROM user_day_status uds1 JOIN user_day_status uds2 ON uds1.VERIFICATION_TOKEN = uds2.VERIFICATION_TOKEN GROUP BY uds1.VERIFICATION_TOKEN ```
How to optimize this SQL query
[ "mysql", "sql", "optimization" ]
I am new to Oracle PL/SQL, although I have a lot of experience with SQL. At the moment I am trying to convert a couple of T-SQL statements to PL/SQL. I am trying to execute the following code, but I get some errors. If the table does not exist yet, then the error is: Table or view does not exist. But when I run the query without the second select statement it creates the table. Now that the table exists I tried to execute it once more and now I am getting the following error: > An INTO clause is expected in this SELECT statement. The code I am using: ``` DECLARE cnt NUMBER; stmt VARCHAR2(1000) := 'CREATE TABLE LAST_LOG_ARCHIVE (LAST_LOG_ARCHIVE TIMESTAMP NULL)'; BEGIN SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name = 'LAST_LOG_ARCHIVE'; IF (cnt = 0) THEN EXECUTE IMMEDIATE stmt; END IF; SELECT COALESCE(( SELECT LAST_LOG_ARCHIVE FROM LAST_LOG_ARCHIVE WHERE ROWNUM = 1 ), TO_TIMESTAMP('2015-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) AS LAST_LOG_ARCHIVE FROM dual; END; ``` How can I solve this...?
Change: ``` DECLARE cnt NUMBER; stmt VARCHAR2(1000) := 'CREATE TABLE LAST_LOG_ARCHIVE (LAST_LOG_ARCHIVE TIMESTAMP NULL)'; BEGIN SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name = 'LAST_LOG_ARCHIVE'; IF (cnt = 0) THEN EXECUTE IMMEDIATE stmt; END IF; SELECT COALESCE(( SELECT LAST_LOG_ARCHIVE FROM LAST_LOG_ARCHIVE WHERE ROWNUM = 1 ), TO_TIMESTAMP('2015-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) AS LAST_LOG_ARCHIVE FROM dual; END; ``` To: ``` DECLARE V_LAST_LOG_ARCHIVE varchar2(100); cnt NUMBER; stmt VARCHAR2(1000) := 'CREATE TABLE LAST_LOG_ARCHIVE (LAST_LOG_ARCHIVE TIMESTAMP NULL)'; BEGIN SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name = 'LAST_LOG_ARCHIVE'; IF (cnt = 0) THEN EXECUTE IMMEDIATE stmt; END IF; SELECT COALESCE(( SELECT LAST_LOG_ARCHIVE FROM LAST_LOG_ARCHIVE WHERE ROWNUM = 1), TO_TIMESTAMP('2015-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) INTO V_LAST_LOG_ARCHIVE FROM dual; END; ```
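The COALESCE-over-a-scalar-subquery part is easy to sanity-check outside PL/SQL. In this Python/SQLite sketch the fetched value simply lands in a host variable instead of an `INTO` target; the stored timestamp is an invented example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LAST_LOG_ARCHIVE (LAST_LOG_ARCHIVE TEXT)")

sql = """SELECT COALESCE(
             (SELECT LAST_LOG_ARCHIVE FROM LAST_LOG_ARCHIVE LIMIT 1),
             '2015-01-01 00:00:00')"""

empty_case = conn.execute(sql).fetchone()[0]   # no rows yet -> the default
conn.execute("INSERT INTO LAST_LOG_ARCHIVE VALUES ('2015-06-30 12:00:00')")
filled_case = conn.execute(sql).fetchone()[0]  # a row exists -> stored value
print(empty_case, filled_case)
```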
your second query needs the INTO statement ``` SELECT COALESCE(( SELECT LAST_LOG_ARCHIVE FROM LAST_LOG_ARCHIVE WHERE ROWNUM = 1 ), TO_TIMESTAMP('2015-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) AS LAST_LOG_ARCHIVE INTO l_someVariable FROM dual; ``` remember to declare the `l_someVariable` in the `DECLARE` section
Execute SELECT after CREATE in Oracle PL/SQL
[ "sql", "oracle", "plsql", "select-into" ]
There are several statuses in the status table. I want a query like this: ``` select * from command where status <> 's' and status<> 'sc' ``` But the query still retrieves rows whose status is 'S'. What is the issue here?
The checks `Col <> 's'` depends on the collation of the columns. If the collation is case insensitive, then `'S' = 's'` and your checks will not return `S`. If your collation is case sensitive which I believe is the case, will return `S`. For example note the following two collations `Latin1_General_CI_AS_KS_WS` and `Latin1_General_CS_AS_KS_WS`. Note the difference in case `CI` and `CS`. ``` SELECT 'S' WHERE 'S' <> 's' COLLATE Latin1_General_CI_AS_KS_WS ``` Does not return anything ``` SELECT 'S' WHERE 'S' <> 's' COLLATE Latin1_General_CS_AS_KS_WS ``` Returns `S` Coming back to your query. If this case insensitive check is a one time thing you can either as suggested by other answers do a `UPPER(Col)` / `LOWER(Col)` or use `COL COLLATE Latin1_General_CI_AS_KS_WS`. If all comparisons should be case insensitive, I would suggest [changing the collation of the column](https://msdn.microsoft.com/en-us/library/ms190920.aspx) itself.
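The same effect can be reproduced on any engine with switchable collations. SQLite (driven here from Python) compares with BINARY semantics by default and offers NOCASE as its built-in case-insensitive collation, so it plays the role of the CS vs CI pair above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Default (BINARY) comparison is case sensitive, so 'S' <> 's' is true;
# under NOCASE the two compare equal and the inequality is false.
sensitive = conn.execute("SELECT 'S' <> 's'").fetchone()[0]
insensitive = conn.execute("SELECT 'S' <> 's' COLLATE NOCASE").fetchone()[0]
print(sensitive, insensitive)
```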
Is the status in the DB `S` or `s`? Try ``` select * from command where lower(status) NOT IN ('s', 'sc') ```
Two Where conditions on same column
[ "sql", "sql-server", "sql-server-2008" ]
Why is `"where SomeDate between getdate() and DATEADD(m, -1, getdate()))"` not working? (I am trying to calculate some value over the last month.)
You should use other way ``` where SomeDate between DATEADD(m, -1, getdate()) and getdate() ```
Because when you use `between` the *lower* value needs to go first. So, you want: ``` where SomeDate between DATEADD(month, -1, getdate()) and getdate() ```
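This is easy to demonstrate on any engine, since `x BETWEEN a AND b` is defined as `x >= a AND x <= b`. A Python/SQLite check with one invented sample row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (SomeDate TEXT)")
conn.execute("INSERT INTO t VALUES ('2015-07-20')")

# Reversed bounds describe an empty range, so nothing can match.
reversed_bounds = conn.execute(
    "SELECT COUNT(*) FROM t WHERE SomeDate BETWEEN '2015-08-01' AND '2015-07-01'"
).fetchone()[0]
correct_bounds = conn.execute(
    "SELECT COUNT(*) FROM t WHERE SomeDate BETWEEN '2015-07-01' AND '2015-08-01'"
).fetchone()[0]
print(reversed_bounds, correct_bounds)
```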
between getdate() and DATEADD(m, -1, getdate())) on where clause
[ "sql" ]
So I'm playing around with SQL window functions following on-line help tutorials using AdventureWorks 2012. I selected out data from the below query into a new table ``` SELECT sh.CustomerID AS AccountID, sd.SalesOrderDetailID AS TransactionID, sh.OrderDate AS TransactionDate, sd.LineTotal AS Amount INTO Transactions FROM sales.SalesOrderHeader sh INNER JOIN sales.SalesOrderDetail sd on sh.SalesOrderID = sd.SalesOrderID ``` Then I ran the following query: ``` select *, MIN(Amount) OVER(Partition by AccountID order by transactionDate) AS MinOrderForDate from dbo.transactions Order by AccountID, transactionID desc ``` Which gave me my expected result: ``` AccountID TransactionID TransactionDate Amount MinOrderForDate 11000 63804 2007-11-04 00:00:00.000 53.990000 4.990000 11000 63803 2007-11-04 00:00:00.000 34.990000 4.990000 11000 63802 2007-11-04 00:00:00.000 4.990000 4.990000 11000 63801 2007-11-04 00:00:00.000 28.990000 4.990000 11000 63800 2007-11-04 00:00:00.000 2384.070000 4.990000 11000 38716 2007-07-22 00:00:00.000 21.980000 21.980000 11000 38715 2007-07-22 00:00:00.000 2319.990000 21.980000 11000 449 2005-07-22 00:00:00.000 3399.990000 3399.990000 11001 115673 2008-06-12 00:00:00.000 34.990000 4.990000 11001 115672 2008-06-12 00:00:00.000 8.990000 4.990000 11001 115671 2008-06-12 00:00:00.000 4.990000 4.990000 11001 115670 2008-06-12 00:00:00.000 539.990000 4.990000 11001 38639 2007-07-20 00:00:00.000 8.990000 4.990000 11001 38638 2007-07-20 00:00:00.000 53.990000 4.990000 11001 38637 2007-07-20 00:00:00.000 9.990000 4.990000 11001 38636 2007-07-20 00:00:00.000 4.990000 4.990000 11001 38635 2007-07-20 00:00:00.000 21.980000 4.990000 11001 38634 2007-07-20 00:00:00.000 2319.990000 4.990000 11001 423 2005-07-18 00:00:00.000 3374.990000 3374.990000 ``` But when I replaced MIN with MAX, I keep on getting the MAX value across the whole data range for the account and I can't see why? 
``` select *, MAX(Amount) OVER(Partition by AccountID order by transactionDate) AS MaxOrderForDate from dbo.transactions Order by AccountID, transactionID desc AccountID TransactionID TransactionDate Amount MaxOrderForDate 11000 63804 2007-11-04 00:00:00.000 53.990000 3399.990000 11000 63803 2007-11-04 00:00:00.000 34.990000 3399.990000 11000 63802 2007-11-04 00:00:00.000 4.990000 3399.990000 11000 63801 2007-11-04 00:00:00.000 28.990000 3399.990000 11000 63800 2007-11-04 00:00:00.000 2384.070000 3399.990000 11000 38716 2007-07-22 00:00:00.000 21.980000 3399.990000 11000 38715 2007-07-22 00:00:00.000 2319.990000 3399.990000 11000 449 2005-07-22 00:00:00.000 3399.990000 3399.990000 11001 115673 2008-06-12 00:00:00.000 34.990000 3374.990000 11001 115672 2008-06-12 00:00:00.000 8.990000 3374.990000 11001 115671 2008-06-12 00:00:00.000 4.990000 3374.990000 11001 115670 2008-06-12 00:00:00.000 539.990000 3374.990000 11001 38639 2007-07-20 00:00:00.000 8.990000 3374.990000 11001 38638 2007-07-20 00:00:00.000 53.990000 3374.990000 11001 38637 2007-07-20 00:00:00.000 9.990000 3374.990000 11001 38636 2007-07-20 00:00:00.000 4.990000 3374.990000 11001 38635 2007-07-20 00:00:00.000 21.980000 3374.990000 11001 38634 2007-07-20 00:00:00.000 2319.990000 3374.990000 11001 423 2005-07-18 00:00:00.000 3374.990000 3374.990000 ``` Am I doing this wrong?
If you order by a non-unique column there's no guaranteed result set. In your case the result is exactly what you request, a cumulative MIN/MAX, there's a `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` automatically added to your query due to the `ORDER BY`. Additionally the `ORDER BY` in the Windowed Aggregate Function differs from the final `ORDER BY` which adds more confusion. You didn't specify exactly what you're trying to get. Based on the column alias you might want a "group" MIN/MAX: ``` MIN(Amount) OVER(Partition by AccountID, transactionDate) AS MinOrderForDate ```
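The cumulative-frame behavior can be reproduced in miniature. This Python sketch assumes the bundled SQLite is 3.25 or newer (window-function support); the account and amounts are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (acct INTEGER, dt TEXT, amount REAL)")
conn.executemany("INSERT INTO tx VALUES (?, ?, ?)",
                 [(1, "2015-01-01", 10.0),
                  (1, "2015-02-01", 30.0),
                  (1, "2015-03-01", 20.0)])

# With ORDER BY inside OVER, MAX is a running maximum up to each row;
# with PARTITION BY alone it is a single value for the whole partition.
running = [r[0] for r in conn.execute(
    "SELECT MAX(amount) OVER (PARTITION BY acct ORDER BY dt) FROM tx ORDER BY dt")]
grouped = [r[0] for r in conn.execute(
    "SELECT MAX(amount) OVER (PARTITION BY acct) FROM tx ORDER BY dt")]
print(running, grouped)
```

The running version changes row by row, which is exactly why the question's `MAX` appeared to ignore the partition boundaries: it was reporting the cumulative maximum.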
You are not very explicit about your intent, but it sounds like you want the `min/max` for each `account/transactionDate` grouping. If that's the case, you probably didn't mean to put `transactionDate` in the `order by` clause of the window function. I think you meant to do this instead: ``` MIN(Amount) OVER(Partition by AccountID, transactionDate) AS MinOrderForDate MAX(Amount) OVER(Partition by AccountID, transactionDate) AS MaxOrderForDate ```
SQL windows function using MAX not giving expected result
[ "sql", "sql-server", "aggregate-functions" ]
I'm trying to determine within an IF whether I should create or alter a trigger. My code is below. ``` IF OBJECT_ID(N'Sales.bonus_reminder', N'TR') IS NOT NULL ALTER TRIGGER Sales.bonus_reminder ON Sales.SalesPersonQuotaHistory AFTER INSERT AS RAISERROR ('Notify Compensation', 16, 10); else CREATE TRIGGER Sales.bonus_reminder ON Sales.SalesPersonQuotaHistory WITH ENCRYPTION AFTER INSERT, UPDATE AS RAISERROR ('Notify Compensation', 16, 10); ``` The errors I'm getting are: * Incorrect syntax near else * Create trigger should be the only statement in batch. How should this code look?
If you don't want to the *create trigger* statement as dynamic SQL, then you can do something like this: ``` IF OBJECT_ID(N'Sales.bonus_reminder', N'TR') IS NOT NULL exec sp_executesql N'DROP TRIGGER Sales.bonus_reminder'; GO CREATE TRIGGER Sales.bonus_reminder ON Sales.SalesPersonQuotaHistory WITH ENCRYPTION AFTER INSERT, UPDATE AS RAISERROR ('Notify Compensation', 16, 10); ```
Using [this article](https://www.mssqltips.com/sqlservertip/4640/new-create-or-alter-statement-in-sql-server-2016-sp1/) as my source of truth, here is the short answer: as of `SQL Server 2016 sp1` you can use `create or alter` statements instead of the various drop-and-create methods (my personal favourite until now) on some database objects (`stored procedures`/`functions`/`triggers`/`views`). So your script could look like ``` create or alter TRIGGER Sales.bonus_reminder ON Sales.SalesPersonQuotaHistory WITH ENCRYPTION AFTER INSERT, UPDATE AS RAISERROR ('Notify Compensation', 16, 10) ```
Create or alter trigger if exists
[ "sql", "sql-server", "t-sql", "triggers" ]
I'm trying to figure out a way to combine these selects into one query. I'm working on the WordPress database, and I have these SQL queries that work by themselves: ``` SELECT `meta_value` AS 'Name' FROM `wp_postmeta` WHERE `meta_key` = "myfield1" ORDER BY `post_id` ASC SELECT `meta_value` AS 'Department' FROM `wp_postmeta` WHERE `meta_key` = "myfield2" ORDER BY `post_id` ASC SELECT `meta_value` AS 'Location' FROM `wp_postmeta` WHERE `meta_key` = "myfield3" ORDER BY `post_id` ASC SELECT `meta_value` AS 'Job Title' FROM `wp_postmeta` WHERE `meta_key` = "myfield4" ORDER BY `post_id` ASC SELECT `meta_value` AS 'Social Number' FROM `wp_postmeta` WHERE `meta_key` = "myfield5" ORDER BY `post_id` ASC ``` But I'd like to have them all in the same query so I can export the data later in a report. Here's what I've tried: ``` SELECT (SELECT `meta_value` FROM `wp_postmeta` WHERE `meta_key` = "myfield1" ORDER BY `post_id` ASC) As name, (SELECT `meta_value` FROM `wp_postmeta` WHERE `meta_key` = "myfield2" ORDER BY `post_id` ASC) AS department, (SELECT `meta_value` FROM `wp_postmeta` WHERE `meta_key` = "myfield3" ORDER BY `post_id` ASC) AS location, (SELECT `meta_value` FROM `wp_postmeta` WHERE `meta_key` = "myfield4" ORDER BY `post_id` ASC) AS jobtitle, (SELECT `meta_value` FROM `wp_postmeta` WHERE `meta_key` = "myfield5" ORDER BY `post_id` ASC) AS socnumber ``` But that returns "commands out of sync". Does anyone know how to fix this? ``` #2014 - Commands out of sync; you can't run this command now ``` Edit: I'm querying the wp\_postmeta table; it has 4 columns: meta\_id, post\_id, meta\_key and meta\_value. I'm selecting the meta\_value of a custom field I added (called myfield1, stored in the meta\_key column). I'm ordering by post\_id, which is the ID of the posts in WordPress, so I get chronological order. But I have 5 custom fields, myfield1 to myfield5, and I want to query all of them at once.
To get a resultset with five columns (one for each custom field) you have to join the table to itself five times... Something like this: ``` SELECT pm1.meta_value AS 'Name' pm2.meta_value AS 'Department' pm3.meta_value AS 'Location' pm4.meta_value AS 'Job Title' pm5.meta_value AS 'Social Number' FROM `wp_postmeta` pm1 JOIN `wp_postmeta` pm2 ON pm1.post_id = pm2.post_id JOIN `wp_postmeta` pm3 ON pm1.post_id = pm3.post_id JOIN `wp_postmeta` pm4 ON pm1.post_id = pm4.post_id JOIN `wp_postmeta` pm5 ON pm1.post_id = pm5.post_id WHERE pm1.meta_key = "myfield1" AND pm2.meta_key = "myfield2" AND pm3.meta_key = "myfield3" AND pm4.meta_key = "myfield4" AND pm5.meta_key = "myfield5" ORDER BY `post_id` ASC ``` I haven't checked the SQL syntax, so forgive me if there's a syntax error! Assuming this is mySQL?
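Here is the join-per-field pattern run end to end on SQLite via Python, reduced to two fields for brevity (the post IDs and values are invented); extending it to five fields just means three more joins and three more `meta_key` predicates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wp_postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT)")
conn.executemany("INSERT INTO wp_postmeta VALUES (?, ?, ?)",
                 [(1, "myfield1", "Ann"), (1, "myfield2", "Sales"),
                  (2, "myfield1", "Bob"), (2, "myfield2", "IT")])

# One self-join per wanted field, matched on post_id, each alias
# pinned to its own meta_key in the WHERE clause.
rows = conn.execute("""
SELECT pm1.post_id, pm1.meta_value AS Name, pm2.meta_value AS Department
FROM wp_postmeta pm1
JOIN wp_postmeta pm2 ON pm1.post_id = pm2.post_id
WHERE pm1.meta_key = 'myfield1' AND pm2.meta_key = 'myfield2'
ORDER BY pm1.post_id
""").fetchall()
print(rows)
```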
One method is to use conditional aggregation: ``` SELECT MAX(CASE WHEN `meta_key` = 'myfield1' THEN meta_value END) as Name, MAX(CASE WHEN `meta_key` = 'myfield2' THEN meta_value END) as Department, MAX(CASE WHEN `meta_key` = 'myfield3' THEN meta_value END) as Location, MAX(CASE WHEN `meta_key` = 'myfield4' THEN meta_value END) as JobTitle, MAX(CASE WHEN `meta_key` = 'myfield5' THEN meta_value END) as SocialNumber FROM `wp_postmeta` GROUP BY post_id ORDER BY post_id ASC; ```
Multiple selects on the same table
[ "sql", "wordpress", "select" ]
I have a points table with data as follows: ``` +----+---------+--------+ | id | user_id | points | +----+---------+--------+ | 1 | 1 | 10 | | 2 | 3 | 5 | | 3 | 1 | 12 | | 4 | 2 | 27 | | 5 | 1 | 14 | +----+---------+--------+ ``` I need the user IDs from the above data whose points sum is greater than 10, like: ``` +--------+ | userid | +--------+ | 1 | | 2 | +--------+ ```
You can't user `WHERE` in your query since you are trying to operate on aggregates of users. Instead, use a `GROUP BY` along with a `HAVING` clause like this: ``` SELECT t.user_id FROM table t GROUP BY t.user_id HAVING SUM(t.points) > 10 ``` Here is a **[SQL Fiddle](http://sqlfiddle.com/#!9/16e48/1)** where you can test this query.
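Run against the sample data from the question (SQLite via Python), the GROUP BY / HAVING version returns exactly the expected IDs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (id INTEGER, user_id INTEGER, points INTEGER)")
conn.executemany("INSERT INTO points VALUES (?, ?, ?)",
                 [(1, 1, 10), (2, 3, 5), (3, 1, 12), (4, 2, 27), (5, 1, 14)])

# WHERE filters rows before grouping; HAVING filters the aggregated groups,
# which is why the SUM() condition must live in HAVING.
users = [r[0] for r in conn.execute(
    "SELECT user_id FROM points GROUP BY user_id HAVING SUM(points) > 10 ORDER BY user_id")]
print(users)
```

User 1 sums to 36 and user 2 to 27, both over the threshold; user 3's total of 5 is filtered out.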
Try this query ``` select user_id from tablename group by user_id having sum(points)>10 ```
mysql SUM() in a WHERE clause
[ "mysql", "sql", "select", "sum" ]
I have a table with a series of dates, customer ID's, and integers that updates each week with a new date and integer values for all ID's. I'd like to be able to write a query that will detect if the most recent three values are in decreasing order, and return TRUE or FALSE, for each ID. So, for the table: ``` ID | Weekending | Value 1 | 2015-07-15 | 5 2 | 2015-07-15 | 23 1 | 2015-07-08 | 7 2 | 2015-07-08 | 21 1 | 2015-07-01 | 9 2 | 2015-07-01 | 24 ``` I'd want a query that would return TRUE for ID:1 because three consecutive values declined, but FALSE for ID:2 because the value increased.
Not sure about performance but using the `lag` window function like this should work: ``` select *, case when value < lag(value, 1) over (partition by id order by weekending) and value < lag(value, 2) over (partition by id order by weekending) then 'True' else 'False' end from t order by id, weekending desc; ``` This would mark the row `1 | 2015-07-15 | 5` as true, and all other rows as false. [Sample SQL Fiddle](http://www.sqlfiddle.com/#!15/3362c/3) To only get the last row per id you could do this: ``` select ID, Weekending, Value, decrease from ( select *, case when value < lag(value, 1) over (partition by id order by weekending) and value < lag(value, 2) over (partition by id order by weekending) then 'True' else 'False' end as decrease, row_number() over (partition by id order by weekending desc) as r from t ) src where r = 1; ``` which would give you a result like: ``` ID Weekending Value decrease 1 2015-07-15 5 True 2 2015-07-15 23 False ```
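A compact check of the `lag` comparison on the question's data, assuming an SQLite 3.25+ build (window-function support) driven from Python; a named WINDOW clause keeps the two lags short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, weekending TEXT, value INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "2015-07-01", 9), (1, "2015-07-08", 7), (1, "2015-07-15", 5),
                  (2, "2015-07-01", 24), (2, "2015-07-08", 21), (2, "2015-07-15", 23)])

# A row "declines" when it is below both of the two previous weeks
# for the same id; early rows have NULL lags and stay undecided.
rows = conn.execute("""
SELECT id, weekending,
       value < lag(value, 1) OVER w AND value < lag(value, 2) OVER w AS declining
FROM t WINDOW w AS (PARTITION BY id ORDER BY weekending)
ORDER BY id, weekending
""").fetchall()
latest = {row_id: bool(flag) for row_id, wk, flag in rows if wk == "2015-07-15"}
print(latest)
```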
If you can select the relevant records based only on the weekending values (so assuming that all ids have records for every recent week), you could select the values into an array and use array methods to detect whether the values are descending or not: ``` with cte_values as ( select id, array_agg(value order by weekending) value_array from my_table where weekending >= <add something to select the relevant weeks>) select id case when value_array[2] < value_array[1] and value_array[3] < value_array[2] then true else false end is_ascending from cte_values ``` Not syntax checked
Detect a series of declining values in a postgres table
[ "sql", "postgresql" ]
I have the following table: ``` PrimaryKeyColumn1 |PrimaryKeyColumn2 | Column_A | Column_B | Column_C| ID1 Key1 0 1 0 ID2 Key2 1 0 1 ID3 Key3 1 1 0 ``` Basically, I need to figure out a T-SQL query to get results like this: ``` ID1 Key1 B ID2 Key2 A,C ID3 Key3 A,B ``` Can someone please help me figure this out? The column names include a letter (A, B, C, ..., Z) at the end of the column name, after an underscore. The 0 or 1 indicates whether that letter applies to the ID: if it is 1, the letter needs to appear next to the ID in the results; if it is 0, it is not needed. Thank you!
I think the easiest way is just to use `CASE` and some string logic: ``` select PrimaryKeyColumn1, PrimaryKeyColumn2, stuff(((case when column_a = 1 then ',A' else '' end) + (case when column_b = 1 then ',B' else '' end) + (case when column_c = 1 then ',C' else '' end) ), 1, 1, '') as vals from . . . ```
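The CASE-concatenation idea is easy to sanity-check with `sqlite3`; in this translated sketch `||` replaces T-SQL's `+`, and `substr(..., 2)` stands in for `stuff(..., 1, 1, '')` by dropping the leading comma. Table and column names are made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (pk1 TEXT, pk2 TEXT, column_a INTEGER, column_b INTEGER, column_c INTEGER);
INSERT INTO t VALUES ('ID1','Key1',0,1,0), ('ID2','Key2',1,0,1), ('ID3','Key3',1,1,0);
""")

# Each CASE contributes ',X' when its flag is 1; substr(..., 2) then
# strips the first comma, mirroring stuff(..., 1, 1, '') in T-SQL.
rows = conn.execute("""
SELECT pk1, pk2,
       substr(CASE WHEN column_a = 1 THEN ',A' ELSE '' END ||
              CASE WHEN column_b = 1 THEN ',B' ELSE '' END ||
              CASE WHEN column_c = 1 THEN ',C' ELSE '' END, 2) AS vals
FROM t ORDER BY pk1
""").fetchall()
```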
i've done something like this ``` CREATE TABLE #test ( primaryKey nvarchar(150) , columnA bit , columnB bit , columnC bit ) -- create some test table INSERT INTO #test VALUES ('ID1',1,0,0) ,('ID2',1,1,0) ,('ID3',1,0,1) -- as well as the test values SELECT * into #tmpUnpiv FROM #test UNPIVOT ( truefalse for columnName in (columnA, columnB, columnC) ) unpiv; -- save the un-pivoted result as new table #tmpUnpiv SELECT DISTINCT primaryKey + ' ' + (SELECT replace(columnName,'column','') + ',' AS [text()] from #tmpUnPiv WHERE primaryKey = p.primaryKey AND truefalse = 1 FOR XML PATH('')) FROM #tmpUnPiv p -- then combine results through XML PATH DROP TABLE #tmpUnPiv DROP TABLE #test -- drop temp tables ```
Need a simple T-SQL Query
[ "sql", "sql-server", "sql-server-2008", "sql-server-2012" ]
Originally this is what the Claim Number looks like: 000000165130. It has 12 digits. I'd like to remove the zeros on the left side and the last zero on the right side, making it 16513. ``` select [Policy Number], left([Claim Number], 11) [Claim Number] from DW.CLAIMDATA where [Policy Number] = 'P000463500' ``` ``` Policy Number [Claim Number] P000463500 00000016513 ``` But how do I remove the six zeros on the left side?
Assuming this is fixed-width, you can use `right`. ``` right(left([Claim Number],11), 5) ``` Alternately, if the number of zeros is variable, you could cast to an integer: ``` cast(left([Claim Number], 11) as int) ``` And if you need it as a string instead: ``` cast(cast(left([Claim Number], 11) as int) as varchar(10)) ```
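Both variants translate to SQLite for a quick check: `substr(x, -5)` takes the last five characters (SQL Server's `right(x, 5)`), and `CAST ... AS INTEGER` drops a variable number of leading zeros instead. The claim value is the one from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
claim = "000000165130"

# substr(claim, 1, 11) plays the role of left([Claim Number], 11),
# then either take the last 5 chars (fixed width) or cast away zeros.
fixed_width, casted = conn.execute(
    "SELECT substr(substr(?, 1, 11), -5), CAST(substr(?, 1, 11) AS INTEGER)",
    (claim, claim),
).fetchone()
```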
This will convert the varchar to an int, which will remove the leading zeros. It then reverses the digits and converts to an int again to remove the trailing zeros. Then just reverse one more time to get the original string without leading or trailing zeros. ``` SELECT [Policy Number], REVERSE(CONVERT(VARCHAR(50), CONVERT(INT, REVERSE(CONVERT(VARCHAR(50), CONVERT(INT, [Claim Number])))))) [Claim Number] FROM DW.CLAIMDATA WHERE [Policy Number] = 'P000463500' ```
How do I remove the digits from the left and right side in SQL Server?
[ "sql", "sql-server" ]
I have a table that is as follows: ``` id value date --------------- a, test, 01-01-15 a, test, 01-02-15 a, test, 01-03-15 a, test1, 01-04-15 a, test1, 01-05-15 b, test, 01-01-15 b, test, 01-02-15 ``` I need to write a query to be able to grab the latest mapping for both a & b for instance, I want my results to be as follows: ``` a, test1, 01-05-15 b, test, 01-02-15 ``` I can't seem to wrap my head around how to accomplish this ``` select max(date) ``` only returns the latest date value and `select max(date), value group by value` obviously does not return the id column
Depending on what version of SQL you use, you may have a function called `ROW_NUMBER() OVER()` that can help. ``` WITH x AS (SELECT id, value, date , ROW_NUMBER() OVER (PARTITION BY id ORDER BY date DESC) AS RowNum FROM table ) SELECT id, value, date FROM x WHERE RowNum = 1 ``` The above would work in SQL Server and anything else that supports CTEs and windowing functions. You could also write it as a derived table like this: ``` SELECT id, value, date FROM (SELECT id, value, date, ROW_NUMBER() OVER (PARTITION BY id ORDER BY date DESC) AS RowNum FROM table ) AS x WHERE RowNum = 1 ```
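The `ROW_NUMBER` greatest-per-group pattern also runs unchanged on SQLite 3.25+, which makes it convenient to verify against the question's sample data. The column is renamed `dt` here only to avoid confusion with the `date` keyword; everything else mirrors the answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id TEXT, value TEXT, dt TEXT);
INSERT INTO t VALUES
  ('a','test','2015-01-01'),('a','test','2015-01-02'),('a','test','2015-01-03'),
  ('a','test1','2015-01-04'),('a','test1','2015-01-05'),
  ('b','test','2015-01-01'),('b','test','2015-01-02');
""")

# Number rows newest-first within each id, then keep rank 1.
rows = conn.execute("""
WITH x AS (
  SELECT id, value, dt,
         row_number() OVER (PARTITION BY id ORDER BY dt DESC) AS rownum
  FROM t)
SELECT id, value, dt FROM x WHERE rownum = 1 ORDER BY id
""").fetchall()
```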
If your database supports `Row_number` then use this ``` select id,value,date from ( select row_number() over(partition by value order by date desc) as RN, id,value,date from yourtable ) A where RN=1 ``` another way is by joining the result back to the table. ``` SELECT A.id, A.value, A.date FROM yourtable A INNER JOIN (SELECT Max(date) AS date, value FROM yourtable GROUP BY value) B ON A.date = b.date AND A.value = B.value ```
Getting most recent date values where other column values change
[ "sql", "postgresql", "date", "max" ]
I have a query that fails to execute with "Could not allocate a new page for database 'TEMPDB' because of insufficient disk space in filegroup 'DEFAULT'". While troubleshooting, I am examining the execution plan. There are two costly steps labeled "Clustered Index Scan (Clustered)". I am having a hard time finding out what this means. I would appreciate any explanation of "Clustered Index Scan (Clustered)", or suggestions on where to find the related documentation.
> I would appreciate any explanations to "Clustered Index Scan > (Clustered)" I will try to put this in the simplest manner; for a better understanding you need to understand both index seek and scan. So let's build the table ``` use tempdb GO create table scanseek (id int , name varchar(50) default ('some random names') ) create clustered index IX_ID_scanseek on scanseek(ID) declare @i int SET @i = 0 while (@i <5000) begin insert into scanseek select @i, 'Name' + convert( varchar(5) ,@i) set @i =@i+1 END ``` An index seek is where SQL Server uses the [b-tree](https://en.wikipedia.org/wiki/B-tree) structure of the index to seek directly to matching records [![enter image description here](https://i.stack.imgur.com/FZSny.jpg)](https://i.stack.imgur.com/FZSny.jpg) You can check your table's root and leaf nodes using the DMV below ``` -- check index level SELECT index_level ,record_count ,page_count ,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats(DB_ID('tempdb'),OBJECT_ID('scanseek'),NULL,NULL,'DETAILED') GO ``` Now, since we have a clustered index on column "ID", let's look for some directly matching records ``` select * from scanseek where id =340 ``` and look at the execution plan [![enter image description here](https://i.stack.imgur.com/nqJ5o.png)](https://i.stack.imgur.com/nqJ5o.png) You've requested rows directly in the query; that's why you got a clustered index SEEK. **Clustered index scan:** when SQL Server reads through the row(s) from top to bottom in the clustered index, for example when searching data in a non-key column. In our table, NAME is a non-key column, so if we search for some data in the name column we will see a `clustered index scan`, because all the rows are at the clustered index leaf level. 
Example ``` select * from scanseek where name = 'Name340' ``` [![enter image description here](https://i.stack.imgur.com/pnH7x.png)](https://i.stack.imgur.com/pnH7x.png) Please note: I kept this answer short for easier understanding; if you have any question or suggestion, please comment below.
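The same seek-versus-scan distinction can be observed outside SQL Server too. As a rough analogue (not SQL Server's plan viewer), SQLite's `EXPLAIN QUERY PLAN` reports `SEARCH` for a b-tree lookup on the key column and `SCAN` for a full read driven by a non-key predicate; the table below imitates the `scanseek` example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY is SQLite's rowid key, loosely analogous to a
# clustered index key for this demonstration.
conn.execute("CREATE TABLE scanseek (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO scanseek VALUES (?, ?)",
                 [(i, f"Name{i}") for i in range(5000)])

# Lookup on the key column can use the b-tree: the plan says SEARCH.
seek = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM scanseek WHERE id = 340").fetchall()
# Lookup on a non-key column must read every row: the plan says SCAN.
scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM scanseek WHERE name = 'Name340'").fetchall()
```

The exact plan wording varies between SQLite versions, but the SEARCH/SCAN keywords are stable.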
Expanding on Gordon's answer in the comments, a clustered index scan is scanning one of the table's indexes to find the values you are filtering on in a where clause, or for a join to the next table in your query plan. Tables can have multiple indexes (one clustered and many non-clustered) and SQL Server will search the appropriate one based upon the filter or join being executed. [Clustered Indexes](https://msdn.microsoft.com/en-us/library/ms190457.aspx) are explained pretty well on MSDN. The key difference between clustered and non-clustered is that the clustered index defines how rows are stored on disk. If your clustered index is very expensive to search due to the number of records, you may want to add a non-clustered index on the table for fields that you search for often, such as date fields used for filtering ranges of records.
What "Clustered Index Scan (Clustered)" means on SQL Server execution plan?
[ "sql", "sql-server", "sql-execution-plan" ]
I have this query : ``` SELECT Company_Name FROM tblBeneficaries WHERE BeneficaryID IN () ``` When I execute it it returns the message: > Incorrect syntax near ')'. What is the problem?
That's because your `IN` clause has no parameter/argument: `WHERE BeneficaryID IN ()`. It should be `WHERE BeneficaryID IN (id1,id2,id3, ...,idn)`. Your current query is the same as saying the below, with no need for the `WHERE` condition ``` SELECT Company_Name FROM tblBeneficaries ```
Your query as it reads should return no rows, because the set of desired BeneficaryIDs is empty. This syntax however is not valid, as the IN clause *must* contain at least one value, unfortunately. In order to represent an empty set, add NULL: ``` SELECT Company_Name FROM tblBeneficaries WHERE BeneficaryID IN (NULL); ``` This is useful when building the query dynamically, where you end up with zero or more elements in the set, i.e. `(NULL)` for an empty set or `(NULL, 4, 16)` for a set of numbers 4 and 16, for instance. It is still an annoying limitation on the IN clause and the trick shown doesn't work with NOT IN! (Adding NULL would prevent you from fetching *any* record, so you would have to use a non-null value that's never used, such as `WHERE BeneficaryID NOT IN (-1)`).
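The `IN (NULL)` trick is easy to demonstrate with `sqlite3`: every comparison against NULL is unknown, so `IN (NULL)` matches nothing, while real values listed alongside the NULL still match normally. The table and rows are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblBeneficaries (BeneficaryID INTEGER, Company_Name TEXT);
INSERT INTO tblBeneficaries VALUES (4, 'Acme'), (16, 'Globex');
""")

# IN (NULL) matches nothing: id = NULL is unknown for every row.
empty = conn.execute(
    "SELECT Company_Name FROM tblBeneficaries WHERE BeneficaryID IN (NULL)"
).fetchall()

# Real values alongside the NULL still match as usual.
some = conn.execute(
    "SELECT Company_Name FROM tblBeneficaries "
    "WHERE BeneficaryID IN (NULL, 4, 16) ORDER BY BeneficaryID"
).fetchall()
```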
sql query "WHERE IN"
[ "sql", "where-clause" ]
If two independent scripts call a database with update requests to the same field, but with different values, would they execute at the same time and one overwrite the other? As an example to help ensure clarity, imagine both of these statements being requested to run at the same time, each by a different script, where Status = 2 is called microseconds after Status = 1 by coincidence. ``` Update My_Table SET Status = 1 WHERE Status= 0; Update My_Table SET Status = 2 WHERE Status= 0; ``` What would my results be and why? If other factors play a role, expand on them as much as you please; this is meant to be a general idea. Side note: because I know people will still ask, my situation is using MySql with Google App Engine, but I don't want to limit this question to just me should it be useful to others. I am using Status as an identifier for what script is doing stuff to the field. If status is not 0, no other script is allowed to touch it.
This is what locking is for. All major SQL implementations lock DML statements by default so that one query won't overwrite another before the first is complete. There are different levels of locking. If you've got row locking then your second update will run in parallel with the first, so at some point you'll have 1s and 2s in your table. Table locking would force the second query to wait for the first query to finish completely and release its table lock. You can usually turn off locking right in your SQL, but it's only ever done if you need a performance boost and you know you won't encounter race conditions like in your example. **Edits based on the new `MySQL` tag** If you're updating a table that uses the `InnoDB` engine, then you're working with row locking, and your query could yield a table with both 1s and 2s. If you're working with a table that uses the `MyISAM` engine, then you're working with table locking, and your update statements would end up with a table that would either have all 1s or all 2s.
from <https://dev.mysql.com/doc/refman/5.0/en/lock-tables-restrictions.html> (MySql) > Normally, you do not need to lock tables, because all single UPDATE statements are atomic; no other session can interfere with any other currently executing SQL statement. However, there are a few cases when locking tables may provide an advantage: from <https://msdn.microsoft.com/en-us/library/ms177523.aspx> (sql server) > An UPDATE statement always acquires an exclusive (X) lock on the table it modifies, and holds that lock until the transaction completes. With an exclusive lock, no other transactions can modify data. If you had two separate connections executing the two posted update statements, whichever statement was started first would be the one that completed. The other statement would not update the data, as there would no longer be records with a status of 0
Do Sql Update Statements run at the same time if requested at the same time?
[ "mysql", "sql", "sql-update" ]
I have a string which may be of the below types: ``` string expected result 15-th-rp 15 15/12-rp 12 15-12-th 12 4-5-6 5 ``` Now I have to find the digits: 1) If a string contains only one set of digits, then that set will be displayed. 2) If there are multiple sets of digits between characters, then I have to find the 2nd set of digits. Please help me. ``` with a as ( select '15-th-rp' as data from dual union all select '15/12-rp' from dual union all select '15-12-th' from dual union all select '4-5-6' from dual ) select regexp_substr(data,'[0-9]+',REGEXP_INSTR(data,'[/|-]')+1) from a; ```
I think this is what you're after: ``` with a as (select '15-th-rp' data from dual union all select '15/12-rp' data from dual union all select '15-12-th' data from dual union all select '4-5-6' data from dual) select data, coalesce(regexp_substr(data,'[0-9]+',1,2), regexp_substr(data,'[0-9]+',1,1)) extracted_data from a; DATA EXTRACTED_DATA -------- -------------- 15-th-rp 15 15/12-rp 12 15-12-th 12 4-5-6 5 ``` The good thing about using COALESCE is that it won't evaluate the second (and subsequent) arguments unless they're required.
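The same "second set of digits, falling back to the first" logic is straightforward to express (and verify) with Python's `re`, where `re.findall` plays the role of repeated `regexp_substr` calls and the fallback mirrors `COALESCE`. The helper name is made up.

```python
import re

def second_or_first_digit_group(s):
    # Mirrors COALESCE(regexp_substr(s, '[0-9]+', 1, 2),
    #                  regexp_substr(s, '[0-9]+', 1, 1))
    groups = re.findall(r"[0-9]+", s)
    if not groups:
        return None
    return groups[1] if len(groups) > 1 else groups[0]

results = {s: second_or_first_digit_group(s)
           for s in ("15-th-rp", "15/12-rp", "15-12-th", "4-5-6")}
```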
Do it in 2 steps. First match the first and second occurences of the pattern and then show the first set only if there is no second set (i.e. it's null). ``` with a(data) as ( select '15-th-rp' from dual union all select '15/12-rp' from dual union all select '15-12-th' from dual union all select '4-5-6' from dual ), b(data, first, second) as ( select data ,regexp_substr(data, '[[:digit:]]+') ,regexp_substr(data, '[[:digit:]]+', 1, 2) from a ) select data, nvl(second, first) as result from b; ``` Returns result: ``` DATA RESULT -------- ------ 15-th-rp 15 15/12-rp 12 15-12-th 12 4-5-6 5 ```
Find a 2nd set of digit in a string(SQL/PL-SQL)
[ "sql", "oracle", "plsql" ]
I have a table with many rows that looks something like this: ``` Order_id Part_id Description GL_code 12345 Gk123 Gun mount 6850 12345 null Freight 4050.2 12346 Blac Lock 6790 12346 null Freight 4050 ``` I want to make a query that returns all order information where the `part_id` is a `GK% number` or a `Blac number` and the order has a freight `GL_code` of `4050.2`. The `Freight` is in the description column and in a different row than the `part_id`. I do not want to include all the `Blac` and `GK% parts` that do not have `freight` and `4050.2` on the order. I've only been able to get all 4050.2's or all GK%'s and Blac's.
One way would be to use a correlated subquery that makes sure that there exists a row for the same order with the Freight code 4050.2: ``` select * from tab t where (Part_id = 'Blac' or Part_id like 'GK%') and exists ( select 1 from tab where Order_id = t.Order_id and Description = 'Freight' and GL_code = '4050.2' -- remove the quotes if this is a number and not a char value ) ``` With your sample data this would return: ``` Order_id Part_id Description GL_code 12345 Gk123 Gun mount 6850 ``` Another option would be to use a purely set based query: ``` select Order_id from tab where (Part_id = 'Blac' or Part_id like 'GK%') intersect select Order_id from tab where Description = 'Freight' and GL_code = '4050.2' ``` Or if you just want to rows with Freight you can just invert the conditions: ``` select * from shipper_line t where Description = 'Freight' and GL_code = '4050.2' and exists ( select 1 from shipper_line where Order_id = t.Order_id and (Part_id = 'Blac' or Part_id like 'GK%') ); ```
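The correlated-`EXISTS` version can be checked against the question's sample rows with `sqlite3` (column names lowercased for the demo). One SQLite-specific caveat worth a comment: its `LIKE` is case-insensitive for ASCII by default, so `'Gk123'` matches `'GK%'` without extra work, which happens to match the intent here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, part_id TEXT, description TEXT, gl_code TEXT);
INSERT INTO orders VALUES
  (12345,'Gk123','Gun mount','6850'),
  (12345,NULL,'Freight','4050.2'),
  (12346,'Blac','Lock','6790'),
  (12346,NULL,'Freight','4050');
""")

# Keep GK%/Blac parts only when the same order also carries a
# Freight row with GL code 4050.2.
rows = conn.execute("""
SELECT * FROM orders t
WHERE (t.part_id = 'Blac' OR t.part_id LIKE 'GK%')
  AND EXISTS (SELECT 1 FROM orders
              WHERE order_id = t.order_id
                AND description = 'Freight'
                AND gl_code = '4050.2')
""").fetchall()
```

Order 12346 drops out because its Freight row has GL code 4050, not 4050.2.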
It's probably easiest to join the table to itself; SQL is better at cross-column checks than cross-row checks. ``` Select a.* from MyTable a inner join MyTable b on a.Order_ID = b.Order_ID and b.Part_ID is null and b.Description = 'Freight' and b.GL_code = 4050.2 where (a.Part_ID like 'GK%' or a.Part_ID like 'Blac%') ``` You can also use `exists` or a subquery: ``` select * from MyTable a where (a.Part_ID like 'GK%' or a.Part_ID like 'Blac%') and exists (select 1 from MyTable b where a.order_ID = b.order_ID and b.Description = 'Freight' and b.GL_code = 4050.2) select * from MyTable a where (a.Part_ID like 'GK%' or a.Part_ID like 'Blac%') and order_ID in (select order_ID from MyTable where Description = 'Freight' and GL_code = 4050.2) ``` Edit: Here's a SQL Fiddle: <http://sqlfiddle.com/#!6/a778c/1/0>
How to filter order numbers by multiple columns
[ "sql", "sql-server" ]
My present table structure is : [![enter image description here](https://i.stack.imgur.com/QZSs2.png)](https://i.stack.imgur.com/QZSs2.png) I want to calculate the time duration weighted average price for each product id for that day. e.g. Average Price for PA1 = (100\*4 + 105\*4 + 110\*8 + 115\*4) / 20 i.e Here price stays 100 for 4 hrs, 105 for 4 hrs, 110 for 8 hrs and 115 for 4 hrs (the closing time is midnight) This below Oracle SQL will help create the table structure quickly, thank you. ``` CREATE TABLE ProductTable (ID NUMBER(5), ProdId varchar(7), StartDate DATE, Price NUMBER(4)); INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (1, 'PA1', TO_DATE('2015/08/01 4:00', 'YYYY/MM/DD hh24:mi'), 100); INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (2, 'PA1', TO_DATE('2015/08/01 8:00', 'YYYY/MM/DD hh24:mi'), 105) ; INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (3, 'PA2', TO_DATE('2015/08/01 9:00', 'YYYY/MM/DD hh24:mi'), 120); INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (4, 'PA1', TO_DATE('2015/08/01 12:00', 'YYYY/MM/DD hh24:mi'), 110) ; INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (5, 'PA3', TO_DATE('2015/08/01 14:00', 'YYYY/MM/DD hh24:mi'), 150) ; INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (6, 'PA2', TO_DATE('2015/08/01 15:00', 'YYYY/MM/DD hh24:mi'), 130) ; INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (7, 'PA2', TO_DATE('2015/08/01 18:00', 'YYYY/MM/DD hh24:mi'), 125) ; INSERT INTO ProductTable (ID, ProdId, StartDate, Price) VALUES (8, 'PA1', TO_DATE('2015/08/01 20:00', 'YYYY/MM/DD hh24:mi'), 115) ; ```
``` with x as ( select prodid, price, nvl(lead(extract(hour from cast(startdate as timestamp))) over (partition by prodid order by startdate), 24) - extract(hour from cast(startdate as timestamp)) as timediff from producttable), y as ( select prodid, sum(x.timediff*x.price) as sm, sum(x.timediff) as stimediff from x group by prodid) select prodid, sm / stimediff as weightedAvg_Price from y; ```
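The duration-weighted average is easy to verify with `sqlite3` using the same `LEAD`-with-fallback-to-24 idea. To keep the sketch short, the hour of day is stored as a pre-extracted integer (standing in for `EXTRACT(HOUR FROM startdate)` in the Oracle version), with the question's sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE p (prodid TEXT, hr INTEGER, price REAL);
INSERT INTO p VALUES
  ('PA1', 4,100),('PA1', 8,105),('PA1',12,110),('PA1',20,115),
  ('PA2', 9,120),('PA2',15,130),('PA2',18,125),
  ('PA3',14,150);
""")

# Each price lasts until the next price change, or midnight (hour 24).
rows = conn.execute("""
WITH x AS (
  SELECT prodid, price,
         COALESCE(LEAD(hr) OVER (PARTITION BY prodid ORDER BY hr), 24) - hr AS dur
  FROM p)
SELECT prodid, SUM(dur * price) / SUM(dur) AS wavg
FROM x GROUP BY prodid ORDER BY prodid
""").fetchall()
```

For PA1 this reproduces the worked example: (100*4 + 105*4 + 110*8 + 115*4) / 20 = 108.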
You basically want the `lead()` function and then some date arithmetic: ``` select prodid, sum((least(trunc(startdate) + 1, next_startdate) - startdate)*24) as numhours from (select p.*, lead(startdate) over (partition by prodid order by startdate) as next_startdate from producttable p ) t where trunc(startdate) = date '2015-08-01' group by prodid; ``` This bases the calculation on the first record during the day, which is how you have described the results in the question.
Extract EndDate from the StartDate column using SQL i.e two dates coming from the same column
[ "sql", "oracle" ]
I have a SQL Server stored procedure with nearly 300 variables within it. The variables were created as follows: ``` CASE WHEN Grp1 = 'SALES' AND grp_mnth = MONTH_0 THEN SUM(Col) OVER(PARTITION BY grp_loc,COMM) ELSE 0 END AS "SALES_1" ``` As this has data issue all the variables were replaced like: ``` SUM(CASE WHEN Grp1 = 'SALES' AND grp_mnth = MONTH_0 THEN Col ELSE 0 END) OVER(PARTITION BY grp_loc,COMM) AS "SALES_1" ``` The data issue has been solved by the above statement, but the procedure throws the below error when we add all the 300 variables, if only about 100 is added, the procedure runs fine. > Internal error: An expression services limit has been reached. Please look for potentially complex expressions in your query, and try to simplify them. As per the answer posted I have tried to split the query to multiple select queries and the error has been resolved, but while combining the result the data has not been fetched correctly. ``` BEGIN CREATE TABLE #TEMPTAB1 ( TYPE_1 char(15), NUMBER_1 char(7), STATUS_1 char(1), ... ) CREATE TABLE #TEMPTAB2 ( TYPE_2 char(15), NUMBER_2 char(7), STATUS_2 char(1), ... ) CREATE TABLE #TEMPTAB3 ( TYPE_3 char(15), NUMBER_3 char(7), STATUS_3 char(1), ... ) SELECT * FROM #TEMPTAB1 T1 INNER JOIN #TEMPTAB2 T2 ON T1.TYPE_1=T2.TYPE_2 AND T1.NUMBER_1 = T2.NUMBER_2 AND T1.STATUS_1 = T2.STATUS_2 INNER JOIN #TEMPTAB3 T3 ON T1.TYPE_1=T3.TYPE_3 AND T1.NUMBER_1 = T3.NUMBER_3 AND T1.STATUS_1 = T3.STATUS_3 END ``` Can anyone please suggest a way to correct the joins in the above code.
Please re-write your query! This issue occurs because SQL Server limits the number of identifiers and constants that can be contained in a single expression of a query. This limit is 65,535. One approach might be: > You can split the select query into multiple select queries. Store the > results in temp tables and combine the results at the end. [**more info**](http://support.microsoft.com/kb/913050)
Right click on your Database => properties => click on options, then change compatibility level to lower one in the list. See this image [compatibility level](https://i.stack.imgur.com/7ycac.png)
SQL Server - Internal error: An expression services limit has been reached
[ "sql", "sql-server" ]
Are all of these queries the same? I can't see any difference in the result; each just changes the number in the table rows according to our selection. But why do we always use 1? Does it have some technical meaning or advantage?
A query like `SELECT <Something> FROM Tablename` selects `something` per row of `tablename`. Now if this `something` is not a column of the table, and we just want to avoid computation on the table, we use a number like `1` or `2` or anything that can be returned per row without calculation. They are all the same. Even something like ``` SELECT 'I am doing good' FROM tablename ``` Some smart developers choose to return a scalar value which can be useful for other parts of a query, like say ``` IF EXISTS( SELECT 1 FROM Issue_Log) BEGIN PRINT 'HAVE MORE WORK' END ``` In the above example we are simply checking if there are any rows in the `Issue_Log` table. We simply return 1 from the select query. If there were no rows, the subquery would return no rows at all, and therefore the `IF EXISTS(...)` check evaluates as false. Welcome to the world of SQL.
You need to understand that the number you specify will be returned once for each row in the table. The number of rows is the same, but the value returned is whatever you specify in the SELECT statement.
select 1 from table, select 2 from table, select 3 from table, etc.
[ "mysql", "sql", "sql-server", "database", "oracle" ]
I am trying to extract a substring using the following query. I am not getting the expected result. ``` DECLARE @st1 varchar(10) SET @st1 = 'Case # 00548295' SELECT @st1,ltrim(SUBSTRING(@st1, CHARINDEX('# ', @st1) + 1, LEN(@st1))) ``` I am getting 005 as the answer. I am trying to extract the integer values after 'Case[space]#[space]...'. My result needs to be 00548295. What is wrong here?
You define `@st1` as varchar(10), but then try to assign 15 characters (`'Case # 00548295'`) to it. SQL Server silently truncates the data in this case. Simply increase the size: ``` DECLARE @st1 varchar(20) SET @st1 = 'Case # 00548295' SELECT @st1,ltrim(SUBSTRING(@st1, CHARINDEX('# ', @st1) + 1, LEN(@st1))) ```
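The truncation is easy to see by replaying the steps in plain Python: slicing to 10 characters plays the role of the `varchar(10)` declaration, and the small helper (a rough analogue of `LTRIM(SUBSTRING(..., CHARINDEX('#', ...) + 1, ...))`, invented for this sketch) shows why the truncated value yields only `005`.

```python
# What varchar(10) silently stores versus the full literal.
st1_full = "Case # 00548295"
st1_truncated = st1_full[:10]  # 'Case # 005'

def after_hash(s):
    # Rough analogue of ltrim(SUBSTRING(s, CHARINDEX('#', s) + 1, LEN(s))).
    return s[s.index("#") + 1:].lstrip()

bad = after_hash(st1_truncated)   # extraction on the truncated value
good = after_hash(st1_full)       # extraction once the variable is wide enough
```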
You did not give your varchar variable enough characters to hold the entire string to begin with. ``` DECLARE @st1 varchar(20) SET @st1 = 'Case # 00548295' SELECT @st1,ltrim(SUBSTRING(@st1, CHARINDEX('# ', @st1) + 1, LEN(@st1))) -------------------- -------------------- Case # 00548295 00548295 (1 row(s) affected) ```
Substring in sql server
[ "sql", "sql-server", "t-sql" ]
Lets say you have the next code: ``` DECLARE @A INT = 1, @B INT = NULL; IF (@B != @A) SELECT 1; ELSE IF (@B = @A) SELECT 2; ELSE SELECT 3; ``` As you can see variable `@A` equals *'1'* for sure and variable `@B` certainly doesn't. I'm executing that part of code in *Microsoft SQL Server Management Studio 2014* and I'm getting *'3'* as result. That means `@A` is not the same as `@B`, but it's also not different than `@B`. How is that even possible? What am I missing here?
You cannot compare null with other values. You need to handle nulls separately. So, this will work: ``` DECLARE @A INT = 1, @B INT = NULL; IF (@B != @A or @B is null ) SELECT 1; ELSE IF (@B = @A) SELECT 2; ELSE SELECT 3; ```
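The underlying reason is SQL's three-valued logic: any ordinary comparison with NULL yields NULL (unknown), and `IF` treats unknown the same as false, which is why neither branch fires. A quick check with `sqlite3`, where unknown comes back to Python as `None`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL <> 1 and NULL = 1 are both unknown (None); only IS NULL is definite.
neq, eq, isnull = conn.execute(
    "SELECT NULL <> 1, NULL = 1, NULL IS NULL").fetchone()
```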
The correct version should be: ``` IF (@B = @A OR (@B IS NULL AND @A IS NULL)) SELECT 2; ELSE IF (@B != @A OR @B IS NULL OR @A IS NULL) SELECT 1; ELSE SELECT 3; ``` because NULL comparison must always be handled separately in SQL. I inverted the `!=` and the `=` cases because tsql doesn't have a logical XOR operator, because I want to consider NULL equal to NULL. Note that then the `SELECT 3` won't ever happen now.
Comparing two variables in SQL
[ "sql", "t-sql", "sql-server-2014" ]
I've written this code and someone helped me out but I can't figure out what ''?'' means to the DB engine. It would really help me understand what I've written here. ``` USE master EXEC sp_MSforeachdb 'use ? select ''?'', st.NAME, sc.NAME, sc.system_type_id, db_name() FROM sys.tables st INNER JOIN sys.columns sc ON st.object_id = sc.object_id WHERE (st.name LIKE ''rawp%'' or st.name like ''main'')' ``` I understand I'm searching all db in the instance for the name of the columns and their related tables but I need to understand how ''?'' fits into the Query. Thanks!
`sp_msforeachdb` is kind of... special. And I mean partly in the "riding the short bus" sense of the word (and even that's not entirely fair here; my main complaint is that it's undocumented), but mostly I just mean that it's unique. This syntax is only used with `sp_msforeachdb` and `sp_msforeachtable`, and nowhere else in Sql Server itself. What you should think of, though, is if you've ever written code that used an OLE or ODBC provider to do parameterized queries. Those tools use the `?` character as a parameter placeholder. Something similar is going on here. The `?` character is a parameter placeholder, where the value of the parameter will later be set to the name of each DB in your server, including `master`, `tempdb`, `model`, and `msdb`. Try running this code to get a sense of how it works: ``` EXEC sp_MSforeachdb 'print ''?'' ' ``` It's tempting to use this procedure for things like maintenance, reporting, and alerting scripts, but it's something you probably want to do only sparingly. Best not to rely on undocumented behavior. The [sqlblog link](https://sqlblog.org/2020/05/12/bad-habits-to-kick-relying-on-undocumented-behavior) posted first in another answer is the standard thinking on the subject. Basically, there are likely no *specific* plans to retire or break this procedure (or other `sp_*` procedures), but there is also pretty much *zero* investment in moving these procedures forward from version to version. For example, the linked article indicates that sp\_msforeachdb relies on the old `dbo.sysdatabases` rather than the more-correct `sys.databases`. I wouldn't expect Microsoft to target sp\_msforeachdb directly, but if they ever decide to remove `dbo.sysdatabases` my expectation would be that their testing will discover that sp\_msforeachdb is broken, at which time they'll also just remove sp\_msforeachdb rather than fix it.
From <http://weblogs.sqlteam.com/joew/archive/2008/08/27/60700.aspx> > Notice that [?] is used as a placeholder for the heretofore unspecified database name So the `?` will be replaced with the database name. Note that it is probably better to write it as ``` EXEC sp_MSforeachdb 'use [?] select ''?'', st.NAME, sc.NAME, ``` to support databases with "funny" names, like `[Hello[] World'`
What does ''?'' mean in SQL Server?
[ "sql", "sql-server-2012" ]
Let's say we have two tables, **`Incidents`** and **`IncidentTracking`**: ``` Incidents (Id INT PRIMARY KEY, CreatedOn Datetime, State VARCHAR(50)) IncidentTracking (Id INT PRIMARY KEY, IncidentId INT FOREIGN KEY REFERENCES Incidents(Id), TrackingDate Datetime, NewState VARCHAR(50)) ``` How do I insert a new record into `IncidentTracking` while updating some incidents? For example, I want to change the `State` for all incidents that are more than 90 days old to "Outdated", and insert a tracking info record where `IncidentId` is the updated incident (`SCOPE_IDENTITY()` maybe?), `TrackingDate` is `GETDATE()` and `NewState` is also "Outdated". Can it be done all in one statement or should I write a cursor?
I'd use [`OUTPUT`](https://msdn.microsoft.com/en-us/library/ms177564.aspx) clause. As `IncidentTracking` has a foreign key, it is not possible to OUTPUT directly to it. You'll get an error message if you try: > The target table 'dbo.IncidentTracking' of the OUTPUT INTO clause > cannot be on either side of a (primary key, foreign key) relationship. > Found reference constraint 'FK\_IncidentTracking\_Incidents'. So, we can use a temporary table or table variable. Like this: ``` BEGIN TRANSACTION; DECLARE @T TABLE (IncidentId int, TrackingDate datetime, NewState varchar(50)); UPDATE [dbo].[Incidents] SET [State] = 'Outdated' OUTPUT inserted.Id AS [IncidentId], GETDATE() AS [TrackingDate], inserted.[State] AS [NewState] INTO @T ([IncidentId], [TrackingDate], [NewState]) WHERE [CreatedOn] < DATEADD(day, -90, GETDATE()) ; INSERT INTO [dbo].[IncidentTracking] ([IncidentId], [TrackingDate], [NewState]) SELECT [IncidentId], [TrackingDate], [NewState] FROM @T; COMMIT TRANSACTION; ```
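`OUTPUT` is SQL Server specific; a portable equivalent of the same idea is to capture the affected keys first and run both statements inside one transaction, which `sqlite3` can demonstrate. The cutoff and "today" strings below are hard-coded stand-ins for `DATEADD(day, -90, GETDATE())` and `GETDATE()`, and the temp-table name is made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Incidents (Id INTEGER PRIMARY KEY, CreatedOn TEXT, State TEXT);
CREATE TABLE IncidentTracking (Id INTEGER PRIMARY KEY,
                               IncidentId INTEGER REFERENCES Incidents(Id),
                               TrackingDate TEXT, NewState TEXT);
INSERT INTO Incidents VALUES (1, '2015-01-01', 'Open'), (2, '2015-08-01', 'Open');
""")

with conn:  # one transaction: all three statements commit or none do
    # Capture the ids that will change (the role OUTPUT plays above).
    conn.execute("""CREATE TEMP TABLE changed AS
                    SELECT Id FROM Incidents
                    WHERE CreatedOn < '2015-05-10' AND State <> 'Outdated'""")
    conn.execute("""UPDATE Incidents SET State = 'Outdated'
                    WHERE Id IN (SELECT Id FROM changed)""")
    conn.execute("""INSERT INTO IncidentTracking (IncidentId, TrackingDate, NewState)
                    SELECT Id, '2015-08-08', 'Outdated' FROM changed""")

tracking = conn.execute(
    "SELECT IncidentId, TrackingDate, NewState FROM IncidentTracking").fetchall()
states = conn.execute("SELECT Id, State FROM Incidents ORDER BY Id").fetchall()
```

Only incident 1 is older than the cutoff, so it is the only one updated and tracked.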
Write a stored procedure to perform your task. You can put that kind of logic in a stored procedure easily enough. If you will allow access to the table(s) outside of the procedure and still want the same behavior, a trigger is likely what you want (not a fan of them myself). Make sure, when writing your trigger, you remember that it will run against a ***recordset*** not a single record.
SQL Server: how to insert a record into related table during an update?
[ "sql", "sql-server", "t-sql" ]
I wish to retrieve data from two tables with a twist, I need multiple values (prices from different dates) from the second table. The following works fine, but I am not much of a SQL jockey and I have a sneaking suspicion that there must be a better way than joining the second table twice. Is there? ``` SELECT A.date, A.symbol, A.Value, B.close, C.close FROM Potential A LEFT JOIN Stock_Daily B ON A.symbol = B.symbol LEFT JOIN Stock_Daily C ON A.symbol = C.symbol WHERE A.Date = "2015-08-05" AND B.date = "2015-08-05" AND C.date = "2015-08-04" ORDER BY value DESC LIMIT 20; ``` Thanks in advance!
It is perfectly fine to do it the way you're doing it, but to expand a bit more on the comment, there is a potential issue with your query. Since you are doing a `LEFT JOIN` with the conditional logic in the `WHERE` clause for the joined table, it is effectively transforming the `LEFT JOIN` into an `INNER JOIN`. Should there be any dates missing from the right-hand table to not satisfy an `INNER JOIN`, this will give you incorrect results. I recommend changing the query to the following to avoid this: ``` SELECT A.date, A.symbol, A.Value, B.close, C.close FROM Potential A LEFT JOIN Stock_Daily B ON A.symbol = B.symbol AND B.date = "2015-08-05" LEFT JOIN Stock_Daily C ON A.symbol = C.symbol AND C.date = "2015-08-04" WHERE A.Date = "2015-08-05" ORDER BY value DESC LIMIT 20; ```
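The ON-versus-WHERE difference is concrete enough to demonstrate with `sqlite3`: when the right-hand table is missing a date for one symbol, putting the date condition in `ON` preserves the left row with a NULL, while putting it in `WHERE` silently drops the row. Table names follow the question; the sample values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE potential (dt TEXT, symbol TEXT, value REAL);
CREATE TABLE stock_daily (symbol TEXT, dt TEXT, close REAL);
INSERT INTO potential VALUES ('2015-08-05','ABC',1.0), ('2015-08-05','XYZ',2.0);
INSERT INTO stock_daily VALUES ('ABC','2015-08-05',10.0);
""")

# Date condition in ON: the LEFT JOIN stays a LEFT JOIN.
filter_in_on = conn.execute("""
SELECT p.symbol, s.close
FROM potential p
LEFT JOIN stock_daily s ON s.symbol = p.symbol AND s.dt = '2015-08-05'
ORDER BY p.symbol""").fetchall()

# Date condition in WHERE: NULLs fail the filter, so XYZ disappears
# and the join behaves like an INNER JOIN.
filter_in_where = conn.execute("""
SELECT p.symbol, s.close
FROM potential p
LEFT JOIN stock_daily s ON s.symbol = p.symbol
WHERE s.dt = '2015-08-05'
ORDER BY p.symbol""").fetchall()
```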
You can handle the pivoting with `case` expressions and `group by`: ``` SELECT p.date, p.symbol, min(p.value) value, /* dummy aggregate */ min(CASE WHEN s.Date = p.Date THEN s.close END) Close1, min(CASE WHEN s.Date = DATE_ADD(p.Date, INTERVAL 1 DAY) THEN s.close END) Close2 FROM Potential p LEFT OUTER JOIN Stock_Daily s ON s.symbol = p.symbol AND (s.date = p.date or s.date = DATE_ADD(p.date, INTERVAL 1 DAY)) GROUP BY p.date, p.symbol ORDER BY value DESC LIMIT 20; ```
SQL query, two tables, multiple items
[ "", "mysql", "sql", "" ]
I managed to create a simple query that selects a random first and last name and inserts them into a result table. I wanted to create something I could interchange with the various tests I run where I have to manufacture a lot of data. Here is the code (I only included 5 first and last names each for simplicity purposes): ``` SELECT FirstName, LastName FROM (SELECT TOP 1 FirstName FROM (SELECT 'John' AS FirstName UNION SELECT 'Tim' AS FirstName UNION SELECT 'Laura' AS FirstName UNION SELECT 'Jeff' AS FirstName UNION SELECT 'Sara' AS FirstName) AS First_Names ORDER BY NEWID()) n1 FULL OUTER JOIN (SELECT TOP 1 LastName FROM (SELECT 'Johnson' AS LastName UNION SELECT 'Hudson' AS LastName UNION SELECT 'Jackson' AS LastName UNION SELECT 'Ranallo' AS LastName UNION SELECT 'Curry' AS LastName) AS Last_Names ORDER BY NEWID()) n2 ON [n1].FirstName = [n2].LastName WHERE n1.FirstName IS NOT NULL OR n2.LastName IS NOT NULL ``` Here are the results: ``` FirstName LastName NULL Hudson John NULL ``` I want the results to return one row with a first and last name randomly generated so that each row would have a complete name (no NULL values). I'm sure it's something simple I am overlooking.
The following code will allow you to generate a series of random names, where a cross join solution allows only one at a time. Not sure if you need to do this more than once, but if you do: ``` create table #table (firstname varchar(50), lastname varchar(50)) declare @counter int = 1 declare @max int = 5 --set number of repetitions here declare @a varchar(50) declare @b varchar(50) while @counter <= @max begin SET @a = (SELECT TOP 1 FirstName FROM (SELECT 'John' AS FirstName UNION SELECT 'Tim' AS FirstName UNION SELECT 'Laura' AS FirstName UNION SELECT 'Jeff' AS FirstName UNION SELECT 'Sara' AS FirstName) AS First_Names ORDER BY NEWID()) SET @b = (SELECT TOP 1 LastName FROM (SELECT 'Johnson' AS LastName UNION SELECT 'Hudson' AS LastName UNION SELECT 'Jackson' AS LastName UNION SELECT 'Ranallo' AS LastName UNION SELECT 'Curry' AS LastName) AS Last_Names ORDER BY NEWID()) insert into #table values (@a, @b) set @counter = @counter + 1 end select * from #table ```
The problem is your join. This is how you can do it: ``` SELECT FirstName, LastName FROM (SELECT TOP 1 FirstName FROM (SELECT 'John' AS FirstName UNION SELECT 'Tim' AS FirstName UNION SELECT 'Laura' AS FirstName UNION SELECT 'Jeff' AS FirstName UNION SELECT 'Sara' AS FirstName) AS First_Names ORDER BY NEWID())n1 CROSS JOIN (SELECT TOP 1 LastName FROM (SELECT 'Johnson' AS LastName UNION SELECT 'Hudson' AS LastName UNION SELECT 'Jackson' AS LastName UNION SELECT 'Ranallo' AS LastName UNION SELECT 'Curry' AS LastName) AS Last_Names ORDER BY NEWID())n2 ```
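As a sanity check, the `CROSS JOIN` approach can be sketched with SQLite through Python's `sqlite3` (here `random()` stands in for `NEWID()`; the names come from the question). The cross join of two one-row derived tables always yields exactly one complete row, never a `NULL` half:

```python
import sqlite3

con = sqlite3.connect(":memory:")
row = con.execute("""
    SELECT f.first_name, l.last_name
    FROM (SELECT first_name FROM
            (SELECT 'John' AS first_name UNION SELECT 'Tim' UNION SELECT 'Laura'
             UNION SELECT 'Jeff' UNION SELECT 'Sara')
          ORDER BY random() LIMIT 1) f
    CROSS JOIN
         (SELECT last_name FROM
            (SELECT 'Johnson' AS last_name UNION SELECT 'Hudson' UNION SELECT 'Jackson'
             UNION SELECT 'Ranallo' UNION SELECT 'Curry')
          ORDER BY random() LIMIT 1) l
""").fetchone()
print(row)  # one complete (first, last) pair, never a NULL half
```

Note the extra wrapping subquery: SQLite does not allow `ORDER BY random()` directly on a compound `UNION`, so the randomized sort is applied one level up.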
SQL random name generator not inserting first and last name in the same row
[ "", "sql", "sql-server", "t-sql", "join", "outer-join", "" ]
I have a table with thousands of records: ``` +-----+-------+------+----------+ | id | group | code | label | +-----+-------+------+----------+ | 1 | abc | 345 | myLabel1 | | 2 | abc | 345 | myLabel4 | | 3 | def | 987 | myLabel2 | | 4 | def | 345 | myLabel4 | | 5 | def | 987 | myLabel2 | | 6 | def | 231 | myLabel1 | | 7 | def | 231 | myLabel9 | | 8 | abc | 345 | myLabel3 | | 9 | def | 987 | myLabel2 | | 10 | def | 345 | myLabel1 | | ... | ... | ... | ... | +-----+-------+------+----------+ ``` In this table some entries are similar to each other. For example 1,2 and 8: `group` is "abc" and `code` is "345". Another example are entries 3, 5 and 9: `group` is "def", `code` is "987" and `label` is "myLabel2". Formally one could say: ``` IF group = "abc", THEN code = "345" IF group = "def" AND code = "987", THEN label = "myLabel2 " ``` I want to run a program over my tables which finds those similarities. **PS:** This table is just an example. It actually has more than 20 columns with different names.
This will give you the duplicated `(group, code)` pairs in the table, along with the number of times each occurs: ``` SELECT [group], [code], COUNT(1) as CNT FROM YourTableName GROUP BY [group], [code] HAVING COUNT(1) > 1 ORDER BY CNT; ```
You can do the first as: ``` select group, min(code) as implied_code from records group by group having count(distinct code) = 1; ``` And the second as: ``` select group, code, min(label) as implied_label from records group by group, code having count(distinct label) = 1; ``` Note: `group` is a really bad name for a column because it is a SQL reserved word.
Find similarities in tables
[ "", "sql", "database", "similarity", "" ]
I want to set two conditions in a query, but the query does not work. ``` SELECT IF((I.num_ip <> '100.100.100.100') && (I.num_ip <> '100.100.100.101'), I.num_ip, null) AS num_ip FROM company C, computers WHERE C.id_ip = I.id_ip AND C.date_conn = '2015-08-12' GROUP BY num_ip ```
It can be written another way as well, using a `CASE` expression: ``` Select case when I.num_ip not in ('100.100.100.100', '100.100.100.101') then I.num_ip else null end as num_ip From company C join computers I on C.id_ip = I.id_ip Where C.date_conn = '2015-08-12'; ``` Not sure though why you need a `GROUP BY` here.
In your query there are multiple issues: 1. The condition `(I.num_ip <> '100.100.100.100') && (I.num_ip <> '100.100.100.101')` won't be true almost all of the time, since `num_ip` will have the value either '100.100.100.100' or '100.100.100.101'. 2. There is one `'` missing at the end of `'101`. Please go through <http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html> for detailed use of control flow functions. 3. Make sure `num_ip` is nullable. 4. There are two `WHERE` clauses in the query. 5. Also, if you have a `GROUP BY` then you should use aggregate functions (COUNT, SUM, AVG, etc.). Please go through <http://www.tutorialspoint.com/mysql/mysql-group-by-clause.htm>. 6. The `computers` table doesn't have the alias `I` defined. ``` SELECT IF((I.num_ip <> '100.100.100.100') || (I.num_ip <> '100.100.100.101'), I.num_ip, null) AS num_ip FROM company C, computers I WHERE C.id_ip = I.id_ip AND C.date_conn = '2015-08-12' -- GROUP BY num_ip ``` Thanks, Tanmay
MySQL: how to send two parameters to the IF() construct
[ "", "mysql", "sql", "" ]
Thanks in advance for your time I have the following table from the mondial database ([Website](http://www.dbis.informatik.uni-goettingen.de/Mondial/), [Documentation](http://dbis.informatik.uni-freiburg.de/content/courses/SS12/Praktikum/Datenbanken%20und%20Cloud%20Computing/mondial/mondial-print.pdf)). ``` CREATE TABLE borders (Country1 VARCHAR(4), Country2 VARCHAR(4), Length FLOAT, CONSTRAINT CHECK (Length > 0), CONSTRAINT BorderKey PRIMARY KEY (Country1,Country2)); ``` **The table doesn't contain reciprocal values** (it only contains *Country1, Country2* or *Country2, Country1* to define a border). I need to make a query that outputs all couples of nations that are not neighbors but are neighbors to neighbors (all couples of nations that are separated by another nation). Country1 and Country2 contain the nations codes like "F" for France, "I" for Italy and so on. Here is an example row of the output: RSM || F RSM is the code for the Republic of San Marino (San Marino is an enclave entirely surrounded by Italy). RSM is not a neighbor to France obviously but Italy is, so the output contains the couple ***RSM, F*** and it also contains a similar couple for all the other nations that are neighbors to Italy. I have spent several hours trying to figure out a solution but I'm far from reaching a solution and I already have many questions, here's what I did: I started by making a query to find out all neighbors to a certain country. ``` SELECT Country1 FROM borders WHERE Country2 = "RSM" UNION SELECT Country2 FROM borders WHERE Country1 = "RSM" ``` This obviously only outputs "I" which is the code for Italy (so it's correct). I already have a question: is there a better way to do this or is this ok? 
Then I took it to the next step and made a query to find all the neighbors to the neighbors previously found like this: ``` SELECT Country1 FROM borders WHERE Country2 IN ( SELECT Country1 FROM borders WHERE Country2 = "RSM" UNION SELECT Country2 FROM borders WHERE Country1 = "RSM" ) UNION SELECT Country2 FROM borders WHERE Country1 IN ( SELECT Country1 FROM borders WHERE Country2 = "RSM" UNION SELECT Country2 FROM borders WHERE Country1 = "RSM" ) ``` So I do basically the same thing but instead of looking for the neighbors of a specific country I search for all the neighbors of a certain group of countries. The query is not finished because I need to remove the starting country and all its neighbors from the output so the query becomes this: ``` SELECT "RSM", Country1 FROM ( SELECT Country1 FROM borders WHERE Country2 IN ( SELECT Country1 FROM borders WHERE Country2 = "RSM" UNION SELECT Country2 FROM borders WHERE Country1 = "RSM" ) UNION SELECT Country2 FROM borders WHERE Country1 IN ( SELECT Country1 FROM borders WHERE Country2 = "RSM" UNION SELECT Country2 FROM borders WHERE Country1 = "RSM" ) ) tmp WHERE tmp.Country1 NOT IN ( SELECT Country1 FROM borders WHERE Country2 = "RSM" UNION SELECT Country2 FROM borders WHERE Country1 = "RSM" ) AND tmp.Country1 != "RSM" ``` This query works for a single starting country (RSM) but I need to output all the distinct (meaning no *F, RSM* couple in the output if *RSM, F* is present) couples like stated above and I also think that the query I developed is really bad because it keeps repeating the same query over and over. I have already searched ways to avoid this and I found interesting solutions such as the *WITH tmp AS query* clause but **I'm using MySQL** and the *WITH AS* clause is not supported. So what do you experts think? Is this at least near the right way to go? Am I missing something obvious? Thanks in advance for your time and sorry if this terrible query gave you a headache. 
EDIT 1: I made a [SQL Fiddle](http://sqlfiddle.com/#!9/0f9cc/1) containing the table, the data and my query in case you want to easily be able to run queries. I hope it'll be helpful to somebody.
User [jpw](https://stackoverflow.com/users/1175077/jpw) solved the problem in a comment. Many users suggested making a view of the table in order to make the couples of nations symmetrical, like this: ``` CREATE VIEW borders_symmetrical AS SELECT Country1, Country2 FROM borders UNION ALL SELECT Country2, Country1 FROM borders; ``` That indeed is a great suggestion in my opinion and it shows how views can be really useful. Now the query becomes much much easier: ``` SELECT DISTINCT b1.Country1, b2.Country2 FROM borders_symmetrical b2 JOIN borders_symmetrical b1 ON b2.Country1 = b1.Country2 WHERE b2.Country2 <> b1.Country1 #AND (b1.Country1 = 'RSM' OR b2.Country2 = 'RSM') # Debug a single nation AND b2.country2 NOT IN ( SELECT Country2 FROM borders_symmetrical WHERE Country1 = b1.Country1 ) AND b1.Country1 < b2.Country2 ``` I have reposted this because jpw didn't submit an answer but just commented on the post. Thanks again to everyone for your precious time, you helped me a lot and made me understand a couple of important things about SQL and Databases.
Assuming that the data has reciprocal values, meaning both ('France', 'Italy') and ('Italy', 'France') are in the data, then you can do this basically with a join and filtering: ``` select b1.country1, b2.country2 from borders b1 join borders b2 on b1.country2 = b2.country1 where not exists (select 1 from borders b where b.country1 = b1.country1 and b.country2 = b2.country2 ); ``` EDIT: Without reciprocal values, I would just create a view and use the view for the query: ``` create view v_borders as select country1, country2 from borders union all select country2, country1 from borders; ``` And then use the view in the query. Also you can do this within the query, it is just messy because MySQL does not support common table expressions (CTEs).
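The symmetric-view idea ports directly to other engines. Below is a self-contained sketch in SQLite via Python's `sqlite3` with a tiny, made-up border set; `NOT EXISTS` is used for the "not already neighbors" test, which is equivalent to the `NOT IN` correlated subquery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE borders (country1 TEXT, country2 TEXT);
-- one row per border, no reciprocal rows
INSERT INTO borders VALUES ('I','RSM'), ('F','I'), ('F','E'), ('CH','I');
CREATE VIEW borders_symmetrical AS
  SELECT country1, country2 FROM borders
  UNION ALL
  SELECT country2, country1 FROM borders;
""")

pairs = con.execute("""
    SELECT DISTINCT b1.country1, b2.country2
    FROM borders_symmetrical b1
    JOIN borders_symmetrical b2 ON b1.country2 = b2.country1
    WHERE b1.country1 <> b2.country2
      AND NOT EXISTS (SELECT 1 FROM borders_symmetrical s
                      WHERE s.country1 = b1.country1 AND s.country2 = b2.country2)
      AND b1.country1 < b2.country2
    ORDER BY 1, 2
""").fetchall()
print(pairs)  # [('CH', 'F'), ('CH', 'RSM'), ('E', 'I'), ('F', 'RSM')]
```

San Marino (RSM) correctly pairs with France (F) via Italy, and the direct neighbors never appear.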
Query to find all couples of countries separated by another nation
[ "", "mysql", "sql", "database", "" ]
I am trying to sort data by applying a CASE WHEN statement in the ORDER BY clause, but it looks like BigQuery doesn't support this, even though it worked fine in other SQL environments. Can somebody share your thoughts on this?
``` select x from ( select x , case when x = 'a' then 'z' else x end as y from (select 'a' as x), (select 'b' as x), (select 'c' as x), (select 'd' as x) ) order by y desc ```
Update (2021) - BigQuery now does support ORDER BY with expressions, e.g. ``` SELECT event, COUNT(*) as event_count FROM events GROUP BY event ORDER BY ( CASE WHEN event='generated' THEN 1 WHEN event='sent' THEN 2 WHEN event='paid' THEN 3 ELSE 4 END ) ```
Does Big Query support custom sorting?
[ "", "sql", "google-bigquery", "" ]
I am using SQL Server 2014 and want to know how to check my active transactions?
1. Query with `sys.sysprocesses` ``` SELECT * FROM sys.sysprocesses WHERE open_tran = 1 ``` 2. **[DBCC OPENTRAN](https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-opentran-transact-sql)** : helps to identify active transactions that may be preventing log truncation. DBCC OPENTRAN displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the transaction log of the specified database. Results are displayed only if there is an active transaction that exists in the log or if the database contains replication information. An informational message is displayed if there are no active transactions in the log. 3. `sys.dm_tran_active_transactions` Returns information about transactions for the instance of SQL Server. [Syntax](https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-active-transactions-transact-sql) [![enter image description here](https://i.stack.imgur.com/eCTUJ.png)](https://i.stack.imgur.com/eCTUJ.png) *Wondering what a transaction is?* > A transaction is a single unit of work. If a transaction is > successful, all of the data modifications made during the transaction > are committed and become a permanent part of the database. [Find more at docs](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql)
If you want to know more details about active sessions, like session ID, host name, login name, transaction ID, transaction name, transaction begin time, database ID and database name, you can use the SQL query below ``` SELECT trans.session_id AS [SESSION ID], ESes.host_name AS [HOST NAME],login_name AS [Login NAME], trans.transaction_id AS [TRANSACTION ID], tas.name AS [TRANSACTION NAME],tas.transaction_begin_time AS [TRANSACTION BEGIN TIME], tds.database_id AS [DATABASE ID],DBs.name AS [DATABASE NAME] FROM sys.dm_tran_active_transactions tas JOIN sys.dm_tran_session_transactions trans ON (trans.transaction_id=tas.transaction_id) LEFT OUTER JOIN sys.dm_tran_database_transactions tds ON (tas.transaction_id = tds.transaction_id ) LEFT OUTER JOIN sys.databases AS DBs ON tds.database_id = DBs.database_id LEFT OUTER JOIN sys.dm_exec_sessions AS ESes ON trans.session_id = ESes.session_id WHERE ESes.session_id IS NOT NULL ``` and you will get the result something like [![enter image description here](https://i.stack.imgur.com/H1eo1.png)](https://i.stack.imgur.com/H1eo1.png)
How to check active transactions in SQL Server 2014?
[ "", "sql", "sql-server-2014", "" ]
I am writing a simple query in SQL Server: ``` Select EmpId, EmpName, Sal from Emp where EmpId in (10,9,5,7,3,8); ``` I want to get the output in the same order as given, i.e. `10,9,5,7,3,8`. Whatever order I give in the list, the results should be displayed in that order, without sorting ascending or descending. How can I do that? Please help.
No way to do this natively. Try: ``` SELECT EmpId,EmpName,Sal FROM Emp WHERE EmpId IN (10,9,5,7,3,8) ORDER BY CASE EmpId WHEN 10 THEN 1 WHEN 9 THEN 2 WHEN 5 THEN 3 WHEN 7 THEN 4 WHEN 3 THEN 5 WHEN 8 THEN 6 ELSE 7 END; ```
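Here is a quick runnable check of the `ORDER BY CASE` technique, sketched in SQLite via Python's `sqlite3` with a made-up `emp` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (empid INT, empname TEXT);
INSERT INTO emp VALUES (3,'c'),(5,'e'),(7,'g'),(8,'h'),(9,'i'),(10,'j'),(11,'k');
""")
rows = con.execute("""
    SELECT empid FROM emp
    WHERE empid IN (10,9,5,7,3,8)
    ORDER BY CASE empid
             WHEN 10 THEN 1 WHEN 9 THEN 2 WHEN 5 THEN 3
             WHEN 7 THEN 4 WHEN 3 THEN 5 WHEN 8 THEN 6 ELSE 7 END
""").fetchall()
print([r[0] for r in rows])  # [10, 9, 5, 7, 3, 8]
```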
You can use a table variable to pass the inputs. You must insert the records into this table variable in the desired order. ``` Declare @empids table(id int identity(1,1),empid int) insert into @empids values(10),(9),(5),(7),(3),(8) Select e.EmpId,e.empname,e.sal from Emp e join @empids t on e.EmpId = t.empid order by t.id ``` Try this.
Display order of a SQL Query without order by clause
[ "", "sql", "sql-server", "t-sql", "" ]
I'm using SQL Server 2014 and I need to know which set of students have an action code used only once, and which students have an action code used more than once. [![enter image description here](https://i.stack.imgur.com/kIhoW.png)](https://i.stack.imgur.com/kIhoW.png) I'm trying to see if a student has more than one of 5 action codes. e.g. ``` SELECT StudentID FROM #T WHERE ActionCode IN (51201,51206,51207,51208,51209) ``` How would I code this please? Thank you for your help.
``` SELECT StudentID, ActionCode, ActionCodeCount = Count(*) FROM dbo.StudentActionCodes WHERE ActionCode IN (51201, 51206, 51207, 51208, 51209) GROUP BY StudentID, ActionCode ORDER BY StudentID DESC, ActionCode ; ``` ## [See this working in a live demo at SqlFiddle](http://sqlfiddle.com/#!6/5c8138/1) Given this set of values (including some unwanted ActionCodes): ``` StudentID ActionCode ----------- ----------- 987654 51201 987654 51206 987654 51207 987654 51208 987654 51209 987654 51210 987653 51201 987653 51208 987653 51208 987653 51211 987652 51201 987652 51206 987652 51206 987652 51207 987652 51208 987652 51209 987652 51212 987652 51213 987651 51201 987651 51206 987651 51209 987651 51209 987651 51214 987651 51215 987650 51201 987650 51201 987650 51201 987650 51201 987650 51208 987650 51208 987650 51216 ``` Here are the results: ``` StudentID ActionCode ActionCodeCount ----------- ----------- --------------- 987654 51201 1 987654 51206 1 987654 51207 1 987654 51208 1 987654 51209 1 987653 51201 1 987653 51208 2 987652 51201 1 987652 51206 2 987652 51207 1 987652 51208 1 987652 51209 1 987651 51201 1 987651 51206 1 987651 51209 2 987650 51201 4 987650 51208 2 ``` You might also enjoy this query: ``` SELECT * FROM dbo.StudentActionCodes S PIVOT ( Count(S.ActionCode) FOR S.ActionCode IN ([51201], [51206], [51207], [51208], [51209]) ) P ; ``` Which gives this result: ``` StudentID 51201 51206 51207 51208 51209 ----------- ----------- ----------- ----------- ----------- ----------- 987650 4 0 0 2 0 987651 1 1 0 0 2 987652 1 2 1 1 1 987653 1 0 0 2 0 987654 1 1 1 1 1 ```
Please try the below query: SQL fiddle link: <http://sqlfiddle.com/#!6/d1dc6/6> It gives sum of same type of actions per student as well as sum of all allowed actions per student ``` SELECT DISTINCT StudentID, Actioncode, COUNT(1) OVER (PARTITION BY StudentID,Actioncode ) as [Count_of_this_action_instance], COUNT(1) OVER (PARTITION BY StudentID ) as [Count_of_all_action_instance] FROM StudentActionCodes WHERE ActionCode IN (51201,51206,51207,51208,51209) ```
TSQL- How to count number of instances
[ "", "sql", "sql-server", "t-sql", "" ]
I have a question about a select in SQL Server. First I'm getting column names from a temp table with the proper type; the query looks like this: ``` select sc.name, st.name from tempdb.sys.columns sc left join sys.types st on sc.system_type_id = st.system_type_id where object_id = object_id('tempdb..#_tmpDocs') and st.name in ('char', 'varchar', 'nvarchar') ``` The result is a list of columns with the types I want, but next I want to select those columns in a different query. So if I save the result from the above query into a temp table named #columns, will it be possible to do something like ``` select (select * from #columns) from target_table ```
Here's a dynamic sql that will do what you need: ``` CREATE TABLE #_tmpDocs (id INT, x CHAR(10), y VARCHAR(100)) DECLARE @cols NVARCHAR(1000) DECLARE @sql NVARCHAR(1000) SELECT @cols = COALESCE(@cols + ',', '') + QUOTENAME(SC.name) FROM tempdb.sys.columns sc LEFT JOIN sys.types st ON sc.system_type_id = st.system_type_id WHERE object_id = OBJECT_ID('tempdb..#_tmpDocs') AND st.name IN ( 'char', 'varchar', 'nvarchar' ) SET @sql = 'select ' + @cols + ' from #_tmpDocs' PRINT @sql EXEC (@sql) EXEC sp_executesql @sql ``` Choose between `EXEC (@sql)` and `EXEC sp_executesql @sql` based on your needs - see [here](http://www.sqlskills.com/blogs/kimberly/exec-and-sp_executesql-how-are-they-different/) to get an idea of what each is doing.
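The same introspect-then-build pattern can be sketched outside SQL Server. The snippet below uses SQLite via Python's `sqlite3`, reading the column catalogue (`PRAGMA table_info` plays the role of `sys.columns`/`sys.types`) and assembling the `SELECT` list as a string; the table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tmp_docs (id INTEGER, x TEXT, y TEXT, z REAL)")
con.execute("INSERT INTO tmp_docs VALUES (1, 'a', 'b', 2.5)")

# introspect the column catalogue and keep only the text-typed columns
cols = [row[1] for row in con.execute("PRAGMA table_info(tmp_docs)")
        if row[2].upper() == 'TEXT']

sql = "SELECT {} FROM tmp_docs".format(", ".join('"{}"'.format(c) for c in cols))
print(sql)                          # SELECT "x", "y" FROM tmp_docs
print(con.execute(sql).fetchall())  # [('a', 'b')]
```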
I don't think you can reference columns dynamically in SQL like that. Just doing a pure sql version, it won't work: ``` declare @x varchar(50) = 'MyColumn'; select @x from dbo.MyTable ``` You'll need to use [dynamic sql](https://msdn.microsoft.com/en-us/library/ms188001.aspx), and build a string: ``` declare @x nvarchar(50) = 'MyColumn'; declare @y nvarchar(250) = 'select ' + @x + ' from dbo.MyTable'; EXECUTE sp_executesql @y ``` So in your case you may want to loop through your return result or otherwise turn it into a string of column names that can be concatenated to a larger query (like above).
Select columns name from subquery in main query
[ "", "sql", "sql-server", "" ]
I have one table transaction with columns (id, amount), all positive integers, and I have another table refund with the same columns, but here amount is expected to be negative. Now I want to use the INSERT INTO command to select from transaction and insert into refund. How can I make the amount negative while inserting? The following is the query that I am trying. ``` INSERT INTO refund (id,amount) SELECT id, amount FROM `transaction` ``` Is there any way `refund.amount` can always take negative values by default?
``` INSERT INTO refund (id,amount) SELECT id, (0 - amount) as amount FROM `transaction` ``` as all data in `transaction` is positive. ``` INSERT INTO refund (id,amount) SELECT id, (-1 * amount) as amount FROM `transaction` ```
If you, for some reason, are not guaranteed you get a positive amount every time you insert a new row, take the absolute value and multiply it by -1 ``` INSERT INTO refund (id,amount) SELECT id, -1 * ABS(amount) FROM `transaction` ``` Otherwise, the `ABS()` part is not needed ``` INSERT INTO refund (id,amount) SELECT id, -1 * amount FROM `transaction` ```
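A quick runnable sketch of the sign-flipping insert, using SQLite via Python's `sqlite3` (the source table is renamed `txn` here to sidestep the reserved word `transaction`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE txn (id INT, amount INT);
CREATE TABLE refund (id INT, amount INT);
INSERT INTO txn VALUES (1, 100), (2, 250);
""")

# ABS() guards against a source row that is already negative
con.execute("INSERT INTO refund (id, amount) SELECT id, -1 * ABS(amount) FROM txn")

refunds = con.execute("SELECT * FROM refund ORDER BY id").fetchall()
print(refunds)  # [(1, -100), (2, -250)]
```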
Mysql: How to convert positive value to negative while inserting
[ "", "mysql", "sql", "" ]
We have a table of days like: ``` ID Date 1 2015-07-29 2 2015-07-30 3 2015-07-31 4 2015-08-01 5 2015-08-03 7 2015-08-04 8 2015-08-05 9 2015-08-06 10 2015-08-07 11 2015-08-10 ``` And we want to find all the sequences (day+1). The result should be something like this: ``` Start End 2015-07-29 2015-08-01 2015-08-03 2015-08-07 2015-08-10 2015-08-10 ``` **1. Update** First I modified Deepanshu Kalra's answer to use RowNumber instead of the Id (Id is autoincrement, so it's possible that ids are missing) ``` DECLARE @P TABLE(DATE DATE) INSERT INTO @P SELECT MIN([DATE]) FROM MietvertragsArtikelDays UNION SELECT T1.[DATE] FROM (SELECT ROW_NUMBER() OVER (ORDER BY [DATE]) RowNumber, [DATE] FROM MietvertragsArtikelDays) AS T1 INNER JOIN (SELECT ROW_NUMBER() OVER (ORDER BY [DATE]) RowNumber, [DATE] FROM MietvertragsArtikelDays) AS T2 ON T1.RowNumber=T2.RowNumber+1 WHERE DATEDIFF(DAY,T2.[DATE],T1.[DATE]) <>1 UNION SELECT T2.[DATE] FROM (SELECT ROW_NUMBER() OVER (ORDER BY [DATE]) RowNumber, [DATE] FROM MietvertragsArtikelDays) AS T1 INNER JOIN (SELECT ROW_NUMBER() OVER (ORDER BY [DATE]) RowNumber, [DATE] FROM MietvertragsArtikelDays) AS T2 ON T1.RowNumber=T2.RowNumber+1 WHERE DATEDIFF(DAY,T2.[DATE],T1.[DATE]) <>1 DECLARE @X TABLE(DATE DATE, RN INT) INSERT INTO @X SELECT *, ROW_NUMBER() OVER(ORDER BY [DATE]) AS X FROM @P SELECT A.[DATE] Start, ISNULL(B.[DATE],A.[DATE]) [End] FROM @X A LEFT JOIN (SELECT [DATE], RN-1 AS RN FROM @X) B ON A.RN=B.RN WHERE A.RN%2=1 ``` **2. Update** The most elegant solution is Ughai's ``` ;WITH CTE as ( SELECT *,DATEDIFF(D,0,[Date]) - ROW_NUMBER()OVER(ORDER BY ID ASC) grp FROM MietvertragsArtikelDays ) SELECT MIN([Date]),MAX([Date]) FROM CTE GROUP BY grp ```
This is an [Islands and Gap problem](https://www.simple-talk.com/sql/t-sql-programming/the-sql-of-gaps-and-islands-in-sequences/). You can use `ROW_NUMBER` and `DATEDIFF`. Something like this. **[SQL Fiddle](http://sqlfiddle.com/#!6/06763/1)** **Sample Data** ``` DECLARE @Dates TABLE ([ID] int, [Date] datetime); INSERT INTO @Dates ([ID], [Date]) VALUES (1, '2015-07-29 00:00:00'), (2, '2015-07-30 00:00:00'), (3, '2015-07-31 00:00:00'), (4, '2015-08-01 00:00:00'), (5, '2015-08-03 00:00:00'), (7, '2015-08-04 00:00:00'), (8, '2015-08-05 00:00:00'), (9, '2015-08-06 00:00:00'), (10, '2015-08-07 00:00:00'), (11, '2015-08-10 00:00:00'); ``` **Query** ``` ;WITH CTE as ( SELECT *,DATEDIFF(D,0,[Date]) - ROW_NUMBER()OVER(ORDER BY ID ASC) grp FROM @Dates ) SELECT MIN([Date]),MAX([Date]) FROM CTE GROUP BY grp ``` **Output** ``` 2015-07-29 00:00:00.000 2015-08-01 00:00:00.000 2015-08-03 00:00:00.000 2015-08-07 00:00:00.000 2015-08-10 00:00:00.000 2015-08-10 00:00:00.000 ```
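The grouping trick (a date's serial number minus `ROW_NUMBER()` is constant within each consecutive run) ports to other engines. Here is a sketch in SQLite via Python's `sqlite3`, where `julianday()` plays the role of `DATEDIFF(D, 0, [Date])`; SQLite 3.25+ is needed for window functions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE days (id INT, d TEXT);
INSERT INTO days VALUES
 (1,'2015-07-29'),(2,'2015-07-30'),(3,'2015-07-31'),(4,'2015-08-01'),
 (5,'2015-08-03'),(7,'2015-08-04'),(8,'2015-08-05'),(9,'2015-08-06'),
 (10,'2015-08-07'),(11,'2015-08-10');
""")
ranges = con.execute("""
    WITH cte AS (
        SELECT d, julianday(d) - ROW_NUMBER() OVER (ORDER BY d) AS grp
        FROM days
    )
    SELECT MIN(d), MAX(d) FROM cte GROUP BY grp ORDER BY 1
""").fetchall()
print(ranges)
# [('2015-07-29', '2015-08-01'), ('2015-08-03', '2015-08-07'), ('2015-08-10', '2015-08-10')]
```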
As I started from an answer which was already posted, maybe I made it very complex. But it works. ``` DECLARE @T TABLE(ID INT, DATE DATE) INSERT INTO @T SELECT 1, '2015-07-29' UNION ALL SELECT 2, '2015-07-30' UNION ALL SELECT 3, '2015-07-31' UNION ALL SELECT 4, '2015-08-01' UNION ALL SELECT 5, '2015-08-03' UNION ALL SELECT 7, '2015-08-04' UNION ALL SELECT 8, '2015-08-05' UNION ALL SELECT 9, '2015-08-06' UNION ALL SELECT 10, '2015-08-07' UNION ALL SELECT 11, '2015-08-10' DECLARE @P TABLE(DATE DATE) INSERT INTO @P SELECT MIN([DATE]) FROM @T UNION SELECT T1.[DATE] FROM @T AS T1 INNER JOIN @T AS T2 ON T1.ID=T2.ID+1 WHERE DATEDIFF(DAY,T2.[DATE],T1.[DATE]) <>1 UNION SELECT T2.[DATE] FROM @T AS T1 INNER JOIN @T AS T2 ON T1.ID=T2.ID+1 WHERE DATEDIFF(DAY,T2.[DATE],T1.[DATE]) <>1 DECLARE @X TABLE(DATE DATE, RN INT) INSERT INTO @X SELECT *, ROW_NUMBER() OVER(ORDER BY [DATE]) AS X FROM @P SELECT A.[DATE], B.[DATE] FROM @X A LEFT JOIN (SELECT [DATE], RN-1 AS RN FROM @X) B ON A.RN=B.RN WHERE A.RN%2=1 ``` Please excuse the standards and all. Will edit later in the day. Sorry for that.
Get sequence of days from days
[ "", "sql", "t-sql", "" ]
I have created a temporary table to hold some values like this: ``` CREATE TABLE #TempCount (PendingOrders INT, OpenOrders INT, ClosedOrders INT); ``` I want to update the columns with a value from a `SELECT` statement as such: ``` UPDATE #TempCount SET PendingOrders = ( SELECT COUNT(OrderID) AS 'PendingOrders' FROM dbo.Products WHERE OrderStatus = 1 ) ``` However nothing seems to get updated. I expected `PendingOrders` to show a number but it doesn't get anything entered in. The column is empty. The same applies to the other columns - nothing gets updated. How could I solve this?
You need a row in order to update it. Do one "initializing" INSERT first: ``` INSERT INTO #TempCount VALUES(0, 0, 0) ```
Seems to me like you need an insert statement first to generate the row. ``` INSERT #TempCount (PendingOrders) SELECT COUNT(OrderID) AS 'PendingOrders' FROM dbo.Products WHERE OrderStatus = 1 ```
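The underlying behaviour is easy to demonstrate: an `UPDATE` against an empty table matches zero rows, so nothing is ever written. A sketch in SQLite via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE temp_count (pending INT, open_orders INT, closed INT)")

# UPDATE against the empty table matches nothing
rows_hit = con.execute("UPDATE temp_count SET pending = 42").rowcount
after_update = con.execute("SELECT * FROM temp_count").fetchall()

# seed one row first, then the same UPDATE has something to change
con.execute("INSERT INTO temp_count VALUES (0, 0, 0)")
con.execute("UPDATE temp_count SET pending = 42")
after_seed = con.execute("SELECT * FROM temp_count").fetchall()

print(rows_hit)      # 0
print(after_update)  # []
print(after_seed)    # [(42, 0, 0)]
```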
How to update a temporary table with a value from a select statement?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table with values like this ``` listid email 1 test@email.com 5 test@email.com 1 test123@email.com ``` From the above example you can see that the same email can show up in the email column with a different `listid`. I want to return emails that have a `listid` of 1 but do not also have a `listid` of 5. In that case the only email returned would be test123@email.com. For now I have ``` SELECT `email`,`listid` FROM `table` WHERE `emailaddress` LIKE '%@email%' AND `listid` != 5 AND `listid`=1 ``` Of course this does not work because test@email does have a listid of 1 too. How can I exclude test@email from the results?
You could use the `exists` operator: ``` SELECT `email`, `listid` FROM `table` t1 WHERE `email` LIKE '%@email%' AND `listid` = 1 AND NOT EXISTS (SELECT * FROM `table` t2 WHERE t1.`email` = t2.`email` AND t2.`listid` = 5) ```
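A runnable check of the `NOT EXISTS` exclusion against the sample data, sketched in SQLite via Python's `sqlite3` (the table is renamed `lists` and the `LIKE` filter is dropped for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lists (listid INT, email TEXT);
INSERT INTO lists VALUES
 (1, 'test@email.com'), (5, 'test@email.com'), (1, 'test123@email.com');
""")
rows = con.execute("""
    SELECT t1.email FROM lists t1
    WHERE t1.listid = 1
      AND NOT EXISTS (SELECT 1 FROM lists t2
                      WHERE t2.email = t1.email AND t2.listid = 5)
""").fetchall()
print(rows)  # [('test123@email.com',)]
```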
You could use a join: ``` SELECT t.email, t.listid FROM `table` t JOIN `table` t2 ON t.email = t2.email AND t2.listid != 5 WHERE t.listid = 1 ```
select where id == value and id != value mysql
[ "", "mysql", "sql", "select", "" ]
Pretty stuck and hoping someone can help I need to use a common table expression to display (select) all the odd Employee\_ID rows first and then all the even Employee\_ID rows from the employee table. This is what I have written so far, definitely not correct. Would be great if someone can help. ``` WITH MYCTE (Employee_ID) AS ( SELECT 1 AS odd FROM Employee UNION ALL SELECT odd + 1 FROM Employee WHERE odd < 10 ) SELECT * FROM MYCTE ORDER BY CASE WHEN odd % 2 = 1 THEN 0 ELSE 1 END ,odd ASC ```
All you need is to add an `ORDER BY` similar to the one below: `order by case when odd%2 = 1 then 0 else 1 end, odd asc` Here is a sample [SQLFiddle](http://sqlfiddle.com/#!9/accc72/2). Added to your code, this would look like: ``` With MYCTE(Employee_ID) As ( Select 1 as odd From Employee_Table UNION ALL Select odd + 1 From Employee_Table ) Select * From MYCTE ORDER BY CASE WHEN Employee_ID%2 = 1 THEN 0 ELSE 1 END, Employee_ID ASC ``` After closer inspection I noticed that your Recursive CTE wasn't built correctly, which is why you were getting constant errors. I re-wrote your CTE and now it should work as expected. Also check out [this SQLFiddle](http://sqlfiddle.com/#!6/27f3d/12) to see the results. ``` with mycte (empID, employee_name, reportsTo) as ( select employee_id , employee_nm , reportsTo from employee where reportsto = 0 union all select e.employee_id , e.employee_nm , e.reportsTo from employee e inner join mycte mc on e.reportsTo = mc.empID ) select * from mycte order by case when empID%2 = 1 then 0 else 1 end, empID ASC ```
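The odd-then-even `ORDER BY` on its own is easy to verify; a sketch in SQLite via Python's `sqlite3` with an invented employee table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (employee_id INT)")
con.executemany("INSERT INTO employee VALUES (?)", [(i,) for i in range(1, 8)])

# odd ids sort into bucket 0, even ids into bucket 1; each bucket ascends
rows = con.execute("""
    SELECT employee_id FROM employee
    ORDER BY CASE WHEN employee_id % 2 = 1 THEN 0 ELSE 1 END, employee_id
""").fetchall()
print([r[0] for r in rows])  # [1, 3, 5, 7, 2, 4, 6]
```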
Try this ``` select t.* from ( Select row_number() over (order by (select 1)) as sno, * From Employee_Table ) as t order by case when sno%2=1 then 1 else 2 end, sno ```
Recursive CTE - Returning odd and even records
[ "", "sql", "common-table-expression", "" ]
Consider a SQL query with the following `WHERE` predicate: ``` ... WHERE name IS NOT NULL ... ``` Where `name` is a textual field in PostgreSQL. No other query checks any textual property of this value, just whether it is `NULL` or not. Therefore, [a full btree index](http://www.postgresql.org/docs/9.3/static/indexes-types.html) seems like an overkill, even though [it supports this distinction](http://www.postgresql.org/docs/9.3/static/indexes-types.html): > Also, an IS NULL or IS NOT NULL condition on an index column can be used with a B-tree index. **What's the right PostgreSQL index to quickly distinguish `NULL`s from non-`NULL`s?**
I'm interpreting your claim that it's "overkill" in two ways: in terms of complexity (using a B-Tree instead of just a list) and space/performance. For complexity, it's not overkill. A B-Tree index is preferable because *deletes* from it will be faster than some kind of "unordered" index (for lack of a better term). (An unordered index would require a full index scan just to delete.) In light of that fact, any gains from an unordered index would usually be outweighed by the detriments, so the development effort isn't justified. For space and performance, though, if you want a highly selective index for efficiency, you can include a `WHERE` clause on an index, as noted in the [fine manual](http://www.postgresql.org/docs/9.3/static/sql-createindex.html): ``` CREATE INDEX ON my_table (name) WHERE name IS NOT NULL; ``` Note that you'll only see benefits from this index if it can allow PostgreSQL to ignore a *large* amount of rows when executing your query. E.g., if 99% of the rows have `name IS NOT NULL`, the index isn't buying you anything over just letting a full table scan happen; in fact, it would be less efficient (as [@CraigRinger](https://stackoverflow.com/users/398670/craig-ringer) notes) since it would require extra disk reads. If however, only 1% of rows have `name IS NOT NULL`, then this represents huge savings as PostgreSQL can ignore most of the table for your query. If your table is very large, even eliminating 50% of the rows might be worth it. This is a tuning problem, and whether the index is valuable is going to depend heavily on the size and distribution of the data. Additionally, there is very little gain in terms of space if you still need another index for the `name IS NULL` rows. See [Craig Ringer's answer](https://stackoverflow.com/a/31966621/1394393) for details.
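Partial indexes exist outside PostgreSQL too: SQLite supports the same `CREATE INDEX ... WHERE` syntax, so the idea can be sketched via Python's `sqlite3` (the table and data are invented; the planner uses the partial index only when the query's `WHERE` clause implies the index's `WHERE` clause):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE INDEX name_notnull ON my_table (name) WHERE name IS NOT NULL")
con.executemany("INSERT INTO my_table (name) VALUES (?)",
                [('a',), (None,), ('b',), (None,)])

not_null = con.execute(
    "SELECT name FROM my_table WHERE name IS NOT NULL").fetchall()
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM my_table WHERE name IS NOT NULL"
).fetchall()

print(not_null)  # [('a',), ('b',)]
print(plan)      # the plan typically names name_notnull as a covering index
```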
You could use an expression index, but you shouldn't. Keep it simple, and use a plain b-tree. --- An expression index can be created on `colname IS NOT NULL`: ``` test=> CREATE TABLE blah(name text); CREATE TABLE test=> CREATE INDEX name_notnull ON blah((name IS NOT NULL)); CREATE INDEX test=> INSERT INTO blah(name) VALUES ('a'),('b'),(NULL); INSERT 0 3 test=> SET enable_seqscan = off; SET craig=> SELECT * FROM blah WHERE name IS NOT NULL; name ------ a b (2 rows) test=> EXPLAIN SELECT * FROM blah WHERE name IS NOT NULL; QUERY PLAN ----------------------------------------------------------------------------- Bitmap Heap Scan on blah (cost=9.39..25.94 rows=1303 width=32) Filter: (name IS NOT NULL) -> Bitmap Index Scan on name_notnull (cost=0.00..9.06 rows=655 width=0) Index Cond: ((name IS NOT NULL) = true) (4 rows) test=> SET enable_bitmapscan = off; SET test=> EXPLAIN SELECT * FROM blah WHERE name IS NOT NULL; QUERY PLAN ------------------------------------------------------------------------------ Index Scan using name_notnull on blah (cost=0.15..55.62 rows=1303 width=32) Index Cond: ((name IS NOT NULL) = true) Filter: (name IS NOT NULL) (3 rows) ``` ... but Pg doesn't realise that it's also usable for `IS NULL`: ``` test=> EXPLAIN SELECT * FROM blah WHERE name IS NULL; QUERY PLAN ------------------------------------------------------------------------- Seq Scan on blah (cost=10000000000.00..10000000023.10 rows=7 width=32) Filter: (name IS NULL) (2 rows) ``` and even transforms `NOT (name IS NOT NULL)` into `name IS NULL`, which is usually what you want. ``` test=> EXPLAIN SELECT * FROM blah WHERE NOT (name IS NOT NULL); QUERY PLAN ------------------------------------------------------------------------- Seq Scan on blah (cost=10000000000.00..10000000023.10 rows=7 width=32) Filter: (name IS NULL) (2 rows) ``` so you're actually better off with two disjoint expression indexes, one on the null and one on the non-null set. 
``` test=> DROP INDEX name_notnull ; DROP INDEX test=> CREATE INDEX name_notnull ON blah((name IS NOT NULL)) WHERE (name IS NOT NULL); CREATE INDEX test=> EXPLAIN SELECT * FROM blah WHERE name IS NOT NULL; QUERY PLAN -------------------------------------------------------------------------- Index Scan using name_notnull on blah (cost=0.13..8.14 rows=3 width=32) Index Cond: ((name IS NOT NULL) = true) (2 rows) test=> CREATE INDEX name_null ON blah((name IS NULL)) WHERE (name IS NULL); CREATE INDEX craig=> EXPLAIN SELECT * FROM blah WHERE name IS NULL; QUERY PLAN ----------------------------------------------------------------------- Index Scan using name_null on blah (cost=0.12..8.14 rows=1 width=32) Index Cond: ((name IS NULL) = true) (2 rows) ``` This is pretty gruesome though. For most sensible uses I'd just use a plain b-tree index. The index size improvement isn't too exciting, at least for small-ish inputs, like the dummy I created with a bunch of md5 values: ``` test=> SELECT pg_size_pretty(pg_relation_size('blah')); pg_size_pretty ---------------- 9416 kB (1 row) test=> SELECT pg_size_pretty(pg_relation_size('blah_name')); pg_size_pretty ---------------- 7984 kB (1 row) test=> SELECT pg_size_pretty(pg_relation_size('name_notnull')); pg_size_pretty ---------------- 2208 kB (1 row) test=> SELECT pg_size_pretty(pg_relation_size('name_null')); pg_size_pretty ---------------- 2208 kB (1 row) ```
PostgreSQL: Create an index to quickly distinguish NULL from non-NULL values
[ "", "sql", "postgresql", "indexing", "null", "" ]
How can I build a SQL query to decompose some periods, for example into months? Database table: ``` id fromdate todate value -------------------------------------------- 100 01.01.2015 01.03.2015 10 ``` desired query result: ``` id fromdate todate value -------------------------------------------- 100 01.01.2015 01.02.2015 5,25 100 01.02.2015 01.03.2015 4,75 ``` where value is based on the days between the 2 dates, for example: ``` value(january) = 31(january nr of days) * 10(original value) / 59(total days) = 5,25 ``` Thank you
For calculations like this you can use `date dimension` - a table that contains all the dates in your domain as single rows (see this for [example](http://www.dwhworld.com/2010/11/date-dimension-sql-scripts-oracle/)). Once you have date dimension in your database things become simple: ``` WITH data_by_date AS ( -- Here we join dates to your periods to turn each row in -- as many rows as there are days in the period. -- We also turn value field into value_per_day. SELECT d.date, d.month_year, t.id, value / (t.todate - t.fromdate) as value_per_day FROM dim_date d INNER JOIN my_table t ON d.date >= t.fromdate AND d.date < t.todate ) SELECT -- Here we group by results by month. dd.id, MIN(dd.date) as fromdate, MAX(dd.date) as todate, SUM(dd.value_per_day) as value FROM data_by_date dd GROUP BY dd.id, dd.month_year ```
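If a date-dimension table isn't available, the same per-day proration can be sketched in plain Python. This is only an illustration of the arithmetic, using the numbers from the question:

```python
from datetime import date, timedelta

def split_by_month(fromdate, todate, value):
    # Half-open interval [fromdate, todate), matching the
    # d.date >= fromdate AND d.date < todate join in the query above.
    total_days = (todate - fromdate).days
    per_day = value / total_days
    parts = {}
    d = fromdate
    while d < todate:
        key = (d.year, d.month)
        parts[key] = parts.get(key, 0.0) + per_day
        d += timedelta(days=1)
    return parts

parts = split_by_month(date(2015, 1, 1), date(2015, 3, 1), 10)
print({k: round(v, 2) for k, v in parts.items()})
# {(2015, 1): 5.25, (2015, 2): 4.75}
```

January carries 31 of the 59 days (31 × 10 / 59 ≈ 5.25) and February the remaining 28 (≈ 4.75), matching the desired result.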
Use a hierarchical query to generate a list of months for each entry: [SQL Fiddle](http://sqlfiddle.com/#!4/045bc/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE TEST (id, fromdate, todate, value ) AS SELECT 100, DATE '2015-01-01', DATE '2015-03-01', 10 FROM DUAL UNION ALL SELECT 200, DATE '2014-12-22', DATE '2015-01-06', 30 FROM DUAL ``` **Query 1**: ``` SELECT ID, fromdate, todate, VALUE * ( todate - fromdate ) / ( maxdate - mindate ) AS value FROM ( SELECT ID, GREATEST( t.fromdate, m.COLUMN_VALUE ) AS fromdate, LEAST( t.todate, ADD_MONTHS( m.COLUMN_VALUE, 1 ) ) AS todate, t.fromdate AS mindate, t.todate AS maxdate, t.value FROM TEST t, TABLE( CAST( MULTISET( SELECT ADD_MONTHS( TRUNC( t.fromdate, 'MM' ), LEVEL - 1 ) FROM DUAL CONNECT BY ADD_MONTHS( TRUNC( t.fromdate, 'MM' ), LEVEL - 1 ) < t.todate ) AS SYS.ODCIDATELIST ) ) m ) ``` **[Results](http://sqlfiddle.com/#!4/045bc/1/0)**: ``` | ID | FROMDATE | TODATE | VALUE | |-----|----------------------------|----------------------------|-------------------| | 100 | January, 01 2015 00:00:00 | February, 01 2015 00:00:00 | 5.254237288135593 | | 100 | February, 01 2015 00:00:00 | March, 01 2015 00:00:00 | 4.745762711864407 | | 200 | December, 22 2014 00:00:00 | January, 01 2015 00:00:00 | 20 | | 200 | January, 01 2015 00:00:00 | January, 06 2015 00:00:00 | 10 | ```
SQL Query building: howto decompose periods of time in different rows
[ "", "sql", "oracle", "relational-database", "" ]
Here's the data I have (note that this is for only one entity id / employee id, there will be multiple. One entity ID can have multiple employee IDs under it): ``` SELECT EntityId, EmployeeId, PayPeriodStart, IsFullTime FROM dbo.Payroll WHERE EmployeeId = 316691 AND PayPeriodStart <= '12/31/2014' AND PayPeriodEnd >= '1/1/2014'; ``` ![](https://i.stack.imgur.com/2n9vy.png) I want to grab the LAST "IsFullTime" value FOR EACH EntityID & EmployeeID combo. I tried doing this: ``` SELECT EntityId, EmployeeId, LAST_VALUE(IsFullTime) OVER (PARTITION BY EntityId, EmployeeId ORDER BY EntityId, EmployeeId, PayPeriodStart) AS LastIsFullTimeValue FROM dbo.Payroll WHERE EmployeeId = 316691 AND PayPeriodStart <= '12/31/2014' AND PayPeriodEnd >= '1/1/2014'; ``` But I'm getting this: ![](https://i.stack.imgur.com/6Hdgi.png) The query should return only ONE row FOR EACH EntityID / EmployeeID. What am I doing wrong?
I believe you are looking to use `ROW_NUMBER()` and get the last value based on the `payperiodstart` date: ``` SELECT t.EntityId ,t.EmployeeId ,t.LastIsFullTimeValue FROM ( SELECT EntityId ,EmployeeId ,ROW_NUMBER() OVER ( PARTITION BY EntityId ,EmployeeId ORDER BY PayPeriodStart DESC ) AS rn ,IsFullTime AS LastIsFullTimeValue FROM dbo.Payroll WHERE EmployeeId = 316691 -- you could probably put this in your outer query instead AND PayPeriodStart <= '12/31/2014' AND PayPeriodEnd >= '1/1/2014' ) t WHERE t.rn = 1; ```
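As a quick sanity check of the `ROW_NUMBER()` approach, here's a minimal sketch using Python's built-in `sqlite3` module (SQLite supports window functions from version 3.25); the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Payroll
    (EntityId INTEGER, EmployeeId INTEGER, PayPeriodStart TEXT, IsFullTime INTEGER)""")
conn.executemany("INSERT INTO Payroll VALUES (?, ?, ?, ?)", [
    (1, 316691, "2014-01-01", 1),
    (1, 316691, "2014-06-01", 0),
    (1, 316691, "2014-12-01", 1),
    (2, 316691, "2014-03-01", 0)])

# rn = 1 marks the newest pay period per (EntityId, EmployeeId) pair.
rows = conn.execute("""
    SELECT EntityId, EmployeeId, IsFullTime
    FROM (SELECT *, ROW_NUMBER() OVER (
              PARTITION BY EntityId, EmployeeId
              ORDER BY PayPeriodStart DESC) AS rn
          FROM Payroll)
    WHERE rn = 1
    ORDER BY EntityId""").fetchall()
print(rows)  # [(1, 316691, 1), (2, 316691, 0)]
```

Each (EntityId, EmployeeId) combination yields exactly one row, taken from its latest pay period.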
Try adding `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` I believe the default window for analytic functions that includes an `ORDER BY` ends at the current row. ``` LAST_VALUE(IsFullTime) OVER ( PARTITION BY EntityId, EmployeeId ORDER BY EntityId, EmployeeId, PayPeriodStart ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ) AS LastIsFullTimeValue ```
Why is LAST_VALUE() not working in SQL Server?
[ "", "sql", "sql-server", "" ]
I have a PostgreSQL column of type text that contains data like shown below ``` (32.85563, -117.25624)(32.855470000000004, -117.25648000000001)(32.85567, -117.25710000000001)(32.85544, -117.2556) (37.75363, -121.44142000000001)(37.75292, -121.4414) ``` I want to convert this into another column of type text like shown below ``` (-117.25624, 32.85563)(-117.25648000000001,32.855470000000004 )(-117.25710000000001,32.85567 )(-117.2556,32.85544 ) (-121.44142000000001,37.75363 )(-121.4414,37.75292 ) ``` As you can see, the values inside the parentheses have switched around. Also note that I have shown two records here to indicate that not all fields have the same number of parenthesized figures. **What I've tried** I tried extracting the column to Java and performing my operations there. But due to the sheer amount of records I have, I will run out of memory. I also cannot do this method in batches due to time constraints. **What I want** A SQL query or a sequence of SQL queries that will achieve the result that I have mentioned above. I am using PostgreSQL 9.4 with pgAdmin III as the client
As I put in the question, *I tried extracting the column to Java and performing my operations there. But due to the sheer amount of records I have, I will run out of memory. I also cannot do this method in batches due to time constraints.* I ran out of memory here as I was putting everything in a HashMap of **< my\_primary\_key, the\_newly\_formatted\_text >**. As the text was sometimes very long, and due to the sheer number of records that I had, it wasn't surprising that I got an OOM. **Solution that I used:** As suggested by many folks here, this problem was better solved with code. I wrote a small script that formatted the text to my liking and wrote the primary key and the newly formatted text to a file in TSV format. Then I imported the TSV into a new table and updated the original table from the new one.
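A minimal sketch of the reformatting step of such a script in Python, which can process rows one at a time instead of holding everything in a map (the regex assumes every value is a well-formed `(a, b)` pair, as in the question's sample):

```python
import re

# One parenthesised "(number, number)" pair.
PAIR = re.compile(r"\(\s*([-\d.]+)\s*,\s*([-\d.]+)\s*\)")

def swap_points(text):
    # Swap the two numbers inside every parenthesised pair.
    return PAIR.sub(lambda m: "({}, {})".format(m.group(2), m.group(1)), text)

src = "(32.85563, -117.25624)(32.85547, -117.25648)"
print(swap_points(src))
# (-117.25624, 32.85563)(-117.25648, 32.85547)
```

Writing `primary_key\tswap_points(value)` lines to a file as you stream through the result set keeps memory usage constant, after which the TSV can be bulk-loaded as described above.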
Assuming you also have a PK or some unique column, and possibly other columns, you can do as follows: ``` SELECT id, (...), string_agg(point(pt[1], pt[0])::text, '') AS col_reversed FROM ( SELECT id, (...), unnest(string_to_array(replace(col, ')(', ');('), ';'))::point AS pt FROM my_table) sub GROUP BY id; -- assuming id is PK or no other columns ``` PostgreSQL has the `point` type which you can use here. First you need to make sure you can properly divide the long string into individual points (insert ';' between the parentheses), then turn that into an array of individual points in text format, unnest the array into individual rows, and finally cast those rows to the `point` data type: ``` unnest(string_to_array(replace(col, ')(', ');('), ';'))::point AS pt ``` You can then create a new point from the point you just created, but with the coordinates reversed, turn that into a string and aggregate into your desired output: ``` string_agg(point(pt[1], pt[0])::text, '') AS col_reversed ``` But you might also move away from the text format and make an array of `point` values as that will be easier and faster to work with: ``` array_agg(point(pt[1], pt[0])) AS pt_reversed ```
Split multiple points in text format and switch coordinates in postgres column
[ "", "sql", "postgresql", "replace", "" ]
I have this query ``` SELECT s.account_number, a.id AS 'ASPIRION ID', a.patient_first_name, a.patient_last_name, s.admission_date, s.total_charge, astat.name AS 'STATUS', astat.definition, latest_note.content AS 'LAST NOTE', a.insurance_company FROM accounts a INNER JOIN services s ON a.id = s.account_id INNER JOIN facilities f ON f.id = a.facility_id INNER JOIN account_statuses astat ON astat.id = a.account_status_id INNER JOIN (SELECT account_id, MAX(content) content, MAX(created) FROM notes GROUP BY account_id) latest_note ON latest_note.account_id = a.id WHERE a.facility_id = 56 ``` My problem comes from ``` (SELECT account_id, MAX(content) content, MAX(created) FROM notes GROUP BY account_id) ``` **Content** is a varchar field and I am needed to get the most recent record. I now understand that MAX will not work on a varchar field the way that I want it. I am not sure how to be able to get the corresponding content with the **MAX** **id** and **group that by account id** on in this join. What would be the best way to do this? My notes table looks like this... ``` id account_id content created 1 1 This is a test 2011-03-16 02:06:40 2 1 More test 2012-03-16 02:06:40 ```
Here are two choices. If your `content` is not very long and doesn't have funky characters, you can use the `substring_index()`/`group_concat()` trick: ``` (SELECT account_id, SUBSTRING_INDEX(GROUP_CONCAT(content ORDER BY created DESC SEPARATOR '|'), '|', 1) as content FROM notes GROUP BY account_id ) latest_note ON latest_note.account_id = a.id ``` Given the names of the columns and tables, that is likely not to work. Then you need an additional join or a correlated subquery in the `select` clause. I think that might be easiest in this case: ``` select . . ., (select n.content from notes n where n.account_id = a.id order by created desc limit 1 ) as latest_note from . . . ``` The advantage to this method is that it only gets the notes for the rows you need. And, you don't need a `left join` to keep all the rows. For performance, you want an index on `notes(account_id, created)`.
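The correlated-subquery variant is easy to verify with Python's `sqlite3` module, since the same construct works there. The first two notes come from the question's sample; the second account is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notes (id INTEGER PRIMARY KEY, account_id INTEGER,
                    content TEXT, created TEXT);
INSERT INTO notes (account_id, content, created) VALUES
    (1, 'This is a test', '2011-03-16 02:06:40'),
    (1, 'More test',      '2012-03-16 02:06:40'),
    (2, 'First note',     '2013-01-01 00:00:00');
""")

# For each account, the scalar subquery picks the newest note's content.
rows = conn.execute("""
    SELECT a.account_id,
           (SELECT n.content
            FROM notes n
            WHERE n.account_id = a.account_id
            ORDER BY n.created DESC
            LIMIT 1) AS latest_note
    FROM (SELECT DISTINCT account_id FROM notes) a
    ORDER BY a.account_id""").fetchall()
print(rows)  # [(1, 'More test'), (2, 'First note')]
```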
``` SELECT s.account_number, a.id AS 'ASPIRION ID', a.patient_first_name, a.patient_last_name, s.admission_date, s.total_charge, astat.name AS 'STATUS', astat.definition, latest_note.content AS 'LAST NOTE', a.insurance_company FROM accounts a INNER JOIN services s ON a.id = s.account_id INNER JOIN facilities f ON f.id = a.facility_id INNER JOIN account_statuses astat ON astat.id = a.account_status_id INNER JOIN (SELECT account_id, MAX(created) mxcreated FROM notes GROUP BY account_id) latest_note ON latest_note.account_id = a.id and latest_note.mxcreated = --datetime column from any of the other tables being used WHERE a.facility_id = 56 ``` You have to `join` on the `max(created)` which would give the latest content. Or you can change the query to ``` SELECT account_id, content, MAX(created) mxcreated FROM notes GROUP BY account_id ``` as `mysql` allows you even if you don't include all non-aggregated columns in `group by` clause. However, unless you `join` on the max date you wouldn't get the correct results.
Getting max record on varchar field
[ "", "mysql", "sql", "join", "max", "" ]
I have a flat table which I have to join to my main table using the EAN attribute, and then update gid (the id of my main table). ``` id attrib value gid 1 weight 10 NULL 1 ean 123123123112 NULL 1 color blue NULL 2 weight 5 NULL 2 ean 331231313123 NULL ``` I was trying to pivot the EAN rows into a column and then join both tables on EAN, and so far everything works great. ``` --update SideTable --set gid = ab_id select gid, ab_id from SideTable pivot (max (value) for attrib in ([EAN],[MPN])) as b join MainTable as c on c.ab_ean = b.EAN where b.EAN !='' AND c.ab_archive = '0' ``` When I just select, both id columns look fine, but when I uncomment the first lines and remove the select, the whole table gets set to the first gid from my main table. It should set my main table's id on all attribute rows whose ID belongs to a matching EAN. I hope someone can help me with that.
The reason your update does not work is that you don't have any link between your source and target for the update, although you reference `sidetable` in the `FROM` clause, this is effectively destroyed by the `PIVOT` function, leaving no link back to the instance of `SideTable` that you are updating. Since there is no link, all rows are updated with the same value, this will be the last value encountered in the `FROM`. This can be demonstrated by running the following: ``` DECLARE @S TABLE (ID INT, Attrib VARCHAR(50), Value VARCHAR(50), gid INT); INSERT @S VALUES (1, 'weight', '10', NULL), (1, 'ean', '123123123112', NULL), (1, 'color', 'blue', NULL), (2, 'weight', '5', NULL), (2, 'ean', '331231313123', NULL); SELECT s.* FROM @S AS s PIVOT (MAX(Value) FOR attrib IN ([EAN],[MPN])) AS pvt; ``` You clearly have a table aliased `s` in the `FROM` clause, however because you have used pivot you cannot use `SELECT s*`, you get the following error: > The column prefix 's' does not match with a table name or alias name used in the query. You haven't provided sample data for your main table, but I am about 95% certain your PIVOT is not needed, I think you can get your update using just normal `JOIN`s: ``` UPDATE s SET gid = ab_id FROM SideTable AS s INNER JOIN SideTable AS ean ON ean.ID = s.ID AND ean.attrib = 'ean' INNER JOIN MainTable AS m ON m.ab_EAN = ean.Value WHERE m.ab_archive = '0' AND m.ab_EAN != ''; ```
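Here's a hedged sketch of the same linkage using Python's `sqlite3` module. SQLite doesn't support SQL Server's `UPDATE ... FROM` join form (in older versions at least), so this restates it as a correlated subquery; the `MainTable` ids are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SideTable (id INTEGER, attrib TEXT, value TEXT, gid INTEGER);
CREATE TABLE MainTable (ab_id INTEGER, ab_ean TEXT, ab_archive TEXT);
INSERT INTO SideTable VALUES
    (1, 'weight', '10',           NULL),
    (1, 'ean',    '123123123112', NULL),
    (1, 'color',  'blue',         NULL),
    (2, 'weight', '5',            NULL),
    (2, 'ean',    '331231313123', NULL);
INSERT INTO MainTable VALUES (7, '123123123112', '0'), (9, '331231313123', '0');
""")

# For every SideTable row, look up the main-table id whose EAN matches
# the 'ean' attribute row sharing this row's id.
conn.execute("""
    UPDATE SideTable
    SET gid = (SELECT m.ab_id
               FROM SideTable ean
               JOIN MainTable m ON m.ab_ean = ean.value
               WHERE ean.id = SideTable.id
                 AND ean.attrib = 'ean'
                 AND m.ab_archive = '0')""")

rows = conn.execute(
    "SELECT DISTINCT id, gid FROM SideTable ORDER BY id").fetchall()
print(rows)  # [(1, 7), (2, 9)]
```

All attribute rows of a given id receive the same gid, which is the behaviour the self-join form above achieves in SQL Server.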
Try breaking it down a bit more, like this: ``` update SideTable set SideTable.gid = p.ab_id FROM ( select gid, ab_id, b.EAN from SideTable pivot (max (value) for attrib in ([EAN],[MPN])) as b join MainTable as c on c.ab_ean = b.EAN where b.EAN !='' AND c.ab_archive = '0' ) p WHERE p.EAN = SideTable.EAN ```
How to UPDATE pivoted table in SQL SERVER
[ "", "sql", "sql-server", "t-sql", "pivot", "" ]
I have this stored procedure: ``` create procedure [dbo].[GetCarsByDates] (@startDate date, @returnDate date) as SELECT ManufacturerName, ModelName, CreationYear, Gear, CurrentKM, Picture, DATEDIFF(D, @startDate, @returnDate) * PricePerDay AS [Totalprice], PricePerDay, PricePerDayDelayed, InventoryCars.LicensePlate FROM Models JOIN Manufacturers ON Models.ManufacturerID = Manufacturers.ManufacturerID JOIN InventoryCars ON InventoryCars.ModelID = Models.ModelID JOIN CarsForRent ON CarsForRent.LicensePlate = InventoryCars.LicensePlate WHERE CarsForRent.RentalStartDate < @startDate AND CarsForRent.RentalReturnDate < @returnDate AND CarsForRent.RentalReturnDate < @startDate ORDER BY ManufacturerName, ModelName ``` I want to be able to select the properties by start and return dates. The user's input start date must be greater than the return date, and that is exactly what I did, but it is still not working properly. The problem is that I get result rows for items that are not available. What is the problem with my where clause?
I feel your query should be written like below. I assumed that you need to query all cars available from `startDate` to `returnDate` and need to check based on `CarsForRent` table's columns `CarsForRent.RentalStartDate` and `CarsForRent.RentalReturnDate` ``` create procedure [dbo].[GetCarsByDates]@startDate date, @returnDate date as BEGIN select DISTINCT ManufacturerName, ModelName, CreationYear,Gear, CurrentKM, Picture, DATEDIFF(D, @startDate, @returnDate)*PricePerDay as[Totalprice], PricePerDay,PricePerDayDelayed, InventoryCars.LicensePlate from Models join Manufacturers on Models.ManufacturerID=Manufacturers.ManufacturerID join InventoryCars on InventoryCars.ModelID=Models.ModelID join CarsForRent on CarsForRent.LicensePlate=InventoryCars.LicensePlate where @startDate > CarsForRent.RentalReturnDate AND CarsForRent.RentalReturnDate >CarsForRent.RentalStartDate AND @startDate<=@returnDate order by ManufacturerName, ModelName END ``` if you do not need to check that `returndate` is >`startdate` remove this line from `where` clause : ``` AND @startDate<=@returnDate ``` I've created a sample fiddle Please add values to it in left side and play with the use cases <http://sqlfiddle.com/#!6/00236/1>
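The underlying interval logic (a car is available only when no existing rental overlaps the requested window) can be sketched in plain Python; the dates here are invented for illustration:

```python
from datetime import date

def is_available(bookings, start, ret):
    # Two closed date ranges overlap unless one ends before the other starts,
    # so the car is free only if every booking misses [start, ret] entirely.
    return all(b_end < start or b_start > ret for b_start, b_end in bookings)

bookings = [(date(2015, 4, 1), date(2015, 4, 10))]
print(is_available(bookings, date(2015, 4, 12), date(2015, 4, 15)))  # True
print(is_available(bookings, date(2015, 4, 8),  date(2015, 4, 12)))  # False
```

The SQL `WHERE` clause above implements the `b_end < start` half of this test per row; the original query's mixed comparisons let overlapping rentals slip through.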
If you want the user to only set startdate to values greater than returndate, you can do this with an IF block before the actual query. For example: ``` create procedure [dbo].[GetCarsByDates](@startDate date, @returnDate date) as if @startDate <= @returnDate OR @startDate IS NULL OR @returnDate IS NULL BEGIN /* maybe do some stuff here */ RAISERROR('Errormessage', 11, 1); RETURN; END /* here comes your query without having to check the correctness of the input values */ ```
SQL Server - Dates, Greater than and Less than
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table with products like this: ``` PRODUCT_ID SUB_PRODUCT_ID DESCRIPTION CLASS ---------- -------------- ----------- ----- A001 ACC1 coffeemaker A A002 ACC1 toaster A A003 ACC2 coffee table A A004 ACC5 couch A ``` I need to replicate a row changing only first characters in some columns to get a result like this in the same table, for example if I take the first row I want to copy `PRODUCT_ID` changing only the first letter (from A001 to B001), `SUB_PRODUCT_ID` changing only the first letter (from ACC1 to BCC1), keep `DESCRIPTION` and change `CLASS` (from A to B) ``` PRODUCT_ID SUB_PRODUCT_ID DESCRIPTION CLASS ---------- -------------- ----------- ----- A001 ACC1 coffeemaker A B001 BCC1 coffeemaker B ``` I have been trying using SUBSTR function without positive results.
is this what you want? ``` insert into products(PRODUCT_ID, SUB_PRODUCT_ID, DESCRIPTION, CLASS) select replace(product_id, 'A', 'B'), replace(SUB_PRODUCT_ID, 'A', 'B'), DESCRIPTION, replace(CLASS, 'A', 'B') from products where class = 'A'; ``` EDIT: `replace()` might be overkill: ``` insert into products(PRODUCT_ID, SUB_PRODUCT_ID, DESCRIPTION, CLASS) select 'B' || substr(product_id, 2), 'B' || substr(SUB_PRODUCT_ID, 2), DESCRIPTION, 'B' from products where class = 'A'; ```
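The second, `substr`-based variant is easy to check with Python's `sqlite3` module, since SQLite shares Oracle's `substr` function and `||` concatenation (sample rows taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (PRODUCT_ID TEXT, SUB_PRODUCT_ID TEXT,
                       DESCRIPTION TEXT, CLASS TEXT);
INSERT INTO products VALUES
    ('A001', 'ACC1', 'coffeemaker', 'A'),
    ('A003', 'ACC2', 'coffee table', 'A');
""")

# Copy every class-A row, rewriting only the leading character.
conn.execute("""
    INSERT INTO products (PRODUCT_ID, SUB_PRODUCT_ID, DESCRIPTION, CLASS)
    SELECT 'B' || substr(PRODUCT_ID, 2),
           'B' || substr(SUB_PRODUCT_ID, 2),
           DESCRIPTION,
           'B'
    FROM products
    WHERE CLASS = 'A'""")

rows = conn.execute(
    "SELECT * FROM products WHERE CLASS = 'B' ORDER BY PRODUCT_ID").fetchall()
print(rows)
# [('B001', 'BCC1', 'coffeemaker', 'B'), ('B003', 'BCC2', 'coffee table', 'B')]
```

Unlike the `REPLACE` form, this never touches an 'A' that happens to appear later in the string.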
Hi, you can do a little bit of code twisting; here's the output :) ``` SELECT 'A001' PID, 'ACC1' SPID, 'coffeemaker' dsc, 'A' clss FROM dual UNION SELECT REPLACE(a.pid, SUBSTR(a.pid, 1, 1), 'B') PID, REPLACE(a.spid, SUBSTR(a.spid, 1, 1), 'B') SPID, a.dsc, REPLACE(a.clss, SUBSTR(a.clss, 1, 1), 'B') CLSS FROM (SELECT 'A001' PID, 'ACC1' SPID, 'coffeemaker' dsc, 'A' clss FROM dual) a; ```
PLSQL - How to replicate a row changing columns first characters?
[ "", "sql", "plsql", "substr", "replicate", "" ]
Is there a way to create a multi column index across tables? For example, if I had the following tables: ``` Foo (Table Name) FooID (PK) BarID (FK) FooName Bar (Table Name) BarID (PK) BarName ``` I can do a ``` SELECT * FROM Foo LEFT JOIN Bar ON Foo.BarID = Bar.BarID WHERE FooName < "Bob" AND BarName > "Smith"; ``` In this case, I want a multi column index against `Foo.FooName then Bar.BarName`. I did some research but wasn't able to find anything, perhaps I'm not using the right terms. My question may depend on the SQL engine, in which case I'm interested in MySQL specifically, but I am interested in any other engines as well. Doing the multi column index on Foo with the Foreign Key doesn't help, as the underlying value of its Name is what I want for the speed.
Came across my own post years later and figured I could add some details, in case others have similar issues. As Mark B pointed out, each index is per table, however we can set things up that make this efficent, see below. There are a couple of different things going on here, so can use indexes to help accomplish what we need. We need an index to help filter the main table, then an index that works well for the join and filter of the 2nd table. To help accomplish this, we can create the following 2 indexes: ``` CREATE INDEX idx_fooname ON Foo (FooName); CREATE INDEX idx_barid_barname ON Bar (BarID, BarName); ``` Once those indexes are in place, a query can be used like: ``` SELECT * FROM Foo USE INDEX(idx_fooname) LEFT JOIN Bar USE INDEX (idx_barid_barname) ON Foo.BarID = Bar.BarID WHERE FooName < "Bob" AND BarName > "Smith"; ```
Smells like "over-normalization". Might it be worth moving those two fields into the same table? Akiban was an Engine that could do cross-table `JOINs`, etc. But it no longer exists. "Materialized Views" do not exist in MySQL (unless you implement them yourself).
SQL way to index across tables?
[ "", "mysql", "sql", "indexing", "" ]
I have the following piece of a SELECT query inside a stored procedure i'm developing: ``` AND ( CASE S.SWITCH WHEN 'A' THEN P.TEST = T.OPTION_1 WHEN 'C' THEN P.TEST = T.OPTION_1 + T.OPTION_2 WHEN 'G' THEN P.TEST = T.OPTION_3 WHEN 'N' THEN TRUE ELSE FALSE END ) ``` I'm getting an `Incorrect syntax near '='` error. Why would it complain about the first equals sign? It's a Sybase server if anyone is interested.
Your case comparison should be something like the one below if you are testing the `P.TEST` value based on the `S.SWITCH` case. ``` AND ( P.TEST = CASE WHEN S.SWITCH = 'A' THEN T.OPTION_1 WHEN S.SWITCH = 'C' THEN T.OPTION_1 + T.OPTION_2 WHEN S.SWITCH = 'G' THEN T.OPTION_3 WHEN S.SWITCH = 'N' THEN TRUE ELSE FALSE END ) ``` If you are comparing based on both P.TEST and S.SWITCH, you can do either of the following 1. Blorgbeard already provided this answer ``` AND ( (S.SWITCH = 'A' AND P.TEST = T.OPTION_1) OR (S.SWITCH = 'C' AND P.TEST = T.OPTION_1 + T.OPTION_2) OR (S.SWITCH = 'G' AND P.TEST = T.OPTION_3) OR (S.SWITCH = 'N') ) ``` 2. If you want to make a case statement work for this, the following could be a possible solution. ``` AND ( 1 = CASE WHEN S.SWITCH = 'A' AND P.TEST = T.OPTION_1 THEN 1 WHEN S.SWITCH = 'C' AND P.TEST = T.OPTION_1 + T.OPTION_2 THEN 1 WHEN S.SWITCH = 'G' AND P.TEST = T.OPTION_3 THEN 1 WHEN S.SWITCH = 'N' THEN 1 ELSE 0 END ) ```
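As a quick truth-table check, the OR-of-ANDs form of the predicate can be restated in plain Python (the values below are invented):

```python
def switch_matches(switch, test, opt1, opt2, opt3):
    # Plain-Python restatement of the OR-of-ANDs predicate: each switch
    # letter selects which option P.TEST must equal; 'N' always passes.
    return ((switch == 'A' and test == opt1)
            or (switch == 'C' and test == opt1 + opt2)
            or (switch == 'G' and test == opt3)
            or switch == 'N')

print(switch_matches('C', 30, 10, 20, 99))  # True  (30 == 10 + 20)
print(switch_matches('G', 30, 10, 20, 99))  # False (30 != 99)
print(switch_matches('N', 0, 1, 2, 3))      # True  (always)
```

This is exactly why SQL rejects the original query: SQL's `CASE` yields a value, not a boolean predicate, so the condition has to be expressed as comparisons combined with `AND`/`OR`.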
Boolean expressions don't work like that in SQL. You can reformulate your switch like this: ``` AND ( (S.SWITCH = 'A' AND P.TEST = T.OPTION_1) OR (S.SWITCH = 'C' AND P.TEST = T.OPTION_1 + T.OPTION_2) OR (S.SWITCH = 'G' AND P.TEST = T.OPTION_3) OR (S.SWITCH = 'N') ) ```
Sybase proc throws "Incorrect syntax near '='"
[ "", "sql", "syntax", "syntax-error", "sybase", "" ]
I have two tables. [![table 1 data](https://i.stack.imgur.com/0aM2d.png)](https://i.stack.imgur.com/0aM2d.png) Table two's data is: [![table 2](https://i.stack.imgur.com/mHH5J.png)](https://i.stack.imgur.com/mHH5J.png) And I want to show the result just like this: [![table 3](https://i.stack.imgur.com/T8l6a.png)](https://i.stack.imgur.com/T8l6a.png) I have to make a table out of two other tables: one table with columns `word` and `serial`, and another with columns `meaning` and `serial`. Now what I have to do is put this result (data) into `table3` (a table I already have, with the same columns as table1 and table2). I've tried a lot but can't get it working; please help. I hope this isn't too confusing of a question.
``` select a.serial, b.word, a.word as meaning from tbl1 a join tbl2 b on a.serial = b.serial ```
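Wired into Python's built-in `sqlite3` module with some invented rows, the join looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (serial INTEGER, word TEXT);  -- holds the meanings
CREATE TABLE tbl2 (serial INTEGER, word TEXT);  -- holds the words
INSERT INTO tbl1 VALUES (1, 'a greeting'), (2, 'a farewell');
INSERT INTO tbl2 VALUES (1, 'hello'), (2, 'goodbye');
""")

# Pair each word with its meaning through the shared serial.
rows = conn.execute("""
    SELECT a.serial, b.word, a.word AS meaning
    FROM tbl1 a JOIN tbl2 b ON a.serial = b.serial
    ORDER BY a.serial""").fetchall()
print(rows)  # [(1, 'hello', 'a greeting'), (2, 'goodbye', 'a farewell')]
```

Prefixing the same SELECT with `INSERT INTO table3 (serial, word, meaning)` would populate the target table, as in the other answer.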
This is the SQL query you're looking for; details are provided in the link below. ``` INSERT INTO table3 (serial, word, meaning) SELECT t1.serial, t2.word, t1.word FROM table1 AS t1 JOIN table2 AS t2 ON t1.serial = t2.serial ``` Click here: [w3schools SQL tutorial](http://www.w3schools.com/sql/sql_insert_into_select.asp)
joining two table in sqlite
[ "", "android", "mysql", "sql", "sqlite", "" ]
I want to select all rows from my table where value of one column begins with value of another column. ``` Table column_a column_b abc abcdef pqr rstrv xyz xyzabc uvw abcdef ``` I want to get ``` pqr rstrv uvw abcdef ``` I'm using the query ``` SELECT * FROM table_name WHERE column_b NOT LIKE column_a + '%' ``` I'm getting the error > 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '+ '%' LIMIT 0, 30' at line 1 I've tried solutions from these questions [Compare Columns Where One is Similar to Part of Another](https://stackoverflow.com/questions/4700550/compare-columns-where-one-is-similar-to-part-of-another) [Oracle: LIKE where any part of one string matches amy part of another string](https://stackoverflow.com/questions/25079431/oracle-like-where-any-part-of-one-string-matches-amy-part-of-another-string) [SQL search column where one item in column is substring of another item](https://stackoverflow.com/questions/25628400/sql-search-column-where-one-item-in-column-is-substring-of-another-item) [SQL search column where one item in column is substring of another item Update](https://stackoverflow.com/questions/25747107/sql-search-column-where-one-item-in-column-is-substring-of-another-item-update). But still resulting in error or wrong results My mysql version is Server version: 5.5.44-0ubuntu0.14.04.1 (Ubuntu). I'm using phpMyAdmin to execute queries.
You have the right idea, but MySQL doesn't support the `+` operator for string concatenation, only for mathematical addition. Instead, just use the [`concat`](https://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_concat) function: ``` SELECT * FROM table_name WHERE column_b NOT LIKE CONCAT(column_a, '%') ```
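A quick way to check the logic is Python's `sqlite3` module (note SQLite concatenates with `||` where MySQL needs `CONCAT()`), using the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pairs (column_a TEXT, column_b TEXT)")
conn.executemany("INSERT INTO pairs VALUES (?, ?)", [
    ("abc", "abcdef"),
    ("pqr", "rstrv"),
    ("xyz", "xyzabc"),
    ("uvw", "abcdef")])

# Keep only rows where column_b does NOT start with column_a.
rows = conn.execute("""
    SELECT column_a, column_b
    FROM pairs
    WHERE column_b NOT LIKE column_a || '%'""").fetchall()
print(rows)  # [('pqr', 'rstrv'), ('uvw', 'abcdef')]
```

The two surviving rows match the expected result in the question.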
In MySQL, you should use `CONCAT()` instead of `+`: ``` select * from TableName where column_b not like CONCAT(column_a,'%') ``` In SQL Server: ``` select * from TableName where column_b not like column_a+'%' ```
select where one column part of another
[ "", "mysql", "sql", "string", "select", "sql-like", "" ]
I have a table with many rows that have the same date. I want to query the max (latest) data but if the dates are the same, only return one. It doesn't matter which record but I could use the latest (GID). I currently have this to get the latest date: ``` SELECT ProjectNum,DateSaved,FilePath ,GID FROM dbo.master_m_table AS [alldata] WHERE DateSaved = ( SELECT MAX(DateSaved) FROM dbo.master_m_table WHERE FilePath = [alldata].FilePath) ``` But if there are two identical dates then it will return both. Edit: to better explain my question, here is the data: ``` ProjectNum DateSaved FilePath GID 12345 01.01.2015 a 1 12345 01.01.2015 a 2 12345 01.01.2015 a 88 12345 01.01.2015 c 104 12345 25.01.2015 c 101 12345 25.01.2015 c 68 ``` ...and I want the latest date unless the dates are the same and then I want the max GID. example result: ``` ProjectNum DateSaved FilePath GID 12345 01.01.2015 a 88 12345 25.01.2015 c 101 ```
Looks like you're looking for this: ``` SELECT ProjectNum,DateSaved,FilePath,GID FROM ( select ProjectNum,DateSaved,FilePath,GID, row_number() over (partition by FilePath order by DateSaved desc, GID desc) as RN from dbo.master_m_table ) X where RN = 1 ``` `ROW_NUMBER()` numbers the rows for each FilePath separately, ordered first by DateSaved and then by GID descending; the outer part then picks only the rows with number 1. You might also need ProjectNum in the partition by part. Example in [SQL Fiddle](http://sqlfiddle.com/#!3/a629a/1)
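Replaying the question's sample data through this pattern with Python's `sqlite3` module (SQLite 3.25+ for window functions) gives the expected two rows; dates are stored in ISO format so that text ordering matches date ordering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE master_m_table
    (ProjectNum TEXT, DateSaved TEXT, FilePath TEXT, GID INTEGER)""")
conn.executemany("INSERT INTO master_m_table VALUES (?, ?, ?, ?)", [
    ("12345", "2015-01-01", "a", 1),
    ("12345", "2015-01-01", "a", 2),
    ("12345", "2015-01-01", "a", 88),
    ("12345", "2015-01-01", "c", 104),
    ("12345", "2015-01-25", "c", 101),
    ("12345", "2015-01-25", "c", 68)])

# Latest date per FilePath; GID DESC breaks ties between identical dates.
rows = conn.execute("""
    SELECT ProjectNum, DateSaved, FilePath, GID
    FROM (SELECT *, ROW_NUMBER() OVER (
              PARTITION BY FilePath
              ORDER BY DateSaved DESC, GID DESC) AS rn
          FROM master_m_table)
    WHERE rn = 1
    ORDER BY FilePath""").fetchall()
print(rows)
# [('12345', '2015-01-01', 'a', 88), ('12345', '2015-01-25', 'c', 101)]
```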
Just use `top` and `order by`: ``` select top 1 t.* from dbo.master_m_table t order by t.datesaved desc; ``` If you wanted all of them, you can use `with ties`. I'm not sure what you mean by "the latest GUID". You can add more columns to the `order by` to handle ties.
SELECT MAX DATE when records have identical dates
[ "", "sql", "sql-server", "max", "" ]
I want to select information from two SQL tables within one query, the information is unrelated though, so no potential joins exist. An example could be the following setup. tblMadrid ``` id | name | games | goals 1 | ronaldo | 100 | 100 2 | benzema | 50 | 25 3 | bale | 75 | 50 4 | kroos | 80 | 10 ``` tblBarcelona ``` id | name | games | goals 1 | neymar | 60 | 25 2 | messi | 150 | 200 3 | suarez | 80 | 80 4 | iniesta | 40 | 5 ``` I want to have a query that gives me the following: ``` name | games | goals messi | 150 | 200 ronaldo | 100 | 100 ``` I tried to follow this logic: [Multiple select statements in Single query](https://stackoverflow.com/questions/1775168/multiple-select-statements-in-single-query) but the following code did not work: ``` USE Liga_BBVA SELECT (SELECT name, games, goals FROM tblMadrid WHERE name = 'ronaldo') AS table_a, (SELECT name, games, goals FROM tblBarcelona WHERE name = 'messi') AS table_b ORDER BY goals ``` Any advice on this one? Thanks Info: The football stuff is just a simplifying example. In reality it is not possible to put both tables into one and have a new "team" column. The two tables have completely different structures, but I need something that matches the characteristics of this example.
You can do something like this: ``` (SELECT name, games, goals FROM tblMadrid WHERE name = 'ronaldo') UNION (SELECT name, games, goals FROM tblBarcelona WHERE name = 'messi') ORDER BY goals; ``` See, for example: <https://dev.mysql.com/doc/refman/5.0/en/union.html>
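The same UNION runs unchanged on SQLite, so it's easy to verify with Python; here `ORDER BY goals DESC` is used so the output matches the sample in the question, which lists messi first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblMadrid    (id INTEGER, name TEXT, games INTEGER, goals INTEGER);
CREATE TABLE tblBarcelona (id INTEGER, name TEXT, games INTEGER, goals INTEGER);
INSERT INTO tblMadrid    VALUES (1, 'ronaldo', 100, 100), (2, 'benzema', 50, 25);
INSERT INTO tblBarcelona VALUES (1, 'neymar', 60, 25), (2, 'messi', 150, 200);
""")

# UNION stacks the two unrelated result sets into one; the ORDER BY
# applies to the combined result.
rows = conn.execute("""
    SELECT name, games, goals FROM tblMadrid    WHERE name = 'ronaldo'
    UNION
    SELECT name, games, goals FROM tblBarcelona WHERE name = 'messi'
    ORDER BY goals DESC""").fetchall()
print(rows)  # [('messi', 150, 200), ('ronaldo', 100, 100)]
```

The only requirement is that both SELECTs produce the same number of columns with compatible types.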
If you'd like to keep the records separate and not do the union, try the query below: ``` SELECT (SELECT name, games, goals FROM tblMadrid WHERE name = 'ronaldo') AS table_a, (SELECT name, games, goals FROM tblBarcelona WHERE name = 'messi') AS table_b FROM DUAL ```
SQL: Two select statements in one query
[ "", "sql", "select", "" ]
For Access 2013, I need a way to round up any fractional numbers to the next whole number in an SQL query. Example: ``` SELECT ROUNDUP(NumberValues) FROM Table1 ``` In the above query, a row with 1.25 should return as 2 after querying. As far as I know, there's no ROUNDUP function in Access 2013 for use in a SQL query statement.
I found a ROUNDUP equivalent from this link: <http://allenbrowne.com/round.html#RoundUp> > To round upwards towards the next highest number, take advantage of the way Int() rounds negative numbers downwards, like this: > `-Int(-[MyField])` > > As shown above, Int(-2.1) rounds down to -3. Therefore this expression rounds 2.1 up to 3. > > To round up to the higher cent, multiply by -100, round, and divide by -100: > `Int(-100 * [MyField]) / -100` The syntax is counter-intuitive, but it works exactly as I intended.
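The `Int(-x)` trick rests on the identity ceil(x) = -floor(-x), which is easy to check outside Access (plain Python here, no Access involved):

```python
import math

def round_up(x):
    # Access's Int() truncates toward negative infinity, like floor();
    # negating before and after turns that into a round-up: ceil(x) == -floor(-x)
    return -math.floor(-x)

print(round_up(2.1), round_up(1.25), round_up(-2.1))  # 3 2 -2
```

Note that it also does the right thing for negatives and for values that are already whole (`round_up(3.0)` stays 3).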
I have found the easiest way to round up a number in access is to use the round function like this: Round([MyField]+0.4,0) The number 10.1, for example then becomes 10.5. When the round function is applied, it rounds up to 11. If the number is 10.9, adding 0.4 becomes 11.3, which rounds to 11.
How to ROUNDUP a number in Access 2013?
[ "", "sql", "rounding", "ms-access-2013", "" ]
I'm trying to filter some records by date but the datetime field seems to be stored as varchar and I'm struggling to convert it. Below is a sample of the data: ``` ID DateField 0002 14/04/1989 01:30 0003 16/04/1989 09:45 0004 16/04/1989 06:00 0005 19/04/1989 01:07 0006 21/04/1989 16:03 ``` When I use ``` cast(Datefield as datetime) ``` I get the following error message: ``` Msg 241, Level 16, State 1, Line 1 Conversion failed when converting date and/or time from character string. ``` Edit - this does actually display results in datetime format but with the error message, why is that? What's the best way to convert my datefield to valid datetime format? Thanks
Use CONVERT with a style: ``` CONVERT(DATETIME, Datefield, 103) ``` To find the bad data, see what this returns: ``` set dateformat dmy select Datefield from table where isdate(Datefield)=0 ``` You should always use a proper DATETIME datatype to store datetime values.
Try this. You need to add style part to identify the format ``` SELECT CONVERT(DATETIME, dates, 103) FROM (VALUES ('14/04/1989 01:30'), ('16/04/1989 09:45'), ('16/04/1989 06:00'), ('19/04/1989 01:07'), ('21/04/1989 16:03')) cs (dates) ```
convert varchar to datetime field
[ "", "sql", "sql-server", "datetime", "" ]
I have a temp table in SQL server like below ``` ╔═══════╦═════════╦══════╦═════════╗ ║Serial ║ Account ║ Due ║ Balance ║ ║1 ║ A1 ║ 1000 ║ 3100 ║ ║2 ║ A1 ║ 1000 ║ 3100 ║ ║3 ║ A1 ║ 1000 ║ 3100 ║ ║4 ║ A1 ║ 1000 ║ 3100 ║ ║1 ║ A2 ║ 100 ║ 3100 ║ ║2 ║ A2 ║ 100 ║ 3100 ║ ║1 ║ B1 ║ 1000 ║ 1100 ║ ║2 ║ B1 ║ 1000 ║ 1100 ║ ║1 ║ B2 ║ 100 ║ 1100 ║ ║2 ║ B2 ║ 100 ║ 1100 ║ ╚═══════╩═════════╩══════╩═════════╝ ``` I want to identify the rows which due would be collected. A1 and A2 Due will be collected from 3100 and B1 and B2 due will be collected from 1100. Firstly I have used cumulative Due as following ``` ╔═══════╔═════════╦══════╦════════════╦═════════╦ ║Serial ║ Account ║ Due ║ Cumulative ║ Balance ║ ║1 ║ A1 ║ 1000 ║ 1000 ║ 3100 ║ ║2 ║ A1 ║ 1000 ║ 2000 ║ 3100 ║ ║3 ║ A1 ║ 1000 ║ 3000 ║ 3100 ║ ║4 ║ A1 ║ 1000 ║ 4000 ║ 3100 ║ ║1 ║ A2 ║ 100 ║ 100 ║ 3100 ║ ║2 ║ A2 ║ 100 ║ 200 ║ 3100 ║ ║1 ║ B1 ║ 1000 ║ 1000 ║ 1100 ║ ║2 ║ B1 ║ 1000 ║ 2000 ║ 1100 ║ ║1 ║ B2 ║ 100 ║ 100 ║ 1100 ║ ║2 ║ B2 ║ 100 ║ 200 ║ 1100 ║ ╚═══════╚═════════╩══════╩════════════╩═════════╝ ``` Now I want to select following rows as output ``` ╔═══════╔═════════╦══════╦════════════╦═════════╦ ║Serial ║ Account ║ Due ║ Cumulative ║ Balance ║ ║1 ║ A1 ║ 1000 ║ 1000 ║ 3100 ║ ║2 ║ A1 ║ 1000 ║ 2000 ║ 3100 ║ ║3 ║ A1 ║ 1000 ║ 3000 ║ 3100 ║ ║1 ║ A2 ║ 100 ║ 100 ║ 3100 ║ ║1 ║ B1 ║ 1000 ║ 1000 ║ 1100 ║ ║1 ║ B2 ║ 100 ║ 100 ║ 1100 ║ ╚═══════╚═════════╩══════╩════════════╩═════════╩ ``` Here is where I am stuck. How can I select those rows without using cursor or loop. All I want to do this with select statement and window functions. Thanks. **Possible Solution**: If the table can be updated as following then the problem would be solved. 
``` ╔═══════╦═════════╦══════╦═══════════════════╗ ║Serial ║ Account ║ Due ║ Balance Remaining ║ ║1 ║ A1 ║ 1000 ║ 3100 ║ ║2 ║ A1 ║ 1000 ║ 2100 ║ ║3 ║ A1 ║ 1000 ║ 1100 ║ ║4 ║ A1 ║ 1000 ║ 100 ║ ║1 ║ A2 ║ 100 ║ 100 ║ ║2 ║ A2 ║ 100 ║ 0 ║ ║1 ║ B1 ║ 1000 ║ 1100 ║ ║2 ║ B1 ║ 1000 ║ 100 ║ ║1 ║ B2 ║ 100 ║ 100 ║ ║2 ║ B2 ║ 100 ║ 0 ║ ╚═══════╩═════════╩══════╩═══════════════════╝ ``` In the cases where **Balance Remaining** is equal to or greater than **Due**, we update it with the difference; otherwise it remains as before. The problem here is to update rows by partitioning between A & B. **UPDATE** I am providing a link with a new data set to express my requirement more clearly. [new dataset](http://pastebin.com/mmbnCWb9)
At last, solved this with update query. ``` UPDATE A SET A.Balance = @Balance , @PreBalance = @Balance , @Balance = ( CASE WHEN (@Balance IS NULL OR @AccountType <> A.AccountType) THEN CASE WHEN A.Balance - A.Due >= 0 THEN A.Balance ELSE A.Balance + A.Due END ELSE CASE WHEN @Balance - A.Due >= 0 AND (@Flag = 1 OR @AccountNO <> A.AccountNO) THEN @Balance ELSE @Balance + A.Due END END) - A.Due , A.FLAG = @Flag , @AccountNO = CASE WHEN A.Flag = 0 THEN A.AccountNO ELSE 'NoDueFoundForAcc' END , @Flag = CASE WHEN @AccountType = A.AccountType THEN CASE WHEN @PreBalance = @Balance THEN 0 ELSE 1 END ELSE CASE WHEN A.Balance - A.Due >= 0 THEN 1 ELSE 0 END END , @AccountType = A.AccountType FROM #tempTable A SELECT * FROM #tempTable A WHERE A.Flag = 1 ```
Very simple: ``` select * from account where (Balance-(Select sum(ac.Due) from account ac where ac.SerialNo<=account.SerialNo and ac.Account =account.Account )>0) ``` **Update** There is no relation between A1 and A2 saying that the balance 3100 is to be shared between A1 and A2 and not with B1, so you have to specify somewhere that A1 and A2 are in the same group. Here is a suggested option for you: add a group-number column to your table and give the same number to A1 and A2, and another number to B1 and B2. Then add a Priority column specifying that A1 should deduct its due first; if any balance is left, A2 gets a chance. Then the query will be ``` SELECT * FROM account WHERE ( Balance - ( SELECT SUM(ac.Due) FROM account ac WHERE ( ac.GroupNo = account.GroupNo AND ( ( ac.Account = account.Account AND ( ac.SerialNo <= account.SerialNo ) ) OR ac.Priority < account.Priority ) ) ) > 0 ) ```
Complex select query for SQL Server
[ "", "sql", "sql-server", "" ]
I'm trying to do with following with an SQL query in Impala. I've got a single data table that has (among other things) two columns with values that intersect multiple times. For example, let's say we have a table with two columns for related names and phone numbers: `Names Phone Numbers John Smith (123) 456-7890 Rob Johnson (123) 456-7890 Greg Jackson (123) 456-7890 Tom Green (123) 456-7890 Jack Mathis (123) 456-7890 John Smith (234) 567-8901 Rob Johnson (234) 567-8901 Joe Wolf (234) 567-8901 Mike Thomas (234) 567-8901 Jim Moore (234) 567-8901 John Smith (345) 678-9012 Rob Johnson (345) 678-9012 Toby Ellis (345) 678-9012 Sam Wharton (345) 678-9012 Bob Thompson (345) 678-9012 John Smith (456) 789-0123 Rob Johnson (456) 789-0123 Kelly Howe (456) 789-0123 Hank Rehms (456) 789-0123 Jim Fellows (456) 789-0123` What I need to get from this table is a selection of each item from the Name column that has multiple entries from the Phone Numbers column associated with it, like this: `Names Phone Numbers John Smith (123) 456-7890 John Smith (234) 567-8901 John Smith (345) 678-9012 John Smith (456) 789-0123 Rob Johnson (123) 456-7890 Rob Johnson (234) 567-8901 Rob Johnson (345) 678-9012 Rob Johnson (456) 789-0123` This is the query I've got so far, but it's not quite giving me the results I'm looking for: ``` SELECT a.name, a.phone_number, b.phone_number, b.count1 FROM databasename a INNER JOIN ( SELECT phone_number, COUNT(phone_number) as count1 FROM databasename GROUP BY phone_number ) b ON a.phone_number = b.phone_number; ``` Any ideas on how to improve my query to get the results I'm looking for? Thank you.
Working with your query: this generates a subset, by name, of users having more than one phone number; it then joins back to the entire set on name, returning all phone numbers for users having more than one phone number. However, if a user has the same phone number listed more than once, it would get returned; to eliminate those if needed, add DISTINCT to the count in the inline view. ``` SELECT a.name, a.phone_number FROM databasename a INNER JOIN ( SELECT name, COUNT(phone_number) as count1 FROM databasename GROUP BY name having COUNT(phone_number) > 1 ) b on a.name = b.name Order by a.name, a.phone_Number ```
One method is to use exists: ``` select t.* from tablename t where exists (select 1 from tablename t2 where t2.name = t.name and t2.phonenumber <> t.phonenumber) ```
SQL Query: How to select multiple instances of a single item without collapsing into a group?
[ "", "sql", "hadoop", "impala", "" ]
I have a table with a large amount of data and I need to get some information with only one query. Content of `PROCESSDATA` table: ``` PROCESSID | FIELDTIME | FIELDNAME | FIELDVALUE ------------------------------------------------------------------------- 125869 | 10/08/15 10:43:47,139000000 | IDREQUEST | 1236968702 125869 | 10/08/15 10:45:14,168000000 | state | Corrected 125869 | 10/08/15 10:43:10,698000000 | state | Pending 125869 | 10/08/15 10:45:15,193000000 | MsgReq | correctly updated ``` I need to get this result: ``` 125869 IDREQUEST 1236968702 state Corrected MsgReq correctly updated ``` So I made this kind of query: ``` SELECT PROCESSID, MAX(CASE WHEN FIELDNAME = 'IDREQUEST' THEN FIELDVALUE END) AS IDREQUEST, MAX(CASE WHEN FIELDNAME = 'state' THEN FIELDVALUE END) AS state, MAX(CASE WHEN FIELDNAME = 'MsgReq' THEN FIELDVALUE END) AS MsgReq FROM PROCESSDATA WHERE FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq') GROUP BY PROCESSID, FIELDNAME; ``` But I don't get exactly what I want: ``` 125869 IDREQUEST 1236968702 state Pending MsgReq correctly updated ``` I need to get the `FIELDVALUE` of a `FIELDNAME` based on `FIELDTIME`. In this example `FIELDNAME = 'state'` has two values, `'Pending'` and `'Corrected'`, so I want to get `'Corrected'` because its `FIELDTIME` 10/08/15 10:45:14,168000000 > 10/08/15 10:43:10,698000000.
Use `MAX( ... ) KEEP ( DENSE_RANK FIRST ORDER BY ... )` to get the maximum of a column based on the maximum of another column: [SQL Fiddle](http://sqlfiddle.com/#!4/473c3f/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE PROCESSDATA ( PROCESSID, FIELDTIME, FIELDNAME, FIELDVALUE ) AS SELECT 125869, TO_TIMESTAMP( '10/08/15 10:43:47,139000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'IDREQUEST', '1236968702' FROM DUAL UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:45:14,168000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'state', 'Corrected' FROM DUAL UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:43:10,698000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'state', 'Pending' FROM DUAL UNION ALL SELECT 125869, TO_TIMESTAMP( '10/08/15 10:45:15,193000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'MsgReq', 'correctly updated' FROM DUAL UNION ALL SELECT 125870, TO_TIMESTAMP( '10/08/15 10:43:47,139000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'IDREQUEST', '1236968702' FROM DUAL UNION ALL SELECT 125870, TO_TIMESTAMP( '10/08/15 10:45:15,193000000', 'DD/MM/YY HH24:MI:SS,FF9' ), 'MsgReq', 'correctly updated' FROM DUAL ``` **Query 1**: ``` SELECT PROCESSID, MAX( CASE FIELDNAME WHEN 'IDREQUEST' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'IDREQUEST' THEN FIELDTIME END DESC NULLS LAST ) AS IDREQUEST, MAX( CASE FIELDNAME WHEN 'state' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'state' THEN FIELDTIME END DESC NULLS LAST ) AS state, MAX( CASE FIELDNAME WHEN 'MsgReq' THEN FIELDVALUE END ) KEEP ( DENSE_RANK FIRST ORDER BY CASE FIELDNAME WHEN 'MsgReq' THEN FIELDTIME END DESC NULLS LAST ) AS MsgReq FROM PROCESSDATA GROUP BY PROCESSID ``` **[Results](http://sqlfiddle.com/#!4/473c3f/1/0)**: ``` | PROCESSID | IDREQUEST | STATE | MSGREQ | |-----------|------------|-----------|-------------------| | 125869 | 1236968702 | Corrected | correctly updated | | 125870 | 1236968702 | (null) | correctly updated | ```
Try this ``` select t1.PROCESSID, t1.FIELDTIME,t1.FIELDNAME,t1.FIELDVALUE, t1.STATE,t1.IDREQUEST from PROCESSDATA as t1 inner join ( select PROCESSID, max(FIELDTIME) as FIELDTIME from PROCESSDATA where FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq') group by PROCESSID ) as t2 on t1.PROCESSID=t2.PROCESSID and t1.FIELDTIME=t2.FIELDTIME where t1.FIELDNAME IN ('IDREQUEST', 'state', 'MsgReq') ```
Getting a value based on an another within a group by
[ "", "sql", "oracle", "group-by", "" ]
Table : Post ``` id user_id group_id Text 1 1 1 AAA 2 2 2 BBB 3 2 2 CCC 4 3 2 DDD 5 2 3 EEE ``` Table : UserMaster ``` user_id username 1 Raj 2 Meet 3 Kishan ``` Table : Groups ``` group_id groupname 1 JAVA 2 PHP 3 SEO ``` Table : JoinRequest ``` id user_id group_id status 1 1 3 deactive 2 2 2 active 3 2 1 active 4 1 3 active ``` My Query Here: ``` //Help me improve my query to get the desired result. select * FROM Post where status = active in JoinRequest table ``` and user\_id='2' Expected Result : ``` id username Groupname Text 1 Raj JAVA AAA 2 Meet PHP BBB 3 Meet PHP CCC 4 Kishan PHP DDD ```
``` select p.id, u.username, g.groupname, p.text from posts p join userMaster u on u.user_id = p.user_id join groups g on g.group_id = p.group_id join joinRequest r on r.user_id = p.user_id and r.group_id = p.group_id where r.status = 'active' ```
``` Select p.id, um.username, g.Groupname, p.Text From Post as p JOIN UserMaster as um ON p.user_id = um.user_id JOIN Groups as g ON p.group_id = g.group_id JOIN JoinRequest as jr ON p.user_id = jr.user_id AND p.group_id = jr.group_id WHERE jr.status = 'active' ```
fetch records from four different table using join query
[ "", "sql", "sql-server", "database", "" ]
I have a table with an `identity` column in a server and have a other table with same structure in another server.. Now I want to copy all data from one table to other table but I can't help it... I have already created a linked server.. I use this: ``` insert into [server].[database].[dbo].[table1] select * from table2 ``` I also use this query without identity column in the place of `*` ``` insert into [server].[database].[dbo].[table1] select column1, column2 from table2 ``` What should I do ?
If you want to insert into a second table that also has an identity column, then you need to **explicitly define** the list of columns you're inserting into and omit the `identity` column: ``` insert into [server].[database].[dbo].[table1] (col1, col2) select column1, column2 from table2 ``` This way, SQL Server can insert the identity values in the target table as it should. **Update:** two scenarios: **(1)** you want to insert the existing values from the identity column from the old table into the new one - in that case, you need to use `SET IDENTITY_INSERT ON/OFF` in your query: ``` SET IDENTITY_INSERT [192.168.1.6].[audit].[dbo].[tmpDTTransfer] ON INSERT INTO [192.168.1.6].[audit].[dbo].[tmpDTTransfer] (id, code, transfer1) SELECT id, code, transfer1 FROM tmpDTTransfer SET IDENTITY_INSERT [192.168.1.6].[audit].[dbo].[tmpDTTransfer] OFF ``` **(2)** if you don't want to insert the existing identity values, but just the other columns, and let SQL Server assign new identity values in the target table, then you **don't need to use** `SET IDENTITY_INSERT ON/OFF` in your query: ``` INSERT INTO [192.168.1.6].[audit].[dbo].[tmpDTTransfer] (code, transfer1) SELECT code, transfer1 FROM tmpDTTransfer ``` But in any case, you should **always explicitly define** the list of columns to insert into, in your target table. **DO NOT USE:** ``` INSERT INTO [192.168.1.6].[audit].[dbo].[tmpDTTransfer] ....... ``` But instead use ``` INSERT INTO [192.168.1.6].[audit].[dbo].[tmpDTTransfer] (Code, Transfer1) ....... ``` or ``` INSERT INTO [192.168.1.6].[audit].[dbo].[tmpDTTransfer] (Id, Code, Transfer1) ....... ``` or whatever you need. Be **explicit** about what you want to insert into!
Set [IDENTITY\_INSERT](https://msdn.microsoft.com/en-us/library/ms188059(v=sql.120).aspx) for table: ``` SET IDENTITY_INSERT [server].[database].[dbo].[table1] ON insert into [server].[database].[dbo].[table1] select column1,column2 from table2 SET IDENTITY_INSERT [server].[database].[dbo].[table1] OFF ```
How to copy data in identity column?
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have data like this: ``` CustId Frequency Date ------ --------- ----- C1 2 20-Jan-2015 C2 3 22-Feb-2015 ``` I have to get the output like below: ``` C1 20-Jan-2015 C1 20-Jan-2015 C2 22-Feb-2015 C2 22-Feb-2015 C2 22-Feb-2015 ``` Can anyone please help me with this query? Thanks in advance.
You can use a correlated hierarchical query to generate extra rows (without having to create an extra table, and without having to ensure that your frequencies do not exceed the range of its numbers): ``` SELECT CustID, "Date" FROM CustomerFrequencies c, TABLE( CAST( MULTISET( SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= c.Frequency ) AS SYS.ODCINUMBERLIST ) ) ```
You need a numbers table. There are various ways to generate one. Here is an explicit method: ``` with numbers(n) as ( select 1 from dual union all select 2 from dual union all select 3 from dual ) select d.CustId, d.date from data d join numbers n on n.n <= d.frequency; ``` If you want to generate a bunch of numbers, a typical method is: ``` with numbers(n) as ( select level as n from dual connect by level <= 365 ) ```
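The numbers-table expansion can be tried with SQLite from Python, where a recursive CTE plays the role of Oracle's `connect by` (the `Date` column is renamed `Dt` here purely for the sketch, and the data is the question's sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cust (CustId TEXT, Frequency INTEGER, Dt TEXT);
    INSERT INTO cust VALUES ('C1', 2, '20-Jan-2015');
    INSERT INTO cust VALUES ('C2', 3, '22-Feb-2015');
""")
# The recursive CTE builds the numbers 1..10; joining on n <= Frequency
# repeats each customer row Frequency times
rows = conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM numbers WHERE n < 10
    )
    SELECT c.CustId, c.Dt
    FROM cust c JOIN numbers ON numbers.n <= c.Frequency
    ORDER BY c.CustId, numbers.n
""").fetchall()
print(rows)  # C1 twice, C2 three times
```

The upper bound of the numbers CTE (10 here) must be at least the largest frequency in the table.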
Splitting Aggregated counts into rows
[ "", "sql", "oracle", "" ]
I'm a beginner in SQL Server querying. I have been assigned a task where I need to self-join a table. [![enter image description here](https://i.stack.imgur.com/YM8LW.png)](https://i.stack.imgur.com/YM8LW.png) Above is the table structure, and I need the result as below. I have tried using self joins, subqueries, etc., but I couldn't get the result. ``` ReqStatusId ReqStatus ChildId ChildReqStatus 1 Open 2 On Hold 1 Open 3 Closed 2 On Hold 1 Open 2 On Hold 3 Closed 3 Closed 1 Open 3 Closed 2 On Hold ``` The result should come out so that each row in the table is joined with all other rows.
What you are trying to get is achieved through a `cross join`. If you select the table twice you would get the desired result. ``` select a.reqstatusid, a.reqstatus, b.reqstatusid as childreqstatusid, b.reqstatus as childreqstatus from table a, table b where a.reqstatusid <> b.reqstatusid ```
use [*`CROSS JOIN`*](https://technet.microsoft.com/en-us/library/ms190690(v=sql.105).aspx), Which gives you the Cartesian product between two tables ``` Select * From YourTable A CROSS JOIN YourTable B Where A.ReqStatusId <> B.ReqStatusId ```
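The self-pairing via `CROSS JOIN` can be checked with SQLite from Python (three sample status rows invented to match the question's expected output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ReqStatus (ReqStatusId INTEGER, ReqStatus TEXT);
    INSERT INTO ReqStatus VALUES (1, 'Open'), (2, 'On Hold'), (3, 'Closed');
""")
# Cartesian product of the table with itself, minus the self-pairs
pairs = conn.execute("""
    SELECT a.ReqStatusId, a.ReqStatus, b.ReqStatusId, b.ReqStatus
    FROM ReqStatus a CROSS JOIN ReqStatus b
    WHERE a.ReqStatusId <> b.ReqStatusId
    ORDER BY a.ReqStatusId, b.ReqStatusId
""").fetchall()
print(pairs)  # 3*3 - 3 = 6 ordered pairs
```

With n rows the filtered product yields n² − n rows, matching the six rows in the question's expected result.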
Joining a sql table rows with all other rows of same table
[ "", "sql", "sql-server", "" ]
I have a table called `cars`. Cars has a one-to-many relation to the `tires` table. Tires has a column called `condition` which can either be "flat" or "pumped". I want to write a query that counts how many cars have no pumped tires left, meaning that ALL tires have to be flat in order for the car to be counted. How do I do that? I was trying something like: ``` SELECT COUNT(*) FROM "cars" left join tires on cars.id = tires.car_id WHERE "tires"."condition" = 'flat' AND "tires"."id" IS NULL ``` But that does not seem to be exactly right... Here is a data sample. The result should of course be a count of `1`. Only the car with id 3 has no pumped tires. ``` # Cars id brand model 1 Audi A4 2 BMW X5 3 Ford Mondeo # Tires id car_id condition 1 1 flat 2 1 pumped 3 1 pumped 4 1 flat 5 2 pumped 6 2 pumped 7 2 pumped 8 2 pumped 9 3 flat 10 3 flat 11 3 flat 12 3 flat ``` **EDIT** `condition` and `car_id` are indexed and there are millions of rows - so it needs to be performant
Ordinarily I would just: ``` SELECT COUNT(*) FROM cars where not exists (select null from tires where tires.car_id = cars.id and tires.condition = 'pumped') ``` If, however, you had an extremely small proportion of cars for which any tire at all was 'flat', then I would consider filtering the cars down first to a candidate list for which to check for no pumped tires. ``` SELECT COUNT(*) FROM cars where id in (select distinct car_id from tires where tires.condition = 'flat') and not exists (select null from tires where tires.car_id = cars.id and tires.condition = 'pumped') ``` You'd want indexes on (condition, car\_id) *and* (car\_id, condition) for the latter. Another method which might be of interest is: ``` select count(*) from ( select 0 from tires group by car_id having max(condition) = 'flat') t ```
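The `not exists` approach is easy to sanity-check with SQLite from Python and a cut-down version of the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cars  (id INTEGER, brand TEXT);
    CREATE TABLE tires (id INTEGER, car_id INTEGER, condition TEXT);
    INSERT INTO cars VALUES (1, 'Audi'), (2, 'BMW'), (3, 'Ford');
    INSERT INTO tires VALUES
        (1, 1, 'flat'), (2, 1, 'pumped'),
        (3, 2, 'pumped'), (4, 2, 'pumped'),
        (5, 3, 'flat'), (6, 3, 'flat');
""")
# A car counts when no pumped tire exists for it
(all_flat,) = conn.execute("""
    SELECT COUNT(*) FROM cars
    WHERE NOT EXISTS (SELECT 1 FROM tires
                      WHERE tires.car_id = cars.id
                        AND tires.condition = 'pumped')
""").fetchone()
print(all_flat)  # only the Ford has no pumped tire
```

Note that a car with no tire rows at all would also satisfy `not exists`; add an `exists` check against a flat tire if that should be excluded.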
Count all cars that don't have any pumped tires: ``` select count(*) from cars c where not exists (select 1 from tires t where c.id = t.car_id and t.condition = 'pumped') ```
SQL: count all where has no relation with certain condition
[ "", "sql", "postgresql", "" ]
I have a data model like the following which is simplified to show you only this problem (**SQL Fiddle** Link at the bottom): [![enter image description here](https://i.stack.imgur.com/M96QV.png)](https://i.stack.imgur.com/M96QV.png) A person is represented in the database as a meta table row with a name and with multiple attributes which are stored in the data table as key-value pair (key and value are in separate columns). **Expected Result** Now I would like to retrieve all users with all their attributes. The attributes should be returned as json object in a separate column. For example: ``` name, data Florian, { "age":23, "color":"blue" } Markus, { "age":24, "color":"green" } ``` **My Approach** Now my problem is, that I couldn't find a way to create a key-value pair in postgres. I tried following: ``` SELECT name, array_to_json(array_agg(row(d.key, d.value))) AS data FROM meta AS m JOIN ( SELECT d.fk_id, d.key, d.value AS value FROM data AS d ) AS d ON d.fk_id = m.id GROUP BY m.name; ``` But it returns this as data column: ``` [{"f1":"age","f2":24},{"f1":"color","f2":"blue"}] ``` **Other Solutions** I know there is the function `crosstab` which enables me to turn the data table into a key as column and value as row table. But this is not dynamic. And I don't know how many attributes a person has in the data table. So this is not an option. I could also create a json like string with the two row values and aggregate them. But maybe there is a *nicer* solution. And no, it is not possible to change the data-model because the real data model is already in use of multiple parties. **SQLFiddle** Check and test out the fiddle i've created for this question: <http://sqlfiddle.com/#!15/bd579/14>
Use the aggregate function `json_object_agg(key, value)`: ``` select name, json_object_agg(key, value) as data from data join meta on fk_id = id group by 1; ``` [Db<>Fiddle.](https://dbfiddle.uk/?rdbms=postgres_9.5&fiddle=349615c77f193e81f428c254bfd3bc06) The function was introduced in [Postgres 9.4](https://www.postgresql.org/docs/9.4/functions-aggregate.html).
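`json_object_agg` is Postgres-specific, but the shape of the aggregation — fold key/value rows into one JSON object per name — can be sketched in plain Python (the sample rows here are invented to match the question's expected output):

```python
import json
from collections import defaultdict

# (name, key, value) rows as the meta/data join would return them
rows = [
    ("Florian", "age", 23), ("Florian", "color", "blue"),
    ("Markus", "age", 24), ("Markus", "color", "green"),
]
grouped = defaultdict(dict)
for name, key, value in rows:
    grouped[name][key] = value          # the "aggregate" step
data_json = {name: json.dumps(attrs) for name, attrs in grouped.items()}
print(data_json["Florian"])
```

Each name ends up with one JSON object holding all of its key/value attributes, mirroring the `group by name` plus `json_object_agg(key, value)` query.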
I found a way to return crosstab data with dynamic columns. Maybe rewriting this will be better to suit your needs: ``` CREATE OR REPLACE FUNCTION report.usp_pivot_query_amount_generate( i_group_id INT[], i_start_date TIMESTAMPTZ, i_end_date TIMESTAMPTZ, i_interval INT ) RETURNS TABLE ( tab TEXT ) AS $ab$ DECLARE _key_id TEXT; _text_op TEXT = ''; _ret TEXT; BEGIN -- SELECT DISTNICT for query results FOR _key_id IN SELECT DISTINCT at_name FROM report.company_data_date cd JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id JOIN report.amount_types at ON cda.amount_type_id = at.id WHERE date_start BETWEEN i_start_date AND i_end_date AND group_id = ANY (i_group_id) AND interval_type_id = i_interval LOOP -- build function_call with datatype of column IF char_length(_text_op) > 1 THEN _text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)'; ELSE _text_op := _text_op || _key_id || ' NUMERIC(20,2)'; END IF; END LOOP; -- build query with parameter filters _ret = ' SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct FROM report.company_data_date cd JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id JOIN report.amount_types at ON cda.amount_type_id = at.id WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$ AND interval_type_id = ' || i_interval::TEXT || ' AND group_id = ANY (ARRAY[' || array_to_string(i_group_id, ',') || ']) ORDER BY date_start'') AS ct (date_start timestamptz, ' || _text_op || ')'; RETURN QUERY SELECT _ret; END; $ab$ LANGUAGE 'plpgsql'; ``` Call the function to get the string, then execute the string. I think I tried executing it in the function, but it didn't work well.
How to convert two rows into key-value json object in postgresql?
[ "", "sql", "json", "postgresql", "aggregate-functions", "key-value", "" ]
I need help getting a certain outcome from sql. **users table:** ``` userid | name -------|----------- 1 | Bob Robert 2 | Steve Smith 3 | Willard Henegar 4 | Max Rockwell 5 | Marion Paley 6 | Marcus Delisle ``` **orders table:** ``` orderid | userid --------|------- 1 |1 2 |1 3 |1 4 |2 5 |3 6 |2 7 |4 8 |4 9 |5 10 |4 11 |4 12 |1 ``` **wanted outcome** ``` numOrders | numPeople ----------|---------- 0 |1 1 |2 2 |1 3 |0 4 |2 ``` So basically I want to show how many people have no orders, how many people have 1 order, 2 orders, etc. I tried to figure it out and came up with this: ``` SELECT Count(orders.orderid) AS numOrders, Count(users.userid) AS numPeople FROM users LEFT JOIN orders ON users.userid = orders.userid GROUP BY numOrders ``` but this gives me an error that I can't group on numOrders. Any suggestions on how I could achieve this?
If you also want Users without any orders, you can do the following: ``` select NumOrders, count(*) as NumPeople from ( select u.UserId, count(o.UserId) as NumOrders from users u left join orders o on o.UserId = u.UserId group by u.UserId ) t group by t.NumOrders ``` You could also do: ``` select NumOrders, count(*) as NumPeople from ( select u.UserId, (select count(*) from orders o where o.UserId = u.UserId) as NumOrders from users u ) t group by t.NumOrders ``` As Gordon mentioned, having a number table would help. If you know that no one will have more than say, 1,000 orders, and you want to show one row for each number 0 - 1000, you could do something like this: ``` select number as NumPeople, isnull(p.NumOrders, 0) as NumOrders from master..spt_values v left join ( select t.NumOrders, count(*) as NumPeople from ( select u.UserId, (select count(*) from orders o where o.UserId = u.UserId) as NumOrders from users u) t group by NumOrders ) p on p.NumOrders = v.number where type = 'P' and number <= 1000 ``` The above would work for SQL Server. I'm not sure if MySQL has a master values table, but otherwise, you could easily create a number table and swap that out.
You want a double histogram. This is the easiest way: ``` select cnt, count(*), min(userid), max(userid) from (select u.userid, count(o.userid) as cnt from users u left join orders o on u.userid = o.userid group by u.userid ) u group by cnt; ``` This will not, however, give you the row with 0 count. That requires a bit more effort. It helps if you have a `numbers` table to fill in the missing values.
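The inner-query/outer-query double histogram can be tried with SQLite from Python (a sketch with a few made-up rows; names follow the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (userid INTEGER, name TEXT);
    CREATE TABLE orders (orderid INTEGER, userid INTEGER);
    INSERT INTO users  VALUES (1, 'Bob'), (2, 'Steve'), (3, 'Willard');
    INSERT INTO orders VALUES (1, 1), (2, 1), (3, 2);
""")
# Inner query: orders per user (COUNT(o.userid) gives 0 for no orders);
# outer query: users per order count
histogram = conn.execute("""
    SELECT NumOrders, COUNT(*) AS NumPeople
    FROM (SELECT u.userid, COUNT(o.userid) AS NumOrders
          FROM users u
          LEFT JOIN orders o ON u.userid = o.userid
          GROUP BY u.userid) t
    GROUP BY NumOrders
    ORDER BY NumOrders
""").fetchall()
print(histogram)  # Willard shows up in the 0-orders bucket
```

As the answer notes, counts with zero *people* (e.g. "3 orders: 0 people") still need a numbers table to appear.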
SQL count number of people by number of orders
[ "", "mysql", "sql", "" ]
I realize this has already been posted somewhere but due to keyword density I cannot find it. I currently have the following query: ``` SELECT COUNT(t2.*), t1.t2id, t1.c2 FROM t1 LEFT JOIN t2 ON t1.t2id = t2.t2id GROUP BY t1.t2id ``` Note `t2id` is a `nchar(256)` and t1.c2 is an `ntext`. So basically I want to count how many t2's there are for each value record in t1 and get some value from t1. Now the problem is that when I execute this in c# I GET the error > An unhandled exception of type 'System.Data.SqlClient.SqlException' > occurred in System.Data.dll > > Additional information: Column 't1.c2' is invalid in the select list > because it is not contained in either an aggregate function or the > GROUP BY clause. But shouldn't the `GROUP BY t1.t2id` ensure that this won't happen?
You need to have ``` SELECT COUNT(t2.*), t1.t2id, t1.c2 FROM t1 LEFT JOIN t2 ON t1.t2id = t2.t2id GROUP BY t1.t2id, t1.c2 ``` The t1.c2 isn't in an aggregate (count) or the group by, so it needs to go into one. Edit: deal with t1.c2 being ntext ``` SELECT COUNT(t2.*), t1.t2id, cast(t1.c2 as nvarchar(max)) FROM t1 LEFT JOIN t2 ON t1.t2id = t2.t2id GROUP BY t1.t2id, cast(t1.c2 as nvarchar(max)) ``` If your c2 column has more content than nvarchar(max) can hold, you have a problem; the performance won't be good either, but without knowing how much data you have it's hard to say whether it will be a problem.
The problem is that, as the error message tells you, you need to pass `t1.c2` to the `GROUP BY` clause as well: ``` string query = "SELECT COUNT(t2.*),t1.t2id,t1.c2 FROM t1 LEFT JOIN t2 ON t1.t2id = t2.t2id GROUP BY t1.t2id, t1.c2"; ```
How to get a column value and a count after a join?
[ "", "sql", "sql-server", "" ]
Say I have the string `ABC&EFG d.d.`; after calling `initcap` like this ``` select initcap('ABC&EFG d.d.') from dual; ``` I would get `Abc&Efg D.D.`, which is perfect for the first part but not what I need for the part where the letters `d.d.` become `D.D.`. So I would need a version of initcap that would not capitalize letters followed by a dot. Does anyone know if there is such a possibility? Just a clarification: I'm looking for a potential built-in function.
``` select regexp_replace( initcap( regexp_replace('ABC&EFG d.d.', '(\w+\.)', chr(1)||'a\1') ), chr(1)||'.') from dual; ```
You can use a regex to filter the string for text only; it will then skip the 'd.d.', since it won't match the regex, and run `initcap` on the matched string. Then find the 'd.d.' and concatenate it to the string after `initcap`. ``` select initcap(REGEXP_SUBSTR('ABC'||chr(38)||'EFG d.d.','(\w\&?\w)+') ) || REGEXP_SUBSTR('ABC'||chr(38)||'EFG d.d.','(\s(\w\.)+)+') from dual; ``` Note that `REGEXP_SUBSTR` returns a single match, so this might not work with more complex strings (i.e. multiple matches).
How to camelcase a string like with Initcap but with exception of letters followed by dot?
[ "", "sql", "oracle", "" ]
Considering the following table, how can I group these scores into three buckets (not more): less than equal 150, between 150 and 350, more than 350. ``` id | score ----+------- 1 | 5 2 | 5 3 | 5 4 | 4 5 | 5 6 | 4 7 | 4 8 | 4 9 | 2 10 | 2 11 | 6 12 | 205 13 | 250 13 | 400 14 | 105 15 | 900 16 | 1300 ``` I tried this method: ``` select (score/100)*100 || '-' || (score/100)*100 + 100 as scorerange,count(*) from scores group by score/100 order by score/100; ``` And this is the result: ``` scorerange | count ------------+------- 0-100 | 11 100-200 | 1 200-300 | 2 400-500 | 1 900-1000 | 1 1300-1400 | 1 (6 rows) ``` It groups the scores but not in the three buckets that I need.
This will give you the ranges you requested: ``` SELECT CASE WHEN min(score) <= 150 THEN '*-150' WHEN min(score) <= 350 THEN '151-350' ELSE '350-*' END AS scorerange, count(*) FROM scores GROUP BY score <= 150, score <= 350; ```
Group by a `CASE` statement, but make sure to *get the bounds right*: ``` SELECT CASE WHEN score <= 150 THEN '0-150' WHEN score <= 350 THEN '151-350' ELSE '351+' END AS score_range , count(*) AS count FROM scores GROUP BY 1 ORDER BY 1; -- happens to work correctly with the text value. ``` Assuming the column `score` is defined `NOT NULL` and values are positive.
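The `CASE`-bucket grouping is easy to try with SQLite from Python (sample scores invented; one value per interesting boundary region):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [(1, 5), (2, 105), (3, 205), (4, 400), (5, 900)])
# The CASE expression maps each score to one of exactly three labels
buckets = conn.execute("""
    SELECT CASE WHEN score <= 150 THEN '0-150'
                WHEN score <= 350 THEN '151-350'
                ELSE '351+' END AS score_range,
           COUNT(*)
    FROM scores
    GROUP BY score_range
    ORDER BY score_range
""").fetchall()
print(buckets)
```

Because the `WHEN` branches are evaluated in order, each boundary value falls into exactly one bucket (150 lands in `0-150`, 151 in `151-350`).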
Group records into three predefined buckets in Postgres
[ "", "sql", "postgresql", "" ]
I am starting to build a like system to go with suggestion posts stored in a table called `ForslagOpslag`. Every time I try to update the likes table, I get this error: > Update cannot proceed due to validation errors. Please correct the following errors and try again. > > SQL71516 :: The referenced table '[dbo].[ForslagOpslag]' contains no > primary or candidate keys that match the referencing column list in > the foreign key. If the referenced column is a computed column, it > should be persisted. Here is how I built my `ForslagOpslagLikes` table: ``` CREATE TABLE [dbo].[ForslagOpslagLikes] ( [fk_brugerid] INT NOT NULL, [opretdato] DATETIME NOT NULL, [getid] INT NOT NULL, CONSTRAINT [PK_ForslagOpslagLikes] PRIMARY KEY CLUSTERED ([fk_brugerid], [getid]), CONSTRAINT [FK_ForslagOpslagLikes_ToGetid] FOREIGN KEY ([getid]) REFERENCES [dbo].[ForslagOpslag]([Id]), CONSTRAINT [FK_ForslagOpslagLikes_ToForslagBrugerid] FOREIGN KEY ([fk_brugerid]) REFERENCES [dbo].[ForslagOpslag]([fk_brugerid]) ); ``` The reason I have both `fk_brugerid` and `getid` in the primary key is to make sure a user cannot vote/like more than once! This is how I built my `ForslagOpslag` table: ``` CREATE TABLE [dbo].[ForslagOpslag] ( [Id] INT IDENTITY (1, 1) NOT NULL, [text] NVARCHAR (MAX) NOT NULL, [fk_brugerid] INT NOT NULL, [opretdato] DATETIME NOT NULL, PRIMARY KEY CLUSTERED ([Id] ASC) ); ``` This is how I want my like system to relate the tables: ``` ForslagOpslagLikes -> fk_brugerid to ForslagOpslag -> fk_brugerid ForslagOpslagLikes -> getid to ForslagOpslag -> id ```
You sound Scandinavian, and Bruger means User (for all the non-Scandinavians here). What you appear to want is a `Bruger` (User) table, where `fk_brugerid` in `ForslagOpslag` is the user who created the record with `opretdato` being the creation date, and `ForslagOpslagLikes` is an association table of users who like the ForslagOpslag with `opretdato` being the date they clicked on "Like". ``` CREATE TABLE [dbo].[Bruger] ( [brugerid] INT IDENTITY (1, 1) NOT NULL, ..., CONSTRAINT [PK_Bruger] PRIMARY KEY CLUSTERED ([brugerid]) ); CREATE TABLE [dbo].[ForslagOpslag] ( [Id] INT IDENTITY (1, 1) NOT NULL, [text] NVARCHAR(MAX) NOT NULL, [fk_brugerid] INT NOT NULL, [opretdato] DATETIME NOT NULL, CONSTRAINT [PK_ForslagOpslag] PRIMARY KEY CLUSTERED ([Id]), CONSTRAINT [FK_ForslagOpslag_Bruger] FOREIGN KEY ([fk_brugerid]) REFERENCES [dbo].[Bruger] ([brugerid]) ); CREATE TABLE [dbo].[ForslagOpslagLikes] ( [fk_brugerid] INT NOT NULL, [opretdato] DATETIME NOT NULL, [getid] INT NOT NULL, CONSTRAINT [PK_ForslagOpslagLikes] PRIMARY KEY CLUSTERED ([fk_brugerid], [getid]), CONSTRAINT [FK_ForslagOpslagLikes_Bruger] FOREIGN KEY ([fk_brugerid]) REFERENCES [dbo].[Bruger] ([brugerid]), CONSTRAINT [FK_ForslagOpslagLikes_ForslagOpslag] FOREIGN KEY ([getid]) REFERENCES [dbo].[ForslagOpslag]([Id]) ); ```
Well - the error seems pretty clear: you're trying to establish a foreign key relationship to `ForslagOpslag.fk_brugerid` here: ``` CONSTRAINT [FK_ForslagOpslagLikes_ToForslagBrugerid] FOREIGN KEY ([fk_brugerid]) REFERENCES [dbo].[ForslagOpslag]([fk_brugerid]) ``` but that column **is NOT** the primary key of that other table - and it doesn't have a `UNIQUE` constraint either - so you **cannot** reference that column in a foreign key relationship. The column(s) that a foreign key references **must be** the primary key of that other table - or, in SQL Server, it's good enough if there's a `UNIQUE` constraint on the column. You **must ensure** that the value you reference in `ForslagOpslag` can only match a single row in that table - otherwise, you cannot establish a foreign key relationship.
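The same rule can be demonstrated in miniature with SQLite via Python's `sqlite3`. The table and column names mimic the question, but this is only an illustration: SQL Server rejects the DDL outright with SQL71516, whereas SQLite accepts the DDL and defers the complaint until the table is first used.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when asked

# Id is the primary key; fk_brugerid is neither a key nor UNIQUE (as in the question)
conn.execute("CREATE TABLE ForslagOpslag (Id INTEGER PRIMARY KEY, fk_brugerid INTEGER NOT NULL)")
conn.execute("INSERT INTO ForslagOpslag VALUES (1, 42)")

# Referencing the primary key is fine:
conn.execute("CREATE TABLE good_likes (getid INTEGER REFERENCES ForslagOpslag (Id))")
conn.execute("INSERT INTO good_likes VALUES (1)")

# Referencing the non-unique column: the DDL is accepted,
# but the first insert fails with a 'foreign key mismatch' error
conn.execute("CREATE TABLE bad_likes (fk_brugerid INTEGER REFERENCES ForslagOpslag (fk_brugerid))")
try:
    conn.execute("INSERT INTO bad_likes VALUES (42)")
    msg = ""
except sqlite3.Error as e:
    msg = str(e)
print(msg)  # foreign key mismatch ...
```

Every engine enforces the same underlying principle: a foreign key must point at a primary key or a uniquely constrained column; they just report the violation at different times.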
CONSTRAINT to foreign keys to one table - causes error
[ "", "sql", "sql-server", "" ]