I'm storing hierarchical data in a table. When a resource is accessed by its hierarchical path (grandParent/parent/resource), I need to locate the resource using a CONNECT BY query. Note: the SQL commands are exported from EnterpriseDB, but they should work in Oracle too. Table structure: ``` CREATE TABLE resource_hierarchy ( resource_id character varying(100) NOT NULL, resource_type integer NOT NULL, resource_name character varying(100), parent_id character varying(100) ) WITH ( OIDS=FALSE ); ``` Data: ``` INSERT INTO "resource_hierarchy" (resource_id,resource_type,resource_name,parent_id) VALUES ('36d27991', 3, 'areaName', 'a616f392'); INSERT INTO "resource_hierarchy" (resource_id,resource_type,resource_name,parent_id) VALUES ('a616f392', 3, 'townName', 'fcc1ebb7'); INSERT INTO "resource_hierarchy" (resource_id,resource_type,resource_name,parent_id) VALUES ('fcc1ebb7', 2, 'stateName', '8369cc88'); INSERT INTO "resource_hierarchy" (resource_id,resource_type,resource_name,parent_id) VALUES ('8369cc88', 5, 'countryName', null); ``` Now, when I receive a path like ``` countryName/stateName/townName/areaName ``` I'm executing a query like this: ``` select LEVEL,* from resource_hierarchy WHERE resource_name = ( CASE LEVEL WHEN 1 THEN 'areaName' WHEN 2 THEN 'townName' WHEN 3 THEN 'stateName' WHEN 4 THEN 'countryName' ELSE '' END ) connect by prior parent_id = resource_id start with resource_name = 'areaName'; ``` My expected results are: ``` LEVEL resource_id resource_type resource_name parent_id ------------------------------------------------------------- 1 36d27991 3 areaName a616f392 2 a616f392 3 townName fcc1ebb7 3 fcc1ebb7 2 stateName 8369cc88 4 8369cc88 5 countryName <null> ``` This query works fine, but I'm not sure how it will perform when my table grows to hundreds of thousands of entries. Can you optimize this query for my requirement? 
Edited: EXPLAIN for the above query: I've defined two indices - one on resource\_id (primary key) and another on parent\_id ``` Sort (cost=66.85..66.86 rows=1 width=694) Sort Key: connectby_cte.siblingssortcol CTE prior -> Recursive Union (cost=0.00..65.83 rows=31 width=151) -> WindowAgg (cost=0.00..3.12 rows=1 width=83) -> Seq Scan on resource_hierarchy (cost=0.00..3.11 rows=1 width=83) Filter: ((resource_name)::text = 'areaName'::text) -> WindowAgg (cost=0.33..6.21 rows=3 width=151) -> Hash Join (cost=0.33..6.15 rows=3 width=151) Hash Cond: ((resource_hierarchy_1.resource_id)::text = (prior.parent_id)::text) Join Filter: connectby_cyclecheck(prior.recursionpath, (resource_hierarchy_1.parent_id)::text) -> Seq Scan on resource_hierarchy resource_hierarchy_1 (cost=0.00..2.89 rows=89 width=83) -> Hash (cost=0.20..0.20 rows=10 width=286) -> WorkTable Scan on prior (cost=0.00..0.20 rows=10 width=286) -> CTE Scan on prior connectby_cte (cost=0.00..1.01 rows=1 width=694) Filter: ((resource_name)::text = CASE level WHEN 1 THEN 'areaName'::text WHEN 2 THEN 'townName'::text WHEN 3 THEN 'stateName'::text WHEN 4 THEN 'countryName'::text ELSE ''::text END) ```
*Disclaimer: my primary experience is with Oracle, so double-check the details when applying this solution to Postgres.* --- The `WHERE` clause is applied only after the full hierarchy has already been built. In the original query, therefore, the database engine starts by retrieving rows with the specified `resource_name` at any level and builds a full tree for each record it finds; filtering happens only as a second step. [Documentation](https://docs.oracle.com/cd/B28359_01/server.111/b28286/queries003.htm): > 1. Oracle selects the root row(s) of the hierarchy—those rows that > satisfy the START WITH condition. > 2. Oracle selects the child rows of each root row. Each child row must > satisfy the condition of the CONNECT BY condition with respect to one > of the root rows. > 3. Oracle selects successive generations of child rows. Oracle first > selects the children of the rows returned in step 2, and then the > children of those children, and so on. Oracle always selects children > by evaluating the CONNECT BY condition with respect to a current > parent row. > 4. If the query contains a WHERE clause without a join, then Oracle > eliminates all rows from the hierarchy that do not satisfy the > condition of the WHERE clause. Oracle evaluates this condition for > each row individually, rather than removing all the children of a row > that does not satisfy the condition. To optimize this, the query must be changed as follows (the hierarchy is reversed to the more natural top-down order): ``` select level, rh.* from resource_hierarchy rh start with (resource_name = 'countryName') and (parent_id is null) -- roots only connect by prior resource_id = parent_id and -- at each step get only required records resource_name = ( case level when 1 then 'countryName' when 2 then 'stateName' when 3 then 'townName' when 4 then 'areaName' else null end ) ``` The same query may be written using CTE syntax ([Oracle recursive subquery factoring](https://oracle-base.com/articles/11g/recursive-subquery-factoring-11gr2)). 
Following is a variant for a [PostgreSQL CTE](http://www.postgresql.org/docs/current/static/queries-with.html), corrected according to @Karthik\_Murugan's suggestion: ``` with RECURSIVE hierarchy_query(lvl, resource_id) as ( select 1 lvl, rh.resource_id resource_id from resource_hierarchy rh where (resource_name = 'countryName') and (parent_id is null) union all select hq.lvl+1 lvl, rh.resource_id resource_id from hierarchy_query hq, resource_hierarchy rh where rh.parent_id = hq.resource_id and -- at each step get only required records resource_name = ( case (hq.lvl + 1) when 2 then 'stateName' when 3 then 'townName' when 4 then 'areaName' else null end ) ) select hq.lvl, rh.* from hierarchy_query hq, resource_hierarchy rh where rh.resource_id = hq.resource_id order by hq.lvl ``` That is only half of the work, because we also need to help the database engine locate records by creating appropriate indexes. The query above contains two search actions: 1. Locate the records to start with; 2. Choose the records at each next level. For the first action we need to index the `resource_name` field and possibly the `parent_id` field. For the second action, the `parent_id` and `resource_name` fields must be indexed. ``` create index X_RESOURCE_HIERARCHY_ROOT on RESOURCE_HIERARCHY (resource_name); create index X_RESOURCE_HIERARCHY_TREE on RESOURCE_HIERARCHY (parent_id, resource_name); ``` Maybe creating only the `X_RESOURCE_HIERARCHY_TREE` index is enough; it depends on the characteristics of the data stored in the table. P.S. The string for each level can be constructed from the full path by using the `substr` and `instr` functions, as in this Oracle example: ``` with prm as ( select '/countryName/stateName/townName/areaName/' location_path from dual ) select substr(location_path, instr(location_path,'/',1,level)+1, instr(location_path,'/',1,level+1)-instr(location_path,'/',1,level)-1 ) from prm connect by level < 7 ```
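For comparison, the same top-down, per-level filtering can be demonstrated end to end with a recursive CTE. The sketch below uses SQLite through Python purely as a portable stand-in (SQLite's `WITH RECURSIVE` behaves like the PostgreSQL version here); the table and data are taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource_hierarchy (
    resource_id   TEXT NOT NULL,
    resource_type INTEGER NOT NULL,
    resource_name TEXT,
    parent_id     TEXT
);
INSERT INTO resource_hierarchy VALUES
    ('36d27991', 3, 'areaName',    'a616f392'),
    ('a616f392', 3, 'townName',    'fcc1ebb7'),
    ('fcc1ebb7', 2, 'stateName',   '8369cc88'),
    ('8369cc88', 5, 'countryName', NULL);
""")

path = ['countryName', 'stateName', 'townName', 'areaName']

# Walk the tree top-down; each step only expands children whose name
# matches the next path segment, so unrelated branches are never visited.
rows = conn.execute("""
WITH RECURSIVE hierarchy_query(lvl, resource_id, resource_name) AS (
    SELECT 1, resource_id, resource_name
    FROM resource_hierarchy
    WHERE resource_name = ? AND parent_id IS NULL
    UNION ALL
    SELECT hq.lvl + 1, rh.resource_id, rh.resource_name
    FROM hierarchy_query hq
    JOIN resource_hierarchy rh ON rh.parent_id = hq.resource_id
    WHERE rh.resource_name = CASE hq.lvl + 1
        WHEN 2 THEN ? WHEN 3 THEN ? WHEN 4 THEN ? END
)
SELECT lvl, resource_id, resource_name FROM hierarchy_query ORDER BY lvl
""", path).fetchall()

for lvl, rid, name in rows:
    print(lvl, rid, name)
```

The recursion stops by itself once the `CASE` runs out of path segments, because `resource_name = NULL` matches nothing.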
``` select LEVEL, resource_id, resource_type, resource_name, parent_id from resource_hierarchy connect by prior parent_id = resource_id start with UPPER(resource_name)= UPPER(:resource_name); ``` Using this approach, you would not have to use the CASE statements; just supplying the resource name fetches the parent hierarchy.
Connect by query
[ "sql", "oracle", "connect-by", "hierarchical-query", "enterprisedb" ]
Good afternoon, I'm having an issue with two tables that I'm trying to join. I have to print a table with all products that are registered in some agenda (codControl), so the person can enter his price. But first I have to look into **lctocotacao** to see whether he has already given a price for some product. When I do this, I only get the products that have a price, and I don't see the other ones. Here is an example of my table **cadprodutoscotacao** ``` codProduct desc codControl 1 abc 197 2 cde 197 3 fgh 197 1 abc 198 ``` And my table **lctocotacao** ``` codProduct price codControl codPerson 1 2.5000 197 19 2 3.0000 197 37 3 4.5000 198 37 ``` I have this SQL statement at the moment: ``` SELECT cadc.cod, cadc.desc, lcto.codEnt, lcto.price FROM cadprodutoscotacao cadc JOIN lctocotacao lcto ON cadc.codControl = lcto.codControl AND cadc.codProduct = lcto.codProduct AND cadc.codControl = '197' AND lcto.codPerson = '19' ORDER BY cadc.codControl; ``` What I'm getting: ``` cod desc price codPerson codControl 1 abc 2.5000 19 197 ``` And the table I expect: ``` cod desc price codPerson codControl 1 abc 2.5000 19 197 2 cde 197 3 fgh 197 ``` 197 and 19 will be parameters to my query. Any ideas on how to proceed? **E D I T** Basically, I have two queries: ``` SELECT * FROM cadprodutoscotacao WHERE cadc_codControl = '197' ``` The first returns all products registered in agenda '197'. And the second one: ``` SELECT * FROM lctocotacao WHERE codPerson = 19 AND codControl = '197' ``` The second returns the products that already have a price added by person 19 in agenda 197. I have to return one table including all records from the first query and, where there is a price in the second one, I have to "concatenate" them. [![Example](https://i.stack.imgur.com/WSlVS.png)](https://i.stack.imgur.com/WSlVS.png) Thanks in advance.
You need a `LEFT JOIN`, but you also need to be careful about the filtering conditions: ``` SELECT cadc.codProduct, cadc.desc, lcto.codPerson, lcto.price FROM cadprodutoscotacao cadc LEFT JOIN lctocotacao lcto ON cadc.codControl = lcto.codControl AND cadc.codProduct = lcto.codProduct AND lcto.codPerson = '19' WHERE cadc.codControl = '197' ORDER BY cadc.codControl; ``` A `LEFT JOIN` keeps all rows in the first table, regardless of whether a match is found in the `ON` conditions. This applies to conditions on the first table as well as the second. Hence, you don't want to put filters on the first table in the `ON` clause. The rule is: when using `LEFT JOIN`, put filters on the first table in the `WHERE` clause. Filters on the second table go in the `ON` clause (otherwise the outer join is generally turned into an inner join).
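The difference between putting a first-table filter in `ON` versus `WHERE` is easy to demonstrate. A minimal sketch using SQLite from Python, with the question's data (the `desc` column is renamed `descr` here because `DESC` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cadprodutoscotacao (codProduct INT, descr TEXT, codControl INT);
INSERT INTO cadprodutoscotacao VALUES
    (1,'abc',197),(2,'cde',197),(3,'fgh',197),(1,'abc',198);
CREATE TABLE lctocotacao (codProduct INT, price REAL, codControl INT, codPerson INT);
INSERT INTO lctocotacao VALUES
    (1,2.5,197,19),(2,3.0,197,37),(3,4.5,198,37);
""")

# Second-table filter in ON, first-table filter in WHERE: the expected result.
good = conn.execute("""
SELECT c.codProduct, c.descr, l.price
FROM cadprodutoscotacao c
LEFT JOIN lctocotacao l
  ON c.codControl = l.codControl
 AND c.codProduct = l.codProduct
 AND l.codPerson = 19
WHERE c.codControl = 197
ORDER BY c.codProduct
""").fetchall()

# Moving the first-table filter into ON silently disables it: the LEFT JOIN
# still keeps every cadprodutoscotacao row, including the codControl=198 one.
bad = conn.execute("""
SELECT c.codProduct, c.descr, l.price
FROM cadprodutoscotacao c
LEFT JOIN lctocotacao l
  ON c.codControl = l.codControl
 AND c.codProduct = l.codProduct
 AND l.codPerson = 19
 AND c.codControl = 197
ORDER BY c.codProduct
""").fetchall()

print(good)  # 3 rows, unmatched products carry NULL price
print(bad)   # 4 rows: the filter in ON did not remove the 198 row
```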
Your rows are filtered because you specified `JOIN`, which is a shortcut for `INNER JOIN`. If you want all the records from the left table, even if they don't have correlated records in the right table, you should do a `LEFT JOIN`. Note that the filter on the left table (`codControl = '197'`) has to go in the `WHERE` clause; if it stays in the `ON` clause of a `LEFT JOIN`, it stops filtering rows: ``` SELECT cadc.codProduct, cadc.desc, lcto.codPerson, lcto.price FROM cadprodutoscotacao cadc LEFT JOIN lctocotacao lcto ON cadc.codControl = lcto.codControl AND cadc.codProduct = lcto.codProduct AND lcto.codPerson = '19' WHERE cadc.codControl = '197' ORDER BY cadc.codControl; ```
SQL join two tables and the elements that satisfies one condition
[ "sql", "postgresql", "join" ]
How can you query on just the time portion of an Oracle date field? I.e.: ``` select * from mytable where date_column < '05:30:00' ``` I want the query to return any rows where the time represented by date\_column is less than 05:30, regardless of the date.
You can try it like this: ``` select * from mytable where to_char( date_column, 'HH24:MI:SS' ) < '05:30:00' ``` This works because the `HH24:MI:SS` format is fixed-width and zero-padded, so comparing the strings gives the same result as comparing the times.
You can see how far the date is from midnight, and filter on that: ``` select * from mytable where date_column - trunc(date_column) < 5.5/24 ``` The `date_column - trunc(date_column)` calculation will give you a fraction of a day, as is normal for [date arithmetic](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements001.htm#sthref170). The `5.5/24` is the fraction of the day represented by the time at 05:30; 5.5 hours out of 24 hours. If the column was a timestamp instead of a date you'd see an interval data type as the result of the subtraction. You can use [an interval literal](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements003.htm#i38598) anyway if you prefer or find it easier to understand than 5.5/24 (or have more complicated times to compare, which are harder to express as a fraction): ``` select * from mytable where date_column < trunc(date_column) + interval '0 05:30:00' day to second; ``` This way round you're comparing the date in your column with the truncated date (i.e. midnight on that day) with 5 hours 30 minutes added to it, which is 05:30 the same day. Quick demo with simple data in a CTE, and a third very slight variant, but they all get the same result: ``` with mytable (date_column) as ( select to_date('2016-04-15 05:29:29', 'YYYY-MM-DD HH24:MI:SS') from dual union all select to_date('2016-04-14 05:29:29', 'YYYY-MM-DD HH24:MI:SS') from dual union all select to_date('2016-04-15 05:30:30', 'YYYY-MM-DD HH24:MI:SS') from dual ) select * from mytable where date_column < trunc(date_column) + 5.5/24; DATE_COLUMN ------------------- 2016-04-15 05:29:29 2016-04-14 05:29:29 ``` Note though that any manipulation of the column like this will prevent an index being used. If you have to do this regularly it might be worth adding a virtual column/index which does that calculation.
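The fraction-of-a-day idea carries over to other engines too. A small sketch using SQLite from Python, where `julianday(x) - julianday(date(x))` plays the role of `date_column - trunc(date_column)`, with the same sample rows as the demo above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (date_column TEXT);
INSERT INTO mytable VALUES
    ('2016-04-15 05:29:29'),
    ('2016-04-14 05:29:29'),
    ('2016-04-15 05:30:30');
""")

# julianday(x) - julianday(date(x)) is the fraction of the day elapsed
# since midnight; 5.5/24 is the fraction corresponding to 05:30.
rows = conn.execute("""
SELECT date_column FROM mytable
WHERE julianday(date_column) - julianday(date(date_column)) < 5.5/24
ORDER BY date_column
""").fetchall()

print(rows)  # only the two rows whose time of day is before 05:30
```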
Oracle query on time (and not date)
[ "sql", "oracle", "date", "datetime" ]
I have a database that contains IDs and their associated coordinates. If I have two IDs, what is the most efficient T-SQL query that returns the linear distance between these two points? I know how to do it by using 4 variables and 3 select statements, but is there a better way? ``` ID | X | Y 1 | 10 | 15 2 | 12 | 20 ``` Given IDs 1 and 2, find the linear distance between them.
I'm not sure what you mean by "linear distance", but here is one way to get the Manhattan distance: ``` select abs(p1.x - p2.x) + abs(p1.y - p2.y) from points p1 cross join points p2 where p1.id = 1 and p2.id = 2; ``` Euclidean distance would take `SQRT()` of the sum of the squared differences instead.
Building on GL's code for Euclidean distance: ``` DECLARE @points TABLE (ID INT IDENTITY, X DECIMAL(8,4), Y DECIMAL(8,4)) INSERT INTO @points (X,Y) VALUES (10,15),(12,20) SELECT * FROM @points SELECT ROUND(SQRT((p1.x-p2.x)*(p1.x-p2.x)+(p1.y-p2.y)*(p1.y-p2.y)),2) FROM @points p1 CROSS JOIN @points p2 WHERE p1.id = 1 AND p2.id = 2; ``` Obviously you have your own code for the table itself, but this will run on its own and you can see that it gives the result of 5.39, rounded because I told it to.
SQL Calculate XY distance between two XY Coordinates with one query
[ "sql", "sql-server", "t-sql" ]
I usually search column names in Oracle, as we have 1000+ tables. Is there a simpler way to search column names using a regex? For example, column names containing CURRENCY or COUNTRY, etc.
I think this is a duplicate, but you can find them using the query below. ``` SELECT column_name, table_name FROM user_tab_columns WHERE column_name LIKE '%CURRENCY%' OR column_name LIKE '%COUNTRY%'; ```
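Note that Oracle also supports regular expressions directly in SQL, e.g. `WHERE REGEXP_LIKE(column_name, 'CURRENCY|COUNTRY')` against `user_tab_columns` (or `all_tab_columns` to search beyond your own schema). The same catalog-plus-regex idea, sketched against SQLite's catalog from Python as a runnable illustration (table and column names are invented):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders   (order_id INT, currency_code TEXT, ship_country TEXT);
CREATE TABLE invoices (invoice_id INT, total REAL, billing_country TEXT);
""")

# Collect (table, column) pairs from the catalog, then filter with a regex.
pattern = re.compile(r"CURRENCY|COUNTRY", re.IGNORECASE)
matches = []
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"):
    for row in conn.execute(f"PRAGMA table_info({table})"):
        column = row[1]  # row is (cid, name, type, notnull, dflt_value, pk)
        if pattern.search(column):
            matches.append((table, column))

print(matches)
```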
I would use the answer of [this SO question](https://stackoverflow.com/a/8739400/3227403) and dump all data to a text file. At that point I'd use any good text editor with regex search to search the text file and have an immediate overview of what is what. This works very well as long as tables and columns do not change too often.
How to check all column names of Tables in oracle with regex?
[ "sql", "oracle" ]
I have a postgresql table that looks like this: ``` +----+---------------------+ | id | names | +----+---------------------+ | 1 | foo|bar and biz|pop | +----+---------------------+ ``` I want to select the row containing a given name. Something like ``` SELECT "id" FROM "table" WHERE "names" LIKE '%foo%'; id ----- 1 (1 row) ``` I want the query to return this row as well if I ask for `bar and biz`, but return nothing if I ask for `bar`. For now I'm adding pipe symbols to the beginning and the end of the line and asking `LIKE '%|bar and biz|%'`. I wonder whether there is a way to find that row without the additional pipes. Is there a way to do such a query in postgresql? **UPD:** It seems I explained my problem badly. I want the following: ``` SELECT "id" FROM "table" WHERE "names" LIKE '%bar and biz%'; id ----- 1 (1 row) ``` and ``` SELECT "id" FROM "table" WHERE "names" LIKE '%bar%'; id ----- (0 rows) ```
First, storing multiple values in a single column is a bad idea: * SQL is not very good at string operations. * Such operations cannot make use of indexes. * You cannot use foreign key relationships to validate values. Instead, you should be using a junction table. Postgres also has other solutions for storing lists, such as arrays and JSON. Sometimes, we are stuck with other people's bad design decisions. One method using `like` is: ``` SELECT "id" FROM "table" WHERE '|' || "names" || '|' LIKE '%|bar|%'; ```
While stuck with your unfortunate design, convert to an array and use the `ANY` construct: ``` SELECT id FROM table WHERE 'bar' = ANY (string_to_array(names, '|')); ``` About `ANY`, `@>`, arrays and indexes: * [Can PostgreSQL index array columns?](https://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns/29245753#29245753)
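The separator-wrapping trick from the other answer is easy to verify. A minimal sketch using SQLite from Python (the same `'|' || names || '|'` expression works there; note that a `%` or `_` inside the searched name would still act as a LIKE wildcard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, names TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'foo|bar and biz|pop')")

def find(name):
    # Wrap both sides in the separator so only whole entries can match.
    return conn.execute(
        "SELECT id FROM t WHERE '|' || names || '|' LIKE '%|' || ? || '|%'",
        (name,)).fetchall()

print(find('bar and biz'))  # whole entry: row found
print(find('bar'))          # substring of an entry: no row
```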
Is there a way to select like with custom separator
[ "sql", "postgresql", "sql-like" ]
The query below should return a row for every Reading\_Type, plus either the saved Reading value for that Reading\_Type and date, or 0 if no Reading has been saved. ``` SELECT t.* , ISNULL(r.Reading, 0) AS Reading FROM Reading_Type t LEFT JOIN Reading r ON t.Reading_Type_ID = r.Reading_Type_ID WHERE r.Reading_Date = @date OR r.Reading_Date IS NULL ``` * It *does* work if there are no Readings saved for any date * It *does* work if the only Readings saved are for the selected date. * It does *not* work if a Reading\_Type has a saved Reading for date X, no saved Reading for date Y, and the search is for date Y. `Reading_Type` Table: ``` Reading_Type_ID Reading_Type ----------------------------- 1 Red 2 Blue 3 Green ``` `Reading` table (table is empty): ``` Reading_ID Reading_Type_ID Reading Reading_Date ----------------------------------------------------- ``` Query with `@date = April 15, 2016` returns: ``` Reading_Type_ID Reading_Type Reading ---------------------------------------- 1 Red 0 2 Blue 0 3 Green 0 ``` `Reading` table (table has data for April 15): ``` Reading_ID Reading_Type_ID Reading Reading_Date ----------------------------------------------------- 1 1 5 April 15, 2016 2 3 8 April 15, 2016 ``` Query with `@date = April 15, 2016` returns: ``` Reading_Type_ID Reading_Type Reading ---------------------------------------- 1 Red 5 2 Blue 0 3 Green 8 ``` Query with `@date = April 7, 2016` returns: ``` Reading_Type_ID Reading_Type Reading ---------------------------------------- 1 Red 0 3 Green 0 ``` The third query should still return a row for Reading\_Type = Blue, with 0 for Reading. How do I fix my query?
Your WHERE criteria are causing your filter problem (I've done this myself only a million times or so). Try this instead: ``` SELECT t.* , ISNULL(r.Reading, 0) AS Reading FROM Reading_Type t LEFT JOIN Reading r ON t.Reading_Type_ID = r.Reading_Type_ID AND r.Reading_Date = @date ``` Leave out the WHERE clause in this instance (unless you want to filter your data further). Here's some information detailing this SQL feature: [Specifying Joins in FROM or WHERE clauses](https://technet.microsoft.com/en-us/library/aa213235(v=sql.80).aspx)
If `r.Reading_Date` can be null and you want to include those rows, then: ``` SELECT t.*, ISNULL(r.Reading, 0) AS Reading FROM Reading_Type t LEFT JOIN Reading r ON r.Reading_Type_ID = t.Reading_Type_ID AND isnull(r.Reading_Date, @date) = @date ```
SQL JOIN - return a row for each Table A, regardless if Table B has values or not
[ "sql", "sql-server", "sql-server-2008" ]
[![enter image description here](https://i.stack.imgur.com/XituG.png)](https://i.stack.imgur.com/XituG.png) I have a user table with these fields: id, firstname, name. I have a table called friend with these fields: invite\_id, friend\_sender (id of a user), friend\_receiver (id of a user), validity (boolean). I'm filling the friend table with ``` 1, 1, 2, 0; 2, 3, 1, 1; 3, 1, 5, 1; ``` Let's imagine I'm user 1 and I want to find all my friends. I can be the one who sent the friend invitation (sender) or the one who received it (receiver). When the receiver accepts the invitation, the validity of the relation is set to 1. So, for example, I'm not friends with user 2 because he didn't accept. The result I should get from running the query as user 1 should be: ``` 3, firstnameofuser3, nameofuser3 5, firstnameofuser5, nameofuser5 ``` I tried some SQL things, with double JOINs and renaming tables to avoid the "same table twice" problem, etc., but I couldn't figure it out. I've found some posts about it, but for more complex things, like here: [Finding mutual friend sql](https://stackoverflow.com/questions/36096713/finding-mutual-friend-sql) Thank you in advance for your help.
Try this: ``` SELECT u.* FROM user u WHERE u.id IN ( SELECT f.friend_sender FROM friend f WHERE f.friend_receiver = 2 AND f.validity = 1 -- My fixed ID for Jin Jey UNION SELECT f.friend_receiver FROM friend f WHERE f.friend_sender = 2 AND f.validity = 1) ``` I used UNION because it lets you query two sets of data and merge them. I hard-coded the ID (2) because in your request you want to know all friends of Jin Jey.
I know there are already answers, but mine is unique AND I have a fiddle! ;) ``` SELECT id, firstname, name FROM user WHERE id IN ( SELECT CASE WHEN friend_sender = 1 THEN friend_receiver ELSE friend_sender END FROM friend WHERE (friend_sender = 1 OR friend_receiver = 1) AND validity = 1 ) ``` **Fiddle:** <http://sqlfiddle.com/#!9/d8f55a/1>
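The `CASE WHEN` approach above can be verified end to end. A sketch using SQLite from Python, with made-up names for the sample users and the user id passed as a parameter instead of hard-coded:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user   (id INT, firstname TEXT, name TEXT);
CREATE TABLE friend (invite_id INT, friend_sender INT,
                     friend_receiver INT, validity INT);
INSERT INTO user VALUES (1,'Ann','A'),(2,'Bob','B'),(3,'Cat','C'),(5,'Eve','E');
INSERT INTO friend VALUES (1,1,2,0),(2,3,1,1),(3,1,5,1);
""")

me = 1
# The CASE picks "the other side" of each accepted friendship row.
friends = conn.execute("""
SELECT id, firstname, name FROM user
WHERE id IN (
    SELECT CASE WHEN friend_sender = ? THEN friend_receiver
                ELSE friend_sender END
    FROM friend
    WHERE (friend_sender = ? OR friend_receiver = ?) AND validity = 1
)
ORDER BY id
""", (me, me, me)).fetchall()

print(friends)  # users 3 and 5; user 2 is excluded (validity = 0)
```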
Finding friends of a user
[ "mysql", "sql" ]
Sorry for the strange title but I'm having a difficult time thinking of something more descriptive. I need to know how you'd accomplish the following thing in T-SQL: Imagine you have the following 3 tables with the typical 1-to-many relationships you'd expect for these entities. Notice though "SpecialBooleanFlag" on the Items table (more on that in a moment): ``` Customers: CustomerId, CustomerName, (etc....) Orders: OrderId, OrderDtm, CustomerId (etc.....) Items: ItemId, ItemDescripion, OrderId, **SpecialBooleanFlag** ``` This sounds like an odd requirement and beyond my means to explain in this post, but imagine your boss asked you to write a query that returned all of a customer's complete order history with each item they've ever bought. However, if *just one* of a customer's orders has an item with SpecialBooleanFlag = 1, then make that item appear *as if* the customer had ordered the item on every order in their order history. So, if a customer has never ordered an item with SpecialBooleanFlag = 1, then the result count should be equal to the total number of items they've ever ordered. However, if they've placed 5 orders and just one of those orders has an item with SpecialBooleanFlag = 1, then the result count would be 5 + 4, with the 4 extra rows associating the flagged item with the 4 orders which never really matched the item. I accomplished this once already with a cursor/looping but the solution is too slow and I need to know a way to do it with plain old set operations if possible. 
Edit: For example, imagine the following query/result set: ``` SELECT CustomerName as Name, CustomerId, OrderId, ItemDescription, SpecialBooleanFlag FROM Customers C JOIN Orders O on C.CustomerId = O.CustomerId JOIN Items I on O.OrderId = I.OrderId WHERE C.CustomerId = 99 ``` Results: ``` CustomerName CustomerId OrderId ItemDescription SpecialBooleanFlag George Washington 99 1 Shoes 0 George Washington 99 1 Shirt 0 George Washington 99 1 Tie 0 George Washington 99 2 Socks 0 George Washington 99 2 Hat 1 George Washington 99 2 Bowtie 0 George Washington 99 3 Green Coat 0 George Washington 99 3 Blue Coat 0 George Washington 99 3 Red Coat 0 ``` So, the customer has had 3 orders with 9 total items in all. The Hat is "special" though and was on order #2. I want it to appear as if was ordered each time. This result set is what I'm looking for. The hat shows up on orders 1 and 3 based on the fact that it was flagged rather than an association between the item and orderId's 1 and 3: ``` CustomerName CustomerId OrderId ItemDescription SpecialBooleanFlag George Washington 99 1 Shoes 0 George Washington 99 1 Shirt 0 George Washington 99 1 Tie 0 George Washington 99 1 Hat 1 George Washington 99 2 Socks 0 George Washington 99 2 Hat 1 George Washington 99 2 Bowtie 0 George Washington 99 3 Green Coat 0 George Washington 99 3 Blue Coat 0 George Washington 99 3 Red Coat 0 George Washington 99 3 Hat 1 ``` Does this make more sense?
2 words, CROSS APPLY. ``` IF OBJECT_ID('tempdb..#Customers') IS NOT NULL DROP TABLE #Customers IF OBJECT_ID('tempdb..#Orders') IS NOT NULL DROP TABLE #Orders IF OBJECT_ID('tempdb..#Items') IS NOT NULL DROP TABLE #Items CREATE TABLE #Customers ( CustomerId INT, CustomerName varchar(255) ) CREATE TABLE #Orders ( OrderId INT, OrderDtm DateTime, CustomerId INT ) CREATE TABLE #Items ( ItemId INT, ItemDescripion VARCHAR(255), OrderId INT, SpecialBooleanFlag BIT ) INSERT INTO #Customers ( CustomerId, CustomerName ) VALUES ( 1,'Customer1' ) ,( 2,'Customer2' ) ,( 3,'Customer3' ) ,( 4,'Customer4' ) INSERT INTO #Orders ( OrderId, OrderDtm, CustomerId ) VALUES (1,'2016-01-01',1) ,(2,'2016-01-02',1) ,(3,'2016-01-03',1) ,(4,'2016-01-04',2) ,(5,'2016-01-05',2) ,(6,'2016-01-06',2) ,(7,'2016-01-07',3) ,(8,'2016-01-08',3) ,(9,'2016-01-09',3) ,(10,'2016-01-10',4) INSERT INTO #Items ( ItemId, ItemDescripion, OrderId, SpecialBooleanFlag ) VALUES ( 1,'Order1Item1',1,0) ,( 2,'Order1Item2',1,0) ,( 3,'Order1Item3',1,0) ,( 1,'Order2Item1',2,0) ,( 2,'Order2Item2',2,0) ,( 1,'Order3Item1',3,0) ,( 1,'Order4Item1',4,0) ,( 2,'Order4Item2',4,0) ,( 3,'Order4Item3',4,1) ,( 1,'Order5Item1',5,0) ,( 2,'Order5Item2',5,0) ,( 1,'Order6Item1',6,0) --DECLARE @CustomerId INT = 1 -- no SpecialBooleanFlag DECLARE @CustomerId INT = 2 -- has SpecialBooleanFlag SELECT C.CustomerId, C.CustomerName,O.OrderId,O.OrderDtm,I.ItemId,I.ItemDescripion,SpecialBooleanFlag FROM #Customers C JOIN #Orders O on C.CustomerId = O.CustomerId JOIN #Items I on O.OrderId = I.OrderId WHERE C.CustomerId = @CustomerId AND SpecialBooleanFlag != 1 UNION SELECT C.CustomerId, C.CustomerName,O2.OrderId,O.OrderDtm,I.ItemId,I.ItemDescripion,SpecialBooleanFlag FROM #Customers C JOIN #Orders O on C.CustomerId = O.CustomerId JOIN #Items I on O.OrderId = I.OrderId CROSS APPLY ( SELECT * FROM #Orders O2 WHERE O2.CustomerId = C.CustomerId ) O2 WHERE C.CustomerId = @CustomerId AND SpecialBooleanFlag = 1 ```
I think something like this should work: ``` SELECT c.CustomerName, c.CustomerId, o.OrderId, i.ItemDescription, i.SpecialBooleanFlag FROM Customers c LEFT JOIN Orders o on c.CustomerId = o.CustomerId LEFT JOIN Items i on o.OrderId = i.OrderId OR EXISTS (SELECT 1 FROM orders o1 JOIN items i1 ON o1.OrderId = i1.OrderId WHERE o1.CustomerId = c.CustomerId and i.ItemId = i1.ItemId and i1.SpecialBooleanFlag = 1) ```
t-sql Odd Join Requirement
[ "sql", "sql-server", "t-sql" ]
I have this table ``` User | SecretId | Status warheat1990 | NULL | REV warheat1990 | NULL | COM warheat1990 | 1 | REV warheat1990 | 1 | COM ``` I want to filter out the rows where Status = REV and SecretId IS NOT NULL combined, so the final result will be ``` User | SecretId | Status warheat1990 | NULL | REV warheat1990 | NULL | COM warheat1990 | 1 | COM ``` My query ``` SELECT * FROM TABLE WHERE User = 'warheat1990' AND (Status <> 'REV' AND SecretId IS NULL) ``` is not working, because it gives me only this row: ``` User | SecretId | Status warheat1990 | NULL | COM ``` So how do I exclude rows with Status = REV, but only if SecretId IS NOT NULL? I feel stupid that I can't figure out this simple case. Any help will be appreciated.
I think you should use OR in your SQL: ``` SELECT * FROM TABLE WHERE User = 'warheat1990' AND (Status <> 'REV' OR SecretId IS NULL) ``` Here is the logic: you want to match the condition "not (A and B)", and by De Morgan's law this is equivalent to "(not A) or (not B)", so you should use OR instead of AND.
So you want to return data where `Status <> 'REV'` **OR** `SecretId IS NULL`: ``` SELECT * FROM TABLE WHERE User = 'warheat1990' AND (Status <> 'REV' OR SecretId IS NULL) ``` When you use `AND` you get only those rows that satisfy both conditions. When you use `OR` you get the rows that match at least one criterion.
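The `OR` version can be checked against the sample data directly. A minimal sketch using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (User TEXT, SecretId INT, Status TEXT);
INSERT INTO t VALUES
    ('warheat1990', NULL, 'REV'),
    ('warheat1990', NULL, 'COM'),
    ('warheat1990', 1,    'REV'),
    ('warheat1990', 1,    'COM');
""")

# Only the (1, 'REV') row fails both branches of the OR, so it is dropped.
rows = conn.execute("""
SELECT SecretId, Status FROM t
WHERE User = 'warheat1990'
  AND (Status <> 'REV' OR SecretId IS NULL)
""").fetchall()

print(rows)
```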
Filter where on condition
[ "sql", "sql-server", "sql-server-2012" ]
The query below takes several minutes (and sometimes never completes) to execute. I'm using a `MySQL` database. ``` select customer_name as cust, SUM(num_visits) AS visits from visit_history where category = "middleMan" and eve_date >= '2014-07-01' and eve_date <= '2015-07-01' and eve_type='XCG' and eve_master IN (select eve_name from master_type_ahj where category = "middleMan" and eve_date >= '2014-07-01' and eve_date <= '2015-07-01') group by cust order by visits desc limit 50 ``` The table contains over a million records. The data is partitioned. If I remove the subquery - `and eve_master IN (select eve_name from master_type_ahj where category = "middleMan" and eve_date >= '2014-07-01' and eve_date <= '2015-07-01')` - the query completes in a few minutes. The columns that go into the `WHERE` clause are indexed. Is there any way I could tune this? Any help here would also help me tune a bunch of related queries.
Here's another way using a derived table ``` select customer_name as cust, sum(num_visits) as visits from visit_history join ( select distinct eve_name from master_type_ahj where category = "middleMan" and eve_date >= '2014-07-01' and eve_date <= '2015-07-01' ) t on t.eve_name = visit_history.eve_master where category = "middleMan" and eve_date >= '2014-07-01' and eve_date <= '2015-07-01' and eve_type='XCG' group by cust order by visits desc limit 50 ```
MySQL often handles `EXISTS` better than `IN`. So, this is your query rewritten: ``` select customer_name as cust, SUM(num_visits) AS visits from visit_history vh where category = 'middleMan' and eve_date >= '2014-07-01' and eve_date <= '2015-07-01' and eve_type = 'XCG' and exists (select 1 from master_type_ahj m2 where m2.eve_name = vh.eve_master and m2.category = 'middleMan' and m2.eve_date >= '2014-07-01' and m2.eve_date <= '2015-07-01' ) group by cust order by visits desc limit 50; ``` I would recommend indexes on `visit_history(category, eve_type, eve_date, eve_master)` and `master_type_ahj(eve_name, category, eve_date)`.
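The `IN`, `EXISTS`, and derived-table-join shapes all return the same aggregate, so the choice between them is purely about how the optimizer executes each one. A sketch using SQLite from Python with invented sample data; note the `DISTINCT` in the derived table, which guards against double counting when the subquery contains duplicates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visit_history (customer_name TEXT, num_visits INT, eve_master TEXT);
CREATE TABLE master_type_ahj (eve_name TEXT);
INSERT INTO visit_history VALUES
    ('alice', 3, 'm1'), ('alice', 2, 'm2'), ('bob', 4, 'm1'), ('carol', 1, 'm9');
INSERT INTO master_type_ahj VALUES ('m1'), ('m2'), ('m1');  -- note the duplicate m1
""")

in_q = """
SELECT customer_name, SUM(num_visits) FROM visit_history
WHERE eve_master IN (SELECT eve_name FROM master_type_ahj)
GROUP BY customer_name ORDER BY 2 DESC
"""
exists_q = """
SELECT customer_name, SUM(num_visits) FROM visit_history vh
WHERE EXISTS (SELECT 1 FROM master_type_ahj m WHERE m.eve_name = vh.eve_master)
GROUP BY customer_name ORDER BY 2 DESC
"""
join_q = """
SELECT customer_name, SUM(num_visits)
FROM visit_history vh
JOIN (SELECT DISTINCT eve_name FROM master_type_ahj) t
  ON t.eve_name = vh.eve_master
GROUP BY customer_name ORDER BY 2 DESC
"""

results = [conn.execute(q).fetchall() for q in (in_q, exists_q, join_q)]
assert results[0] == results[1] == results[2]
print(results[0])
```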
SQL subquery causing overall query to go slow
[ "mysql", "sql", "sql-tuning" ]
I have the following ODBC query from SSRS to a MySQL database: ``` SELECT ID, StartTime, StartTimeMS, EndTime, EndTimeMS, TIMEDIFF(EndTime, StartTime) AS CallDuration, CallType, CallerID, DialedNumber, Extension FROM `call` WHERE (CallerID = ?) OR (Extension = ?) AND (StartTime < ?) AND (StartTime > ?) ``` When I run SSRS, I get prompted twice for the first two parameters. Ideally, I'd just have to enter that value once and be able to use it twice. Can I use a positional parameter inside an SSRS query twice so the user isn't prompted twice?
I don't think you can pass named parameters to MySQL like that, but I think you can create them in the query: ``` SET @number = ?; SET @EndDate = ?; SET @BeginDate = ?; SELECT ID, StartTime, StartTimeMS, EndTime, EndTimeMS, TIMEDIFF(EndTime, StartTime) AS CallDuration, CallType, CallerID, DialedNumber, Extension FROM `call` WHERE ( (CallerID = @number) OR (Extension = @number) ) AND (StartTime < @EndDate) AND (StartTime > @BeginDate) ``` Incidentally, I think you need to put your **OR** clauses in a separate set of parentheses.
Name the parameters: ``` SELECT ID, StartTime, StartTimeMS, EndTime, EndTimeMS, TIMEDIFF(EndTime, StartTime) AS CallDuration, CallType, CallerID, DialedNumber, Extension FROM `call` WHERE ((CallerID = @number) OR (Extension = @number)) AND (StartTime < @EndDate) AND (StartTime > @BeginDate) ```
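Whether one parameter can be referenced twice depends on the driver, but the named-parameter idea itself is easy to illustrate. A sketch using Python's `sqlite3` driver, where `:number` appears twice in the SQL yet is supplied only once (the table is renamed `call_log` to avoid the reserved word, and the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE call_log (CallerID TEXT, Extension TEXT, StartTime TEXT)")
conn.executemany("INSERT INTO call_log VALUES (?, ?, ?)", [
    ('555-0100', '101', '2016-04-10'),
    ('555-0199', '101', '2016-04-12'),
    ('555-0100', '202', '2016-05-01'),  # outside the date range
])

# One named parameter can be referenced any number of times in the SQL,
# but the caller supplies its value only once.
rows = conn.execute("""
SELECT CallerID, Extension, StartTime FROM call_log
WHERE (CallerID = :number OR Extension = :number)
  AND StartTime < :end AND StartTime > :begin
ORDER BY StartTime
""", {"number": "101", "end": "2016-04-30", "begin": "2016-04-01"}).fetchall()

print(rows)
```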
Can I use an ODBC parameter twice?
[ "mysql", "sql", "reporting-services", "odbc" ]
I have this SQL query ``` SELECT SUM(DATEDIFF(MINUTE, InTime , OutTime)) /60 FROM Attendance WHERE InTime BETWEEN '01-01-2016' AND '01-31-2016' AND Employee=63 ``` and this LINQ version: ``` var AttendofEmp = (from ab in db.Attendances where ab.InTime >= Convert.ToDateTime("01-01-2016") && ab.InTime >= Convert.ToDateTime("01-31-2016") select new { ab.InTime }); ``` The SQL query works fine, but when I use the LINQ query to get the same results it gives an error. Thanks in advance.
Try using `new DateTime()` for your date constants. Additionally, you can use [**`SqlMethods.DateDiffMinute`**](https://msdn.microsoft.com/en-us/library/system.data.linq.sqlclient.sqlmethods.datediffminute(v=vs.110).aspx) to get the minute difference and `.Sum()` to get the sum: ``` var AttendofEmp = (from a in db.Attendances where a.InTime >= new DateTime(2016, 1, 4) && a.InTime <= new DateTime(2016, 1, 31) select SqlMethods.DateDiffMinute(a.InTime , a.OutTime) ).Sum() / 60; ```
Assign the result of `Convert.ToDateTime` to a variable first, then use that variable in the query.
why my linq query is not giving me expected results
[ "", "sql", "linq", "sql-to-linq-conversion", "" ]
I have a table with 2 primary key columns: `ID` and `StudentID`. The `ID` column is set to `isIdentity = Yes` with auto increment. I've tested it multiple times before, but for some reason this time, when I insert a duplicate value on `StudentID`, it does not throw an error but instead adds the row to the database. Two rows with the same value are displayed when I show the table data. What can be the problem here?
You have a compound primary key on `ID` and `StudentID`. That means the combination of ID and StudentID together must be unique. Since `ID` is an identity column, that combination of `ID` and `StudentID` will always be unique (because `ID` is already unique on its own). You can change the primary key to be on `ID` only. Then you can add a unique index on `StudentID`. For example: ``` create unique index idx_studentID on yourTable(StudentID) ``` That will ensure that the StudentID column, in fact, contains only unique values. It seems like you may not actually need the `ID` column, but that's a little wider discussion than your original question.
Do not mix up identity, primary key and unique key. 1. Any table can have an identity column, which you set up on the table. The seed can be, say, 1, with an increment of 1, so values run in the order 1, 2, 3, and so on. 2. A primary key is defined on a specific column of the table. An identity column can be used as the primary key, but you can also have an identity column and a separate primary key on the same table. There is only one primary key per table, so if you are treating the identity column as the primary key, no other column can be the primary key. 3. There can be more than one unique key column in your table. When fetching rows from the table, providing a combination of identity key, primary key and unique key values makes the search fastest.
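The behaviour described above is easy to reproduce. A minimal sketch in SQLite (the mechanics are the same in SQL Server; table and column names here are illustrative): with a compound primary key the duplicate `StudentID` is accepted, while a separate unique index rejects it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Compound PK: (ID, StudentID) only has to be unique as a pair,
# so a repeated StudentID with a fresh ID slips through.
cur.execute("""CREATE TABLE Student (
    ID INTEGER,
    StudentID INTEGER,
    PRIMARY KEY (ID, StudentID))""")
cur.execute("INSERT INTO Student VALUES (1, 100)")
cur.execute("INSERT INTO Student VALUES (2, 100)")  # accepted: (2,100) != (1,100)

# The fix: make ID alone the key and add a unique index on StudentID.
cur.execute("CREATE TABLE Student2 (ID INTEGER PRIMARY KEY, StudentID INTEGER)")
cur.execute("CREATE UNIQUE INDEX idx_studentID ON Student2(StudentID)")
cur.execute("INSERT INTO Student2 (StudentID) VALUES (100)")
try:
    cur.execute("INSERT INTO Student2 (StudentID) VALUES (100)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The first table ends up with two rows for StudentID 100; the second refuses the duplicate with an integrity error.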
SQL Primary Key Duplicate Values
[ "", "sql", "database", "duplicates", "primary-key", "" ]
I have this simple SQL: ``` SELECT * FROM `oc_artists` WHERE `oc_artists`.`artist_id`=`oc_artists_tags`.`artist_id` AND `oc_artists_tags`.`artist_tag` LIKE '%klass%' ``` When I run this I get: > 1054 - Unknown column 'oc\_artists\_tags.artist\_id' in 'where clause' This is SQL for a search script. I simply need to return unique results from oc\_artists when the query matches `oc_artists_tags`.`artist_tag`.
The second table is missing from your query, so include the `oc_artists_tags` table in your join. Your query should be: ``` SELECT * FROM `oc_artists`, `oc_artists_tags` WHERE `oc_artists`.`artist_id`=`oc_artists_tags`.`artist_id` AND `oc_artists_tags`.`artist_tag` LIKE '%klass%' ``` You can also use a join or inner join instead of the comma join: ``` SELECT * FROM `oc_artists` as oa join `oc_artists_tags` as oat on oa.artist_id=oat.artist_id WHERE oat.artist_tag LIKE '%klass%'; ``` To gain performance, follow these points: 1. Select only the required columns instead of \*. 2. Join fields must be indexed, and ideally these fields should be of integer type. 3. If possible, avoid a leading '%' in the LIKE clause, as it prevents index use and slows your query. For example, artist\_tag LIKE 'klass%' will use an index but '%klass%' will not.
You need to JOIN the table `oc_artists_tags` too, and you can achieve this in two ways: **Option 1** ``` SELECT * FROM `oc_artists` INNER JOIN `oc_artists_tags` on `oc_artists`.`artist_id`=`oc_artists_tags`.`artist_id` AND `oc_artists_tags`.`artist_tag` LIKE '%klass%' ``` **Option 2** ``` SELECT * FROM `oc_artists`,`oc_artists_tags` WHERE `oc_artists`.`artist_id`=`oc_artists_tags`.`artist_id` AND `oc_artists_tags`.`artist_tag` LIKE '%klass%' ```
Unknown column in 'where-clause'
[ "", "mysql", "sql", "where-clause", "" ]
Let's say that I have a table containing all of my Customer records. Each record has a unique ID, a name and possibly a parent record ID. (In case it makes a difference a parent can have multiple children but children can only have one parent. There's also no grandfather records, so a parent may not have a parent and children may not have children) **Customers** ``` +-----+------------+----------+ | ID | Name | ParentID | +-----+------------+----------+ | 100 | Customer A | | | 101 | Customer B | | | 102 | Customer C | 100 | | 103 | Customer D | 100 | | 104 | Customer E | 101 | +-----+------------+----------+ ``` As you can see from this example I have 5 unique Customer records, with C & D being children of A and E a child of B. Now I have a table containing all of the Tickets these Customers raise. Each ticket has a unique ID, a description and a parent customer ID. **Tickets** ``` +-----+-------------+----------+ | ID | Description | ParentID | +-----+-------------+----------+ | 500 | Ticket A | 100 | | 501 | Ticket B | 100 | | 502 | Ticket C | 102 | | 503 | Ticket D | 102 | | 504 | Ticket E | 103 | | 505 | Ticket F | 101 | | 506 | Ticket G | 104 | | 507 | Ticket H | 101 | +-----+-------------+----------+ ``` **Goal** * I will have been given a Customer ID and need to select all Tickets belonging to this record. * If the record has children I also need the tickets belonging to these records. * If the record is a child I'm not interested in its parent. **Example 1** I'm given the ID 100. This is Customer A and has two children, C & D. As the results of my select I would expect the following: * Ticket A - Directly belongs to ID 100 * Ticket B - Directly belongs to ID 100 * Ticket C - Belongs to ID 102, a child of 100 * Ticket D - Belongs to ID 102, a child of 100 * Ticket E - Belongs to ID 103, a child of 100 **Example 2** I'm given ID 104. This is Customer E, a child record. 
As the results of my select I would expect the following: * Ticket G - Directly belongs to ID 104 I would not expect anything further as the record is a child and therefore has no children and I'm not looking upwards at parent records. **Where I'm stuck...** Getting Tickets belonging to one ID is easy: ``` SELECT tickets.Description FROM Tickets AS tickets LEFT JOIN Customers AS customers ON tickets.ParentID = customers.ID WHERE customers.ID = 100 ``` I'm stuck getting the Tickets belonging to children. It seems like I'd first have to request the Customer belonging to the given ID, then fetch all child Customers where the ParentID matched the given ID, then finally request Tickets belonging to any of these records. Unfortunately I haven't got the faintest idea where to start and require some help. In case it's relevant I'm using SQL Server 2008 R2.
You probably need to use a [recursive common table expression](https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx) to iterate through the ancestry and get all related records: ``` DECLARE @CustomerID INT = 100; -- SAMPLE DATA FOR CUSTOMERS DECLARE @Customers TABLE (ID INT, Name VARCHAR(255), ParentID INT); INSERT @Customers (ID, Name, ParentID) VALUES (100, 'Customer A', NULL), (101, 'Customer B', NULL), (102, 'Customer C', 100), (103, 'Customer D', 100), (104, 'Customer E', 101); -- SAMPLE DATA FOR TICKETS DECLARE @Tickets TABLE (ID INT, Name VARCHAR(255), ParentID INT); INSERT @Tickets (ID, Name, ParentID) VALUES (500, 'Ticket A', 100), (501, 'Ticket B', 100), (502, 'Ticket C', 102), (503, 'Ticket D', 102), (504, 'Ticket E', 103), (505, 'Ticket F', 101), (506, 'Ticket G', 104), (507, 'Ticket H', 101); -- USE RECURSIVE CTE TO LOOP THROUGH HIERARCHY AND GET ALL ANCESTORS WITH RecursiveCustomers AS ( SELECT c.ID, c.Name, c.ParentID FROM @Customers AS c UNION ALL SELECT rc.ID, rc.Name, c.ParentID FROM RecursiveCustomers AS rc INNER JOIN @Customers AS c ON rc.ParentID = c.ID ) SELECT t.ID, t.Name, t.ParentID FROM @Tickets AS t INNER JOIN RecursiveCustomers AS rc ON rc.ID = t.ParentID WHERE rc.ParentID = @CustomerID OR (rc.ID = @CustomerID AND rc.ParentID IS NULL); ``` **RESULT FOR 100** ``` +-----+-------------+----------+ | ID | Description | ParentID | +-----+-------------+----------+ | 500 | Ticket A | 100 | | 501 | Ticket B | 100 | | 502 | Ticket C | 102 | | 503 | Ticket D | 102 | | 504 | Ticket E | 103 | +-----+-------------+----------+ ``` **RESULT FOR 104** ``` +-----+-------------+----------+ | ID | Description | ParentID | +-----+-------------+----------+ | 506 | Ticket G | 104 | +-----+-------------+----------+ ```
``` Select tickets.Description FROM Tickets AS tickets INNER JOIN Customers AS customers ON customers.ID = tickets.ParentID WHERE customers.ID = 100 OR customers.ParentID = 100 ``` This joins each ticket to its owning customer and keeps it when that customer is the given ID or a child of it.
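The recursive-CTE idea from the first answer runs in any engine with `WITH RECURSIVE` (SQLite, PostgreSQL, MySQL 8+). A sketch in SQLite over the question's sample data; this variant seeds the CTE with the requested customer and walks downward to its children, then pulls the tickets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Customers (ID INTEGER, Name TEXT, ParentID INTEGER)")
cur.executemany("INSERT INTO Customers VALUES (?,?,?)", [
    (100, "Customer A", None), (101, "Customer B", None),
    (102, "Customer C", 100), (103, "Customer D", 100),
    (104, "Customer E", 101)])
cur.execute("CREATE TABLE Tickets (ID INTEGER, Description TEXT, ParentID INTEGER)")
cur.executemany("INSERT INTO Tickets VALUES (?,?,?)", [
    (500, "Ticket A", 100), (501, "Ticket B", 100), (502, "Ticket C", 102),
    (503, "Ticket D", 102), (504, "Ticket E", 103), (505, "Ticket F", 101),
    (506, "Ticket G", 104), (507, "Ticket H", 101)])

def tickets_for(customer_id):
    # Seed the CTE with the given customer, then add every descendant.
    return cur.execute("""
        WITH RECURSIVE subtree(ID) AS (
            SELECT :cid
            UNION ALL
            SELECT c.ID FROM Customers c JOIN subtree s ON c.ParentID = s.ID
        )
        SELECT t.ID, t.Description FROM Tickets t
        WHERE t.ParentID IN (SELECT ID FROM subtree)
        ORDER BY t.ID""", {"cid": customer_id}).fetchall()
```

Calling `tickets_for(100)` returns Tickets A through E; `tickets_for(104)` returns only Ticket G, matching both examples in the question.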
Select Ticket records from one table that are associated with a Customer or the Customers children in another table
[ "", "sql", "select", "sql-server-2008-r2", "" ]
I have the following table `tableA` in PostgreSQL: ``` +-------------+-------------------------+ | OperationId | Error | +-------------+-------------------------+ | 1 | MajorCategoryX:DetailsP | | 2 | MajorCategoryX:DetailsQ | | 3 | MajorCategoryY:DetailsR | +-------------+-------------------------+ ``` How do I group the MajorErrorCategory such that I get the following? ``` +----------------+------------+ | Category | ErrorCount | +----------------+------------+ | MajorCategoryX | 2 | | MajorCategoryY | 1 | +----------------+------------+ ``` `Category` is the first part of `Error` after splitting on ':'.
Assuming the length before the `:` can vary, you could use `substring` in combination with `strpos` to achieve your results: ``` SELECT SUBSTRING(error, 0, STRPOS(error, ':')) AS Category, COUNT(*) AS ErrorCount FROM t GROUP BY SUBSTRING(error, 0, STRPOS(error, ':')) ``` [Sample SQL Fiddle](http://www.sqlfiddle.com/#!15/d7f41/7) If you don't want to repeat the function calls you could of course wrap that part in a subquery or common table expression.
`split_part()` seems simplest ([as @ub3rst4r mentioned](https://stackoverflow.com/a/36671167/939860)): * [Cut string after first occurrence of a character](https://stackoverflow.com/questions/29522829/cut-string-after-first-occurrence-of-a-character/29522894#29522894#) But you don't need a subquery: ``` SELECT split_part(error, ':', 1) AS category, count(*) AS errorcount FROM tbl GROUP BY 1; ``` And `count(*)` is slightly faster than `count(<expression>)`. `GROUP BY 1` is a positional reference to the first `SELECT` item and a convenient shorthand for longer expressions. Example: * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
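Outside PostgreSQL the same idea maps onto whatever string functions the engine offers. A sketch in SQLite, where `split_part` does not exist, using `substr` plus `instr` (the equivalents of the `substring`/`strpos` pair above) over the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tableA (OperationId INTEGER, Error TEXT)")
cur.executemany("INSERT INTO tableA VALUES (?,?)", [
    (1, "MajorCategoryX:DetailsP"),
    (2, "MajorCategoryX:DetailsQ"),
    (3, "MajorCategoryY:DetailsR")])

# substr(x, 1, instr(x, ':') - 1) keeps everything before the first ':'
rows = cur.execute("""
    SELECT substr(Error, 1, instr(Error, ':') - 1) AS Category,
           COUNT(*) AS ErrorCount
    FROM tableA
    GROUP BY Category
    ORDER BY Category""").fetchall()
```

This yields MajorCategoryX with 2 and MajorCategoryY with 1, as in the question's expected output.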
How to group on part of a column in PostgreSQL?
[ "", "sql", "postgresql", "aggregation", "string-matching", "" ]
I have two tables `debitTable` and `creditTable`. `debitTable` has the following records: ``` +----+-------+ | id | debit | +----+-------+ | a | 10000 | | b | 35000 | +----+-------+ ``` and `creditTable` has these records: ``` +----+--------+ | id | credit | +----+--------+ | b | 5000 | +----+--------+ ``` What SQL Server query produces these results? ``` +----+-------+--------+--------------+ | id | debit | credit | debit-credit | +----+-------+--------+--------------+ | a | 10000 | 0 | 10000 | | b | 35000 | 5000 | 30000 | +----+-------+--------+--------------+ ```
You want to use a `join`. However, it is important to aggregate before joining: ``` select coalesce(d.id, c.id) as id, coalesce(debit, 0) as debit, coalesce(credit, 0) as credit, (coalesce(debit, 0) - coalesce(credit, 0)) as DebitMinusCredit from (select id, sum(debit) as debit from debitTable group by id ) d full outer join (select id, sum(credit) as credit from creditTable group by id ) c on d.id = c.id; ``` This uses `full outer join` to ensure that all records from both tables are included, even if an id is not in one of the tables. The aggregation before joining is to avoid Cartesian products when there are multiple rows for a single id in both tables.
You can try "Left Join" ``` Select * from debit d left join credit c on d.id = c.id ```
SQL query for insert into with update on duplicate key
[ "", "sql", "sql-server", "" ]
Given the table structure: ``` Comment ------------- ID (PK) ParentCommentID (FK) ``` I want to run `DELETE FROM Comments` to remove all records. However, the relationship with the parent comment record creates a FK conflict if the parent comment is deleted before the child comments. To solve this, deleting in reverse ID order would work. How do I delete all records in a table in reverse ID order?
The following will delete all rows that are not themselves parents. If the table is big and there's no index on ParentCommentID, it might take a while to run... ``` DELETE co from Comment co where not exists (-- Correlated subquery select 1 from Comment where ParentCommentID = co.ID) ``` If the table is truly large, a big delete can do bad things to your system, such as locking the table and bloating the transaction log file. The following will limit just how many rows will be deleted: ``` DELETE top (1000) co -- (1000 is not very many) from Comment co where not exists (-- Correlated subquery select 1 from Comment where ParentCommentID = co.ID) ``` As deleting some but not all might not be so useful, here's a looping structure that will keep going until everything's gone: ``` DECLARE @Done int = 1 --BEGIN TRANSACTION WHILE @Done > 0 BEGIN -- Loop until nothing left to delete DELETE top (1000) co from Comment co where not exists (-- Correlated subquery select 1 from Comment where ParentCommentID = co.ID) SET @Done = @@Rowcount END --ROLLBACK ``` This last, of course, is dangerous (note the begin/end transaction used for testing!) You'll want `WHERE` clauses to limit what gets deleted, and something to ensure you don't somehow hit an infinite loop--all details that depend on your data and circumstances.
With separate Parent and Child tables, ON DELETE CASCADE would ensure that deleting the parent also deletes the children. Does it work when both sets of data are within the same table? Maybe, and I'd love to find out! [How do I use cascade delete with SQL server.](https://stackoverflow.com/questions/6260688/how-do-i-use-cascade-delete-with-sql-server)
SQL delete records in order
[ "", "sql", "sql-server", "sql-delete", "" ]
I have 3 tables: *product\_tags* ``` product_id | tag ___________________ 50 | new 50 | blac 66 | new 50 | green 111 | new 111 | white ``` *products\_to\_categories* ``` product_id | category_id ____________________ 50 | 69 50 | 68 111 | 40 111 | 70 ``` *categories* ``` category_id | parent_id (parent category id) ____________________ 68 | 0 69 | 68 70 | 68 ``` I need **all tags** sorted by popularity (product count) within category 68 and all its subcategories (all categories with parent id 68). My starting query gives the wrong result: ``` SELECT tag FROM product_tags opd LEFT JOIN products_to_categories optc ON optc.product_id = opd.product_id LEFT JOIN categories optx ON optx.parent_id = '68' WHERE opd.tag <> '' AND optx.parent_id = '68' ORDER BY optc.product_id DESC ``` The result I need: ``` tags _____ new (2) white (1) ```
Do this step by step. Use `EXISTS` or `IN` when checking whether a record exists. You want product\_ids that are in the set of category\_ids 68 and its children: ``` select tag, count(*) from product_tags where product_id in ( select product_id from products_to_categories where category_id = 68 or category_id in ( select category_id from categories where parent_id = 68 ) ) group by tag order by count(*) desc; ```
First, your `Join` on categories was incorrect. It should be: ``` LEFT JOIN categories optx ON optx.parent_id = optc.category_id ``` Then to get the correct `count()` you should do a `GROUP BY` tag: ``` SELECT CONCAT(opd.tag, ' (', count(*), ')' ) FROM product_tags opd LEFT JOIN products_to_categories optc ON optc.product_id = opd.product_id LEFT JOIN categories optx ON optx.parent_id = optc.category_id WHERE opd.tag <> '' AND optx.parent_id = '68' GROUP BY opd.tag ```
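The nested-`IN` query from the first answer can be checked against the sample data. A sketch in SQLite; note it also surfaces 'blac' and 'green' (product 50 sits in the 68 subtree), which the query correctly counts even though the question's expected output lists only two tags:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product_tags (product_id INTEGER, tag TEXT)")
cur.executemany("INSERT INTO product_tags VALUES (?,?)", [
    (50, "new"), (50, "blac"), (66, "new"),
    (50, "green"), (111, "new"), (111, "white")])
cur.execute("CREATE TABLE products_to_categories (product_id INTEGER, category_id INTEGER)")
cur.executemany("INSERT INTO products_to_categories VALUES (?,?)", [
    (50, 69), (50, 68), (111, 40), (111, 70)])
cur.execute("CREATE TABLE categories (category_id INTEGER, parent_id INTEGER)")
cur.executemany("INSERT INTO categories VALUES (?,?)", [(68, 0), (69, 68), (70, 68)])

# Products in category 68 or any of its direct subcategories,
# then tag frequency over just those products.
rows = cur.execute("""
    SELECT tag, COUNT(*) AS cnt
    FROM product_tags
    WHERE product_id IN (
        SELECT product_id FROM products_to_categories
        WHERE category_id = 68
           OR category_id IN (SELECT category_id FROM categories WHERE parent_id = 68)
    )
    GROUP BY tag
    ORDER BY cnt DESC, tag""").fetchall()
```

The most popular tag comes out first: 'new' with a count of 2, followed by the single-product tags.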
Mysql multi join with count
[ "", "mysql", "sql", "join", "count", "" ]
I need advice on how to get the fastest results when querying a big table. I am using SQL Server 2012, and my situation is this: I have 5 tables containing transaction records, each with 35 million records. All tables have 14 columns; the columns I need to search are GroupName, CustomerName, and NoRegistration. I have a view that unions all 5 of these tables. The GroupName, CustomerName, and NoRegistration values are not unique in each table. My application has a function to search these columns. The queries are like this: Search by Group Name: ``` SELECT DISTINCT(GroupName) FROM TransactionRecords_view WHERE GroupName LIKE ''+@GroupName+'%' ``` Search by Name: ``` SELECT DISTINCT(CustomerName) AS 'CustomerName' FROM TransactionRecords_view WHERE CustomerName LIKE ''+@Name+'%' ``` Search by NoRegistration: ``` SELECT DISTINCT(NoRegistration) FROM TransactionRecords_view WHERE LOWER(NoRegistration) LIKE LOWER(@NoRegistration)+'%' ``` My question is how I can achieve the fastest execution time for searching. As things stand, every search takes 3 to 5 minutes. My idea is to make a new table containing the distinct GroupName, CustomerName, and NoRegistration values from all 5 tables. Would that make execution faster, or is there another approach? Thank you --- EDIT: This is the query for the view "TransactionRecords\_view": ``` CREATE VIEW TransactionRecords_view AS SELECT * FROM TransactionRecords_1507 UNION ALL SELECT * FROM TransactionRecords_1506 UNION ALL SELECT * FROM TransactionRecords_1505 UNION ALL SELECT * FROM TransactionRecords_1504 UNION ALL SELECT * FROM TransactionRecords_1503 ```
Ok, so you don't need to make those new tables. If you create non-clustered indexes on these fields it will (in effect) do what you're after. The index will only store data for the columns that you indicate, not the whole table. Be aware, however, that indexes are excellent for SELECT statements but will negatively affect any write statements (INSERT, UPDATE etc). Next, run the queries with the actual execution plan switched on. This will show you how the optimizer has decided to run each query behind the scenes. Are there any particular issues; do any of the steps take up a lot of the overall operator cost? There are plenty of great instructional videos about execution plans on YouTube; check them out if you haven't looked at execution plans before.
You should show the SQL of TransactionRecords\_view. Do you have indexes? What is the collation of the NoRegistration column? Paste the actual execution plan for each query.
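The index advice is easy to illustrate. A sketch in SQLite (hypothetical mini-table, not the real schema): an ordinary index on the searched column lets a prefix search run as an index range scan instead of a full scan. The `LIKE 'Al%'` is written here as the half-open range the optimizer uses internally for a sargable prefix match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TransactionRecords (id INTEGER, CustomerName TEXT)")
cur.executemany("INSERT INTO TransactionRecords VALUES (?,?)",
                list(enumerate(["Alice", "Alan", "Bob", "Carol"])))
cur.execute("CREATE INDEX idx_customer ON TransactionRecords(CustomerName)")

# Prefix search as a half-open range: everything >= 'Al' and < 'Am'.
query = """SELECT DISTINCT CustomerName FROM TransactionRecords
           WHERE CustomerName >= 'Al' AND CustomerName < 'Am'
           ORDER BY CustomerName"""
rows = cur.execute(query).fetchall()

# The plan confirms the index is used rather than a table scan.
plan = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

Only the two 'Al…' names come back, and the query plan mentions `idx_customer` instead of a full-table scan.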
Fastest execution time for querying on Big size table
[ "", "sql", "sql-server", "sqlperformance", "" ]
I have a list of numbers in MySQL like this: ``` value _____ 4 6 7 88 21 29 30 31 ``` How can I get all sequential blocks? The result should be: ``` 6 7 29 30 31 ```
This is one way to do it using a self-join and `union`. ``` select t1.val from t t1 join t t2 on t1.val = t2.val-1 union select t2.val from t t1 join t t2 on t1.val = t2.val-1 order by 1 ``` Edit: I realized this could be done with a single query instead of using `union`. ``` select distinct t1.val from t t1 join t t2 on t1.val = t2.val-1 or t1.val = t2.val+1 order by 1 ```
You can use `exists`: ``` select t.* from t where exists (select 1 from t t2 where t2.col1 = t.col1 + 1 ) or exists (select 1 from t t2 where t2.col1 = t.col1 - 1 ) ; ``` You can combine the `exists` into a single subquery: ``` select t.* from t where exists (select 1 from t t2 where t2.col1 in (t.col1 - 1, t.col1 + 1) ); ``` The first version should be able to make use of an index on the column. It might be more difficult for an optimize to use an index for the second. Also note that these versions allow you to include other columns from the rows as well.
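Both answers can be verified quickly. A sketch in SQLite using the combined `EXISTS` form, with the numbers from the question loaded as a single column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (col1 INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)",
                [(4,), (6,), (7,), (88,), (21,), (29,), (30,), (31,)])

# Keep a number only when its predecessor or successor is also present.
rows = cur.execute("""
    SELECT col1 FROM t
    WHERE EXISTS (SELECT 1 FROM t t2
                  WHERE t2.col1 IN (t.col1 - 1, t.col1 + 1))
    ORDER BY col1""").fetchall()
```

4, 88 and 21 have no neighbour and drop out, leaving the two sequential blocks 6–7 and 29–31.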
Get all sequential block from a list
[ "", "mysql", "sql", "" ]
I've created a SQL Fiddle (<http://sqlfiddle.com/#!9/e0536/1>) with similar data to what I've got at work (there are actually more columns in the table). The table contains employment details. An employee can have more than one record in the table (a couple of fixed-term contracts) as well as different employee\_IDs (a change from 'tixxxxx' into 'pixxxxx'). The PESEL number is the unique personal identification number. The ID for a past contract can be higher than for the actual one, as the table is populated with data every day as an extract based on HR data. What I need to get is: * at least the up-to-date employee\_ID (the line where expirationdate is max) * a whole line with all columns for the up-to-date employee\_ID * ideally a whole line for the up-to-date employee\_ID including the very first startdate (important if an employee had more than one contract) It's been some time since I used SQL every day, so I'd appreciate any help here. I was thinking of some nested queries with a group by clause, but I never understood correlated subqueries well. Expected result: ``` ID Employee_ID PESEL StartDate ExpirationDate ----------- ----------- ----------- ---------- -------------- 1 pi39764 1111 2014-01-01 2016-06-01 2 pi12986 1234 2015-12-01 2099-12-31 5 pi12345 4321 2015-02-01 2099-12-31 ``` where the startdate is the very first startdate.
You are probably looking for a query like this: ``` SELECT e.*, CASE WHEN actual = StartDate THEN 1 ELSE 0 END AS actual_e, first_startdate FROM Employees AS e INNER JOIN(SELECT PESEL, MIN(startdate) AS first_startdate , MAX(startdate) AS actual FROM Employees AS e GROUP BY PESEL) AS g ON g.PESEL = e.PESEL ``` EDIT: to get the actual Employee\_ID on every row, use a subquery: ``` , CASE WHEN actual = StartDate THEN null ELSE (SELECT max(a.Employee_ID) FROM Employees AS a WHERE a.PESEL = e.PESEL and a.StartDate = actual) END AS actual_Employee_ID ``` EDIT: in the Fiddle you wrote a MySQL query; for SQL Server (per the tag) it is much simpler: ``` SELECT e.* , LEAD(Employee_ID) OVER (PARTITION BY PESEL ORDER BY startdate) actual_Employee_ID , MIN(startdate) OVER (PARTITION BY PESEL) first_startdate FROM Employees AS e ``` EDIT (result with last ti): for all data: ``` SELECT e.* , first_startdate , last_t_startdate , last_startdate , (SELECT max(employee_ID) FROM dbo.Employees t WHERE startdate = last_t_startdate AND PESEL = e.PESEL) AS last_t_id , (SELECT max(employee_ID) FROM dbo.Employees t WHERE startdate = last_startdate AND PESEL = e.PESEL) AS last_id FROM dbo.Employees AS e OUTER APPLY ( SELECT Min(startdate) AS first_startdate , Max(Case When employee_ID LIKE 'ti%' Then startdate End) last_t_startdate , Max(startdate) AS last_startdate FROM dbo.Employees WHERE PESEL = e.PESEL --GROUP BY PESEL ) AS g ``` output: ``` ID Employee_ID PESEL StartDate ExpirationDate first_startdate last_t_startdate last_startdate last_t_id last_id 1 pi39764 1111 2015-01-01 2016-06-01 2014-01-01 2014-01-01 2015-01-01 ti00001 pi39764 2 pi12986 1234 2015-12-01 2099-12-31 2015-12-01 NULL 2015-12-01 NULL pi12986 3 ti00001 1111 2014-01-01 2014-12-31 2014-01-01 2014-01-01 2015-01-01 ti00001 pi39764 4 pi12345 4321 2015-02-01 2015-06-30 2015-02-01 NULL 2016-01-01 NULL pi12345 5 pi12345 4321 2016-01-01 2099-12-31 2015-02-01 NULL 2016-01-01 NULL pi12345 6 pi12345 4321 2015-07-01 2015-12-31 2015-02-01 NULL 2016-01-01
NULL pi12345 ``` for grouped data: ``` SELECT pesel , first_startdate , last_t_startdate , last_startdate , (SELECT max(employee_ID) FROM dbo.Employees t WHERE startdate = last_t_startdate AND PESEL = g.PESEL) last_t_id , (SELECT max(employee_ID) FROM dbo.Employees t WHERE startdate = last_startdate AND PESEL = g.PESEL) last_id FROM ( SELECT PESEL , Min(startdate) AS first_startdate , Max(Case When employee_ID LIKE 'ti%' Then startdate End) AS last_t_startdate , Max(startdate) AS last_startdate FROM dbo.Employees GROUP BY PESEL) AS g ``` output: ``` pesel first_startdate last_t_startdate last_startdate last_t_id last_id 1111 2014-01-01 2014-01-01 2015-01-01 ti00001 pi39764 1234 2015-12-01 NULL 2015-12-01 NULL pi12986 4321 2015-02-01 NULL 2016-01-01 NULL pi12345 ```
``` SELECT e1.employee_id, e.pesel, e.maxdate FROM ( SELECT pesel, MAX(expirationdate) as maxdate FROM employees GROUP BY pesel ) e INNER JOIN employees e1 ON e.pesel = e1.pesel AND e.maxdate = e1.expirationdate ``` Output: ``` | Employee_ID | pesel | maxdate | |-------------|-------|----------------------------| | pi39764 | 1111 | June, 01 2016 00:00:00 | | pi12986 | 1234 | December, 31 2099 00:00:00 | | pi12345 | 4321 | December, 31 2099 00:00:00 | ``` To find the first date and the last date for each `PESEL`, use: ``` SELECT e1.employee_id, e.pesel, e.startdate, e.enddate FROM ( SELECT pesel, MIN(startdate) as startdate, MAX(expirationdate) as enddate FROM employees GROUP BY pesel ) e INNER JOIN employees e1 ON e.pesel = e1.pesel AND e.enddate = e1.expirationdate ```
SQL WHERE MAX(date) within GROUP BY
[ "", "sql", "sql-server-2008", "" ]
I have a query that takes roughly four minutes to run on a high powered SSD server with no other notable processes running. I'd like to make it faster if possible. The database stores a match history for a popular video game called Dota 2. In this game, ten players (five on each team) each select a "hero" and battle it out. The intention of my query is to create a list of past matches along with how much of a "XP dependence" each team had, based on the heroes used. With 200,000 matches (and a 2,000,000 row matches-to-heroes relationship table) the query takes about four minutes. With 1,000,000 matches, it takes roughly 15. I have full control of the server, so any configuration suggestions are also appreciated. Thanks for any help guys. Here are the details... ``` CREATE TABLE matches ( * match_id BIGINT UNSIGNED NOT NULL, start_time INT UNSIGNED NOT NULL, skill_level TINYINT NOT NULL DEFAULT -1, * winning_team TINYINT UNSIGNED NOT NULL, PRIMARY KEY (match_id), KEY start_time (start_time), KEY skill_level (skill_level), KEY winning_team (winning_team)); CREATE TABLE heroes ( * hero_id SMALLINT UNSIGNED NOT NULL, name CHAR(40) NOT NULL DEFAULT '', faction TINYINT NOT NULL DEFAULT -1, primary_attribute TINYINT NOT NULL DEFAULT -1, group_index TINYINT NOT NULL DEFAULT -1, match_count BIGINT UNSIGNED NOT NULL DEFAULT 0, win_count BIGINT UNSIGNED NOT NULL DEFAULT 0, * xp_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0, * team_xp_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0, xp_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0, team_xp_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0, gold_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0, team_gold_from_wins BIGINT UNSIGNED NOT NULL DEFAULT 0, gold_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0, team_gold_from_losses BIGINT UNSIGNED NOT NULL DEFAULT 0, included TINYINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (hero_id)); CREATE TABLE matches_heroes ( * match_id BIGINT UNSIGNED NOT NULL, player_id INT UNSIGNED NOT NULL, * hero_id 
SMALLINT UNSIGNED NOT NULL, xp_per_min SMALLINT UNSIGNED NOT NULL, gold_per_min SMALLINT UNSIGNED NOT NULL, position TINYINT UNSIGNED NOT NULL, PRIMARY KEY (match_id, hero_id), KEY match_id (match_id), KEY player_id (player_id), KEY hero_id (hero_id), KEY xp_per_min (xp_per_min), KEY gold_per_min (gold_per_min), KEY position (position)); ``` **Query** ``` SELECT matches.match_id, SUM(CASE WHEN position < 5 THEN xp_from_wins / team_xp_from_wins ELSE 0 END) AS radiant_xp_dependence, SUM(CASE WHEN position >= 5 THEN xp_from_wins / team_xp_from_wins ELSE 0 END) AS dire_xp_dependence, winning_team FROM matches INNER JOIN matches_heroes ON matches.match_id = matches_heroes.match_id INNER JOIN heroes ON matches_heroes.hero_id = heroes.hero_id GROUP BY matches.match_id ``` **Sample Results** ``` match_id | radiant_xp_dependence | dire_xp_dependence | winning_team 2298874871 | 1.0164 | 0.9689 | 1 2298884079 | 0.9932 | 1.0390 | 0 2298885606 | 0.9877 | 1.0015 | 1 ``` **EXPLAIN** ``` id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra 1 | SIMPLE | heroes | ALL | PRIMARY | NULL | NULL | NULL | 111 | Using temporary; Using filesort 1 | SIMPLE | matches_heroes | ref | PRIMARY,match_id,hero_id | hero_id | 2 | dota_2.heroes.hero_id | 3213 | 1 | SIMPLE | matches | eq_ref | PRIMARY | PRIMARY | 8 | dota_2.matches_heroes.match_id | 1 | ``` **Machine Specs** * Intel Xeon E5 * E5-1630v3 4/8t * 3.7 / 3.8 GHz * 64 GB of RAM * DDR4 ECC 2133 MHz * 2 x 480GB of SSD SOFT **Database** * MariaDB 10.0 * InnoDB
In all likelihood, the main performance driver is the `GROUP BY`. Sometimes, in MySQL, it can be faster to use correlated subqueries. So, try writing the query like this: ``` SELECT m.match_id, (SELECT SUM(h.xp_from_wins / h.team_xp_from_wins) FROM matches_heroes mh INNER JOIN heroes h ON mh.hero_id = h.hero_id WHERE m.match_id = mh.match_id AND mh.position < 5 ) AS radiant_xp_dependence, (SELECT SUM(h.xp_from_wins / h.team_xp_from_wins) FROM matches_heroes mh INNER JOIN heroes h ON mh.hero_id = h.hero_id WHERE m.match_id = mh.match_id AND mh.position >= 5 ) AS dire_xp_dependence, m.winning_team FROM matches m; ``` Then, you want indexes on: * `matches_heroes(match_id, position)` * `heroes(hero_id, xp_from_wins, team_xp_from_wins)` For completeness, you might want this index as well: * `matches(match_id, winning_team)` This would be more important if you added `order by match_id` to the query.
As has already been mentioned in a comment, there is little you can do, because you select all data from the tables. The query looks fine. The one idea that comes to mind is covering indexes. With indexes containing all data needed for the query, the tables themselves don't have to be accessed anymore. ``` CREATE INDEX matches_quick ON matches(match_id, winning_team); CREATE INDEX heroes_quick ON heroes(hero_id, xp_from_wins, team_xp_from_wins); CREATE INDEX matches_heroes_quick ON matches_heroes (match_id, hero_id, position); ``` There is no guarantee this will speed up your query, as you are still reading all the data, so running through the indexes may be just as much work as reading the tables. But there is a chance that the joins will be faster and there will probably be fewer physical reads. Just give it a try.
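The shape of the correlated-subquery rewrite can be sketched on a toy dataset (SQLite, two matches with one hero per side instead of five; all numbers are made up purely to exercise the query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE matches (match_id INTEGER PRIMARY KEY, winning_team INTEGER)")
cur.executemany("INSERT INTO matches VALUES (?,?)", [(1, 1), (2, 0)])
cur.execute("""CREATE TABLE heroes (
    hero_id INTEGER PRIMARY KEY, xp_from_wins REAL, team_xp_from_wins REAL)""")
cur.executemany("INSERT INTO heroes VALUES (?,?,?)", [
    (10, 100.0, 400.0),   # contributes 0.25 of its team's XP
    (11, 200.0, 400.0),   # contributes 0.50
    (12, 300.0, 600.0)])  # contributes 0.50
cur.execute("""CREATE TABLE matches_heroes (
    match_id INTEGER, hero_id INTEGER, position INTEGER)""")
cur.executemany("INSERT INTO matches_heroes VALUES (?,?,?)", [
    (1, 10, 0), (1, 11, 5),    # match 1: hero 10 radiant, hero 11 dire
    (2, 12, 0), (2, 10, 5)])   # match 2: hero 12 radiant, hero 10 dire

# One correlated subquery per side replaces the GROUP BY + CASE pair.
rows = cur.execute("""
    SELECT m.match_id,
           (SELECT SUM(h.xp_from_wins / h.team_xp_from_wins)
            FROM matches_heroes mh JOIN heroes h ON mh.hero_id = h.hero_id
            WHERE mh.match_id = m.match_id AND mh.position < 5) AS radiant_xp,
           (SELECT SUM(h.xp_from_wins / h.team_xp_from_wins)
            FROM matches_heroes mh JOIN heroes h ON mh.hero_id = h.hero_id
            WHERE mh.match_id = m.match_id AND mh.position >= 5) AS dire_xp,
           m.winning_team
    FROM matches m ORDER BY m.match_id""").fetchall()
```

Each match row carries the summed per-side XP dependence, mirroring the original query's output shape.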
Please help me optimize this MySQL SELECT statement
[ "", "mysql", "sql", "sql-tuning", "" ]
I am trying to get the next value from this field: `SU - 1 /2016`. The query I used: ``` SELECT RIGHT('000' + CAST(ISNULL(MAX(SUBSTRING(InvoiceNO,4, 1)), 0) + 1 AS VARCHAR(4)), 4) from [dbo].[Invoice] ``` The query output is `0001`; it should be `0002`.
You take the substring of `SU - 1 /2016` from the 4th position, which gives you "-". To get the 1 you need to start from the 6th position, which gives the expected output. ``` SELECT RIGHT('000' + CAST(ISNULL(MAX(SUBSTRING(InvoiceNO,6, 1)), 0) + 1 AS VARCHAR(4)), 4) from [dbo].[Invoice] ```
**Use this code:** ``` SELECT RIGHT('000' + CAST((ISNULL(MAX(SUBSTRING(exampleColumn,6, 1)), 0) + 1) AS VARCHAR(4)), 4) from [dbo].tblExample ``` You took the character from the 4th position, but your data "1" is at the 6th position.
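The position fix is easy to check. A sketch in SQLite, where `substr` plays the role of `SUBSTRING` and `printf('%04d', …)` stands in for the `RIGHT('000' + …, 4)` zero-padding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Invoice (InvoiceNO TEXT)")
cur.execute("INSERT INTO Invoice VALUES ('SU - 1 /2016')")

# Position 4 is '-', position 6 is the actual counter digit.
wrong, right = cur.execute(
    "SELECT substr(InvoiceNO, 4, 1), substr(InvoiceNO, 6, 1) FROM Invoice").fetchone()

# Next value, zero-padded to four digits.
nxt = cur.execute("""
    SELECT printf('%04d', MAX(CAST(substr(InvoiceNO, 6, 1) AS INTEGER)) + 1)
    FROM Invoice""").fetchone()[0]
```

Reading from position 4 yields '-', reading from position 6 yields '1', and the padded next value comes out as '0002' as the question expects.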
Getting next value from sequence sql
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have this schema. ``` create table "user" (id serial primary key, name text unique); create table document (owner integer references "user", ...); ``` I want to select all the documents owned by the user named "vortico". Can I do it in one query? The following doesn't seem to work. ``` select * from document where owner.name = 'vortico'; ```
``` SELECT * FROM document d INNER JOIN "user" u ON d.owner = u.id WHERE u.name = 'vortico' ``` (The join must be on `u.id`: `owner` is an integer referencing the user's id, so matching it against the text `name` column would never find a row.)
You can use a subquery. For your example it can be faster: ``` SELECT * FROM document WHERE owner = (SELECT id FROM "user" WHERE name = 'vortico'); ```
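Both forms are easy to verify. A sketch in SQLite with the schema from the question (a hypothetical `title` column stands in for the elided document columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute('CREATE TABLE "user" (id INTEGER PRIMARY KEY, name TEXT UNIQUE)')
cur.executemany('INSERT INTO "user" (id, name) VALUES (?,?)',
                [(1, "vortico"), (2, "other")])
cur.execute('CREATE TABLE document (owner INTEGER REFERENCES "user", title TEXT)')
cur.executemany("INSERT INTO document VALUES (?,?)",
                [(1, "doc A"), (2, "doc B"), (1, "doc C")])

# Join variant: filter on the user's name, match rows via the id.
joined = cur.execute("""
    SELECT d.title FROM document d
    JOIN "user" u ON d.owner = u.id
    WHERE u.name = 'vortico' ORDER BY d.title""").fetchall()

# Subquery variant: look the id up once, then filter documents.
sub = cur.execute("""
    SELECT title FROM document
    WHERE owner = (SELECT id FROM "user" WHERE name = 'vortico')
    ORDER BY title""").fetchall()
```

Both queries return exactly the two documents owned by vortico.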
Use WHERE clause on a column from another table
[ "", "sql", "postgresql", "" ]
This is the given table: [![enter image description here](https://i.stack.imgur.com/lfgDT.png)](https://i.stack.imgur.com/lfgDT.png) I need to get this data without creating temporary table based on the above table: [![enter image description here](https://i.stack.imgur.com/PcNqC.png)](https://i.stack.imgur.com/PcNqC.png) We can't use temporary table. I need to display the data with sql query only.
Try this ``` SELECT company, 'No' AS val, impression AS data FROM table1 UNION ALL SELECT company, 'Yes', clicks - impression FROM table1 ORDER BY company, val ```
You can use `UNION ALL` to unpivot your table: ``` SELECT company, 'No' AS val, impression AS data FROM tbl UNION ALL SELECT company, 'Yes' AS val, clicks - impression AS data FROM tbl ORDER BY company, val ```
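The `UNION ALL` unpivot can be replayed with SQLite from Python. This is a sketch: the table name, column names, and numbers are invented, with the 'No' row taking `impression` and the 'Yes' row taking `clicks - impression`, as in the answer above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE tbl (company TEXT, impression INTEGER, clicks INTEGER);
    INSERT INTO tbl VALUES ('acme', 3, 10), ('beta', 5, 6);
""")

# each source row becomes two output rows: a 'No' row and a 'Yes' row
rows = cur.execute("""
    SELECT company, 'No' AS val, impression AS data FROM tbl
    UNION ALL
    SELECT company, 'Yes', clicks - impression FROM tbl
    ORDER BY company, val
""").fetchall()
print(rows)
```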
disply two rows from single row
[ "", "sql", "" ]
I would like to sum only the last 2 weeks for each user and GROUP BY User. MY TABLE: ``` +----+------+--------+------+ | ID | User | Income | Week | +----+------+--------+------+ | 1 | John | 50 | 1 | +----+------+--------+------+ | 2 | John | 20 | 2 | +----+------+--------+------+ | 3 | John | 25 | 3 | +----+------+--------+------+ | 4 | John | 10 | 4 | +----+------+--------+------+ | 5 | Mike | 45 | 1 | +----+------+--------+------+ | 6 | Mike | 15 | 2 | +----+------+--------+------+ | 7 | Mike | 10 | 3 | +----+------+--------+------+ | 8 | Mike | 5 | 4 | +----+------+--------+------+ ``` DESIRED RESULT: ``` +------+--------+ | User | Income | +------+--------+ | John | 35 | +------+--------+ | Mike | 15 | +------+--------+ ``` As you can see, I'm summarizing week 4 and 3 for each user because those are the last 2 weeks. Thanks in advance.
You could use a row\_number() to sort out the top two weeks for a given user. Thereafter, you could aggregate. Works in postgresql. ``` Select user, sum(income) from( select user, income, row_number() over (partition by user order by week desc) rn from your_table ) where rn<3 group by user; ```
MySql--- ``` SELECT User,Sum(Income) as Income FROM mytable WHERE week > (SELECT MAX(week) FROM mytable) - 2 GROUP BY User ``` [sqlfiddle](http://sqlfiddle.com/#!9/bf327/3) PostgreSQL--- ``` SELECT "user",Sum(Income) as Income FROM mytable WHERE week > (SELECT MAX(week) FROM mytable) - 2 GROUP BY "user" ``` [sqlfiddle](http://sqlfiddle.com/#!15/0016f/4)
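Both approaches can be replayed on the sample data with SQLite from Python. A sketch: note that the `MAX(week) - 2` version uses the table-wide latest week, while the `ROW_NUMBER()` version (needs SQLite 3.25+ for window functions) takes each user's own last two weeks; they agree here only because every user has rows in weeks 3 and 4.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE mytable (ID INTEGER, User TEXT, Income INTEGER, Week INTEGER);
    INSERT INTO mytable VALUES
        (1,'John',50,1),(2,'John',20,2),(3,'John',25,3),(4,'John',10,4),
        (5,'Mike',45,1),(6,'Mike',15,2),(7,'Mike',10,3),(8,'Mike',5,4);
""")

# global cutoff: weeks greater than MAX(week) - 2, i.e. weeks 3 and 4
cutoff_rows = cur.execute("""
    SELECT User, SUM(Income) FROM mytable
    WHERE Week > (SELECT MAX(Week) FROM mytable) - 2
    GROUP BY User ORDER BY User
""").fetchall()

# per-user top two weeks via ROW_NUMBER()
rn_rows = cur.execute("""
    SELECT User, SUM(Income) FROM (
        SELECT User, Income,
               ROW_NUMBER() OVER (PARTITION BY User ORDER BY Week DESC) AS rn
        FROM mytable
    ) t WHERE rn < 3
    GROUP BY User ORDER BY User
""").fetchall()
print(cutoff_rows, rn_rows)
```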
SUM AND GROUP BY WITH LIMITED ROWS
[ "", "sql", "postgresql", "" ]
For a given search string **s**, I want to find values from an indexed varchar(255) field (~1m rows), so that s.startsWith(value) == true. Example: **s** = "hello world" matches: "h", "hello", "hello world" Is that possible?
You can use [`INSTR`](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_instr) with opposite arguments than one would do in most cases: ``` SELECT * FROM mytable WHERE INSTR('hello world', mycol) = 1 ``` This will return records where *mycol* has a substring of "hello world" starting at position 1. So any of the following will match: > h > he > hel > hell > hello > hello (with trailing space) > hello w > hello wo > hello wor > hello worl > hello world You could maybe get better performance with the addition of the following redundant condition, which could hint the SQL engine to choose an index on mycol: ``` SELECT * FROM mytable WHERE INSTR('hello world', mycol) = 1 AND mycol like 'h%' ``` Just be aware that even with use of an index this does not guarantee a faster output. Imagine a table with values: > hel > helaaaaaaa > helaaaaaab > helaaaaaac > ...(1000 more records like that, and finally:) > hello world ... then the engine would still scan all records, and only get the first and the last. If you have an application executing this query, you could let it build the SQL dynamically, so that it looks like this: ``` SELECT * FROM mytable WHERE mycol IN ('h', 'he', 'hel', 'hell', 'hello', 'hello ', 'hello w', 'hello wo', 'hello wor', 'hello worl', 'hello world') ``` This would have potentially the best performance. 
If that doesn't do it, then certainly this elaborate SQL will do it: ``` SELECT * FROM mytable WHERE mycol = 'h' UNION ALL SELECT * FROM mytable WHERE mycol = 'he' UNION ALL SELECT * FROM mytable WHERE mycol = 'hel' UNION ALL SELECT * FROM mytable WHERE mycol = 'hell' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello ' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello w' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello wo' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello wor' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello worl' UNION ALL SELECT * FROM mytable WHERE mycol = 'hello world' ```
You want to use `like` for this: ``` where col like 'hello%' -- or whatever ``` or ``` where col like concat(@s, '%') ``` The reason for using `like` is that it can make use of an index, because the pattern does not start with wildcard characters. EDIT: I might have the logic backwards. If so, you can still use like: ``` where @s like concat(col, '%') ``` In this case, though, an index cannot be used, because the column is an argument to a function.
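The reversed-arguments trick from both answers is easy to try with SQLite from Python. A sketch: SQLite also has `INSTR`, the sample values are invented, and `@s LIKE CONCAT(col, '%')` is written here as `? LIKE mycol || '%'`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE mytable (mycol TEXT);
    INSERT INTO mytable VALUES ('h'), ('hello'), ('hello world'), ('hex'), ('other');
""")
s = "hello world"

# INSTR(s, mycol) = 1 keeps rows whose value is a prefix of s
instr_rows = cur.execute(
    "SELECT mycol FROM mytable WHERE instr(?, mycol) = 1 ORDER BY length(mycol)",
    (s,)).fetchall()

# equivalent LIKE form: the search string must match value + '%'
like_rows = cur.execute(
    "SELECT mycol FROM mytable WHERE ? LIKE mycol || '%' ORDER BY length(mycol)",
    (s,)).fetchall()
print(instr_rows, like_rows)
```

As the first answer warns, neither form lets the engine seek an index on `mycol`, because the column is an argument to the comparison rather than a plain prefix pattern.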
Find values that are start of SearchString
[ "", "mysql", "sql", "" ]
I am trying to do a simple SQL Query (Oracle) that consists of two tables, "Revised" and "Projected". I would like to always pull records from the "Revised" table, if present. If the revised table is blank, then I need to pull from the "Projected" table. The different join combinations I have been trying do not return the data that I am looking for. Any help is appreciated! Thanks.
Have you tried something like this? ``` select your columns from revised union all select your columns from projected where not exists (select 1 from revised); ``` Select all from revised and all from projected if there is nothing in revised. Or, without `exists`: ``` select your columns from revised union all select your columns from projected where (select count(*) from revised) = 0; ``` **Edit**: to get your requirement at project level, just filter per project: ``` select your columns from revised union all select your columns from projected p where (select count(*) from revised r where r.Revised_ProjectID = p.Projected_ProjectID) = 0; ```
You can use case and a left join to achieve this. ``` select p.Project_ID, case when r.Project_ID is null then p.ProjectedPaymentAmount else r.RevisedPaymentAmount end as PaymentAmount, case when r.Project_ID is null then p.ProjectedPaymentDate else r.RevisedPaymentDate end as PaymentDate from ProjectPayments_Projected p left join ProjectPayments_Revised r on p.Project_ID = r.Project_ID ``` If a project can have multiple revisions use this. ``` ;WITH cte AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY Project_ID ORDER BY RevisedPaymentDate DESC) AS rn FROM ProjectPayments_Revised ) select p.Project_ID, case when r.Project_ID is null then p.ProjectedPaymentAmount else r.RevisedPaymentAmount end as PaymentAmount, case when r.Project_ID is null then p.ProjectedPaymentDate else r.RevisedPaymentDate end as PaymentDate from ProjectPayments_Projected p left join cte r on r.rn = 1 and r.Project_ID = p.Project_ID ```
SQL Two tables, need to pull depending on what data is in each
[ "", "sql", "oracle", "" ]
Firstly my sql: ``` SELECT DISTINCT(CarTable.CarPlate), CarTable.CarImage, EventTable.FirstDate FROM CarTable JOIN EventTable ON CarTable.CarId = EventTable.CarId ORDER BY 3 DESC ``` I am trying to order rows by date and that works fine, but I want each plate to appear only once. I mean I need to see every car's last event, but it seems I am using DISTINCT wrong. What is the best way of doing this?
Use `MAX` and `GROUP BY`: ``` SELECT c.CarPlate, c.CarImage, FirstDate = MAX(e.FirstDate) FROM CarTable c INNER JOIN EventTable e ON c.CarId = e.CarId GROUP BY c.CarPlate, c.CarImage ORDER BY MAX(e.FirstDate) DESC ``` Note: * Use meaningful aliases to improve readability.
See if this works without using an aggregate function in the outer query. ``` SELECT c.CarPlate ,c.CarImage ,ev.FirstDate FROM CarTable c CROSS APPLY ( SELECT max(FirstDate) FirstDate FROM EventTable e WHERE c.CarId = e.CarId ) ev ORDER BY ev.FirstDate DESC ```
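The GROUP BY/MAX form from the first answer can be replayed with SQLite from Python. A sketch: the plates, image names, and ISO-formatted dates are invented, and SQLite has no `CROSS APPLY`, so only the aggregate form is shown.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE CarTable (CarId INTEGER, CarPlate TEXT, CarImage TEXT);
    CREATE TABLE EventTable (CarId INTEGER, FirstDate TEXT);
    INSERT INTO CarTable VALUES (1, 'AA-1', 'a.png'), (2, 'BB-2', 'b.png');
    INSERT INTO EventTable VALUES
        (1, '2016-05-01'), (1, '2016-05-03'), (2, '2016-05-02');
""")

# one row per plate, carrying that car's latest event date
rows = cur.execute("""
    SELECT c.CarPlate, c.CarImage, MAX(e.FirstDate) AS FirstDate
    FROM CarTable c
    INNER JOIN EventTable e ON c.CarId = e.CarId
    GROUP BY c.CarPlate, c.CarImage
    ORDER BY MAX(e.FirstDate) DESC
""").fetchall()
print(rows)
```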
Order by with distinct
[ "", "sql", "sql-server", "sql-order-by", "distinct", "" ]
I have 5 databases (or schemas) in MySQL, each with more than 100 tables, and now I need to create a new database whose tables mirror the old tables. The new tables' names are in this format: > **"OldDatabaseName"\_"OldTableName"\_History** and their columns are almost the same; I just need to add 2 new columns, **START** and **END** timestamps, to every table. There won't be any foreign keys in the new tables, nor any auto-increment column; just the ***ID and START*** columns will form a unique key. So, what kind of SQL do I need to use for this purpose?
"Are there any loop-through solutions for this?" Yes. Write a stored procedure and: * Declare variables, a cursor and an exception handler (we use the exception handler to find an exit point for our loop). * Load all schema\_names/table\_names that should be duplicated from information\_schema.tables into the cursor (the cursor allows us to fetch the data into variables so we can use them in our statements). * After that you can use the schema\_names/table\_names in a loop to create a copy of every table in the new schema: just fetch the schema\_name and table\_name into variables and use prepared statements to create your tables. With every run-through a table will be created from inside the loop. Once all tables have been created, an exception will occur to tell you that there is no more data. As people have pointed out: * empty tables without keys: ``` create table new_schema_name.new_table_name select * from old_schema_name.old_table_name limit 0; ``` * filled tables without keys (includes all data of the original table): ``` create table new_schema_name.new_table_name select * from old_schema_name.old_table_name; ``` * empty table with all properties of the original table (in case it's needed): ``` create table new_schema_name.new_table_name like old_schema_name.old_table_name; ``` In case you have small tables (<5 million records [also depends on hardware]) and you don't want to repeat this job, you can use a GUI to copy the tables by drag and drop; Navicat for MySQL supports this function. After that you can use a text editor + Excel to build your ALTER commands to adjust the table names and add the new columns; the list of table\_names for that can be selected from information\_schema.tables. This will take much more time than running the procedure but should still be faster for beginners compared to writing and testing the procedure. I recommend the stored procedure approach.
First create the table using this: ``` create table new_table select * from old_table; ``` and then use ALTER TABLE statements to add the new columns and make any other changes you wish.
Mysql create tables like other tables
[ "", "mysql", "sql", "database", "create-table", "" ]
Below is my query, I think in principle it should work, but am not sure if it is indeed possible or I am thinking too outside the box for this one. ``` SELECT (SELECT `orders`.`Status`, COUNT(*) AS COUNT_2 FROM `orders` `sw_orders` WHERE STATUS = 'booking' AND Date(OrderDate) <= CURDATE() AND Date(OrderDate) > DATE_SUB(CURDATE(),INTERVAL 30 DAY)) / (SELECT `orders`.`Status`, COUNT(*) AS COUNT_2 FROM `orders` `sw_orders` WHERE STATUS = 'quote' AND Date(OrderDate) <= CURDATE() AND Date(OrderDate) > DATE_SUB(CURDATE(),INTERVAL 30 DAY)) AS result ``` That should return the value of 2 results where bookings is divided by quotes
``` SELECT count(case when STATUS = 'booking' then 1 end) / count(case when STATUS = 'quote' then 1 end) FROM `sw_orders` WHERE Date(OrderDate) <= CURDATE() AND Date(OrderDate) > DATE_SUB(CURDATE(),INTERVAL 30 DAY) ```
``` select count(status = 'booking' or null) / count(status = 'quote' or null) as result from table_name where Date(OrderDate) <= CURDATE() AND Date(OrderDate) > DATE_SUB(CURDATE(),INTERVAL 30 DAY) ``` The `or null` turns the non-matching rows' 0 into NULL so COUNT skips them; without it, the count would include every row. Please watch out for syntax errors, I have not tested this.
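The conditional-count approach from the first answer can be checked with SQLite from Python. A sketch with made-up rows: the date filter is left out for brevity, and one side is scaled by `1.0` because SQLite's `/` does integer division on integers (MySQL's does not).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE sw_orders (Status TEXT);
    INSERT INTO sw_orders VALUES
        ('booking'), ('booking'), ('booking'), ('quote'), ('quote'), ('cancelled');
""")

# COUNT skips NULLs, so each CASE counts only its own status
ratio = cur.execute("""
    SELECT COUNT(CASE WHEN Status = 'booking' THEN 1 END) * 1.0
         / COUNT(CASE WHEN Status = 'quote' THEN 1 END)
    FROM sw_orders
""").fetchone()[0]
print(ratio)
```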
I am trying to divide two select queries using sql, not sure if this is possible or not
[ "", "mysql", "sql", "" ]
I have this function which is used in my view. ``` FUNCTION [dbo].[CalculateAmount] ( @ID INT, @PRICE DECIMAL(24,4)) declare @computedValue decimal(24,4) set @computedValue = case when (select TABLE1.ID FROM dbo.[TABLE1] TABLE1 JOIN dbo.[TABLE2] TABLE2 ON TABLE2.ID = TABLE1.ID WHERE TABLE1.ID = @ID /* some conditions here */ ) is null then @PRICE else @PRICE * CAST('1.0368' as decimal(24,4)) end ``` So basically, I want to check whether the ID passed as a parameter exists in TABLE1. If it returns a row, multiply the price by 1.0368; if not, the price remains as is. My problem here is performance. I am using this in a view which returns 17000 rows. My view is currently running, 45 minutes in, on the 12000th row. Any ideas or suggestions to improve the performance of my view? **EDIT** I am calling this in my view like this: ``` [dbo].[CalculateAmount](ID, PRICE) AS PRICE ``` along with the select statement.
If the only place you use your function is in the view, then why not just encapsulate the logic in the view: ``` ALTER VIEW dbo.YourView AS SELECT <columns>, CalculatedPrice = CASE WHEN t1.ID IS NULL THEN <tables>.Price ELSE 1.0368 * <tables>.Price END FROM <tables> OUTER APPLY ( SELECT TOP 1 t1.ID FROM dbo.Table1 AS t1 INNER JOIN dbo.Table2 AS t2 ON t2.ID = t1.ID WHERE t1.ID = <tables>.ID -- More Conditions ) AS t1 WHERE <predicates>; ``` The outer apply simply does the same check as your function to see if a record exists, then in the select statement, when a match is found the price is multiplied by your constant, otherwise it is left unchanged. You could create an inline table valued function for this. Unlike a scalar UDF this is not executed [RBAR](https://www.simple-talk.com/sql/t-sql-programming/rbar--row-by-agonizing-row/), but the query plan is expanded out into the outer query: ``` CREATE FUNCTION dbo.CalculateAmount (@ID INT, @Price DECIMAL(24, 4)) RETURNS TABLE AS RETURN ( SELECT Price = CASE WHEN COUNT(*) = 0 THEN @Price ELSE @Price * 1.0368 END FROM dbo.Table1 AS t1 INNER JOIN dbo.Table2 AS t2 ON t2.ID = t1.ID WHERE t1.ID = @ID ); ``` Then you would call it as: ``` SELECT <columns>, CalculatedPrice = ca.Price FROM <tables> OUTER APPLY dbo.CalculateAmount(ID, Price) AS ca WHERE <predicates>; ```
From a performance point of view, knowing more details is important. i) First of all, it is a known fact that scalar UDFs degrade performance, especially when they fire for a lot of rows. ii) What is written inside the view matters. Maybe your requirement can be handled inside the view itself, so there is no need for a function of any kind. As you said, you only want to know whether the ID exists in table1, so you can use OUTER APPLY. Your sample view code: ``` Select <columns>, case when tvf.id is null then price else price * 1.0368 end as Price from othertable outer apply (Select id from dbo.table1 A where A.id = othertable.id) tvf ``` I think you should do just this and remove all UDFs inside the view. iii) Most importantly, how much time does your view take to execute without this function?
SQL function performance
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
Is it possible to retrieve specific columns of specific rows in an `SQL query`? Let's say I'm selecting from my `SQL` table, called `my_table`, rows whose names are: a, b, using this query text: ``` "select * from my_table where row_names in ('a', 'b') order by row_names" ``` How can I modify this query text to select only columns 2,12,22,32,42 rather all its 1000 columns?
Replace the wildcard symbol `*` with the column names you want to retrieve. But please read up on the documentation for the SQL standard. It is very unlikely you need 1,000 columns in a table.
Try and read the sql, as it really does what it says ``` select [column name 1, column name 2] from [table name] where [column name 3] in ('column value 1', 'column value 2') order by [column name 3] ``` please select the values of "column name 1" and "column name 2" from rows in the table called "table name" where those rows have values equal to 'column value 1' and 'column value 2' in the column called "column name 3"
Select specific rows and columns from an SQL database
[ "", "mysql", "sql", "" ]
My question is a kind of "replace with key" in SQL Server. Can anyone give me a query to do this? Thanks for your answers! `Table1`: ``` ID | Code | Des | more columns ---+------+-----+------------- 1 | 100 | a | ... 2 | 200 | b | ... 3 | 300 |data3| ... ``` `Table2`: ``` ID | Code | Des ---+------+------ 1 | 100 | data1 2 | 200 | data2 ``` The result must be this: ``` ID | Code | Des | more columns ---+------+-----+------------- 1 | 100 |data1| ... 2 | 200 |data2| ... 3 | 300 |data3| ... ```
Do a `LEFT JOIN`, if there are no table2.Des value, take table1.Des instead: ``` select t1.ID, t1.Code, coalesce(t2.Des, t1.Des), t1.more Column from table1 t1 left join table2 t2 on t1.code = t2.code ``` Or, perhaps you want this: ``` select * from table2 union all select * from table1 t1 where not exists (select 1 from table2 t2 where t2.code = t1.code) ``` I.e. return table2 rows, and if a code is in table1 but not in table2, also return that row.
Use `JOIN`. **Query** ``` SELECT t1.ID, t1.Code, CASE WHEN t1.Des LIKE 'data%' THEN t1.Des ELSE t2.Des END AS Des FROM Table1 t1 LEFT JOIN Table2 t2 ON t1.ID = t2.ID; ```
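The LEFT JOIN / COALESCE form from the first answer can be replayed with SQLite from Python, using the rows from the question (a sketch; the "more columns" are omitted).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE table1 (ID INTEGER, Code INTEGER, Des TEXT);
    CREATE TABLE table2 (ID INTEGER, Code INTEGER, Des TEXT);
    INSERT INTO table1 VALUES (1, 100, 'a'), (2, 200, 'b'), (3, 300, 'data3');
    INSERT INTO table2 VALUES (1, 100, 'data1'), (2, 200, 'data2');
""")

# take table2's description when the code matches, else keep table1's
rows = cur.execute("""
    SELECT t1.ID, t1.Code, COALESCE(t2.Des, t1.Des) AS Des
    FROM table1 t1
    LEFT JOIN table2 t2 ON t1.Code = t2.Code
    ORDER BY t1.ID
""").fetchall()
print(rows)
```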
Replace Data With Key on 2 table in SQL Server
[ "", "sql", "sql-server", "" ]
I have a table of employees that describes their last and first name, their card ID and the times they log in to an enterprise. I want to select the first time each employee logs in. [This is an example of the table](https://i.stack.imgur.com/Po00h.png) and [this is the result that I want](https://i.stack.imgur.com/DjyUF.png).
You could group by the last name, first name and truncated date and return the minimal date per group. In modern SQL Server versions this truncation is pretty simple - you just cast the `field_time` to a `date`: ``` SELECT last_name, first_name, CAST(field_time AS DATE) AS log_day, MIN(field_time) AS first_log FROM mytable GROUP BY last_name, first_name, CAST(field_time AS DATE) ```
I maintain that the table design is bad. Try this: ``` declare @t table (id int, lastname varchar(20), firstname varchar(20), fieldtime datetime); insert into @t values(1, 'A.','A.','2016-01-01 10:20:00') ,(2,'A.','A.','2016-01-01 12:22:00') ,(3,'A.','A.','2016-01-03 08:50:00') ,(4,'B.','B.','2016-01-01 15:08:00') ;WITH CTE AS ( SELECT * ,row_number() OVER ( PARTITION BY lastname ,firstname ,cast(fieldtime AS DATE) ORDER BY fieldtime ) rn FROM @t ) SELECT * FROM CTE WHERE rn = 1 ```
Select the first log in Time using from Table using SQL Server 2008 R2
[ "", "sql", "sql-server", "select", "" ]
Apologies in advance if this is an easy solution but since I'm not sure what to call this I didn't have any luck trying to search for it. Given the following example data: ``` ID, QUANTITY, Date 1,2,01-APR-16 1,1,02-APR-16 1,0,03-APR-16 1,1,04-APR-16 1,0,05-APR-16 1,1,06-APR-16 1,0,07-APR-16 ``` I would like a query to return the ID of the item and the corresponding date when the quantity equals zero, **but only if there is a later date** where the quantity is greater than zero. So in the above example, the select would return ``` 1,03-APR-16 1,05-APR-16 ``` I'm learning and have learned a lot from this site but I'm not sure how to accomplish this one. I know how to do a basic select and how to use subqueries but it would seem in this case I need to pass a result from one row into a subquery for another? Thank you for any direction and again sorry for being a newbie. Also a quick link to how to show sample tables in a table format would be helpful, the advanced help doesn't show that part I'm probably looking in the wrong place. Thank you.
Suppose your table is called t. I changed column name "date" to "dt" - don't use reserved Oracle words as column names. ``` with a as (select id, max(dt) max_dt from t where quantity > 0 group by id) select id, dt from t join a using (id) where quantity = 0 and t.dt < a.max_dt ``` ADDED: OP asked for an additional condition in a comment (below). This will answer that additional request. OP (yes Mobilemike, that is you!): The idea is the same. With some practice you will be able to do it on your own. Note: the extra condition skips the absolute oldest record for each id, even if its quantity is 0 (a quantity-0 record is only excluded when it is the very first record for that id). Good luck! ``` with a as (select id, max(dt) max_dt from t where quantity > 0 group by id), b as (select id, min(dt) min_dt from t group by id) select id, dt from t join a using (id) join b using (id) where quantity = 0 and t.dt < a.max_dt and t.dt != b.min_dt ```
Another solution using a subquery rather than a temporary table might look like ``` select t1.id, t1.dt from t t1, (select id, max(dt) max_dt from t where quantity>0 group by id) t2 where t1.id=t2.id and t1.dt < t2.max_dt and t1.quantity =0 ``` I too changed the column names to match those given by mathguy. UPDATE: I've changed the query to account for the IDs.
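Both answers hinge on comparing each zero-quantity date against the latest positive-quantity date; that can be replayed with SQLite from Python. A sketch: the dates are stored as ISO text so string comparison orders them correctly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE t (id INTEGER, quantity INTEGER, dt TEXT);
    INSERT INTO t VALUES
        (1, 2, '2016-04-01'), (1, 1, '2016-04-02'), (1, 0, '2016-04-03'),
        (1, 1, '2016-04-04'), (1, 0, '2016-04-05'), (1, 1, '2016-04-06'),
        (1, 0, '2016-04-07');
""")

# zero-quantity dates that still have a later positive-quantity date
rows = cur.execute("""
    WITH a AS (SELECT id, MAX(dt) AS max_dt FROM t WHERE quantity > 0 GROUP BY id)
    SELECT t.id, t.dt
    FROM t JOIN a ON t.id = a.id
    WHERE t.quantity = 0 AND t.dt < a.max_dt
    ORDER BY t.dt
""").fetchall()
print(rows)
```

The final zero row (April 7th) is correctly excluded because no positive quantity follows it.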
Limiting query results based upon data returned in the same query?
[ "", "sql", "oracle", "" ]
Here is a table called posts\_votes ``` id|discussion_id|post_id|user_id|vote_sign| __________________________________________ 1 | 1 | 1 | 1 | 1 | 2 | 1 | 1 | 2 | -1 | 3 | 1 | 2 | 3 | 1 | 4 | 1 | 2 | 4 | 1 | 5 | 2 | 3 | 1 | -1 | 6 | 2 | 4 | 2 | 1 | ``` I want to create a view with theses results: ``` discussion_id|post_id|score 1 | 2 | 2 2 | 4 | 1 ``` With : * post\_id is the post with best score * score is SUM(vote\_sign) I'm torturing my mind with group by and having max but I find no way to do it.. =( If somebody has an idea... Thanks ;)
Use sub-queries to first calculate the scores and select max score for each discussion\_id. Then `join` the result sets to get the post with max score for each discussion\_id. ``` select t1.* from (select discussion_id,post_id,sum(vote_sign) as score from posts_votes group by discussion_id,post_id) t1 join (select discussion_id,max(score) as maxscore from (select discussion_id,post_id,sum(vote_sign) as score from posts_votes group by discussion_id,post_id) t group by discussion_id) t2 on t1.discussion_id = t2.discussion_id and t1.score = t2.maxscore ```
``` select SUBSTRING_INDEX(GROUP_CONCAT(post_id ORDER BY score DESC), ',', 1) AS top_post, discussion_id, max(score) as score from ( select discussion_id, post_id, sum(vote_sign) as score from posts_votes group by post_id, discussion_id ) c group by discussion_id ```
MYSQL Multi group by and max
[ "", "mysql", "sql", "sum", "max", "" ]
SQL Server 2005. I am **not** after a coded answer here (although it would be nice). I'm really after advice on the best way forward to get the result I need. I have some knowledge of pivot/unpivot/cte//rownumber and dynamic queries but cannot get my head around this particular problem! An example of the data follows. **Note:** The occurrence of type,location,name and description can be none to many. ``` drop table #temp create table #temp ( event int, type varchar(20), locations varchar(20), name varchar(30), description varchar(50) ) insert into #temp values (1,'support','r1','fred','desc 1') insert into #temp values (1,'support','r1','fred','desc 2') insert into #temp values (1,'support','r1','fred','desc 3') insert into #temp values (1,'support','r1','jim','desc 1') insert into #temp values (1,'support','r1','jim','desc 2') insert into #temp values (1,'support','r1','jim','desc 3') insert into #temp values (1,'support','r2','fred','desc 1') insert into #temp values (1,'support','r2','fred','desc 2') insert into #temp values (1,'support','r2','fred','desc 3') insert into #temp values (1,'support','r2','jim','desc 1') insert into #temp values (1,'support','r2','jim','desc 2') insert into #temp values (1,'support','r2','jim','desc 3') insert into #temp values (1,'work','r1','fred','desc 1') insert into #temp values (1,'work','r1','fred','desc 2') insert into #temp values (1,'work','r1','fred','desc 3') insert into #temp values (1,'work','r1','jim','desc 1') insert into #temp values (1,'work','r1','jim','desc 2') insert into #temp values (1,'work','r1','jim','desc 3') insert into #temp values (1,'work','r2','fred','desc 1') insert into #temp values (1,'work','r2','fred','desc 2') insert into #temp values (1,'work','r2','fred','desc 3') insert into #temp values (1,'work','r2','jim','desc 1') insert into #temp values (1,'work','r2','jim','desc 2') insert into #temp values (1,'work','r2','jim','desc 3') select * from #temp ``` The result I am after is this .. 
``` 1,support;work,r1;r2,fred;jim,desc1;desc2;desc3 ```
Your goal seems to be to select all distinct values of all columns, then concatenate them into one string. As you only asked for advice, I recommend you go here: [multiple rows into a single row](https://www.mssqltips.com/sqlservertip/2914/rolling-up-multiple-rows-into-a-single-row-and-column-for-sql-server-data/). It seems that you need more help: ``` select distinct stuff((SELECT distinct '; ' + type FROM #temp FOR XML PATH('')),1,1,'') + (SELECT distinct '; ' + locations FROM #temp FOR XML PATH('')) + (SELECT distinct '; ' + name FROM #temp FOR XML PATH('')) + (SELECT distinct '; ' + description FROM #temp FOR XML PATH('')) from #temp; ``` If you need 4 columns, then change `+ (SELECT` to `, stuff((SELECT`. The query is just that simple: get the distinct values of one column, turn them into a string, then concatenate the string of the next column, and so on.
This is slightly unrelated, but when inserting data like this it would be easier (for you) to do it like this (also, try to get into the habit of naming the fields you are inserting into); ``` INSERT INTO #temp (event, type, locations, name, description) VALUES (1,'support','r1','fred','desc 1') ,(1,'support','r1','fred','desc 2') ,(1,'support','r1','fred','desc 3') ,(1,'support','r1','jim','desc 1') ,(1,'support','r1','jim','desc 2') ```
SQL - advice on grouping
[ "", "sql", "sql-server-2005", "" ]
Here are my tables: [![enter image description here](https://i.stack.imgur.com/N2b6r.jpg)](https://i.stack.imgur.com/N2b6r.jpg) My question is: how do I get the CourseNames for a specific student id? I tried this but it didn't work: ``` select Course.CourseName from Course where Course.CourseId in ( select Student.studentname ,StudentCourse.CourseId from Student inner join StudentCourse on Student.StudentId = StudentCourse.StudentId where Student.StudentId = 1) ``` You can forget my query, because I am new to SQL Server; just tell me what developers actually do in real-world SQL Server to get the course names of a specific student.
I'm using left joins just in case your student doesn't have any courses assigned, otherwise, if you use inner joins, you'll get no results; ``` SELECT s.StudentID ,s.StudentNam ,sc.CourseID ,c.CourseName FROM Student s LEFT JOIN StudentCourse sc ON s.StudentID = sc.StudentID LEFT JOIN Course c ON sc.CourseID = c.CourseID WHERE s.StudentID = 1 ```
As you said you want to know the approach, this is the basic viewpoint. 1) We want to look at CourseNames: ``` SELECT CourseName FROM Course ``` 2) One student may have more than one course. 3) So we have one more table, StudentCourse, to achieve this. 4) We have to look up the CourseIDs in this table: ``` SELECT CourseID FROM StudentCourse ``` 5) to find which courses student X takes (X is the number you search for): ``` WHERE StudentID = X ``` 6) Putting them together: via step 1 we have all CourseNames, but we don't want all of them — via steps 4-5 we have the CourseIDs that student X takes, so we select just those CourseNames: ``` WHERE CourseID IN ``` 7) So our final result is ``` SELECT CourseName FROM Course WHERE CourseID IN (SELECT CourseID FROM StudentCourse WHERE StudentID = X) ``` ## Check [this](http://ideone.com/vLFNeM) or [this one](http://sqlfiddle.com/#!9/2acee/1) to see how it works.
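The `IN`-subquery shape from the walkthrough can be replayed with SQLite from Python. A sketch: the course names, IDs, and junction-table rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Student (StudentId INTEGER, StudentName TEXT);
    CREATE TABLE Course (CourseId INTEGER, CourseName TEXT);
    CREATE TABLE StudentCourse (StudentId INTEGER, CourseId INTEGER);
    INSERT INTO Student VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO Course VALUES (10, 'Math'), (20, 'Physics'), (30, 'Art');
    INSERT INTO StudentCourse VALUES (1, 10), (1, 20), (2, 30);
""")

# course names for student 1, resolved through the junction table
rows = cur.execute("""
    SELECT CourseName FROM Course
    WHERE CourseId IN (SELECT CourseId FROM StudentCourse WHERE StudentId = 1)
    ORDER BY CourseName
""").fetchall()
print(rows)
```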
How to query for many to many relationship in Sql Server
[ "", "sql", "sql-server", "select", "" ]
I have a PostgreSQL db with a table t1 and I want to calculate a threshold. The threshold should be, for example: car1 uses more fuel than 75% of all cars, car2 uses more fuel than 50% of all cars, ... Mathematically I understand what I want to do, but I don't know how to build the query ``` id | name | value | threshold ________________________ 1 | car1 | 30 | ...% 2 | car2 | 15 | ..% 3 | car3 | 7 | 4 | car4 | 5 | ``` here is a sql fiddle <http://sqlfiddle.com/#!15/1e914/1> ``` UPDATE t1 SET threshold = select count(value) from t1 ``` where (value > [over each row]) and followed by \*100/the overall count(). Sorry for that bad try, but I am kind of lost; I also tried some aggregate functions.
You can solve this quite elegantly with a [window function](http://www.postgresql.org/docs/current/static/functions-window.html): ``` UPDATE t1 SET threshold = sub.thr FROM ( SELECT id, 100. * (rank() OVER (ORDER BY value) - 1) / count(*) OVER () AS thr FROM t1) sub WHERE t1.id = sub.id; ``` The `rank()` function gives the rank (starting from 1) in an ordered set, in this case over the column `value`, which is then divided by the total number of rows in the set. Note that `count(*) OVER ()` calculates the total number of rows in th *partition* but it does not aggregate the rows like a regular `count(*)` would.
``` WITH q AS ( SELECT *, (RANK() OVER (ORDER BY value) - 1) * 100. / COUNT(*) OVER () nt FROM mytable ) UPDATE mytable SET threshold = nt FROM q WHERE mytable.id = q.id ```
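The rank-based formula at the heart of both answers can be checked with SQLite from Python (window functions need SQLite 3.25+). A sketch running just the SELECT, since the UPDATE ... FROM wrapper is version-dependent in SQLite.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE t1 (id INTEGER, name TEXT, value INTEGER);
    INSERT INTO t1 VALUES (1,'car1',30), (2,'car2',15), (3,'car3',7), (4,'car4',5);
""")

# 100 * (rank - 1) / n: the share of cars using less fuel than this one
rows = cur.execute("""
    SELECT name,
           100.0 * (RANK() OVER (ORDER BY value) - 1) / COUNT(*) OVER () AS thr
    FROM t1 ORDER BY id
""").fetchall()
print(rows)
```

car1 has the highest value, so its rank is 4 of 4 and its threshold is 75%: it uses more fuel than 75% of all cars.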
PostgreSQL calculate threshold query
[ "", "sql", "postgresql", "math", "window-functions", "threshold", "" ]
I have a requirement for a query. It needs to select every number from a list that IS NOT present in a column. Currently, I have this working fine. This query returns every number between `1833` and `2000` that is not present in the `ATTR` table. ``` SELECT LEVEL + 1833 FROM DUAL CONNECT BY LEVEL <= (2000 - 1833) MINUS SELECT ID_TX FROM ATTR WHERE ID_TX BETWEEN 1834 AND 2000; ``` What I want to do is make this as user-friendly as possible. To do that, I can enter two variables, a `STARTING_ID` and `LIST_LENGTH`. Now my query looks like this. ``` SELECT LEVEL + &STARTING_ID FROM DUAL CONNECT BY LEVEL <= &LIST_LENGTH MINUS SELECT ID_TX FROM ATTR WHERE ID_TX BETWEEN &STARTING_ID AND &STARTING_ID + &LIST_LENGTH; ``` At first, I was using `&&`, but then I could only use this query once. `UNDEFINE` couldn't be placed in the code block, and wasn't cleaning my variables anyway. Now my issue is that it considers each `&` variable to be different, so it's making the user enter 5 variables instead of 2. How do I make it where I'm still using temporary variables (with or without the popup to enter the variable), but the person running the query only has to enter two values `1833` and `67`?
How about using : as prompt ? ``` SELECT LEVEL + :STARTING_ID FROM DUAL CONNECT BY LEVEL <= :LIST_LENGTH MINUS SELECT ID_TX FROM ATTR WHERE ID_TX BETWEEN :STARTING_ID AND :STARTING_ID + :LIST_LENGTH; ``` This employs the concept of bind variables. Thus, user could enter the necessary values and proceed. [![enter image description here](https://i.stack.imgur.com/BEJy8.png)](https://i.stack.imgur.com/BEJy8.png) [![enter image description here](https://i.stack.imgur.com/BGCDM.png)](https://i.stack.imgur.com/BGCDM.png)
A bit of a fudge but if you want the prompt for substitution variables then you can use bind variables but just populate them using substitution variables like this: *(Run it as a script using `F5` and not as a statement using `Ctrl+Enter`)* ``` VARIABLE list_length NUMBER; VARIABLE start_value NUMBER; BEGIN :list_length := &ll; :start_value := &sv; END; / SELECT LEVEL + :start_value FROM DUAL CONNECT BY LEVEL <= :list_length MINUS SELECT ID_TX FROM ATTR WHERE ID_TX BETWEEN :start_value + 1 AND :start_value + :list_length; ``` Otherwise, just use bind variables (i.e. the query at the bottom of the script).
Using a temporary variable once in Oracle SQL Developer
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I want to get a row based on a parameter value, which could be either 'ABC' or NULL. Below are the source table and the expected results I'm trying to achieve. **SourceTable** ``` column1 column2 -------------------------- value1 NULL value2 ABC ``` I tried this query, but it returns both rows (value1 and value2). ``` Declare @Param1 varchar(20) = 'ABC' Select * from SourceTable where column2 = @Param1 Or column2 is NULL ``` If the value is 'ABC' then **Result** - ``` column1 column2 -------------------------- value2 ABC ``` If the value is NULL then **Result** - ``` column1 column2 -------------------------- value1 NULL ```
You can try something like the following. The only problem you might encounter is if your column2 contains blank strings, since those would then compare equal to a NULL parameter. ``` SELECT * FROM SourceTable WHERE ISNULL(column2, '') = ISNULL(@Param1, '') ```
Perhaps this would work for you? ``` select * from SourceTable where column2 = @Param1 or (@Param1 is null and column2 is null) ```
Get row from table based on parameter
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table tblsumDemo with the following structure ``` billingid qty Percent_of_qty cumulative 1 10 5 5 2 5 8 13(5+8) 3 12 6 19(13+6) 4 1 10 29(19+10) 5 2 11 40(29+11) ``` This is what I have tried: ``` declare @s int SELECT billingid, qty, Percent_of_qty, @s = @s + Percent_of_qty AS cumulative FROM tblsumDemo CROSS JOIN (SELECT @s = 0) AS var ORDER BY billingid ``` but I'm not able to get the desired output. Any help would be much appreciated, thanks.
You can use `CROSS APPLY`: ``` SELECT t1.*, x.cumulative FROM tblSumDemo t1 CROSS APPLY( SELECT cumulative = SUM(t2.Percent_of_Qty) FROM tblSumDemo t2 WHERE t2.billingid <= t1.billingid )x ``` --- For SQL Server 2012+, you can use `SUM OVER()`: ``` SELECT *, cummulative = SUM(Percent_of_Qty) OVER(ORDER BY billingId) FROM tblSumDemo ```
**You can use a subquery, which works in all versions:** ``` select billingid,qty,percentofqty, (select sum(qty) from tblsumdemo t2 where t1.id<=t2.id) as csum from tblsumdemo t1 ``` **You can use window functions as well, from SQL Server 2012 on:** ``` select *, sum(qty) over (order by qty rows between unbounded PRECEDING and current row) as csum from tblsumdemo ``` Here I am saying: for every row, get the sum of all rows starting from the first row (unbounded preceding and current row). You can omit `rows between unbounded preceding and current row`, which is the default.
Calculating cumulative sum in ms-sql
[ "", "sql", "sql-server", "sql-server-2008", "cumulative-sum", "" ]
Here is my query: ``` SELECT AccountTitle, CASE WHEN SourceDocDR < 1 THEN REPLACE(CAST(SourceDocDR AS int), 0, '') ELSE SourceDocDR END AS 'Debit', CASE WHEN SourceDocCR < 1 THEN REPLACE(CAST(SourceDocCR AS int), 0, '') ELSE SourceDocCR END AS 'Credit' FROM tblAccounting_GL WHERE month(PostingDate) = month(getdate()) GROUP BY AccountTitle, SourceDocDR, SourceDocCR; ``` Result with `ELSE` statement: ``` +----------------------------------+---------+--------+ | Account Title | Debit | Credit | +----------------------------------+---------+--------+ | Accounts Payable | 0.00 | 100.00 | | Accounts Receivable -VAT | 0.00 | 300.00 | | Cash in Bank BPI Mia Road - PHP | 2600.00 | 0.00 | +----------------------------------+---------+--------+ ``` Result without `ELSE` statement: ``` +----------------------------------+-------+--------+ | Account Title | Debit | Credit | +----------------------------------+-------+--------+ | Accounts Payable | | NULL | | Accounts Receivable -VAT | | NULL | | Cash in Bank BPI Mia Road - PHP | NULL | | +----------------------------------+-------+--------+ ``` The above query should replace zero or negative values by blank. However, it still returns the actual value. If I remove the `ELSE` statement, that's only the time when the zero or negative values are replaced by blanks. However, in that case, all values greater than `0` are not returned also. The above query seems to be perfect for me but I don't know why it does not work.
This happens because with the `ELSE` part, the return values gets converted to `DECIMAL` *(or whatever data type `SourceDocDr` has)*. Remember that in a `CASE` expression, if the return values have different data types, they will be converted to the datatype with the higher [data type precedence.](https://msdn.microsoft.com/en-us/library/ms190309.aspx?f=255&MSPPError=-2147217396) From the [**`CASE`**](https://msdn.microsoft.com/en-us/library/ms181765.aspx) documentation: > Returns the highest precedence type from the set of types in > result\_expressions and the optional else\_result\_expression. To achieve the desired result, you have to `CAST` the results of the `CASE` expression to `VARCHAR`: ``` SELECT AccountTitle, CASE WHEN SourceDocDR < 1 THEN '' ELSE CAST(SourceDocDR AS VARCHAR(MAX)) END AS 'Debit', CASE WHEN SourceDocCR < 1 THEN '' ELSE CAST(SourceDocCR AS VARCHAR(MAX)) END AS 'Credit' FROM tblAccounting_GL WHERE MONTH(PostingDate) = MONTH(GETDATE()) GROUP BY AccountTitle, SourceDocDR, SourceDocCR; ```
You can try it like this: since you need blank values for zeros and negative values, select a blank instead of replacing the value ``` SELECT AccountTitle, CASE WHEN SourceDocDR < 1 THEN '' ELSE SourceDocDR END AS 'Debit', CASE WHEN SourceDocCR < 1 THEN '' ELSE SourceDocCR END AS 'Credit' FROM tblAccounting_GL WHERE month(PostingDate) = month(getdate()) GROUP BY AccountTitle, SourceDocDR, SourceDocCR; ```
Case condition does not return desirable results
[ "", "sql", "sql-server", "" ]
I have a database query like: ``` SELECT Foo, Foo2, some_calc as Bar, some_other_calc as Bar2 From FooBar -- some inner joins for the calcs GROUP BY FOO ORDER BY Bar DESC, Bar2 DESC; ``` I want the database to do the ordering in the query, and then group together rows with the same `FOO`, so that the first grouped block contains the `FOO` with the greatest Bar, the second grouped block contains the second-highest Bar, and so on. But this doesn't work, as Postgres doesn't allow arbitrary grouping: `column "Bar" must appear in the GROUP BY clause or be used in an aggregate function`. How can I fix this? Sample data: ``` FOO Bar Bar2 ---------------- 6 10 4 110 3 120 8 140 3 180 3 190 ``` Output: ``` FOO Bar Bar2 ---------------- 3 190 3 180 3 120 8 140 4 110 6 10 ```
``` SELECT foo, <some calc> AS bar, bar2 FROM foobar ORDER BY max(<some calc>) OVER (PARTITION BY foo) DESC NULLS LAST -- can't refer to bar , bar DESC NULLS LAST -- but you can here , foo DESC NULLS LAST; ``` `bar` does not have to be a column, can be any valid expression, even an aggregate function (in combination with `GROUP BY`) - just not another window function, which can't be nested. Example: * [PostgreSQL - Referencing another aggregate column in a window function](https://stackoverflow.com/questions/23085609/postgresql-referencing-another-aggregate-column-in-a-window-function/23089169#23089169) You **cannot**, however, refer to a column alias (output column name) on the same query level within a window function. You have to spell out the expression again, or move the calculation to a subquery or CTE. You *can* refer to output column names in `ORDER BY` and `GROUP BY` otherwise (but not in the `WHERE` or `HAVING` clause). Explanation: * [GROUP BY + CASE statement](https://stackoverflow.com/questions/19848930/group-by-case-statement/19849537#19849537) * [PostgreSQL Where count condition](https://stackoverflow.com/questions/8119489/postgresql-where-count-condition/8119815#8119815) Since it has not been defined we must expect NULL values. Typically you want NULL values last, so add `NULLS LAST` in descending order. See: * [Sort by column ASC, but NULL values first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492) Assuming you want bigger `foo` first in case of ties with `bar`.
You just want `order by`. `Group by` reduces (in general) the number of rows by aggregation. You can accomplish this using window functions: ``` SELECT Foo, Bar, Bar2, From FooBar ORDER BY MAX(Bar) OVER (PARTITION BY Foo) DESC, Foo; ```
Postgres GROUP BY, then sort
[ "", "sql", "postgresql", "group-by", "sql-order-by", "aggregate", "" ]
My table looks as follows: ``` author | group daniel | group1,group2,group3,group4,group5,group8,group10 adam | group2,group5,group11,group12 harry | group1,group10,group15,group13,group15,group18 ... ... ``` I want my output to look like: ``` author1 | author2 | intersection | union daniel | adam | 2 | 9 daniel | harry| 2 | 11 adam | harry| 0 | 10 ``` THANK YOU
Try below (for BigQuery) ``` SELECT a.author AS author1, b.author AS author2, SUM(a.item=b.item) AS intersection, EXACT_COUNT_DISTINCT(a.item) + EXACT_COUNT_DISTINCT(b.item) - intersection AS [union] FROM FLATTEN(( SELECT author, SPLIT([group]) AS item FROM YourTable ), item) AS a CROSS JOIN FLATTEN(( SELECT author, SPLIT([group]) AS item FROM YourTable ), item) AS b WHERE a.author < b.author GROUP BY 1,2 ``` > Added solution for BigQuery Standard SQL ``` WITH YourTable AS ( SELECT 'daniel' AS author, 'group1,group2,group3,group4,group5,group8,group10' AS grp UNION ALL SELECT 'adam' AS author, 'group2,group5,group11,group12' AS grp UNION ALL SELECT 'harry' AS author, 'group1,group10,group13,group15,group18' AS grp ), tempTable AS ( SELECT author, SPLIT(grp) AS grp FROM YourTable ) SELECT a.author AS author1, b.author AS author2, (SELECT COUNT(1) FROM a.grp) AS count1, (SELECT COUNT(1) FROM b.grp) AS count2, (SELECT COUNT(1) FROM UNNEST(a.grp) AS agrp JOIN UNNEST(b.grp) AS bgrp ON agrp = bgrp) AS intersection_count, (SELECT COUNT(1) FROM (SELECT * FROM UNNEST(a.grp) UNION DISTINCT SELECT * FROM UNNEST(b.grp))) AS union_count FROM tempTable a JOIN tempTable b ON a.author < b.author ``` What I like about this one: * much simpler / friendlier code * no CROSS JOIN and extra GROUP BY needed When/If try - *make sure to uncheck `Use Legacy SQL` checkbox under `Show Options`*
I propose this option that scales better: ``` WITH YourTable AS ( SELECT 'daniel' AS author, 'group1,group2,group3,group4,group5,group8,group10' AS grp UNION ALL SELECT 'adam' AS author, 'group2,group5,group11,group12' AS grp UNION ALL SELECT 'harry' AS author, 'group1,group10,group13,group15,group18' AS grp ), tempTable AS ( SELECT author, grp FROM YourTable, UNNEST(SPLIT(grp)) as grp ), intersection AS ( SELECT a.author AS author1, b.author AS author2, COUNT(1) as intersection FROM tempTable a JOIN tempTable b USING (grp) WHERE a.author > b.author GROUP BY a.author, b.author ), count_distinct_groups AS ( SELECT author, COUNT(DISTINCT grp) as count_distinct_groups FROM tempTable GROUP BY author ), join_it AS ( SELECT intersection.*, cg1.count_distinct_groups AS count_distinct_groups1, cg2.count_distinct_groups AS count_distinct_groups2 FROM intersection JOIN count_distinct_groups cg1 ON intersection.author1 = cg1.author JOIN count_distinct_groups cg2 ON intersection.author2 = cg2.author ) SELECT *, count_distinct_groups1 + count_distinct_groups2 - intersection AS unionn, intersection / (count_distinct_groups1 + count_distinct_groups2 - intersection) AS jaccard FROM join_it ``` A full cross join on Big Data (tens of thousands x millions) fails for too much shuffling while the second proposal takes hours to execute. That one takes minutes. The consequence of this approach though is that pairs having no intersection will not appear, so it will be the responsibility of the process that uses it to handle IFNULL. Last detail: the union on Daniel and Harry is 10 rather than 11 as group15 is repeated in the initial example.
SQL- jaccard similarity
[ "", "sql", "google-bigquery", "" ]
I am trying to filter the last entry in a table closest to a defined date and I am having difficulties. Any input is greatly appreciated. Thanks! I am running Microsoft SQL Server 2008. Table: ``` code | account | date | amount 1 | 1234 | 2016-02-28 | 500 2 | 1234 | 2016-03-01 | 650 3 | 1234 | 2016-03-05 | 842 4 | 7890 | 2016-02-28 | 500 5 | 7890 | 2016-03-30 | 550 ``` I want to select only entries with a date closest to March 31 ('2016-03-31'). In this example, the entry closest to 2016-03-31 for account 1234 is entry #3 and the entry closest to 2016-03-31 for account 7890 is entry #5. In other words, I want the last entry for all accounts equal to or before a date. ``` 3 | 1234 | 2016-03-05 | 842 5 | 7890 | 2016-03-30 | 550 ```
Most DBMSes (including MS SQL Server) support Analytical Functions: ``` select * from ( select *, row_number() -- create a ranking over (partition by account -- for each account order by date desc) as rn -- based on descending dates from tab where date <= date '2016-03-31' ) dt where rn = 1 -- return the row with the "closest" date ```
Since no DBMS is specified, here's a kind of hacky way to do this in SQL Server. It grabs the record just before and just after the specified date: ``` select * from ( select top(1) * FROM mytable where date >= '2016-03-31' order by date asc ) t1 union select * from ( select top(1) * FROM mytable where date <= '2016-03-31' order by date desc ) t2 ```
SQL Find Last Entry Closest to a Date
[ "", "sql", "sql-server-2008", "" ]
``` Dim SALESINSERT As New SqlCommand("INSERT INTO Tbl_Sales (Sale_id, Transaction_No, Customer_id, Item_id, Amount, Date) VALUES(" _ & SalesIdMax + 1 & "," & Transaction_label.Text & "," & 1 & "," & Label4.Text & "," & TextBox1.Text & _ "," & DateTimePicker1.Value.Date & ")", sqlcon) sqlcon.Open() SALESINSERT.ExecuteNonQuery() sqlcon.Close() SALESINSERT = Nothing ``` I have this code. Everything works just fine, but the problem is with the date. For some reason it inserts the same date every time: "1/1/1900". When I debugged the code to see the SQL command text it was fine and the date was fine and I executed it in SQL query and it was perfectly fine. But in VB it doesn't. I do not know why it is not working. Please can I have suggestions to fix it.
If you always use parameterized queries then you will avoid problems with representing dates as strings. You can use SQL parameters (I had to guess at the database column data types) for your query like this: ``` Dim salesinsert As New SqlCommand("INSERT INTO Tbl_Sales ([Sale_id], [Transaction_No], [Customer_id], [Item_id], [Amount], [Date])" & " VALUES(@SaleId, @TransactionNo, @CustomerId, @ItemId, @Amount, @Date)", sqlcon) salesinsert.Parameters.Add(New SqlParameter With {.ParameterName = "@SaleId", .SqlDbType = SqlDbType.Int, .Value = SalesIdMax + 1}) salesinsert.Parameters.Add(New SqlParameter With {.ParameterName = "@TransactionNo", .SqlDbType = SqlDbType.NVarChar, .Size = 20, .Value = Transaction_label.Text}) salesinsert.Parameters.Add(New SqlParameter With {.ParameterName = "@CustomerId", .SqlDbType = SqlDbType.Int, .Value = 1}) salesinsert.Parameters.Add(New SqlParameter With {.ParameterName = "@ItemId", .SqlDbType = SqlDbType.NVarChar, .Size = 20, .Value = Label4.Text}) salesinsert.Parameters.Add(New SqlParameter With {.ParameterName = "@Amount", .SqlDbType = SqlDbType.Decimal, .Value = CDec(TextBox1.Text)}) salesinsert.Parameters.Add(New SqlParameter With {.ParameterName = "@Date", .SqlDbType = SqlDbType.DateTime, .Value = DateTimePicker1.Value}) sqlcon.Open() salesinsert.ExecuteNonQuery() sqlcon.Close() salesinsert.Dispose() ``` * I escaped the column names with square brackets - this avoids problems with using [SQL reserved keywords](https://msdn.microsoft.com/en-us/library/ms189822.aspx) as column names. It is easier to always escape the column names. * You should not set `SALESINSERT = Nothing` - instead, use `salesinsert.Dispose()` as this cleans up unmanaged resources properly. * You need to change each `.SqlDbType` (and `.Size` for strings) to match the datatypes of the database columns. The Decimal values *ought* to have the `.Scale` and `.Precision` defined too. * The controls could do with descriptive names - `TextBox1` does not suggest that it will have an amount in it. * The values should be validated before running the query, e.g. can the amount text be converted to a Decimal and is it a sensible value.
Use the single quotes for the date value `",'" & DateTimePicker1.Value.Date & "')"` Or ``` ",#" & DateTimePicker1.Value.Date & "#)" ```
Insert date into SQL Server database through vb.net
[ "", "sql", "sql-server", "vb.net", "" ]
The following query returns > "single-row subquery returns more than one row" ``` select * from sampleTable where status = 'A' and (SELECT SUBSTR(some_code_column, 1, 4) from sampleTable) = 9999 ``` I need to fetch all the rows of the table where status is A and All the rows with `SUBSTR(some_code_column, 1, 4) = 9999` How to change the query so that it fetches the required result?
No need for that sub-select, simply AND the conditions: ``` select * from sampleTable where status = 'A' and SUBSTR(some_code_column, 1, 4) = 9999 ```
*This was before you clarified you wanted to return data that satisfies both conditions not one or the other*. I would use a `UNION` in this scenario. ``` SELECT * FROM sampleTable WHERE status = 'A' UNION SELECT * FROM sampleTable WHERE SUBSTR(some_code_column, 1, 4) = 9999 ``` More reading on performance here <https://stackoverflow.com/a/13866221/2641576>
Oracle - single-row subquery returns more than one row , Need to fetch all the rows
[ "", "sql", "oracle", "" ]
I think I’m missing something obvious here. I'm tasked to duplicate a mapping that works fine, by changing ONLY the source qualifier portion. The original mapping looks like this, [![OriginalMapping](https://i.stack.imgur.com/XSGY8.png)](https://i.stack.imgur.com/XSGY8.png) First of all, I don’t understand how the original mapping simply connects from the Source Qualifier to the Expression. The column names are supposed to be changed completely because of the user defined query. eg. ``` INSERT_DM to max(HVOLE.INSERT_DM) ``` In my new duplicated mapping, my new Source qualifier is giving me this error when I click "Validate", [![enter image description here](https://i.stack.imgur.com/AK9ae.png)](https://i.stack.imgur.com/AK9ae.png) It's weird that it mentions "exactly 3 fields", when my query actually outputs 5 separate columns. Note that I've created o\_BEADHEIGHT1 and o\_BEADHEIGHT2 for the columns that don't exist. These columns are newly created by my user-defined query.
It does not matter if the port names in the Source Qualifier do not match the select query fields; only the order of the ports matters. Also, it only considers the ports that are connected to the next transformation.
**The reason you are getting this issue is that 2 out of the 5 ports in the Source Qualifier are not linked with the Source Definition.** This validation considers only the Source Qualifier ports you have linked with the source definition as well as the next transformation. The rules are: 1) The number of fields selected in the SQL override query should match the number of ports in the Source Qualifier which are LINKED to the next transformation. The names are not required to be the same, but the order needs to be the same. Interestingly, Informatica maps the fields from the SQL query to the Source Qualifier output links instead of the Source Qualifier ports, so the first column in the SQL query gets mapped to the first link, the second column to the second link, and so on. 2) All the ports in the Source Qualifier transformation need to be linked with the Source Definition. You can delete the unused ports in the Source Qualifier transformation to avoid confusion.
Output column mismatch. User defined SQL query in Source Qualifier
[ "", "sql", "sql-server", "relational-database", "informatica", "" ]
I'm trying to extract an ID (5384) that is inside an array contained in JSON using wildcards. The problem I'm having is that the position of the ID doesn't have a fixed position for each element in that array. An example of my JSON in that array is like this (where "id":5384 could occupy different indexed positions): ``` { "id":7465115, "name":"BCA_WS_FBX_Nielsen PRIZM_Test_Unlock_1x1", "advertiser_id":155085, "pixels":[ { "id":416491, "pixel_template_id":null, }, { "id":5384, "pixel_template_id":null, } ] } ``` My query is as follows: ``` SELECT id, json FROM PROD_APPNEXUS.dimension_json_creatives WHERE JSON LIKE ('%pixels%_%"id":5384,%') AND MEMBER_ID = 364 ``` I'm trying to extract only items that are in the pixels array and have an ID of 5384. Any comments as to how to achieve this would be highly valued, thanks! UPDATE: MySQL version 5.6.17 Sam
The only way I would think of is to use the REGEXP syntax of MySQL: ``` SELECT id, json FROM PROD_APPNEXUS.dimension_json_creatives WHERE (JSON REGEXP '("pixels":\[)?.*"id":5384') AND MEMBER_ID = 364 ```
Try: [12.6 The JSON Data Type](https://dev.mysql.com/doc/refman/5.7/en/json.html). MySQL 5.7.8+ ``` SELECT `id`, `json` -- Data Type JSON FROM `PROD_APPNEXUS`.`dimension_json_creatives` WHERE JSON_CONTAINS(`json`, '{"id": 5384}', '$.pixels') AND `MEMBER_ID` = 364; ``` **UPDATE** Using 5.6.17, one option is: ``` SELECT `id`, `json` FROM `PROD_APPNEXUS`.`dimension_json_creatives` WHERE `json` REGEXP '"pixels":\\[.*"id":5384.*]' AND MEMBER_ID = 364; ``` Performance may be affected depending on the number of tuples in the table.
MySQL Wildcard on JSON array
[ "", "mysql", "sql", "arrays", "json", "" ]
I want to find a specific field of a specific column from a particular table based on a user-provided value. It is unknown what value that specific column field contains. To be clear: ``` Table1 -------------- | range|value| -------------- | 100 |0 | | 200 |2 | | 300 |9 | | 400 |15 | | 500 |20 | -------------- ``` From Table1, if the user-provided value is between 0 and 100 the result will be 0; if it is between 101 and 200 the result will be 2; if it is between 201 and 300 the result will be 9, and so on. For example, if the user-provided value is 50 the result will be 0; if it is 499 the result will be 20. If the range column values were fixed, we could easily find the result using BETWEEN or the <= and >= operators. But here the range column values are unknown. How can I solve this problem?
You can use a combination of `ORDER BY` and the `ROWNUM` pseudo column to select only the best row: ``` select value from ( select * from table1 where range >= :p_Range order by range ) where rownum < 2 ``` Alternatively, you can compute an explicit ordering using the `ROW_NUMBER()` analytic function and use that to filter the results: ``` select value from ( select t.*, row_number() over (partition by 1 order by range) as rn from table1 t where range >= :p_Range order by range ) where rn = 1 ```
``` select value from table1 where range = (select min(range ) from table1 where range >= :rangeparam) ```
How can I find a particular row from a particular table?
[ "", "sql", "oracle11g", "" ]
Is there a way to select pairs of values sequentially from one column in SQL? i.e. If i have a table with one column of numbers ``` SomeID ------ 1 2 3 5 7 11 ``` I need to return a set of two columns like so: ``` FirstID SecondID ------------------- 1 2 2 3 3 5 5 7 7 11 ``` Can this be done? Edit: I should've mentioned, the order of the first result set matters, and may not be sequential. i.e. could be ``` SomeID 5 3 9 8 ... FirstID SecondID 5 3 3 9 9 8 ... ... ```
``` SELECT t1.SomeID as FirstID, t2.SomeID as SecondID FROM ( SELECT SomeID, ROW_NUMBER()OVER(ORDER BY SomeID) as Inc FROM TABLE ) t1 LEFT JOIN ( SELECT SomeID, ROW_NUMBER()OVER(ORDER BY SomeID)-1 as Inc FROM TABLE ) t2 ON t2.Inc = t1.Inc ``` works on sql server >= 2005
You can do this with the windowed function, `LEAD` (or `LAG`) ``` ;WITH My_CTE AS ( SELECT some_id as first_id, LEAD(some_id, 1, NULL) OVER (ORDER BY some_id) AS second_id FROM My_Table ) SELECT first_id, second_id FROM My_CTE WHERE second_id IS NOT NULL -- to not get 11, NULL at the end ORDER BY first_id ``` If you don't care about getting that last row then you can just use the CTE query by itself without even using a CTE.
SQL Server - Select pairs of values from one column
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
With Oracle dynamic SQL one is able to execute a string containing a SQL statement. e.g. ``` l_stmt := 'select count(*) from tab1'; execute immediate l_stmt; ``` Is it possible to not execute `l_stmt` but check that the syntax and semantics is correct programmitically?
I think that the only "solution" is to use `DBMS_SQL.PARSE()`. It is not perfect but it is the best that you can get
[`EXPLAIN PLAN`](http://docs.oracle.com/database/121/SQLRF/statements_9010.htm#SQLRF01601) will check the syntax and semantics of almost all types of SQL statements. And unlike `DBMS_SQL.PARSE` it will not implicitly execute anything. The point of the explain plan is to show how Oracle will execute a statement. As a side-effect of generating the plan it must also check syntax, privileges, and generally do everything except actually run the statement. The explain plan itself is pointless and can be ignored, the statement is only run to check for any errors. As long as there are no errors, the statement is valid. For example, the PL/SQL blocks below check the validity of a `SELECT` statement and a `CREATE TABLE` statement. They run without error so the syntax is fine. ``` begin execute immediate 'explain plan for select * from dual'; end; / begin execute immediate 'explain plan for create table just_some_table(a number)'; end; / ``` Running a bad statement will generate an error. In at least this one test case, it generates the same error as if the statement was run by itself. ``` begin execute immediate 'explain plan for select * from this_table_does_not_exist'; end; / ORA-00942: table or view does not exist ORA-06512: at line 2 ``` The syntax diagram in the manual implies it should run for *all* statements. However, there appear to be at least a few statement types that do not work, such as `ALTER SESSION`. ``` begin execute immediate 'explain plan for alter session set optimizer_features_enable = ''11.2.0.4'''; end; / ORA-00900: invalid SQL statement ORA-06512: at line 2 ``` Slightly off-topic - are you trying to build a completely generic SQL interface, like a private SQL Fiddle built in PL/SQL? Do you need to worry about things like preventing users from attempting to run certain statement types, and ensuring there are no trailing semicolons? If so I can edit the question to help with some of those difficult dynamic SQL tasks.
Dynamic SQL - Check syntax and semantics
[ "", "sql", "oracle", "dynamic-sql", "" ]
I would like to remove all rows from the table **EXCEPT** when `FirstName` is **Ben** and `isAdmin` is true Here is my SQL ``` DELETE FROM Table1 WHERE (FirstName <> 'Ben' AND isAdmin = 1); ``` However, my issue is that when `isAdmin` is false... it should remove that row as well but it doesn't remove it. What is my issue here?
The correct SQL should be ``` DELETE FROM Table1 WHERE (FirstName <> 'Ben' OR isAdmin = 0); ```
You want: ``` DELETE FROM t WHERE NOT (a AND b) ``` The negation of `a AND b` is `NOT a OR NOT b` so your query should be ``` DELETE FROM Table1 WHERE NOT (FirstName = 'Ben' AND isAdmin = 1); ``` or ``` DELETE FROM Table1 WHERE FirstName <> 'Ben' OR isAdmin <> 1); ``` Personally I think the first option identifies the intent more clearly. There should not be any performance difference between the two.
Remove all rows Except condition
[ "", "sql", "database", "where-clause", "" ]
I was hoping to get some guidance on a SQL script I am trying to put together for Oracle database 11g. I am attempting to perform a count of claims from the 'claim' table, and order them by year / month / and enterprise. I was able to get a count of claims and order them like I would like, however I need to pull data from another table and I am having trouble combining the 'row_number' function with a join. Here is my script so far: ``` SELECT TO_CHAR (SYSTEM_ENTRY_DATE, 'YYYY') YEAR, TO_CHAR (SYSTEM_ENTRY_DATE, 'MM') MONTH, ENTERPRISE_IID, COUNT (*) CLAIMS FROM (SELECT CLAIM.CLAIM_EID, CLAIM.SYSTEM_ENTRY_DATE, CLAIM.ENTERPRISE_IID, ROW_NUMBER () OVER (PARTITION BY CLAIM.CLAIM_EID, CLAIM.ENTERPRISE_IID ORDER BY CLAIM.SYSTEM_ENTRY_DATE DESC) RN FROM CLAIM WHERE CLAIM_IID IN (SELECT DISTINCT (CLAIM_IID) FROM CLAIM_LINE WHERE STATUS <> 'D') AND CLAIM.CONTEXT = '1' AND CLAIM.CLAIM_STATUS = 'A' AND CLAIM.LAST_ANALYSIS_DATE IS NOT NULL) WHERE RN = 1 GROUP BY ENTERPRISE_IID, TO_CHAR (SYSTEM_ENTRY_DATE, 'YYYY'), TO_CHAR (SYSTEM_ENTRY_DATE, 'MM'); ``` So far all of my data is coming from the 'claim' table. This pulls the following result: ``` YEAR MONTH ENTERPRISE_IID CLAIMS ---- ----- -------------- ---------- 2016 01 6 1 2015 08 6 3 2016 02 6 2 2015 09 6 2 2015 07 6 2 2015 09 5 22 2015 11 5 29 2015 12 5 27 2016 04 5 8 2015 07 5 29 2015 05 5 15 2015 06 5 5 2015 10 5 45 2016 03 5 54 2015 03 5 10 2016 02 5 70 2016 01 5 55 2015 08 5 32 2015 04 5 12 19 rows selected. ``` The enterprise_IID is the primary key on the 'enterprise' table. The 'enterprise' table also contains the 'name' attribute for each entry. I would like to join the claim and enterprise tables in order to show the enterprise name for this count, and not the enterprise_IID. As you can tell I am rather new to Oracle and SQL, and I am a bit stuck on this one. I was thinking that I should do an inner join between the two tables, but I am not quite sure how to do that when using the row_number function. Or perhaps I am taking the wrong approach here, and someone could push me in another direction. Here is what I tried: ``` SELECT TO_CHAR (SYSTEM_ENTRY_DATE, 'YYYY') YEAR, TO_CHAR (SYSTEM_ENTRY_DATE, 'MM') MONTH, ENTERPRISE_IID, ENTERPRISE.NAME, COUNT (*) CLAIMS FROM (SELECT CLAIM.CLAIM_EID, CLAIM.SYSTEM_ENTRY_DATE, CLAIM.ENTERPRISE_IID, ROW_NUMBER () OVER (PARTITION BY CLAIM.CLAIM_EID, CLAIM.ENTERPRISE_IID ORDER BY CLAIM.SYSTEM_ENTRY_DATE DESC) RN FROM CLAIM, enterprise INNER JOIN ENTERPRISE ON CLAIM.ENTERPRISE_IID = ENTERPRISE.ENTERPRISE_IID WHERE CLAIM_IID IN (SELECT DISTINCT (CLAIM_IID) FROM CLAIM_LINE WHERE STATUS <> 'D') AND CLAIM.CONTEXT = '1' AND CLAIM.CLAIM_STATUS = 'A' AND CLAIM.LAST_ANALYSIS_DATE IS NOT NULL) WHERE RN = 1 GROUP BY ENTERPRISE.NAME, ENTERPRISE_IID, TO_CHAR (SYSTEM_ENTRY_DATE, 'YYYY'), TO_CHAR (SYSTEM_ENTRY_DATE, 'MM'); ``` Thank you in advance! "Desired Output" ``` YEAR MONTH NAME CLAIMS ---- ----- ---- ---------- 2016 01 Ent1 1 2015 08 Ent1 3 2016 02 Ent1 2 2015 09 Ent1 2 2015 07 Ent1 2 2015 09 Ent2 22 2015 11 Ent2 29 2015 12 Ent2 27 2016 04 Ent2 8 2015 07 Ent2 29 2015 05 Ent2 15 2015 06 Ent2 5 2015 10 Ent2 45 2016 03 Ent2 54 2015 03 Ent2 10 2016 02 Ent2 70 2016 01 Ent2 55 2015 08 Ent2 32 2015 04 Ent2 12 19 rows selected. ```
You can try this. Joins can be used when calculating row numbers with `row_number` function. ``` SELECT TO_CHAR (SYSTEM_ENTRY_DATE, 'YYYY') YEAR, TO_CHAR (SYSTEM_ENTRY_DATE, 'MM') MONTH, ENTERPRISE_IID, NAME, COUNT (*) CLAIMS FROM (SELECT CLAIM.CLAIM_EID, CLAIM.SYSTEM_ENTRY_DATE, CLAIM.ENTERPRISE_IID, ENTERPRISE.NAME, ROW_NUMBER () OVER (PARTITION BY CLAIM.CLAIM_EID, CLAIM.ENTERPRISE_IID ORDER BY CLAIM.SYSTEM_ENTRY_DATE DESC) RN FROM CLAIM --, enterprise (this is not required as the table is being joined already) INNER JOIN ENTERPRISE ON CLAIM.ENTERPRISE_IID = ENTERPRISE.ENTERPRISE_IID INNER JOIN (SELECT DISTINCT CLAIM_IID FROM CLAIM_LINE WHERE STATUS <> 'D') CLAIM_LINE ON CLAIM.CLAIM_IID = CLAIM_LINE.CLAIM_IID WHERE CLAIM.CONTEXT = '1' AND CLAIM.CLAIM_STATUS = 'A' AND CLAIM.LAST_ANALYSIS_DATE IS NOT NULL) t WHERE RN = 1 GROUP BY NAME, --ENTERPRISE.NAME (The alias ENTERPRISE is not accessible here.) ENTERPRISE_IID, TO_CHAR(SYSTEM_ENTRY_DATE, 'YYYY'), TO_CHAR(SYSTEM_ENTRY_DATE, 'MM'); ```
I'd write the query like this: ``` SELECT TO_CHAR(TRUNC(c.system_entry_date,'MM'),'YYYY') AS year , TO_CHAR(TRUNC(c.system_entry_date,'MM'),'MM') AS month , e.enterprise_name AS name , COUNT(*) AS claims FROM ( SELECT r.claim_eid , r.enterprise_iid , MAX(r.system_entry_date) AS system_entry_date FROM ( SELECT DISTINCT l.claim_iid FROM claim_line l WHERE l.status <> 'D' ) d JOIN claim r ON r.claim_iid = d.claim_iid AND r.context = '1' AND r.claim_status = 'A' AND r.last_analysis_date IS NOT NULL GROUP BY r.claim_eid , r.enterprise_iid ) c JOIN enterprise e ON e.enterprise_iid = c.enterprise_iid GROUP BY c.enterprise_iid , TRUNC(c.system_entry_date,'MM') , e.enterprise_name ORDER BY e.enterprise_name , TRUNC(c.system_entry_date,'MM') ``` A few notes: I prefer to qualify *ALL* column references with the table name or short table alias, and assign aliases to all inline views. Since the usage of ROW\_NUMBER() appears to be to get the "latest" system\_entry\_date for a claim and eliminate duplicates, I'd prefer to use a GROUP BY and a MAX() aggregate. I prefer to use a join operation rather than the IN (subquery) pattern. (Or I would tend to use an EXISTS (correlated subquery) pattern.) I don't think it matters too much if you use TO\_CHAR or EXTRACT. The TO\_CHAR gets you the leading zero in the month; I don't think EXTRACT(MONTH) gets you the leading zero. I'd use whichever gets me closest to the resultset I need. Personally, I would return just a single column, either containing the year and month as one string e.g. TO\_CHAR( , 'YYYYMM') or just a DATE value. It all depends on what I'm going to be doing with that.
SQL NOOB - Oracle joins and Row Number
[ "sql", "database", "oracle", "join" ]
I have a view like this: ``` ID| Key | Product | Item | Block | Source | Title | Text | Type | H1 | H2 | H3 | ------------------------------------------------------------------------------- 1 | 456 | abcd | def | 1 | TP | QWERT | YUIP | tgr | A1 | A2 | A3 | 2 | 567 | fhrh | klo | 1 | GT | TREWQ | ITGF | trp | A1 | A2 | A3 | 3 | 891 | ufheu | yut | 2 | FR | WERTY | MNBV | uip |NULL|NULL|NULL| ``` I want to export some of these columns into an existing, empty table. I want to select the first six columns and then select the other columns like a hierarchy going from right to left. If H1, H2 and H3 are NOT null, they should come in the output and Title, Text and Type should be NULL (even though they contain values). If H1, H2 and H3 are NULL, I want Title, Text and Type to be in the output. It should be something like this: ``` ID| Key | Product | Item | Block | Source | Title | Text | Type | H1 | H2 | H3 | ------------------------------------------------------------------------------- 1 | 456 | abcd | def | 1 | TP | NULL | NULL | NULL | A1 | A2 | A3 | 2 | 567 | fhrh | klo | 1 | GT | NULL | NULL | NULL | A1 | A2 | A3 | 3 | 891 | ufheu | yut | 2 | FR | WERTY | MNBV | uip |NULL|NULL|NULL| ``` Can anybody help me with this? Help would be very much appreciated!
Ok, I've wrapped all column names in [square brackets] because you're using reserved names (Key, Text, Type) and I like consistency, it's worth breaking this habit as soon as possible. If your criteria is that all three columns (H1, H2, H3) need to be NULL then you'll want something like this; ``` SELECT [ID] ,[key] ,[Product] ,[Item] ,[Block] ,[Source] ,CASE WHEN H1 IS NULL AND H2 IS NULL AND H3 IS NULL THEN [Title] ELSE NULL END AS [Title] ,CASE WHEN H1 IS NULL AND H2 IS NULL AND H3 IS NULL THEN [Text] ELSE NULL END AS [Text] ,CASE WHEN H1 IS NULL AND H2 IS NULL AND H3 IS NULL THEN [Type] ELSE NULL END AS [Type] ,H1 ,H2 ,H3 FROM DataTable ```
If you want the comparison column by column, then use `coalesce()`: ``` select ID, Key, Product, Item, Block, Source, (case when h1 is not null then null else title end) as title, (case when h2 is not null then null else text end) as text, (case when h3 is not null then null else type end) as type, coalesce(h1, title) as h1, coalesce(h2, text) as h2, coalesce(h3, type) as h3 from t; ``` However, I'm not sure if you mean all three columns at the same time: ``` select ID, Key, Product, Item, Block, Source, (case when h1 is null and h2 is null and h3 is null then title end) as title, (case when h1 is null and h2 is null and h3 is null then text end) as text, (case when h1 is null and h2 is null and h3 is null then type end) as type, (case when h1 is null and h2 is null and h3 is null then NULL else h1 end) as h1, (case when h1 is null and h2 is null and h3 is null then NULL else h2 end) as h2, (case when h1 is null and h2 is null and h3 is null then NULL else h3 end) as h3 from t; ```
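Both answers rely on the same behavior: a `CASE` expression with no matching `WHEN` (and no `ELSE`) yields NULL. A minimal runnable sketch of that, using Python's built-in `sqlite3` with invented table/column names (`t`, `Txt`) rather than the asker's real schema:

```python
import sqlite3

# Hypothetical in-memory table mirroring the question's view; the CASE
# expressions below null out Title/Txt whenever H1 is filled.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INT, Title TEXT, Txt TEXT, H1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, "QWERT", "YUIP", "A1"),   # H1 present -> Title/Txt suppressed
    (3, "WERTY", "MNBV", None),   # H1 NULL    -> Title/Txt kept
])
rows = conn.execute("""
    SELECT ID,
           CASE WHEN H1 IS NULL THEN Title END AS Title,
           CASE WHEN H1 IS NULL THEN Txt   END AS Txt,
           H1
    FROM t ORDER BY ID
""").fetchall()
print(rows)  # [(1, None, None, 'A1'), (3, 'WERTY', 'MNBV', None)]
```

The same pattern extends to the full three-column `H1 IS NULL AND H2 IS NULL AND H3 IS NULL` test from the answers.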
SQL Select Columns.. IF NULL then Select other Columns
[ "sql", "t-sql", "select" ]
Let's say I have a query: ``` select product_id, price, price_day from products where price>10 ``` and I want to join the result of this query with itself (if for example I want to get in the same row a product's price and the price on the **previous** day) I can do this: ``` select * from ( select product_id, price, price_day from products where price>10 ) as r1 join ( select product_id, price, price_day from products where price>10 ) as r2 on r1.product_id=r2.product_id and r1.price_day=r2.price_day-1 ``` but as you can see I am copying the original query, giving it a different name just to join its result with itself. Another option is to create a temp table, but then I have to remember to remove it. Is there a more elegant way to join the result of a query with itself?
A self-join query will help: ``` select a.product_ID,a.price ,a.price_day ,b.price as prevdayprice ,b.price_day as prevday from Table1 a inner join table1 b on a.product_ID=b.product_ID and a.price_day = b.price_day+1 where a.price >10 ```
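The self-join above can be checked end-to-end with a small script. This sketch uses Python's `sqlite3` and invented sample prices, keeping the question's `products` table and column names:

```python
import sqlite3

# Tiny invented price history: product 1 on three consecutive days.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (product_id INT, price INT, price_day INT)")
conn.executemany("INSERT INTO products VALUES (?,?,?)",
                 [(1, 12, 1), (1, 15, 2), (1, 20, 3)])
rows = conn.execute("""
    SELECT a.product_id, a.price_day, a.price, b.price AS prev_day_price
    FROM products a
    JOIN products b
      ON a.product_id = b.product_id
     AND a.price_day  = b.price_day + 1
    WHERE a.price > 10
    ORDER BY a.price_day
""").fetchall()
print(rows)  # [(1, 2, 15, 12), (1, 3, 20, 15)]
```

Day 1 has no predecessor, so the inner join drops it — only days 2 and 3 appear, each paired with the previous day's price.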
You could do a bunch of things, just a few options could be: * Just let mysql handle the optimization + this will likely work fine until you hit **many** rows * Make a view for your base query and use that + could increase performance but mostly increases readability (if done right) * Use a table (non temporary) and insert your initial rows in there. [(unfortunately you cannot refer to a temporary table more than once in a query)](https://stackoverflow.com/questions/343402/getting-around-mysql-cant-reopen-table-error) + this will likely be more expensive performance wise until a certain number of rows is reached. Depending on how important performance is for your situation and how many rows you need to work with the "best" choice would change.
Join query result with itself in MySQL
[ "mysql", "sql" ]
I want to concatenate a SELECT statement for AddressLine1 up to AddressLine4, PostalCode and a PhoneNo column in SQL Server 2008 such that each field will begin on a new line. This is to be used for reporting purposes. What is the best way to have this done? Desired outcome is: ``` 2 Jojo Street Kenyon Express Way Exeee UY19 78DF 08945392847 ```
You can concatenate the required fields, and use the `char(13)+char(10)` in between them for a new line. ``` select AddressLine1 + char(13)+char(10) + AddressLine2 + char(13)+char(10) + AddressLine3 +char(13)+char(10) + AddressLine4 + char(13)+char(10) + PostalCode + char(13)+char(10) + PhoneNo from table1 ```
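Outside T-SQL the `+` concatenation becomes `||`, but the `char(13)||char(10)` trick is the same. A quick sketch via Python's `sqlite3`, with made-up address fragments:

```python
import sqlite3

# SQLite sketch of the answer's idea: CR+LF built from char() calls,
# glued between two invented address fields.
conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT '2 Jojo Street' || char(13) || char(10) || 'UY19 78DF'"
).fetchone()
print(repr(row[0]))  # '2 Jojo Street\r\nUY19 78DF'
```

Whether the report shows the break depends on the consumer: SSMS grid view hides CR/LF, while text output and most report renderers honor it.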
You need to CONCAT the fields with the newline character of your target system, e.g. \n or CHAR(13)+CHAR(10) eg. [this](https://stackoverflow.com/questions/31057/how-to-insert-a-line-break-in-a-sql-server-varchar-nvarchar-string) could be helpful for you
Concatenate Address field in Sql
[ "sql", "sql-server", "sql-server-2008" ]
I'm working on a little web project and I was wondering what SQL I would need to use to find the month with the least amount of bookings. I have a `Booking` table: [![enter image description here](https://i.stack.imgur.com/CcPPa.png)](https://i.stack.imgur.com/CcPPa.png) I have a `Package` table: [![enter image description here](https://i.stack.imgur.com/AQaK5.png)](https://i.stack.imgur.com/AQaK5.png) I have a `HolidayMaker` table: [![https://i.gyazo.com/b3459cd20e4fc795c46305f357f6016e.png](https://i.stack.imgur.com/ggv0I.png)](https://i.stack.imgur.com/ggv0I.png) I think this might have something to do with nested `SELECT` statements; however, I am not entirely sure. Thanks, James. :-)
In case you are interested in seeing results only for months, not taking years into account (i.e. all reservations from any January, no matter what year): ``` SELECT MONTHNAME(Bo_Datebooked) as month, COUNT(1) as num_reservations FROM Booking GROUP BY month ORDER BY num_reservations ASC /* additionally add LIMIT 1 to see only the lowest result */ ``` In case you want to differentiate months from each year (i.e. reservations from January 2016, reservations from January 2012...): ``` SELECT CONCAT(MONTHNAME(Bo_Datebooked), YEAR(Bo_Datebooked)) as date, COUNT(1) as num_reservations FROM Booking GROUP BY date ORDER BY num_reservations ASC /* additionally add LIMIT 1 to see only the lowest result */ ``` Warning: neither of these will show you months with 0 reservations!!!
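A runnable sketch of the month-grouping idea, using SQLite's `strftime('%m', ...)` in place of MySQL's `MONTHNAME()` (the booking dates below are invented):

```python
import sqlite3

# Two January bookings, one February booking; grouping by month and
# ordering by count ascending puts the quietest month first.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Booking (Bo_Datebooked TEXT)")
conn.executemany("INSERT INTO Booking VALUES (?)",
                 [("2016-01-05",), ("2016-01-20",), ("2016-02-11",)])
rows = conn.execute("""
    SELECT strftime('%m', Bo_Datebooked) AS month, COUNT(*) AS n
    FROM Booking
    GROUP BY month
    ORDER BY n ASC
""").fetchall()
print(rows)  # [('02', 1), ('01', 2)]
```

Appending `LIMIT 1` would return only the `('02', 1)` row, the month with the fewest bookings — and, as the answer warns, months with zero bookings never appear at all.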
Use `MONTHNAME` and `GROUP BY` a `COUNT`. ``` SELECT MONTHNAME(Bo_Datebooked), COUNT(Booking_ID) FROM Booking GROUP BY MONTHNAME(Bo_Datebooked) ORDER BY COUNT(Booking_ID) ASC ```
How to retrieve the month with the least amount of bookings in MySQL?
[ "mysql", "sql" ]
I have a query that needs to count the field Points and then return the highest value. This query does that fine; however, I now want to link another table, Team(PlayerID), with Player(PlayerID), so it shows the player's team details etc. I attempted to join the tables the usual way but keep getting errors. I also do not want to use ORDER BY ... DESC and take the first row only. (Oracle) Query: ``` SELECT PlayerID, COUNT(Points) ```
``` SELECT t.*, p.* FROM team t INNER JOIN (SELECT PlayerID, COUNT(Points) FROM Player WHERE Points = 1 group by PlayerID HAVING COUNT(Points) = (SELECT MAX(count(Points)) from Player WHERE Points = 1 group by PlayerID) ) p ON t.PlayerID = p.PlayerID ```
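The nested `MAX(count(Points))` in the HAVING clause is Oracle-specific; most other engines need the inner counts wrapped in their own subquery. A portable sketch via Python's `sqlite3`, with invented Player rows (player 1 has two scoring rows, player 2 has one):

```python
import sqlite3

# HAVING COUNT(*) = (max of the per-player counts) keeps only the
# top-scoring player; data is made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Player (PlayerID INT, Points INT);
    INSERT INTO Player VALUES (1,1),(1,1),(2,1);
""")
rows = conn.execute("""
    SELECT PlayerID, COUNT(*) AS pts
    FROM Player WHERE Points = 1
    GROUP BY PlayerID
    HAVING COUNT(*) = (SELECT MAX(c) FROM
        (SELECT COUNT(*) AS c FROM Player WHERE Points = 1 GROUP BY PlayerID))
""").fetchall()
print(rows)  # [(1, 2)]
```

The join to `Team` would then attach to this derived result exactly as in the answers above.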
``` SELECT A.PlayerID, a.TotalPoints, b.[stuff] FROM (SELECT PlayerID, COUNT(Points) FROM Player WHERE Points = 1 group by PlayerID HAVING COUNT(Points) = (SELECT MAX(count(Points)) from Player WHERE Points = 1 group by PlayerID)) a join TEAM b on b.playerid = a.playerid; ```
How do I use MAX and Count with two tables
[ "sql", "oracle" ]
In a calendar control, we can see some dates from the previous month and next month also. Sample image below [![enter image description here](https://i.stack.imgur.com/PxYly.png)](https://i.stack.imgur.com/PxYly.png) ``` (i.e. Apr-2016: Starts from Mar-28 and ends in May-08 Mar-2016: Starts from Feb-29 and ends in Apr-10) ``` Here, I need to generate a list of all the dates in a calendar control for a particular year-month. My week start is **Monday**. Here is the T-SQL script I have tried so far. ``` DECLARE @V_DATE DATE = GETDATE() ;WITH CTE_DATE AS ( SELECT DATEADD(dd,-(DAY(@V_DATE)-1),@V_DATE) CDATE UNION ALL SELECT DATEADD(dd,1,CDATE) FROM CTE_DATE WHERE DATEADD(dd,1,CDATE) <= DATEADD(dd,-(DAY(DATEADD(mm,1,CDATE))),DATEADD(mm,1,CDATE)) ) SELECT * FROM CTE_DATE ``` Result Is: ``` 2016-04-01 2016-04-02 . . 2016-04-29 2016-04-30 ``` It will list all the days of an inputted year-month, but I need to include the missing dates from the previous month as well as the next month. Expected result for `Apr-2016` ``` 2016-03-28 2016-03-29 . 2016-04-15 . 2016-05-07 2016-05-08 ``` Expected result for `May-2016` ``` 2016-04-25 2016-04-26 . 2016-05-15 . 2016-06-04 2016-06-05 ``` Note: The calendar control always shows 42 days.
Since your week starts on Monday, you can take reference to date 0, '1900-01-01', which is a Monday. Adding 41 days gives you your end date. ``` select date_fr = dateadd(day, datediff(day, 0, '2016-05-01') / 7 * 7, 0), date_to = dateadd(day, datediff(day, 0, '2016-05-01') / 7 * 7, 41) ``` The following gives you date 1900-01-01 and Monday: ``` select convert(datetime, 0), datename(weekday, 0) ```
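The same 42-day window can be sanity-checked outside SQL. This Python sketch (a hypothetical helper, not part of the answer) snaps the first of the month back to Monday and enumerates 42 days, reproducing the question's expected Apr-2016 range:

```python
from datetime import date, timedelta

def calendar_grid(year, month):
    """All 42 dates shown by a Monday-first calendar control
    for the given month (illustrative helper)."""
    first = date(year, month, 1)
    start = first - timedelta(days=first.weekday())  # back to Monday
    return [start + timedelta(days=i) for i in range(42)]

grid = calendar_grid(2016, 4)
print(grid[0], grid[-1])  # 2016-03-28 2016-05-08
```

This matches the T-SQL above: `datediff(day, 0, @d) / 7 * 7` is the same "round down to the nearest Monday" step, and `+ 41` is the same 42nd cell.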
Have you considered creating a dates table in your database? You would have columns for dates and a column for week number. Linking to this table you could find the week number for your start and end dates; you could then re-link to the table to find the first date of your start week and the last date of your end week. This would probably be more efficient than calculating at each step each time, since it is a simple link.
How to get all the dates in a full calendar month
[ "sql", "sql-server", "t-sql", "calendar" ]
I have several hundred Hex numbers (32 char long) that were pulled from a sql db. I have them stored in an excel table and need to convert them to GUID with dashes. I have found an online converter, but it only does one at a time and this would be very time consuming (<http://www.windowstricks.in/online-windows-guid-converter>). Is there a way, either in Excel with VBA or Formulas or in SQL to convert these? It is not as simple as just adding the dashes into the correct places. I've tried that and it is not what I need to have happen. An example of the Hex and the converted dash separated GUID: Hex 1. 6F414B9DFB178945A3641E40BC2A4AAB 2. C58C415E215CEC4D9B5100532573D3FA 3. 2B0BBF00A1403E41A333C805961CEA9F GUID converted from the Hex above 1. 48a6c53b-941c-46e2-9964-680754f71666 2. ea0ba3f4-4905-4d9c-9d83-76c57bdb060a 3. 18cea3f7-e1d1-4609-a4bc-9bf6fec6a2d4 Any help you can give would be very appreciated. Thanks
This function converts a hexadecimal string to a formatted GUID string: ``` Public Function ConvHexToGuid(hexa As String) As String Dim guid As String * 36 Mid$(guid, 1) = Mid$(hexa, 7, 2) Mid$(guid, 3) = Mid$(hexa, 5, 2) Mid$(guid, 5) = Mid$(hexa, 3, 2) Mid$(guid, 7) = Mid$(hexa, 1, 2) Mid$(guid, 9) = "-" Mid$(guid, 10) = Mid$(hexa, 11, 2) Mid$(guid, 12) = Mid$(hexa, 9, 2) Mid$(guid, 14) = "-" Mid$(guid, 15) = Mid$(hexa, 15, 2) Mid$(guid, 17) = Mid$(hexa, 13, 2) Mid$(guid, 19) = "-" Mid$(guid, 20) = Mid$(hexa, 17, 4) Mid$(guid, 24) = "-" Mid$(guid, 25) = Mid$(hexa, 21, 16) ConvHexToGuid = guid End Function ```
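For cross-checking the VBA outside Excel, Python's standard `uuid` module performs the identical byte reordering via its `bytes_le` constructor (the first three GUID fields are little-endian in the raw bytes). Note the result matches the VBA function's output, not the sample GUIDs in the question, which do not appear to correspond to the sample hex values:

```python
import uuid

def hex_to_guid(hexa: str) -> str:
    # bytes_le interprets the 16 bytes with the first three fields
    # little-endian, exactly the swap the VBA does by hand.
    return str(uuid.UUID(bytes_le=bytes.fromhex(hexa)))

print(hex_to_guid("6F414B9DFB178945A3641E40BC2A4AAB"))
# 9d4b416f-17fb-4589-a364-1e40bc2a4aab
```

Python prints lowercase; the VBA produces the same digits in the original case.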
The GUID to HEX is transposed as follows: `0x00112233445566778899AABBCCDDEEFF` `{33221100-5544-7766-8899-AABBCCDDEEFF}`
Convert a Hex number to a GUID with dashes
[ "sql", "excel", "vba" ]
I have three tables: Calendar, Employees, Task. Employees usually complete their tasks during the week and they don't work during the weekend. What I want to accomplish is to join the tables so I will see every day in a year even if no employee completed any task. Here is a SQL query that works for me: ``` SELECT c.date, t.task, e.name FROM calendar c LEFT JOIN tasks t ON (c.date = t.date) INNER JOIN employees e ON (t.emp_id = e.id) WHERE c.date >= "2016-01-01" AND c.date <= "2016-01-07"; ``` A result looks like this: ``` Date Task Name ... 2016-01-05 Driving John 2016-01-05 Cooking Rob 2016-01-06 Installing Jane 2016-01-07 null null ``` My problem is that when I add employees to the `WHERE` clause (`WHERE e.name in("John", "Rob", "Jane")`), the last row (`2016-01-07 null null`) disappears. What should I change to keep even the dates without employees and tasks in the result? I need to keep these dates in for the final report.
The other answer here is right. Turn the inner join into a left join to return days without an employee, then: ``` WHERE c.date between "2016-01-01" and "2016-01-07" AND (e.name in("John", "Rob", "Jane") or e.name is null) ```
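A quick check that the `OR ... IS NULL` branch really keeps the empty day — this `sqlite3` sketch uses tiny invented calendar/tasks tables (one day with a task, one without):

```python
import sqlite3

# Day 2016-01-07 has no task row; the left join produces NULL for it,
# and the OR-with-NULL filter keeps it in the result.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE calendar (d TEXT);
    CREATE TABLE tasks (d TEXT, name TEXT);
    INSERT INTO calendar VALUES ('2016-01-05'),('2016-01-07');
    INSERT INTO tasks VALUES ('2016-01-05','John');
""")
rows = conn.execute("""
    SELECT c.d, t.name
    FROM calendar c LEFT JOIN tasks t ON c.d = t.d
    WHERE t.name IN ('John','Rob','Jane') OR t.name IS NULL
    ORDER BY c.d
""").fetchall()
print(rows)  # [('2016-01-05', 'John'), ('2016-01-07', None)]
```

Without the `OR t.name IS NULL`, the `IN (...)` test evaluates to NULL for the empty day and the row is filtered out — which is exactly the asker's symptom.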
Change the inner join on employees to a left join, and put the name filter in the join clause, not the where clause.
How to join tables to see dates with null values?
[ "mysql", "sql" ]
[sql fiddle demo here](http://sqlfiddle.com/#!6/555d4/6) I have this table structure for Diary table: ``` CREATE TABLE Diary ( [IdDiary] bigint, [UserId] int, [IdDay] numeric(18,0), [IsAnExtraHour] bit ); INSERT INTO Diary ([IdDiary], [UserId], [IdDay], [IsAnExtraHour]) values (51, 1409, 1, 0), (52, 1409, 1, 1), (53, 1409, 3, 0), (54, 1409, 5, 0), (55, 1409, 5, 1), (56, 1408, 2, 0); ``` And this structure for DiaryTimetable table: ``` CREATE TABLE DiaryTimetable ( [IdDiary] bigint, [Hour] varchar(50) ); INSERT INTO DiaryTimetable ([IdDiary], [Hour]) VALUES (51, '09:00'), (51, '09:30'), (51, '10:00'), (51, '10:30'), (51, '11:00'), (52, '15:00'), (52, '15:30'), (52, '16:00'), (52, '16:30'), (52, '17:00'), (53, '11:00'), (53, '11:30'), (53, '12:00'), (53, '12:30'), (53, '13:00'), (54, '10:00'), (54, '10:30'), (54, '11:00'), (54, '11:30'), (54, '12:00'), (55, '16:00'), (55, '16:30'), (55, '17:00'), (55, '17:30'), (55, '18:00'), (56, '15:00'), (56, '15:30'), (56, '16:00'), (56, '16:30'), (56, '17:00'); ``` I used this query to get the max hour and the min hour for the userid 1409, to get for each day the time thats enter and the time thats leave the work. The idday correspond with the number of the day of the week. For example 1 is monday, 2 is tuesday etc... 
``` SELECT d.IdDiary, d.IdDay, MIN(Hour) as 'Start Time', MAX(Hour) as 'End Time', IsAnExtraHour FROM Diary AS d LEFT JOIN DiaryTimetable AS dt ON d.IdDiary = dt.IdDiary where userid = 1409 GROUP BY d.IdDiary, d.IdDay, IsAnExtraHour ``` This query gives this result: [![enter image description here](https://i.stack.imgur.com/ICPcs.png)](https://i.stack.imgur.com/ICPcs.png) I want to get this result: ``` Day Start Time End Time Start Extra Time End Extra Time ----- ---------- -------- --------------- --------------- Monday 09:00 11:00 15:00 17:00 Wednesday 11:00 13:00 Friday 10:00 12:00 16:00 18:00 ``` I have a column (IsAnExtraHour) that indicates whether a row has extra hours in a day. For example, an employee starts work on Monday at 09:00 to 11:00 and then works again in the afternoon at 15:00 to 17:00, so I want to know how I can group these hours in the same row. I hope I've been able to express this clearly; I accept suggestions, thanks.
``` SELECT d.IdDay, MIN(CASE WHEN isAnExtraHour = 0 THEN hour END) as 'Start Time', MAX(CASE WHEN isAnExtraHour = 0 THEN hour END) as 'End Time', MIN(CASE WHEN isAnExtraHour = 1 THEN hour END) as 'Start Extra Time', MAX(CASE WHEN isAnExtraHour = 1 THEN hour END) as 'End Extra Time' FROM Diary AS d LEFT JOIN DiaryTimetable AS dt ON dt.IdDiary = d.IdDiary WHERE userid = 1409 GROUP BY d.IdDay ```
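The conditional MIN/MAX aggregation above is portable SQL; aggregates simply skip the NULLs that the unmatched `CASE` branches produce. A `sqlite3` sketch with one invented diary day (a morning shift plus an extra-hours afternoon shift):

```python
import sqlite3

# One Monday: normal shift 09:00-11:00 (extra=0),
# extra shift 15:00-17:00 (extra=1).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE d (IdDay INT, extra INT, hour TEXT);
    INSERT INTO d VALUES (1,0,'09:00'),(1,0,'11:00'),(1,1,'15:00'),(1,1,'17:00');
""")
rows = conn.execute("""
    SELECT IdDay,
           MIN(CASE WHEN extra = 0 THEN hour END) AS start_time,
           MAX(CASE WHEN extra = 0 THEN hour END) AS end_time,
           MIN(CASE WHEN extra = 1 THEN hour END) AS start_extra,
           MAX(CASE WHEN extra = 1 THEN hour END) AS end_extra
    FROM d GROUP BY IdDay
""").fetchall()
print(rows)  # [(1, '09:00', '11:00', '15:00', '17:00')]
```

A day with no extra shift would yield NULLs in the two extra columns, matching the blank cells in the desired output.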
I used the code from @Quassnoi and added this: ``` SELECT DATENAME(weekday, d.idday-1) as 'Day' , MIN(CASE WHEN isAnExtraHour = 0 THEN hour END) AS 'Start Time', MAX(CASE WHEN isAnExtraHour = 0 THEN hour END) AS 'End Time', MIN(CASE WHEN isAnExtraHour = 1 THEN hour END) AS 'Start Extra Time', MAX(CASE WHEN isAnExtraHour = 1 THEN hour END) AS 'End Extra Time' FROM Diary AS d LEFT JOIN DiaryTimetable AS dt ON dt.IdDiary = d.IdDiary WHERE userid = 1409 GROUP BY d.IdDay ``` I hope this helps someone; thanks all for your answers.
How I can group the results by day in this query with SQL Server?
[ "sql", "sql-server", "time", "group-by", "inner-join" ]
I'm very new to SQL and I don't know how to query 2 different items within the same field and the same table. I'm writing this in Excel VBA using SQL via OLEDB to attach to a PostgreSQL datasource. Basically I have 2 queries that I need to combine into one query. The first query is the primary group. I need to first find all those people with the code C10%. Then, of those with C10, the ones who also have the code R110%. The codes are in the srch table and the people's names are in the person table; these are joined by master\_id=p.entity\_id. Here are the 2 queries I need to combine: ``` Dim DIAG As String DIAG = "SELECT DISTINCT master_id, eventdate, code, term, surname, forename " _ & "FROM srch INNER JOIN person p ON master_id=p.entity_id " _ & "WHERE code LIKE 'C10..%' " _ & "ORDER BY master_id " Dim DIAG As String DIAG = "SELECT DISTINCT master_id, eventdate, code, term, surname, forename " _ & "FROM srch INNER JOIN person p ON master_id=p.entity_id " _ & "WHERE code LIKE 'R110%' " _ & "ORDER BY master_id " ``` The tables have hundreds of rows and each has a master\_id that identifies the person. Therefore one row would be master\_id = 1 and code = C10. The next could be master\_id = 1 and code R110. You are correct in that different codes cannot exist on the same row. Does this help? ``` person table entity_id | surname | forename 1 | Smith | John 2 | Mouse | Mickey 3 srch table master_id | code | term | eventdate 1 |C10 | DM | 01/01/2000 2 |R110 | AL | 01/01/2001 1 |R110 | AL | 01/01/2002 ``` I need to find person 1 ``` Result master_id|code|term| eventdate |surname|forename 1 |R110| AL | 01/01/2002| Smith | John ```
Since the column references aren't qualified, we can't tell which table each column reference refers to. --- SUGGESTION: as an aid to future readers of the SQL, please consider qualifying *all* column references with the table name, or even better, a short (unique) table alias. Then the future reader won't be scouring the table definitions to figure out which table contains which column.) Also consider what is going to happen with this SQL statement when a column of the same name is added to another table used in the query.) --- We'll just have to guess at which table contains which columns. I'd suggest a pattern using a GROUP BY clause, rather than DISTINCT. And I suggest computing an aggregate on the result of an expression that tests whether an included row satisfies the particular condition. And then performing a test on the aggregate, to see if any rows existed (within the group) where the condition evaluated to TRUE. As an example: (not tested) ``` SELECT s.master_id , s.eventdate , s.code , s.term , p.surname , p.forename FROM srch s INNER JOIN person p ON p.entity_id = s.master_id WHERE ( s.code LIKE 'C10..%' OR s.code LIKE 'R110%' ) GROUP BY s.master_id , s.eventdate , s.code , s.term , p.surname , p.forename HAVING SUM(CASE WHEN s.code LIKE 'C10..%' THEN 1 ELSE 0 END) > 0 AND SUM(CASE WHEN s.code LIKE 'R110%' THEN 1 ELSE 0 END) > 0 ORDER BY s.master_id ``` The **CASE** expressions will evaluate to either 1 or 0, for each row. The SUM() aggregate will total the 1s and 0s up. The return from the aggregate will be greater than zero if there was any row (within a "group") that satisfied the condition. If there are no rows (within the group) with code LIKE 'R110%', the SUM() will evaluate to zero, and the comparison will evaluate to FALSE, and the row will not be returned. 
NOTE: The comparison of the aggregates is in a HAVING clause because the results from the aggregates is not available when the conditions in the WHERE clause are evaluated, when the rows are accessed. **FOLLOWUP** Doh! That query above isn't going to return rows. That's my bad. There's a reason we test against some test cases. It helps us identify doofus problems like the one in the query I suggested above. It's impossible for that query to return any rows. The code column (as I hadn't really noticed) is in the GROUP BY clause. So at least one of the aggregate functions in the HAVING clause is guaranteed to evaluate to zero. (My problem was that I hadn't noticed that code was a column in the GROUP BY. Doh!) --- If all of the columns in the SELECT list need to match, *except* for the "code" column... (I hate to use an expensive correlated subquery when we don't have to...) we could add "EXISTS (correlated subquery). If master\_id is a foreign key reference to entity\_id in person, and entity\_id is the primary key of person... we could put off the join operation until *after* we had the results from srch. Does term and event also need to match, or just the code? How we write the query depends on that... --- Based on the responses in the comments, term and event\_date don't need to match. We're looking for rows in srch for the same person (master\_id) that have at least one row with the C10 code and at least one row with the R110 code. Identifying those values of master\_id follows the same pattern in the query above, using the GROUP BY and conditional tests on aggregates in the HAVING clause. This is a query that should return the master\_id values that have *both* a C10 code and an R110 code. This is something we can test... 
it doesn't return the whole resultset, it only gets us the master\_id values we want to return: ``` SELECT r.master_id FROM srch r WHERE ( r.code LIKE 'C10..%' OR r.code LIKE 'R110%' ) GROUP BY r.master_id HAVING SUM(CASE WHEN r.code LIKE 'C10..%' THEN 1 ELSE 0 END) > 0 AND SUM(CASE WHEN r.code LIKE 'R110%' THEN 1 ELSE 0 END) > 0 ``` Once we get that, we can use that query as an inline view... wrap it in parens, assign an alias and reference it like it was a table. For example: ``` SELECT q.* FROM ( SELECT r.master_id FROM srch r WHERE ( r.code LIKE 'C10..%' OR r.code LIKE 'R110%' ) GROUP BY r.master_id HAVING SUM(CASE WHEN r.code LIKE 'C10..%' THEN 1 ELSE 0 END) > 0 AND SUM(CASE WHEN r.code LIKE 'R110%' THEN 1 ELSE 0 END) > 0 ) q ``` We should test whether PostgreSQL will run that. If we can't get that to run, there's no point in building on it. Once we confirm that runs, we can add a join to the srch table, to get the rows that have a matching master\_id with a C10 or R110 code. ``` SELECT q.master_id , s.code , s.term , s.event_date FROM ( SELECT r.master_id FROM srch r WHERE ( r.code LIKE 'C10..%' OR r.code LIKE 'R110%' ) GROUP BY r.master_id HAVING SUM(CASE WHEN r.code LIKE 'C10..%' THEN 1 ELSE 0 END) > 0 AND SUM(CASE WHEN r.code LIKE 'R110%' THEN 1 ELSE 0 END) > 0 ) q JOIN srch s ON s.master_id = q.master_id AND s.code LIKE 'R110%' ``` We can add a join to the person table, to retrieve the given name and surname by primary key lookup. ``` JOIN person p ON p.entity_id = s.master_id ``` Then add the appropriate column references to the SELECT list. The original queries had the DISTINCT keyword. We can add that, or we could add a GROUP BY clause. Whichever. We can reference master\_id from either the inline view or the srch table, or even the entity\_id from the person table. The join conditions guarantee us they will all be non-null and equal to each other.
And wind up with something like this (desk checked only, not tested): ``` SELECT s.master_id , s.code , s.term , s.event_date , p.surname , p.forename FROM ( SELECT r.master_id FROM srch r WHERE ( r.code LIKE 'C10..%' OR r.code LIKE 'R110%' ) GROUP BY r.master_id HAVING SUM(CASE WHEN r.code LIKE 'C10..%' THEN 1 ELSE 0 END) > 0 AND SUM(CASE WHEN r.code LIKE 'R110%' THEN 1 ELSE 0 END) > 0 ) q JOIN srch s ON s.master_id = q.master_id AND s.code LIKE 'R110%' JOIN person p ON p.entity_id = s.master_id GROUP BY s.master_id , s.code , s.term , s.event_date , p.surname , p.forename ORDER BY s.master_id , s.code ``` And, if I haven't made some other doofus mistake again, I expect this will return the result specified by OP. (I've attempted to provide a few comments along the way, about how we went about building the query.) I'd be interested in finding out how big of smoke ball this one makes. **ANOTHER FOLLOWUP** As an alternative, since we don't need to return the C10 rows, the query could actually be a little simpler. The inline view could just return us master\_id values related to the C10 codes, and we can dispense with the HAVING clause with tests on the aggregates. This should return a result equivalent to the one above, perhaps even a little faster: ``` SELECT s.master_id , s.code , s.term , s.event_date , p.surname , p.forename FROM ( SELECT r.master_id FROM srch r WHERE r.code LIKE 'C10..%' GROUP BY r.master_id ) q JOIN srch s ON s.master_id = q.master_id AND s.code LIKE 'R110%' JOIN person p ON p.entity_id = s.master_id GROUP BY s.master_id , s.code , s.term , s.event_date , p.surname , p.forename ORDER BY s.master_id , s.code ```
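The HAVING-on-conditional-SUM pattern at the heart of this answer can be verified against the question's three sample srch rows. This `sqlite3` sketch simplifies the LIKE patterns to `'C10%'` and `'R110%'` for brevity:

```python
import sqlite3

# Question's sample data: person 1 has both codes, person 2 only R110.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE srch (master_id INT, code TEXT);
    INSERT INTO srch VALUES (1,'C10'),(2,'R110'),(1,'R110');
""")
rows = conn.execute("""
    SELECT master_id
    FROM srch
    WHERE code LIKE 'C10%' OR code LIKE 'R110%'
    GROUP BY master_id
    HAVING SUM(CASE WHEN code LIKE 'C10%'  THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN code LIKE 'R110%' THEN 1 ELSE 0 END) > 0
""").fetchall()
print(rows)  # [(1,)]
```

Only master\_id 1 satisfies both conditional sums, matching the expected result where only John Smith's R110 row is returned.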
You could try this wildcard and AND approach, to return both: I'm assuming that `R110` might be followed by `C10..` or vice versa. ``` Dim DIAG As String DIAG = "SELECT DISTINCT master_id, eventdate, code, term, surname, forename " _ & "FROM srch INNER JOIN person p ON master_id=p.entity_id " _ & "WHERE code LIKE '*R110*' AND code LIKE '*C10..*' " _ & "ORDER BY master_id " ```
Query 2 different items within the same field and the same table
[ "sql", "excel", "vba" ]
I have two tables `t1` and `t2`. Both have `id` and `name` columns. The name column of `t1` is defined as NOT NULL and it has the default value of 'Peter'. I want to insert all the values from `t2` into my `t1` table. But I have some null values in the `t2` table. When I try to insert the values: ``` Insert into t1 select * from t2; ``` It throws this error: > Msg 515, Level 16, State 2, Line 1 > Cannot insert the value NULL into column 'Name', table 'T1'; column does not allow nulls. Is there any possibility to set the default value for the column when we try to `insert` a `null` value?
First solution: ``` insert into t1 select id,isnull(name,'Peter') from t2 ``` Second solution: ``` ALTER TABLE T1 ALTER COLUMN name varchar(255) NULL insert into t1 select id,name from t2 ALTER TABLE T1 ALTER COLUMN name varchar(255) NOT NULL ```
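A runnable sketch of the first solution, with SQLite's standard `COALESCE` standing in for T-SQL's `ISNULL` (the tables are invented to match the question's shape):

```python
import sqlite3

# t1.name is NOT NULL with default 'Peter'; t2 contains one NULL name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INT, name TEXT NOT NULL DEFAULT 'Peter');
    CREATE TABLE t2 (id INT, name TEXT);
    INSERT INTO t2 VALUES (1,'Anna'),(2,NULL);
""")
# COALESCE substitutes the default wherever t2.name is NULL,
# so the NOT NULL constraint is never violated.
conn.execute("INSERT INTO t1 SELECT id, COALESCE(name,'Peter') FROM t2")
print(conn.execute("SELECT * FROM t1 ORDER BY id").fetchall())
# [(1, 'Anna'), (2, 'Peter')]
```

Note the column default is not consulted here — an explicit NULL in an INSERT ... SELECT bypasses defaults, which is why the substitution has to happen in the SELECT itself.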
So instead of ``` Insert into t1 select * from t2 ``` you can rewrite your query as ``` Insert into t1 select col1,col2, ISNULL(name, 'Peter'), othercolumns from t2 ```
How to set default value while insert null value into not null column SQL Server?
[ "sql", "sql-server", "insert", "ssms", "default" ]
Packages like `RMySQL` and `sqldf` allow one to interface with local or remote database servers. I'm creating a portable project which involves importing sql data in cases (or on devices) which do not always have access to a running server, but which *do* always have access to the latest .sql dump of the database. **The goal seems simple enough: import an .sql dump into R without the involvement of a MySQL server.** More specifically, I'd like to create a list of lists in which the elements correspond to any databases defined in the .sql dump (there may be multiple), and those elements in turn consist of the tables in those databases. To make this reproducible, let's take the sample sportsdb SQL file [here](http://sportsdb.org/modules/sd/assets/downloads/sportsdb_sample_mysql.zip) — if you unzip it it's called sportsdb\_sample\_mysql\_20080303.sql. One would think sqldf might be able to do it: `read.csv.sql('sportsdb_sample_mysql_20080303.sql', sql="SELECT * FROM addresses") Error in sqliteSendQuery(con, statement, bind.data) : error in statement: no such table: addresses` This even though there certainly is a table addresses in the dump. [This post](https://groups.google.com/forum/#!searchin/sqldf/read$20.sql/sqldf/3N-02wkEx4Y/cQUaI6Tv_cMJ) on the sqldf list mentions the same error, but no solution. Then there is an `sql.reader` function in the package `ProjectTemplate`, which looks promising. Poking around, the source for the function can be found [here](https://github.com/johnmyleswhite/ProjectTemplate/blob/master/R/sql.reader.R), and it assumes a running database server and relies on `RMySQL` — not what I need. So... we seem to be running out of options. Any help from the hivemind appreciated! (To reiterate, I am *not* looking for a solution that relies on access to an SQL server; that's easy with `dbReadTable` from the `RMySQL` package. I would very much like to bypass the server and get the data straight from the .sql dump file.)
Depending on what you want to extract from the table, here is how you can play around with the data: ``` numLines <- R.utils::countLines("sportsdb_sample_mysql_20080303.sql") # [1] 81266 linesInDB <- readLines("sportsdb_sample_mysql_20080303.sql",n=60) ``` Then you can do some regex to get table names (after CREATE TABLE), column names (between first brackets) and VALUES (lines after CREATE TABLE and between second brackets). Reference: [Reverse engineering a mysqldump output with MySQL Workbench gives "statement starting from pointed line contains non UTF8 characters" error](https://stackoverflow.com/questions/31185528/reverse-engineering-a-mysqldump-output-with-mysql-workbench-gives-statement-sta) --- EDIT: in response to OP's answer, if I interpret the Python script correctly, it is also reading the file line by line, filtering for INSERT INTO lines, parsing as CSV, then writing to file. This is very similar to my original suggestion. My version below is in R. If the file size is too large, it would be better to read in the file in chunks using some other R package. ``` options(stringsAsFactors=F) library(utils) library(stringi) library(plyr) mysqldumpfile <- "sportsdb_sample_mysql_20080303.sql" allLines <- readLines(mysqldumpfile) insertLines <- allLines[which(stri_detect_fixed(allLines, "INSERT INTO"))] allwords <- data.frame(stri_extract_all_words(insertLines, " ")) d_ply(allwords, .(X3), function(x) { #x <- split(allwords, allwords$X3)[["baseball_offensive_stats"]] print(x[1,3]) #find where the header/data columns start and end valuesCol <- which(x[1,]=="VALUES") lastCols <- which(apply(x, 2, function(y) all(is.na(y)))) datLastCol <- head(c(lastCols, ncol(x)+1), 1) - 1 #format and prepare for write to file df <- data.frame(x[,(valuesCol+1):datLastCol]) df <- setNames(df, x[1,4:(valuesCol-1)]) #type convert before writing to file otherwise its all strings df[] <- apply(df, 2, type.convert) #write to file write.csv(df, paste0(x[1,3],".csv"), row.names=F) }) ```
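For comparison with the R approach, here is the same line-filtering idea sketched in Python on a toy dump string. The regex is illustrative only — real mysqldump output needs a proper SQL tokenizer to survive embedded parentheses, escaped quotes, and multi-line INSERT statements:

```python
import re

# Toy two-table dump; pull the table name and the raw value tuples
# out of each INSERT statement.
dump = """
INSERT INTO `addresses` VALUES (1,'Main St'),(2,'High St');
INSERT INTO `teams` VALUES (7,'Reds');
"""
pattern = re.compile(r"INSERT INTO `?(\w+)`? VALUES (.+);")
tables = {}
for m in pattern.finditer(dump):
    tables.setdefault(m.group(1), []).extend(
        re.findall(r"\(([^)]*)\)", m.group(2)))
print(tables)
# {'addresses': ["1,'Main St'", "2,'High St'"], 'teams': ["7,'Reds'"]}
```

Each value string could then be fed through a CSV parser to produce one data frame (or CSV file) per table, which is essentially what both the R answer and the OP's Python script do.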
I don't think you will find a way to import a sql dump (which contains multiple tables with references) and then perform arbitrary sql queries on them within R. This would basically require the R package to run a complete database server (compatible with the one creating the dump) within R. I would suggest exporting the tables/select statements you need as CSV from your database [(see here)](https://stackoverflow.com/questions/356578/how-to-output-mysql-query-results-in-csv-format). If you can only work from the dump and don't want to setup a server for the conversion you could use some simple regular expressions to turn the `insert` statements in your dump into a bunch of CSV files for the tables using a tool of your choosing like `sed` or `awk` (or even R as suggested by the other answer but that might be rather slow for this file size).
Import MySQL dump into R (without requiring MySQL server)
[ "sql", "r", "database", "mysql" ]
I have two tables like this: **Table1** ``` Emp_ID Emp_Name Age Level 1 Tom 22 2 Susan 50 ``` **Table2** ``` Level Min_Age Max_Age Associate 20 30 Senior 31 45 ``` What I want to do is to find which level each employee is at. As such, I want to: Do for all records in Table1 (All employees) i. Select Level from Table2 where Table1.Age is between Min\_Age and Max\_Age in Table2 ii.Move to next employee I have tried the following: ``` DECLARE @level int, DECLARE @employee_age int DELCARE @min_age varcahr int, @max_age varchar int DECLARE testcursor CURSOR LOCAL SCROLL STATIC FOR Select Emp_ID FROM Table1 OPEN testcursor FETCH NEXT FROM testcursor WHILE @@FETCH_STATUS = 0 BEGIN FETCH NEXT FROM testcursor @min_age=select Min_age from Table2 @max_age= select Max_age from Table2 @employee_age=Select age FROM Table1 @level=select Level from Table2 where update Table1 set level=@level where @employee_age between @min_age and max_age END CLOSE cursorName DEALLOCATE cursorName ``` This doesn't completely work since I have to cursor through the records in Table2 as well. Any help would be appreciated.
You shouldn't have to use a cursor at all. Just join the two tables together. ``` UPDATE t1 SET t1.[age level] = t2.level FROM table1 t1 JOIN table2 t2 ON t1.[age level] BETWEEN t2.min_age and t2.max_age ```
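The `UPDATE ... FROM` join above is SQL Server syntax. As a portable sanity check of the same range lookup, here is a sketch using a correlated subquery in Python's bundled SQLite; table and column names follow the question, and an employee whose age falls in no bracket (Susan, 50) simply stays NULL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (emp_id INT, emp_name TEXT, age INT, level TEXT);
CREATE TABLE table2 (level TEXT, min_age INT, max_age INT);
INSERT INTO table1 VALUES (1,'Tom',22,NULL),(2,'Susan',50,NULL);
INSERT INTO table2 VALUES ('Associate',20,30),('Senior',31,45);
""")

# Portable equivalent of the UPDATE ... FROM join: a correlated subquery
# that looks up the bracket containing each employee's age.
con.execute("""
    UPDATE table1
    SET level = (SELECT t2.level
                 FROM table2 t2
                 WHERE table1.age BETWEEN t2.min_age AND t2.max_age)
""")
levels = dict(con.execute("SELECT emp_name, level FROM table1"))
```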
You can join- or you can use the following ``` BEGIN for i in (select level, [min age] as MinA, [max age] as MaxA from Table2) LOOP Update Table1 set Table1.[age level] = i.level where Table1.[age level] between i.MinA and i.MaxA; END LOOP; END ```
Cursor through and update database record by looking for value within a range from another table
[ "sql", "sql-server" ]
[![enter image description here](https://i.stack.imgur.com/cuwrx.png)](https://i.stack.imgur.com/cuwrx.png)

Hi, I have a table with rows like the above picture, and I would like to sum the QTY of all rows, but I need to exclude rows where AISLE = POS and QTY < 0 on the same row.

I made some attempts to get what I want, but I can't find a solution:

```
SELECT ROUND(SUM(QTY), 2) AS INVENTORY
FROM INV_QTY_LOCATION
WHERE PRODUCT = 143459
AND AISLE != 'PHY'
AND AISLE != 'RET'
AND case when AISLE = 'POS' AND QTY > 0
```

Another try

```
SELECT ROUND(SUM(QTY), 2) AS INVENTORY
FROM INV_QTY_LOCATION
WHERE PRODUCT = 143459
AND AISLE != 'PHY'
AND AISLE != 'RET'
AND (AISLE = 'POS' AND QTY > 0)
```

In this particular case the result should be 161.

**Solution**

```
SELECT
ROUND(
   SUM(
       CASE WHEN(AISLE = 'POS' AND QTY > 0) OR AISLE != 'POS' THEN QTY ELSE 0 END
      ), 2
     ) AS INVENTORY
FROM INV_QTY_LOCATION
WHERE PRODUCT = 143459
AND AISLE != 'PHY'
AND AISLE != 'RET'
```
I want to note that `NOT IN` would be helpful for this query: ``` WHERE PRODUCT = 143459 AND AISLE NOT IN ('PHY', 'RET') AND (AISLE <> 'POS' OR QTY > 0) ``` Note: This assumes that `AISLE` is never `NULL`. Or, alternatively: ``` WHERE PRODUCT = 143459 AND AISLE NOT IN ('PHY', 'RET') AND NOT (AISLE = 'POS' AND QTY > 0) ```
your second statement was almost correct, but since you are trying to "exclude AISLE = POS", then it should be `AISLE <> 'POS'`: ``` SELECT ROUND(SUM(QTY), 2) AS INVENTORY FROM INV_QTY_LOCATION WHERE PRODUCT = 143459 AND AISLE != 'PHY' AND AISLE != 'RET' AND (AISLE <> 'POS' OR QTY > 0) ```
MySQL - Query with grouped conditions
[ "mysql", "sql" ]
**I want to sort so that reviews with comment length between 50 and 250 come first.** All shorter or longer reviews should be at the end. Currently I have to filter them out, but this is not what I want to have.

Current query:

```
select c.id, c.name, DATE_FORMAT(c.created,'%d %b %Y') as date_new, r.ratings, c.comments,
ROUND((r.ratings_sum / r.ratings_qty),1) as total_rating
from comments AS c , rating AS r , id_script i
WHERE c.pid = i.sub_cat_id
AND i.cat_id = 118
AND r.reviewid = c.id
AND c.published = '1'
AND LENGTH(c.comments) <= 250
AND LENGTH(c.comments) >= 50
ORDER BY c.created DESC
```

I don't want to filter them with the clause below

```
AND LENGTH(c.comments) <= 250
AND LENGTH(c.comments) >= 50
```
In MySQL, you can do this simply using a boolean expression:

```
order by ( length(c.comments) < 50 or
           length(c.comments) > 250)
```

MySQL treats booleans in a numeric context as integers, with true as 1, so the in-range rows (where the condition evaluates to 0) sort first and the too-short or too-long ones fall to the end. You can append `, c.created desc` to keep the date ordering within each group. An alternative formulation is even shorter:

```
order by (length(c.comments) between 50 and 250) desc
```
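The boolean sort key is easy to sanity-check in SQLite, which also evaluates comparisons to 0/1. A small sketch (the table and data are invented); note the ascending direction, so rows where the out-of-range condition is false (0) come first and the short or long ones fall to the end:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE comments (id INT, body TEXT)")
con.executemany("INSERT INTO comments VALUES (?, ?)", [
    (1, "x" * 10),    # too short  -> should sort last
    (2, "x" * 100),   # in range
    (3, "x" * 300),   # too long   -> should sort last
    (4, "x" * 250),   # in range (boundary case)
])
rows = con.execute("""
    SELECT id FROM comments
    ORDER BY (LENGTH(body) < 50 OR LENGTH(body) > 250), id
""").fetchall()
order = [r[0] for r in rows]
```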
``` ORDER BY CASE WHEN LENGTH(C.comments) > 250 OR LENGTH(C.comments) < 50 THEN 1 ELSE 0 END ```
Sort by character length but don't want filter
[ "mysql", "sql", "filter" ]
I've got a database table of account users. There are two types of account:-

* Administrator Account
* Standard Account

The data table has two additional columns, Account Number and Parent Account Number. Every record regardless gets assigned a new Account Number, but if an account is a Standard Account, then it gets assigned a Parent Account Number. I can tell who is an administrator by the fact that the Parent Account Number field is NULL.

I'm wanting to print a list of these users, ordered by the administrator account and then any children of that administrator, before moving onto the next administrator account. I'm expecting a list of:-

* Administrator (Account 1250, Parent NULL)
  + Standard Account (Account 1255, Parent 1250)
  + Standard Account (Account 1256, Parent 1250)
* Administrator (Account 1375, Parent NULL)
  + Standard Account (Account 1403, Parent 1375)

I've done a SQL query of:-

```
SELECT *
FROM [LWC].[dbo].[AspNetUsers]
ORDER BY AccountNumber, ParentAccountNumber
```

but this isn't ordering correctly, because every Account Number is different. I presume this would sort as I expected it to if the AccountNumber was the same for multiple records, but it isn't.

Can anyone suggest how I can sort this correctly? Thanks!
I think simply: ``` ORDER BY ISNULL(ParentAccountNumber, AccountNumber), ParentAccountNumber, AccountNumber ``` would do what you want.
As this is a recursive relationship a [recursive cte](https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx) could be used , or a [hierarchyId](https://msdn.microsoft.com/en-us/library/bb677290.aspx) type would / may be better.
SQL - Ordering the results of a recursive relationship
[ "sql", ".net", "sql-server", "linq", "t-sql" ]
Can you please help me with a query that would display a table like this:

```
Dept_ID Dept_Name
10  Admin
10  Whalen
20  Sales
20  James
20  King
20  Smith
40  Marketing
40  Neena
```

and so on... The schema is HR. Display the Department Id and the Department Name, and then the last names of the employees working under that department.
When you union two data sets, there is NO implicit ordering; you could get the results in any order. To get a particular order you *must* use `ORDER BY`. To use `ORDER BY`, you *must* have fields to do that ordering by.

In your case, the pseudo code would be...

- `ORDER BY [dept_id], [depts-then-employees], [dept_name]`

The middle of those three is something that YOU are going to have to create. One way of doing that is as follows.

***note***: Just because you have a field to order by, does not mean that you have to select it.

```
SELECT
    dept_id,
    dept_name
FROM
(
    SELECT d.dept_id,
           d.dept_name,
           0  AS entity_type_ordinal
    FROM department   d

    UNION ALL

    SELECT d.dept_id,
           e.employee_name,
           1  AS entity_type_ordinal
    FROM department  d
    INNER JOIN employee  e
       ON e.dept_id = d.dept_id
) dept_and_emp
ORDER BY dept_id, entity_type_ordinal, dept_name
```
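Here is a quick end-to-end check of the ordinal trick in SQLite (schema and data are invented to match the sample output):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE department (dept_id INT, dept_name TEXT);
CREATE TABLE employee (dept_id INT, employee_name TEXT);
INSERT INTO department VALUES (10,'Admin'),(20,'Sales');
INSERT INTO employee VALUES (10,'Whalen'),(20,'King'),(20,'James');
""")
rows = con.execute("""
    SELECT dept_id, dept_name FROM (
        SELECT d.dept_id, d.dept_name, 0 AS entity_type_ordinal
        FROM department d
        UNION ALL
        SELECT d.dept_id, e.employee_name, 1
        FROM department d JOIN employee e ON e.dept_id = d.dept_id
    )
    ORDER BY dept_id, entity_type_ordinal, dept_name
""").fetchall()
```

Because `entity_type_ordinal` is 0 for departments and 1 for employees, each department header sorts immediately before its own employees.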
``` SELECT Dept_ID, Dept_Name FROM Your_Table ``` Simple as I can make it. It's very difficult (near impossible) to tell exactly what the query should be without more detail in terms of your table structure and some sample data. From your edit, you may need something more like this; ``` SELECT DT.Dept_ID, DT.Dept_Name, ET.Emp_Name FROM Dept_Table AS DT INNER JOIN Emp_Table AS ET ON DT.Dept_ID = ET.Dept_ID ORDER BY Dept_ID ``` This shows the employees in each department on the next column, you don't really want all that in the same column.
What would the query be for the following sample table?
[ "sql", "union-all" ]
I think this should be simple, but I can't figure it out or find a solution here. I have a table ITEM\_PROPERTIES ``` item_ID int property_name char(20) property_value char(20) ``` Sample data ``` 5 Colour Black 5 Size M 6 Colour Blue 6 Size L 7 Colour Purple 7 Size M 8 Colour Blue 8 Size S 9 Colour Wine 9 Size M ``` If I do: ``` SELECT property_value, count(property_value) FROM ITEM_PROPERTIES where property_name='Colour' group by property_value ``` I get : ``` Black 1 Blue 2 Purple 1 Wine 1 ``` How can I get this on one row like: ``` Black 1, Blue 2, Purple 1, Wine 1 ``` I tried : ``` SELECT group_concat(property_value, count(property_value)) FROM ITEM_PROPERTIES where property_name='Colour' group by property_value SELECT group_concat(concat(property_value, count(property_value))) FROM ITEM_PROPERTIES where property_name='Colour' group by property_value ``` But both give an error Invalid use of group function
``` select group_concat(concat(property_value, ' ', cnt)) from ( SELECT property_value, count(property_value) as cnt FROM ITEM_PROPERTIES WHERE property_name = 'Colour' GROUP BY property_value ) c ```
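A runnable sketch of the nested approach, here in SQLite (which spells MySQL's `CONCAT` as `||` and also accepts a separator argument to `group_concat`); table and column names follow the question, lower-cased:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item_properties"
            " (item_id INT, property_name TEXT, property_value TEXT)")
con.executemany("INSERT INTO item_properties VALUES (?,?,?)", [
    (5, 'Colour', 'Black'), (6, 'Colour', 'Blue'), (7, 'Colour', 'Purple'),
    (8, 'Colour', 'Blue'), (9, 'Colour', 'Wine'),
])
(summary,) = con.execute("""
    SELECT group_concat(property_value || ' ' || cnt, ', ')
    FROM (SELECT property_value, COUNT(property_value) AS cnt
          FROM item_properties
          WHERE property_name = 'Colour'
          GROUP BY property_value) c
""").fetchone()
# group_concat makes no ordering guarantee in SQLite, so compare the
# parts as a sorted list rather than the exact string
parts = sorted(summary.split(", "))
```

Note that SQLite's `group_concat` makes no ordering guarantee for the concatenated elements, so code consuming `summary` should not rely on a particular order.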
You can use two levels of aggregation: ``` SELECT group_concat(property_value, ' ', cnt separator ', ') FROM (SELECT property_value, count(property_value) as cnt FROM ITEM_PROPERTIES WHERE property_name = 'Colour' GROUP BY property_value ) ip; ```
MySQL - count and group by - display all results in one row
[ "mysql", "sql", "database" ]
I cannot for the life of me work out a query that will take a list of Skill\_ID's and return a list of Buildings that contain a person/people with all of the skills searched for I currently have these tables ``` Building =========== ID | name =========== 1 | BlockA 2 | BlockB People ============================ ID | name | Building_ID ============================ 1 | PersonA | 1 2 | PersonB | 2 Skills =========== ID | name =========== 1 | SkillA 2 | SkillB SkillsToPerson ==================== Person_ID | Skill_ID ==================== 1 | 1 1 | 2 2 | 2 ``` For example, I want to find buildings that contain at least one person with SkillA and SkillB, BlockA should be returned, because Person1 has both skills, and is in BlockA Can anyone offer some advice? Thanks
You can do this using `GROUP BY` and `HAVING`: ``` SELECT b.Name AS Building FROM Building b JOIN People p ON b.ID = p.Building_ID JOIN SkillsToPerson sp ON p.ID = sp.Person_ID WHERE sp.skill_id IN (1, 2, 3) -- Skill IDs to look for GROUP BY b.Name HAVING COUNT(DISTINCT sp.skill_id) = 3; -- 3 skills ``` Note that you do not need the `Skills` table because you have the skills id in `SkillsToPerson`. Similarly, if you are happy with the building id, you don't need the building table. I call this type of query a "set-within-sets" query, because you are looking for sets of something (skills) within another (buildings). `GROUP BY` and `HAVING` provide a very flexible method for handling this type of query.
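The set-within-sets query can be verified directly against the sample data; here is a sketch in Python's SQLite (lower-cased names, otherwise the schema from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE building (id INT, name TEXT);
CREATE TABLE people (id INT, name TEXT, building_id INT);
CREATE TABLE skills_to_person (person_id INT, skill_id INT);
INSERT INTO building VALUES (1,'BlockA'),(2,'BlockB');
INSERT INTO people VALUES (1,'PersonA',1),(2,'PersonB',2);
INSERT INTO skills_to_person VALUES (1,1),(1,2),(2,2);
""")
# Buildings covering both skill 1 and skill 2
rows = con.execute("""
    SELECT b.name
    FROM building b
    JOIN people p ON b.id = p.building_id
    JOIN skills_to_person sp ON p.id = sp.person_id
    WHERE sp.skill_id IN (1, 2)
    GROUP BY b.name
    HAVING COUNT(DISTINCT sp.skill_id) = 2
""").fetchall()
```

One caveat worth knowing: because the grouping is per building rather than per person, a building whose required skills are spread across several people would also qualify.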
How about: ``` SELECT DISTINCT Building.Name FROM People INNER JOIN Building ON Building.ID = People.Building_ID INNER JOIN SkillsToPerson ON SkillsToPerson.Person_ID = People.ID INNER JOIN Skills ON Skills.ID = SkillsToPerson.Skills_ID WHERE Skills.Skill_ID IN (1, 2, 3, ...) -- list of skills here ```
MSSQL - Cannot work out how to join three tables to find the building that contains people with the correct skills
[ "sql", "sql-server", "join" ]
I have been stuck for a good while on this issue now and have made zero progress. I don't even know if it is possible...

I have 1 table:

```
+------+------------+-------+---------+------------+
| Item |    Date    | RUnit | FDHUnit | Difference |
+------+------------+-------+---------+------------+
| A    | 19/04/2016 | 21000 |   20000 |       1000 |
| B    | 20/04/2016 |  2500 |     500 |       2000 |
+------+------------+-------+---------+------------+
```

Is it possible to create a new row in the same table for each of those `items` which will display the `Difference` and perhaps a few other columns?

My desired output would be something like this:

```
+------+------------+-------+---------+------------+
| Item |    Date    | RUnit | FDHUnit | Difference |
+------+------------+-------+---------+------------+
| A    | 19/04/2016 | 21000 |   20000 |            |
| A    | 19/04/2016 | NULL  |  NULL   |       1000 |
| B    | 20/04/2016 |  2500 |     500 |            |
| B    | 20/04/2016 | NULL  |  NULL   |       2000 |
+------+------------+-------+---------+------------+
```

The reason is that I would like to show a new column and indicate that a row is either `Held directly` or `not held directly`.
Yes, use `union all`:

```
select item, date, runit, fdhunit, null as difference from t
union all
select item, date, null, null, runit - fdhunit from t
order by item, (case when runit is not null then 1 else 2 end);
```

The `order by` puts the results in the order that your results suggest. Without an `order by`, the ordering of the records is indeterminate.
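A quick check of the `union all` pattern in SQLite. Two adaptations for the sketch: the `Date` column is renamed `dt` (to sidestep the keyword), and the union is wrapped in an outer select because SQLite only allows plain output columns in a compound select's ORDER BY:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (item TEXT, dt TEXT, runit INT, fdhunit INT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    ('A', '19/04/2016', 21000, 20000),
    ('B', '20/04/2016', 2500, 500),
])
rows = con.execute("""
    SELECT * FROM (
        SELECT item, dt, runit, fdhunit, NULL AS difference FROM t
        UNION ALL
        SELECT item, dt, NULL, NULL, runit - fdhunit FROM t
    )
    ORDER BY item, (CASE WHEN runit IS NOT NULL THEN 1 ELSE 2 END)
""").fetchall()
```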
try this way ``` select * from (select item, date, ruunit, fdhunit, '' as difference from t union all select item, date, null as ruunit, null as fdhunit, difference from t) a order by item, date ```
How to duplicate records in same table
[ "sql", "sql-server-2008", "t-sql" ]
I have no idea how I could solve the following problem in an effective way.

**Given**:

1. A Telephone Number as one single String e.g.: `1111223344`
2. A Database with this Number `split in 2 different Columns` (First part of Number in `ColA`, Second Part of Number in `ColB`) | The Database is huge (up to 100 GB)

Let's say ColA contains '11112' and ColB contains '23344'; combined, these 2 columns are the String we are looking for. We don't know how many characters are in which column.

**Need**: A select statement that `combines ColA + ColB and compares it to the given String`. If it equals: select the row. The selected row/rows will be selected and used with a .NET application.
This should get you started. You will want to adjust this depending on the actual requirements. If the two values are guaranteed to be strings: ``` SELECT * FROM MyTable m WHERE m.ColA + m.ColB = '1111223344' ``` If the two values aren't guaranteed to be strings: ``` SELECT * FROM MyTable m WHERE CONCAT(m.ColA, m.ColB) = '1111223344' ```
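A tiny sketch of the comparison in SQLite, which (like Oracle and PostgreSQL) spells string concatenation `||`; SQL Server uses `+` or `CONCAT` as shown above. Table and data are invented; note the second row shows that the split point between the columns does not matter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE phone_parts (col_a TEXT, col_b TEXT)")
con.executemany("INSERT INTO phone_parts VALUES (?,?)",
                [('11112', '23344'), ('111', '1223344'), ('999', '888')])
# '||' concatenates the two halves before comparing against the full number
rows = con.execute(
    "SELECT col_a, col_b FROM phone_parts WHERE col_a || col_b = '1111223344'"
).fetchall()
```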
One way i could think of is, to use Hashbytes as a computed column .you can index this column for good performance as well.. ``` CREATE TABLE #TESTMAIN ( NMBR VARCHAR(10) ) INSERT INTO #TESTMAIN SELECT '123456' UNION ALL SELECT '3456' create table #backup ( nmbr1 varchar(10), nmbr2 varchar(10) ) insert into #backup select '123','456' union all select '34','56' Alter table #testmain add mainnmbr as hashbytes('SHA1',nmbr) select * from #testmain Alter table #backup add bckpnmbr as hashbytes('SHA1',concat(nmbr1,nmbr2)) select * from #testmain select * from #backup ``` Now you can do a simple compare on data below.. [![enter image description here](https://i.stack.imgur.com/OXBfH.png)](https://i.stack.imgur.com/OXBfH.png)
SQL | Select (...) where 2 columns combined equal (xy)
[ "sql", ".net", "oracle" ]
I am working on a SQL query where I have a rather huge data-set. I have the table data as mentioned below.

Existing table :

```
+---------+----------+----------------------+
| id(!PK) |   name   |  Date                |
+---------+----------+----------------------+
|       1 |   abc    |  21.03.2015          |
|       1 |   def    |  22.04.2015          |
|       1 |   ajk    |  22.03.2015          |
|       3 |   ghi    |  23.03.2015          |
+-------------------------------------------+
```

What I am looking for is an insert query into an empty table. The condition is like this :

```
Insert in an empty table where id is common, count of names common to an id for march.
```

Output for above table would be like

```
+---------+----------+------------------------+
| some_id |  count   |  Date                |
+---------+----------+----------------------+
|       1 |   2      |  21.03.2015          |
|       3 |   1      |  23.03.2015          |
+-------------------------------------------+
```

All I have is :

```
insert into empty_table values (some_id,count,date)
select id,count(*),date
from existing_table where id=1;
```

Unfortunately above basic query doesn't suit this complex requirement. Any suggestions or ideas? Thank you.

Updated query

```
insert into empty_table
select id,count(*),min(date)
from existing_table
where date >= '2015-03-01'
and date < '2015-04-01'
group by id;
```
Seems you want the number of unique names per id: ``` insert into empty_table select id ,count(distinct name) ,min(date) from existing_table where date >= DATE '2015-03-01' and date < DATE '2015-04-01' group by id; ```
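The query can be exercised on the sample rows; here is a sketch in SQLite, using ISO date strings in place of Teradata's `DATE '...'` literals (and `dt` instead of the reserved word `date`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE existing_table (id INT, name TEXT, dt TEXT)")
con.executemany("INSERT INTO existing_table VALUES (?,?,?)", [
    (1, 'abc', '2015-03-21'),
    (1, 'def', '2015-04-22'),   # April: outside the range
    (1, 'ajk', '2015-03-22'),
    (3, 'ghi', '2015-03-23'),
])
# one row per id: distinct names in March, plus the earliest March date
rows = con.execute("""
    SELECT id, COUNT(DISTINCT name), MIN(dt)
    FROM existing_table
    WHERE dt >= '2015-03-01' AND dt < '2015-04-01'
    GROUP BY id
    ORDER BY id
""").fetchall()
```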
If I understand correctly, you just need a date condition: ``` insert into empty_table(some_id, count, date) select id, count(*), min(date) from existing_table where id = 1 and date >= date '2015-03-01' and date < date '2015-04-01' group by id; ``` Note: the list after the table name contains the columns being inserted. There is no `values` keyword when using `insert . . . select`.
SQL : Getting data as well as count from a single table for a month
[ "sql", "database", "teradata", "subquery" ]
Consider the following scenario :

```
an Item has a Price
a Order contains Items
a Delivery contains Orders
```

I want to query, for each delivery, the order with the highest price, where the price of an order is the summation of the prices of the contained items. A simple sufficient schema would look like this :

* **Delivery**: *d\_id*
* **Order**: *o\_id*
* **Item**: *i\_id*, price
* **ItemsInOrder**: *o\_id*, *i\_id*
* **OrdersInDelivery**: *d\_id*, *o\_id*

I am stuck at the point of having the summation results, needing to get the max order per delivery :

```
SELECT d_id,o_id,SUM(price)
from ItemsInOrder natural join OrdersInDelivery natural join Item
group by d_id,o_id
```

How should I go from here, so that each d\_id appears once, alongside the o\_id with the maximal price summation?
Thanks for all your answers, but none of them was what I was looking for. I finally found what I was looking for, and the approach I chose is as follows :

```
SELECT d_id,o_id,sum_price
FROM (
    SELECT d_id,o_id,SUM(price) as sum_price
    from ItemsInOrder natural join OrdersInDelivery natural join Item
    group by d_id,o_id
    order by d_id,sum_price desc
     ) as sums
GROUP BY d_id
```
You can use Left Joining with self, tweaking join conditions and filters. In this approach, you left join the table with itself. Equality, of course, goes in the group-identifier. ``` SELECT a.* FROM ( SELECT oid.d_id AS d_id, o_id, SUM(price) AS sum_price FROM `OrdersInDelivery` oid INNER JOIN `ItemsInOrder` iio USING (o_id) INNER JOIN `Item` i USING (i_id) GROUP BY d_id, o_id ) a LEFT OUTER JOIN ( SELECT oid.d_id AS d_id, o_id, SUM(price) AS sum_price FROM `OrdersInDelivery` oid INNER JOIN `ItemsInOrder` iio USING (o_id) INNER JOIN `Item` i USING (i_id) GROUP BY d_id, o_id )b ON a.d_id = b.d_id AND a.sum_price < b.sum_price WHERE b.d_id IS NULL AND b.o_id IS NULL; ``` Take a look at this query in SQL fiddle: <http://sqlfiddle.com/#!9/d5c04d/1> In that case it's recommended to use `TEMPORARY TABLE`: ``` CREATE TEMPORARY TABLE OrderSummary( SELECT oid.d_id AS d_id, o_id, SUM(price) AS sum_price FROM `OrdersInDelivery` oid INNER JOIN `ItemsInOrder` iio USING (o_id) INNER JOIN `Item` i USING (i_id) GROUP BY d_id, o_id ); CREATE TEMPORARY TABLE OrderSummary2( SELECT * FROM OrderSummary ); SELECT a.* FROM OrderSummary a LEFT OUTER JOIN OrderSummary2 b ON a.d_id = b.d_id AND a.sum_price < b.sum_price WHERE b.d_id IS NULL AND b.o_id IS NULL; ```
How to find MAX over SUMs MySQL
[ "mysql", "sql", "rdbms" ]
I have a postgresql db with a number of tables. If I query: ``` SELECT column_name FROM information_schema.columns WHERE table_name="my_table"; ``` I will get a list of the columns returned properly. However, when I query: ``` SELECT * FROM "my_table"; ``` I get the error: ``` (ProgrammingError) relation "my_table" does not exist 'SELECT *\n FROM "my_table"\n' {} ``` Any thoughts on why I can get the columns, but can't query the table? Goal is to be able to query the table.
You have to include the schema if it isn't the public one

```
SELECT *
FROM <schema>."my_table"
```

Or you can change your default schema

```
SHOW search_path;
SET search_path TO my_schema;
```

Check your table's schema here

```
SELECT *
FROM information_schema.columns
```

[![enter image description here](https://i.stack.imgur.com/h7Ylz.png)](https://i.stack.imgur.com/h7Ylz.png)

For example, if a table is on the default schema `public`, both of these will work fine

```
SELECT * FROM parroquias_region
SELECT * FROM public.parroquias_region
```

But `sectores_point` needs the schema specified

```
SELECT * FROM map_update.sectores_point
```
You can try: ``` SELECT * FROM public."my_table" ``` Don't forget double quotes near my\_table.
Postgresql tables exists, but getting "relation does not exist" when querying
[ "sql", "postgresql" ]
I have the following query which (in my system) gets the total number of members who have more than 6 memberships....

```
select count(*) as MemberCount from
(
    SELECT count(membership.memberid) as MembershipCount
    from Membership, Package
    WHERE membership.PackageId = Package.Id
    AND membership.DiscountPercentage != 100
    AND Package.PackageTypeId != 1
    AND membership.MembershipStateId != 5
    AND Membership.LocationId = 1
    group by memberid
    having count(membership.memberid) > 6
) NonTrialMemberships
```

What I need to do is left outer join the "having" part of it with a temporary table, so I can get a listing of the number of members who have more than 1,2,3,4,5,6,7,8,9,10 memberships

```
having count(membership.memberid) > 6   <------(OUTER JOIN THIS)
```

I have created a temporary table to use as the join, but I am not sure how to use it. I have also seen that using an in-memory table may be more efficient than a temp table...

```
create table #Temp
(
    Num int,
)

DECLARE @i int = 0
DECLARE @total int = 10

WHILE @i < @total BEGIN
    SET @i = @i + 1
    Insert Into #Temp values (@i)
END

select * from #Temp

select count(*) as MemberCount from
(
    SELECT count(membership.memberid) as MembershipCount
    from Membership, Package
    WHERE membership.PackageId = Package.Id
    AND membership.DiscountPercentage != 100
    AND Package.PackageTypeId != 1
    AND membership.MembershipStateId != 5
    AND Membership.LocationId = 1
    group by memberid
    having count(membership.memberid) > 6
) NonTrialMemberships

If(OBJECT_ID('tempdb..#temp') Is Not Null)
Begin
    Drop Table #Temp
End
```

Any guidance would be appreciated. Thank you.
Try this, using your #temp.

```
select #temp.num, count(ntm.MembershipCount)
from #temp
left join
(
    SELECT count(membership.memberid) as MembershipCount
    from Membership, Package
    WHERE membership.PackageId = Package.Id
    AND membership.DiscountPercentage != 100
    AND Package.PackageTypeId != 1
    AND membership.MembershipStateId != 5
    AND Membership.LocationId = 1
    group by memberid
    having count(membership.memberid) > 1
) ntm on ntm.MembershipCount > #temp.num
group by #temp.num
```

Counting `ntm.MembershipCount` instead of `*` keeps the result at 0 for thresholds with no matching members (with a left join, `count(*)` would report 1 for the unmatched row). Note that it will count users with 11 memberships in every >1, >2, ..., >10 group, as requested.
You want to select the counts, and get the count of each of those (sounds weird, but it's actually very simple): ``` SELECT MemberCount, Count(*) FROM ( SELECT count(membership.memberid) as MemberCount FROM Membership, Package WHERE membership.PackageId = Package.Id AND membership.DiscountPercentage != 100 AND Package.PackageTypeId != 1 AND membership.MembershipStateId != 5 AND Membership.LocationId = 1 group by memberid having count(membership.memberid) > 6 ) t group by MemberCount order by MemberCount ``` You mentioned that you want to get the number of members who have 1, 2, 3, etc. memberships. If that is the case, you will want to remove the following line from the query: ``` having count(membership.memberid) > 6 ``` Doing so will include a count of all counts (again, sounds weird), which seems to be your requirement. Please leave a comment if you would like clarification or if I misunderstood your question.
Outer Join SQL on having query
[ "sql", "sql-server", "t-sql" ]
I know this will cause an ambiguous error since Name exists in both tables. But I want to select the Name from these two tables. Name is a column in t\_1 and t\_2.

Is it possible to select from two tables? Is there a way for me to select the name between the two tables, and return a single column of data?

```
SELECT Name
FROM t_1, t_2
WHERE Name LIKE '%' + @USER + '%' 
```
You could try a `UNION`; ``` SELECT Name FROM t_1 WHERE Name LIKE '%' + @USER + '%' UNION SELECT Name FROM t_2 WHERE Name LIKE '%' + @USER + '%' ``` Note, `UNION ALL` can also be used, this will also return multiple instances of the same name if they exist. Use whichever satisfies your requirements.
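The practical difference between the two options is duplicate handling: `UNION` deduplicates across the two tables, `UNION ALL` keeps every match. A small SQLite sketch (names invented; SQLite concatenates the pattern with `||` instead of `+`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_1 (name TEXT);
CREATE TABLE t_2 (name TEXT);
INSERT INTO t_1 VALUES ('Anna'), ('Bob');
INSERT INTO t_2 VALUES ('Anna'), ('Annabel');
""")
q = ("SELECT name FROM t_1 WHERE name LIKE '%' || ? || '%' "
     "{op} "
     "SELECT name FROM t_2 WHERE name LIKE '%' || ? || '%'")
# 'Anna' appears in both tables: UNION returns it once, UNION ALL twice
union = con.execute(q.format(op="UNION"), ("Ann", "Ann")).fetchall()
union_all = con.execute(q.format(op="UNION ALL"), ("Ann", "Ann")).fetchall()
```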
``` SELECT Name FROM t_1 WHERE Name LIKE '%' + @USER + '%' UNION ALL SELECT Name FROM t_2 WHERE Name LIKE '%' + @USER + '%' ```
Is this query possible? SELECT FROM 2 tables LIKE
[ "sql", "sql-server", "stored-procedures" ]
``` +----+------+---------------------------------------+ | id | dur | workdur | +----+------+---------------------------------------+ | 1 | 64 | /home/public/users/james/PB_3594162_0 | | 2 | 123 | /home/public/users/john/PB_990-94162_0 | | 3 | 13 | /doc/users/jason/PB_0125135 | | 4 | 355 | /doc/users/jason/notPB | ``` I can get all PB\_ ones with ``` select workdur from work where workdur like '%PB_%' ``` How can I group by partial string of `"%PB_"` so that i can get average `dur` of the above select? NOTE: in the select statement, id=4 wont be selected
You can simply use aggregation together with your `WHERE` condition, with no need of a nested query: ``` select avg(dur) from work where workdur like '%PB_%' ```
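A runnable check on the sample data, in SQLite. It also demonstrates a subtlety of the pattern: `_` is a single-character wildcard in `LIKE`, so `'%PB_%'` would also match strings like `PBX...`; escaping it pins down a literal underscore (both variants agree on this data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE work (id INT, dur INT, workdur TEXT)")
con.executemany("INSERT INTO work VALUES (?,?,?)", [
    (1, 64, '/home/public/users/james/PB_3594162_0'),
    (2, 123, '/home/public/users/john/PB_990-94162_0'),
    (3, 13, '/doc/users/jason/PB_0125135'),
    (4, 355, '/doc/users/jason/notPB'),   # no PB_ prefix, excluded
])
(avg_dur,) = con.execute(
    "SELECT AVG(dur) FROM work WHERE workdur LIKE '%PB_%'"
).fetchone()
# '_' matches any single character in LIKE patterns; escape it to
# require a literal underscore after 'PB'
(avg_escaped,) = con.execute(
    "SELECT AVG(dur) FROM work WHERE workdur LIKE '%PB\\_%' ESCAPE '\\'"
).fetchone()
```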
You can use this ``` select avg(dur) from (select dur from work where workdur like '%PB_%'); ``` **One important thing** you should notice is that in oracle (sql, in general), `_` is a wildcards like `%`, so careful with your `like` because it equal to `like '%PB%'` So for your case you should change to (and remove nested, according to suggest from @Aleksej) ``` select avg(dur) from work where workdur like '%PB\_%' escape '\'; ```
sql: group by partial match
[ "sql", "oracle" ]
I have a macro with the following code: ``` sSQL = "select [Folio Type], [Folio ID], [Departure Date]," & _ " Sum([Folio Total]), Count([Folio ID])" & _ " from [Individual Folios$B2:O150000]" & _ " group by [Folio Type], [Departure Date], [Folio ID], [Folio Total]" & _ " having Sum([Folio Total]) <> 0" ``` The code above gives me an incorrect count of 1 for each `[Folio ID]`. I want to get the total count of a given `[Folio ID]`, regardless of `[Departure Date]`. When I remove some of the fields I get different results - the following code appears to be working correctly: ``` sSQL = "select [Folio ID], Count([Folio ID]), Sum([Folio Total])" & _ " from [Individual Folios$B2:O150000]" & _ " group by [Folio ID], [Folio Total]" & _ " having sum([Folio Total]) <> 0" ``` I need to figure out how to get the same results as above after adding the removed fields back in. My guess is that the query is counting occurrences by taking all the fields into consideration. If that's the case, is there any easy workaround?
You need to do this in two queries. I've put the second one as a subquery in the FROM clause. ``` SELECT a.[Folio Type] ,a.[Folio ID] ,a.[Departure Date] ,b.FolioCount ,b.FolioSum FROM [Individual Folios$B2:O150000] a INNER JOIN (SELECT [Folio ID], [Folio Type] ,Count([Folio ID]) As FolioCount ,SUM([Folio Total]) As FolioSum FROM [Individual Folios$B2:O150000] GROUP BY [Folio ID] ,[Folio Type] HAVING SUM([Folio Total]) <> 0 ) b ON a.[Folio ID] = b.[Folio ID] ``` Make your string equal to this query: ``` sSQL = "SELECT a.[Folio Type],a.[Folio ID],a.[Departure Date]" _ & ",b.FolioCount,b.FolioSum" _ & " FROM [Individual Folios$B2:O150000] a" _ & " INNER JOIN" _ & " (SELECT [Folio ID], [Folio Type] " _ & " Count([Folio ID]) As FolioCount,SUM([Folio Total]) As FolioSum" _ & " FROM [Individual Folios$B2:O150000]" _ & " GROUP BY [Folio ID], [Folio Type]" _ & " HAVING SUM([Folio Total]) <> 0) b"_ & " ON a.[Folio ID] = b.[Folio ID]" ```
Consider inner joins of derived tables (i.e., subqueries in `FROM` clause). Standard SQL *(conceptual illustration)* ``` SELECT t1.[Folio Type], t1.[Folio ID], t1.[Departure Date], t2.FolioCount, t2.FolioSum FROM (SELECT [Folio Type], [Folio ID], [Departure Date], [Folio Total] FROM [Individual Folios$B2:O150000] ) AS t1 INNER JOIN (SELECT [Folio ID], Count([Folio ID]) As FolioCount, SUM([Folio Total]) As FolioSum FROM [Individual Folios$B2:O150000] GROUP BY [Folio ID], [Folio Total] HAVING SUM([Folio Total]) <> 0 ) As t2 ON t1.[Folio ID] = t2.[Folio ID] AND t1.[Folio Total] = t2.[Folio Total] ``` VBA embedded string: ``` sSQL = "SELECT t1.[Folio Type], t1.[Folio ID], t1.[Departure Date]," sSQL = sSQL & " t2.FolioCount, t2.FolioSum" sSQL = sSQL & " FROM" sSQL = sSQL & " (SELECT [Folio Type], [Folio ID], [Departure Date]" sSQL = sSQL & " FROM [Individual Folios$B2:O150000]) AS t1" sSQL = sSQL & " INNER JOIN" sSQL = sSQL & " (SELECT [Folio ID], Count([Folio ID]) As FolioCount," sSQL = sSQL & " SUM([Folio Total]) As FolioSum" sSQL = sSQL & " FROM [Individual Folios$B2:O150000]" sSQL = sSQL & " GROUP BY [Folio ID], [Folio Total]" sSQL = sSQL & " HAVING SUM([Folio Total]) <> 0) As t2" sSQL = sSQL & " ON t1.[Folio ID] = t2.[Folio ID]" sSQL = sSQL & " AND t1.[Folio Total] = t2.[Folio Total]" ```
Excel/ADO/VBA: Count returning incorrect results
[ "sql", "excel", "vba", "oledb" ]
I'm new to SQL thus the question. I'm trying to query the following. Given the CITY and COUNTRY tables, query the names of all the continents (COUNTRY.Continent) and their respective average city populations (CITY.Population) rounded down to the nearest integer. The table schemas are `City: id, name, countryside, population` ``` Country: code, name,continent, population ``` I've written the inner join, but can't seem to figure out the way to get the `avg` city population. This is my code. ``` SELECT COUNTRY.CONTINENT FROM COUNTRY INNER JOIN ON COUNTRY.CODE = CITY.COUNTRYCODE; ``` Any help appreciated.
Ok. Here is the solution. ``` Select Country.Continent, floor(Avg(city.population)) From Country Inner Join City On Country.Code = City.CountryCode Group By Country.Continent; ``` Here, We have to group by Continent so as to have a result set for continent being the key identifier and then applying AVG function to the population for the cities belonging to this continent.
``` SELECT COUNT(*) over (partition by c.Code) ,SUM(ci.Population)/COUNT(*) over (partition by c.Code) FROM Country c INNER JOIN City ci on ci.CountryCode = c.CountryCode GROUP by c.CODE ``` I have not tested this so I apologize if it needs a little tweaking. The concept here is to group by country code since you are looking for an average of population within specific or individual country codes.
Inner join and average in SQL
[ "sql", "inner-join", "rounding", "average" ]
I have two queries that end up having the same format. Each has a Month, a year, and some relevant data per month/year. The schema looks like this: ``` subs Month Year 8150 1 2015 11060 1 2016 5 2 2014 6962 2 2015 8736 2 2016 Cans months years 2984 1 2015 2724 1 2016 13 2 2014 2563 2 2015 1901 2 2016 ``` The first query syntax looks like this: ``` SELECT COUNT(personID) AS subs_per_month, MONTH(Date_1) AS month_1, YEAR(Date_1) AS year_1 FROM (SELECT personID, MIN(date) AS Date_1 FROM orders WHERE isSubscription = 1 GROUP BY personID ORDER BY Date_1) AS my_sub_q GROUP BY month_1 , year_1 ``` The second query: ``` SELECT COUNT(ID), MONTH(date) AS months, YEAR(date) AS years FROM orders WHERE status = 4 AND isSubscription = 1 GROUP BY months , years ORDER BY months, years ``` The end goal is to write a simple join so that the final dataset looks like this: ``` subs cans months years 8150 2984 1 2015 11060 2724 1 2016 5 13 2 2014 6962 2563 2 2015 8736 1901 2 2016 ``` I'm a little overwhelmed with how to do this correctly, and after a lot of trial and all error, I thought I'd ask for help. What's confusing is where the `JOIN` goes, and how that looks relative to the rest of the syntax.
Without giving consideration to simplifying your queries, you can use your two queries as inline views and simply select from both (I aliased your queries as Q1 and Q2 and named the fields the same within each for simplicity). ``` Select Q1.cnt as Subs, Q2.cnt as Cans, Q1.months, Q1.years from (SELECT COUNT(personID) AS cnt, MONTH(Date_1) as months, YEAR(Date_1) AS years FROM (SELECT personID, MIN(date) AS Date_1 FROM orders WHERE isSubscription = 1 GROUP BY personID) AS my_sub_q GROUP BY months, years) Q1 INNER JOIN (SELECT COUNT(ID) cnt, MONTH(date) AS months, YEAR(date) AS years FROM orders WHERE status = 4 AND isSubscription = 1 GROUP BY months, years) Q2 ON Q1.months = Q2.months and Q1.years = Q2.years Order by Q1.years, Q1.months ```
Temporary table approach: ``` create temporary table first_query <<your first query here>>; create temporary table second_query <<your second query here>>; select fq.subs, sq.cans, fq.months, fq.years from first_query fq join second_query sq using (months, years) ``` Your table preview and query columns do not match for the first query, so I assumed both tables have the columns `months` and `years`. One messy query approach: ``` SELECT fq.subs_per_month subs, sq.cans, sq.months, sq.years FROM (SELECT COUNT(personID) AS subs_per_month, MONTH(Date_1) AS month_1, YEAR(Date_1) AS year_1 FROM (SELECT personID, MIN(date) AS Date_1 FROM orders WHERE isSubscription = 1 GROUP BY personID ORDER BY Date_1) AS my_sub_q GROUP BY month_1 , year_1) fq JOIN (SELECT COUNT(ID) cans, MONTH(date) AS months, YEAR(date) AS years -- I added 'cans' FROM orders WHERE status = 4 AND isSubscription = 1 GROUP BY months , years ORDER BY months, years) sq ON fq.month_1 = sq.months AND fq.year_1 = sq.years ```
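One thing worth noting about both approaches: an inner join drops any month/year that appears in only one of the two result sets (for example, a month with subscriptions but no cancellations). If that matters, a `LEFT JOIN` sketch reusing the temporary tables above keeps every row from the first query and shows a zero instead:

```sql
-- LEFT JOIN keeps months that are present only in first_query;
-- COALESCE turns the missing count into 0.
SELECT fq.subs,
       COALESCE(sq.cans, 0) AS cans,
       fq.months,
       fq.years
FROM first_query fq
LEFT JOIN second_query sq USING (months, years)
ORDER BY fq.years, fq.months;
```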
Correct join syntax within multiple queries and sub queries
[ "mysql", "sql" ]
I have a query in which I need to get results from a table, depending on the parameters specified by the user on a vb.net page. Declaring a variable would be the ideal solution, but since it's an inline table-valued function I can't do this. I had read in other questions that this is possible in a multi-statement table function, but that the use of such a function is not recommended. The following query is what I want to achieve. ``` USE [AUDIT] GO ALTER FUNCTION [dbo].[GetCreated_LPA_Audits] ( @fechaInicio nvarchar (max), @fechaFin nvarchar (max), @Periodo int, @Fecha int ) RETURNS TABLE AS RETURN ( SELECT T1.Plant,T1.Area,T1.Location,T1.IDAudit,T1.Auditor,T1.AuditDate,T1.DueDate,T1.CreationDate FROM Header AS T1 inner join Audits AS T2 ON T1.IDChecklist = T2.IDChecklist inner join AuditGroups AS T3 ON T2.IDGroup = T3.IDGroup WHERE T3.IDGroup = '2' and CreationDate is not null AND CASE WHEN @Periodo = '0' THEN CreationDate>= @fechaInicio WHEN @Periodo = '1' THEN DATEPART(MONTH,CreationDate)>= @Fecha WHEN @Periodo = '2' THEN CASE WHEN @Fecha = 13 THEN DATEPART(MONTH,CreationDate)>= 1 END WHEN @Periodo = '2' THEN CASE WHEN @Fecha = 14 THEN DATEPART(MONTH,CreationDate)>= 7 END WHEN @Periodo = '3' THEN CASE WHEN @Fecha = 15 THEN DATEPART(Year,CreationDate)>= 2015 END END AND CASE WHEN @Periodo = '0' THEN @fechaInicio<=CreationDate WHEN @Periodo = '1' THEN @Fecha<=DATEPART(MONTH,CreationDate) WHEN @Periodo = '2' THEN CASE WHEN @Fecha = 13 THEN 6<=DATEPART(MONTH,CreationDate) END WHEN @Periodo = '2' THEN CASE WHEN @Fecha = 14 THEN 12<=DATEPART(MONTH,CreationDate) END WHEN @Periodo = '3' THEN CASE WHEN @Fecha = 15 THEN DATEPART(Year,CreationDate)>= 2015 END END ) ``` In the previous query, @fechaInicio, @fechaFin, @Periodo and @Fecha are parameters provided by the user.
Basically if `@Periodo = 0` I need to get the results where `CreationDate>= @fechaInicio and CreationDate<=@fechaFin` If `@Periodo = 1` I need to get the results where `DATEPART(MONTH,CreationDate)>= @Fecha and DATEPART(MONTH,CreationDate)<= @Fecha` And so on. Hope I made myself clear, thanks in advance! **EDIT** using pseudo-code from @MatBailie, with some slight alterations and answers to his questions ``` IF @periodo = '0' THEN WHERE CreationDate >= @fechaInicio -- Copied from 1st CASE AND @fechaFin <= CreationDate -- Copied from 2nd CASE -- gets results from @fechaInicio to @fechaFin -- i.e. results from 04/05/2016 to 04/16/2016 IF @periodo = '1' THEN WHERE DATEPART(MONTH,CreationDate) >= @Fecha -- Copied from 1st CASE AND @Fecha <= DATEPART(MONTH,CreationDate) -- Copied from 2nd CASE -- In this case both conditions are the same 'cause -- @Fecha is the number of a month (1 - 12) -- i.e. @Fecha = 3 will get all the results of March -- regardless of what it is on @fechaInicio and @fechaFin IF @periodo = '2' THEN IF @fetcha = 13 THEN WHERE DATEPART(MONTH,CreationDate)>= 1 -- Copied from 1st CASE AND 6<=DATEPART(MONTH,CreationDate) -- Copied from 2nd CASE IF @fetcha = 14 THEN WHERE DATEPART(MONTH,CreationDate)>= 7 -- Copied from 1st CASE AND 12<=DATEPART(MONTH,CreationDate) -- Copied from 2nd CASE -- You never use @fetchaInicio? -- You want everything in the first 6 months or last 6 months -- For every year -- Regardless of what is in @fetchaInicio? -- Exactly!!-- IF @periodo = '3' THEN IF @fetcha = 15 THEN WHERE DATEPART(Year,CreationDate)>= 2015 -- Copied from 1st CASE AND 2015 <= DATEPART(Year,CreationDate) -- Copied from 2nd CASE -- ``` *And what about the case `@periodo = '2' AND @fetcha NOT IN (13,14)`? And what about the case `@periodo = '3' AND @fetcha NOT IN (15)`?* This case would not exist; it's restricted on the client side. If they chose `@Periodo = '2'` then `@Fecha` will have values of 1 - 12 and nothing else.
Same with `@Periodo = '3'`: then `@Fecha` will have a value of 15 or 16, referring to 2015 or 2016.
First, you are correct: an inline TVF really is faster, if not overcomplicated. I'd rather have a number of iTVFs on the SQL Server side, one for each @Periodo parameter value, and choose the right one in code on the client side. Alternatively, you may do it in a single iTVF ``` WHERE @Periodo = '0' AND CreationDate>= @fechaInicio and CreationDate<=@fechaFin OR @Periodo = '1' and DATEPART(MONTH,CreationDate)>= @Fecha and DATEPART(MONTH,CreationDate)<= @Fecha ... ``` But MS SQL is known to occasionally build bad plans for OR operators, which may render your efforts to stick to an iTVF useless.
You're much better off re-organising the WHERE clause, such that the filtered field is on the left-hand side and not inside any functions. For example... ``` WHERE CreationDate >= @VPeriodo AND CreationDate < CASE WHEN @Periodo = '0' THEN DATEADD(DAY, 1, @VPeriodo) WHEN @Periodo = '1' THEN DATEADD(MONTH, 1, @VPeriodo) WHEN @Periodo = '2' THEN DATEADD(MONTH, 1, @VPeriodo) WHEN @Periodo = '3' THEN DATEADD(YEAR, 1, @VPeriodo) END ``` In this example the right-hand side is all scalar constants. This means that you can then do a range scan on the CreationDate field. Also, @VPeriodo should be a `DATE` or `DATETIME` rather than a `VARCHAR(MAX)`. ***EDIT:*** Including the hoops to jump through when using VARCHARs. All dates will need to be in the format `YYYYMMDD` when using VARCHAR. This is so that the natural order of the strings is the same as the natural order of the dates... - `'20161101'` > `'20161002'` When using other formats, such as `YYYYDDMM`, it fails... - `'20160111'` < `'20160210'` Problem: in this format, as strings, 2nd Oct sorts AFTER 1st Nov. ``` WHERE CreationDate >= @VPeriodo AND CreationDate < CONVERT( NVARCHAR(8), CASE WHEN @Periodo = '0' THEN DATEADD(DAY, 1, CAST(@VPeriodo AS DATE)) WHEN @Periodo = '1' THEN DATEADD(MONTH, 1, CAST(@VPeriodo AS DATE)) WHEN @Periodo = '2' THEN DATEADD(MONTH, 1, CAST(@VPeriodo AS DATE)) WHEN @Periodo = '3' THEN DATEADD(YEAR, 1, CAST(@VPeriodo AS DATE)) END, 112 -- Format code for ISO dates, YYYYMMDD ) ``` ***EDIT:*** A question to the OP, after the OP made comments and altered the question. Here all I have done is re-arrange your code to make pseudo-code for what you've written... ``` IF @periodo = '0' THEN WHERE CreationDate >= @fetchaInicio -- Copied from 1st CASE AND @fetchaInicio <= CreationDate -- Copied from 2nd CASE -- These two conditions are direct from your code -- But they're the same as each other -- What do you REALLY want to happen when @Periodo = '0'?
IF @periodo = '1' THEN WHERE DATEPART(MONTH,CreationDate) >= @Fecha -- Copied from 1st CASE AND @Fecha <= DATEPART(MONTH,CreationDate) -- Copied from 2nd CASE -- These two conditions are direct from your code -- But they're the same as each other -- What do you REALLY want to happen when @Periodo = '1'? IF @periodo = '2' THEN IF @fetcha = 13 THEN WHERE DATEPART(MONTH,CreationDate)>= 1 -- Copied from 1st CASE AND 6<=DATEPART(MONTH,CreationDate) -- Copied from 2nd CASE IF @fetcha = 14 THEN WHERE DATEPART(MONTH,CreationDate)>= 7 -- Copied from 1st CASE AND 12<=DATEPART(MONTH,CreationDate) -- Copied from 2nd CASE -- You never use @fetchaInicio? -- You want everything in the first 6 months or last 6 months -- For every year -- Regardless of what is in @fetchaInicio? IF @periodo = '3' THEN IF @fetcha = 15 THEN WHERE DATEPART(Year,CreationDate)>= 2015 -- Copied from 1st CASE AND DATEPART(Year,CreationDate)>= 2015 -- Copied from 2nd CASE -- Both conditions are the same again, why? -- You want everything from 2015 onwards, forever? -- You never use @fetchaInicio? -- It's always 2015? ``` And what about the case `@periodo = '2' AND @fetcha NOT IN (13,14)`? And what about the case `@periodo = '3' AND @fetcha NOT IN (15)`? Please could you take my pseudo-code above and give some real examples of what you actually want to do in each case?
CASE inside WHERE Clause
[ "sql", "sql-server-2008", "case", "where-clause" ]
I've used this site for many months now and it has been very helpful. However, I've finally come across a problem that I've been unable to find an answer for. My table has roughly 30 columns in it, but I'm only interested in 7 of them. What I'm trying to accomplish is a query which will return the date/time of the first and last pick of each order, but only for orders which have been completed. My table looks something like this: ``` ordnum | adddte |pckdte |pckqty |appqty | srcloc | ctnnum -----------+-----------------+------------------+-------+-------+--------+------ ORD123 | 4/20/16 6:31 AM | Null | 1 | 0 | 375 | CTN1 ORD123 | 4/20/16 6:31 AM | 4/20/16 11:39 AM | 2 | 2 | 335 | NULL ORD123 | 4/20/16 6:31 AM | 4/20/16 11:37 AM | 1 | 1 | 336 | CTN1 ORD456 | 4/20/16 7:11 AM | 4/20/16 11:59 AM | 3 | 3 | 376 | CTN2 ORD456 | 4/20/16 7:11 AM | 4/20/16 11:47 AM | 1 | 1 | 345 | CTN2 ORD456 | 4/20/16 7:11 AM | 4/20/16 11:07 AM | 4 | 4 | 355 | CTN2 ``` Please note the only way to tell if an order is complete is that each of its appqty values is greater than 0. Its pckdte can be null and the order still be completed. Below is the code I'm currently using, which does give me the start and stop times of each order but also includes orders which have not been completed yet. ``` SELECT ordnum ,max(pckdte) AS "Last Pick" ,min(pckdte) AS "First Pick" FROM MyTable Where (adddte >= '2016-04-02 00:00:00' AND pckdte <= '2016-04-20 23:59:59') and ctnnum like 'c%' and (srcloc like '2%' or srcloc like '3%' or srcloc like '4%' or srcloc like '5%' or srcloc like '1%') group by ordnum order by ordnum ``` The code gives me these results: ``` ordnum | Last Pick | First Pick ------------+------------------+---------------- ORD123 | 4/20/16 11:39 AM | 4/20/16 11:37 AM ORD456 | 4/20/16 11:59 AM | 4/20/16 11:07 AM ``` However, I do not want to include ORD123, as there is still an uncompleted pick (appqty = 0).
The columns adddte, srcloc, and ctnnum are needed to separate my department's orders from everyone else's, as they are all stored in the same table and the parameters outlined in my WHERE statement are the only way to filter out my department's work. Any help on this problem would be greatly appreciated. Please let me know if I need to provide additional information.
Add one line to make sure that no row with `appqty = 0` exists for the ordnum: ``` SELECT ordnum ,max(pckdte) AS "Last Pick" ,min(pckdte) AS "First Pick" FROM MyTable Where (adddte >= '2016-04-02 00:00:00' AND pckdte <= '2016-04-20 23:59:59') and ctnnum like 'c%' and (srcloc like '2%' or srcloc like '3%' or srcloc like '4%' or srcloc like '5%' or srcloc like '1%') -- line below added AND NOT EXISTS (SELECT 1 FROM MyTable T2 WHERE appqty = 0 AND T2.ordnum = MyTable.ordnum) group by ordnum order by ordnum ```
Subquery should work: ``` AND ordnum not in (SELECT ordnum FROM MyTable WHERE appqty = 0) ``` This will filter out all `ordnum` that have at least one row with an `appqty` of `0`. Full query: ``` SELECT ordnum ,max(pckdte) AS "Last Pick" ,min(pckdte) AS "First Pick" FROM MyTable Where (adddte >= '2016-04-02 00:00:00' AND pckdte <= '2016-04-20 23:59:59') and ctnnum like 'c%' and (srcloc like '2%' or srcloc like '3%' or srcloc like '4%' or srcloc like '5%' or srcloc like '1%') AND ordnum not in (SELECT ordnum FROM MyTable WHERE appqty = 0) group by ordnum order by ordnum ```
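A caveat on the `NOT IN` form: if the subquery can ever return a `NULL` `ordnum`, `NOT IN` matches nothing and the outer query returns no rows at all. If `ordnum` is nullable in your table, the `NOT EXISTS` form from the other answer is the safer equivalent; the relevant filter is:

```sql
-- NOT EXISTS is NULL-safe, unlike NOT IN against a nullable column.
AND NOT EXISTS (SELECT 1
                FROM MyTable t2
                WHERE t2.ordnum = MyTable.ordnum
                  AND t2.appqty = 0)
```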
SQL query to return only 1 record per order based on if column does not have 0
[ "mysql", "sql" ]
If I have table like this: ``` emp_num trans_date day_type 5667 2016-03-01 1 5667 2016-03-02 1 5667 2016-03-03 1 5667 2016-03-04 3 5667 2016-03-05 3 5667 2016-03-06 1 5667 2016-03-07 1 5667 2016-03-08 1 5667 2016-03-09 1 5667 2016-03-10 1 5667 2016-03-11 3 5667 2016-03-12 3 5667 2016-03-13 1 5667 2016-03-14 1 5667 2016-03-15 1 5667 2016-03-16 1 5667 2016-03-17 1 5667 2016-03-18 3 5667 2016-03-19 3 5667 2016-03-20 1 5667 2016-03-21 1 5667 2016-03-22 1 5667 2016-03-23 1 5667 2016-03-24 1 5667 2016-03-25 3 5667 2016-03-26 3 5667 2016-03-27 1 5667 2016-03-28 1 5667 2016-03-29 1 5667 2016-03-30 1 5667 2016-03-31 1 ``` Given that every employee must have all month days in `trans_date`. How to get all the employees who have **more than** two `day_type =3` **per at least one week through a given month in a year**?
``` DECLARE @month int = 3, @year int = 2016 ;WITH cte AS ( SELECT * FROM (VALUES (5667, '2016-03-01', 1),(5667, '2016-03-02', 1),(5667, '2016-03-03', 1),(5667, '2016-03-04', 3),(5667, '2016-03-05', 3),(5667, '2016-03-06', 1), --2 (5667, '2016-03-07', 1),(5667, '2016-03-08', 1),(5667, '2016-03-09', 1),(5667, '2016-03-10', 1),(5667, '2016-03-11', 3),(5667, '2016-03-12', 3),(5667, '2016-03-13', 1), --2 (5667, '2016-03-14', 1),(5667, '2016-03-15', 1),(5667, '2016-03-16', 1),(5667, '2016-03-17', 1),(5667, '2016-03-18', 3),(5667, '2016-03-19', 3),(5667, '2016-03-20', 1), --2 (5667, '2016-03-21', 1),(5667, '2016-03-22', 1),(5667, '2016-03-23', 1),(5667, '2016-03-24', 1),(5667, '2016-03-25', 3),(5667, '2016-03-26', 3),(5667, '2016-03-27', 1), --2 (5667, '2016-03-28', 1),(5667, '2016-03-29', 1),(5667, '2016-03-30', 1),(5667, '2016-03-31', 1), --0 (4275, '2016-03-01', 3),(4275, '2016-03-02', 1),(4275, '2016-03-03', 1),(4275, '2016-03-04', 3),(4275, '2016-03-05', 1),(4275, '2016-03-06', 3), --3 (4275, '2016-03-07', 3),(4275, '2016-03-08', 3),(4275, '2016-03-09', 1),(4275, '2016-03-10', 1),(4275, '2016-03-11', 3),(4275, '2016-03-12', 1),(4275, '2016-03-13', 1), --3 (4275, '2016-03-14', 3),(4275, '2016-03-15', 3),(4275, '2016-03-16', 1),(4275, '2016-03-17', 1),(4275, '2016-03-18', 3),(4275, '2016-03-19', 1),(4275, '2016-03-20', 1), --3 (4275, '2016-03-21', 3),(4275, '2016-03-22', 3),(4275, '2016-03-23', 1),(4275, '2016-03-24', 1),(4275, '2016-03-25', 3),(4275, '2016-03-26', 1),(4275, '2016-03-27', 1), --3 (4275, '2016-03-28', 3),(4275, '2016-03-29', 3),(4275, '2016-03-30', 1),(4275, '2016-03-31', 1), --2 (9922, '2016-03-01', 1),(9922, '2016-03-02', 1),(9922, '2016-03-03', 1),(9922, '2016-03-04', 3),(9922, '2016-03-05', 3),(9922, '2016-03-06', 1), --2 (9922, '2016-03-07', 1),(9922, '2016-03-08', 1),(9922, '2016-03-09', 1),(9922, '2016-03-10', 1),(9922, '2016-03-11', 3),(9922, '2016-03-12', 3),(9922, '2016-03-13', 1), --2 (9922, '2016-03-14', 1),(9922, '2016-03-15', 1),(9922, 
'2016-03-16', 1),(9922, '2016-03-17', 1),(9922, '2016-03-18', 3),(9922, '2016-03-19', 3),(9922, '2016-03-20', 1), --2 (9922, '2016-03-21', 3),(9922, '2016-03-22', 3),(9922, '2016-03-23', 1),(9922, '2016-03-24', 1),(9922, '2016-03-25', 1),(9922, '2016-03-26', 1),(9922, '2016-03-27', 1), --2 (9922, '2016-03-28', 3),(9922, '2016-03-29', 1),(9922, '2016-03-30', 3),(9922, '2016-03-31', 1) --2 ) AS t (emp_num, trans_date, day_type) ) ,final AS ( SELECT DATEPART(week,c.trans_date) as week_num, emp_num, COUNT(c.trans_date) as coun FROM cte c WHERE day_type = 3 AND DATEPART(month,trans_date) = @month AND DATEPART(YEAR,trans_date) = @year GROUP BY emp_num, DATEPART(week,c.trans_date) HAVING COUNT(c.trans_date) > 2 ) SELECT f.emp_num FROM final f GROUP BY emp_num ``` Output: ``` emp_num ----------- 4275 (1 row(s) affected) ```
Given the data provided by you: ``` declare @table1 table (emp_num int, trans_date datetime, day_type int) insert into @table1 VALUES (5667,'2016-03-01',1),(5667,'2016-03-02',1),(5667,'2016-03-03',1), (5667,'2016-03-04',3),(5667,'2016-03-05',3),(5667,'2016-03-06',1), (5667,'2016-03-07',1),(5667,'2016-03-08',1),(5667,'2016-03-09',1), (5667,'2016-03-10',1),(5667,'2016-03-11',3),(5667,'2016-03-12',3), (5667,'2016-03-13',1),(5667,'2016-03-14',1),(5667,'2016-03-15',1), (5667,'2016-03-16',1),(5667,'2016-03-17',1),(5667,'2016-03-18',3), (5667,'2016-03-19',3),(5667,'2016-03-20',1),(5667,'2016-03-21',1), (5667,'2016-03-22',1),(5667,'2016-03-23',1),(5667,'2016-03-24',1), (5667,'2016-03-25',3),(5667,'2016-03-26',3),(5667,'2016-03-27',1), (5667,'2016-03-28',1),(5667,'2016-03-29',1),(5667,'2016-03-30',1), (5667,'2016-03-31',1),(4275,'2016-03-01',3),(4275,'2016-03-02',1), (4275,'2016-03-03',1 ),(4275,'2016-03-04',3 ),(4275,'2016-03-05',1 ), (4275,'2016-03-06',1 ),(4275,'2016-03-07',3 ),(4275,'2016-03-08',3 ), (4275,'2016-03-09',1 ),(4275,'2016-03-10',1 ),(4275,'2016-03-11',3 ), (4275,'2016-03-12',1 ),(4275,'2016-03-13',1 ),(4275,'2016-03-14',3 ), (4275,'2016-03-15',3 ),(4275,'2016-03-16',1 ),(4275,'2016-03-17',1 ), (4275,'2016-03-18',3 ),(4275,'2016-03-19',1 ),(4275,'2016-03-20',1 ), (4275,'2016-03-21',3 ),(4275,'2016-03-22',3 ),(4275,'2016-03-23',1 ), (4275,'2016-03-24',1),(4275,'2016-03-25',3 ),(4275,'2016-03-26',1 ), (4275,'2016-03-27',1 ),(4275,'2016-03-28',3 ),(4275,'2016-03-29',3 ), (4275,'2016-03-30',1 ),(4275,'2016-03-31',1) ``` This will get you what you are looking for (`emp_num` 5667 not returned, `emp_num` 4275 returned) although bear in mind that some months will have weeks that span two months so you may need to tweak it if your requirements for this are more subtle: ``` declare @year int = 2016, @month int = 3 ;with emp_cte (emp_num, weeknum, day_type_count) as ( select emp_num, datepart(week, trans_date), sum(case when day_type = 3 then 1 else 0 end) from @table1 
t where year(trans_date) = @year and month(trans_date) = @month group by emp_num, datepart(week, trans_date) ) select emp_num from emp_cte group by emp_num having max(day_type_count) > 2 ```
How to get number of days of specific type per week
[ "sql", "sql-server", "date", "sql-server-2012" ]
I am dealing with the holiday table of an application and I have to find the next working day based on the holiday list in that table. If the input is a working day, we expect a blank/NULL to be returned, but if it is a holiday, we expect the next working day to be returned. My holiday table contains the sample data below. The first date column is the start date and the second one is the end date. Instead of using the start date and end date for two consecutive holidays, the client has created two separate rows. Now I have to write a select query which will give the next working day based on that sample data. Suppose I pass '2016-04-20 00:00:00.000' as the conditional date; then the query should return '2016-04-22 00:00:00.000' as the working date, since there are two consecutive holidays. ``` 2016 2016-04-20 00:00:00.000 2016-04-20 00:00:00.000 Test 2016 2016-04-21 00:00:00.000 2016-04-21 00:00:00.000 Test2 2016 2016-04-28 00:00:00.000 2016-04-28 00:00:00.000 Test3 ```
You can try this: ``` --create table holidays(y int, ds datetime, de datetime, hname varchar(10)); --insert into holidays values --(2016,'2016-04-20 00:00:00.000','2016-04-20 00:00:00.000','Test'), --(2016,'2016-04-21 00:00:00.000','2016-04-21 00:00:00.000','Test2'), --(2016,'2016-04-28 00:00:00.000','2016-04-28 00:00:00.000','Test3'), --(2016,'2016-04-22 00:00:00.000','2016-04-22 00:00:00.000','Test4') CREATE FUNCTION dbo.getNextDate(@dateToCheck datetime) RETURNS Datetime AS BEGIN RETURN( select top 1 dateadd(d,1,de) from (select y, MIN(ds) as ds, MAX(ds) as de from ( Select *, ROW_NUMBER() OVER(ORDER BY ds asc) as ranking from holidays ) t group by y,(CAST(ds AS INT)-Ranking) )t where @dateToCheck BETWEEN ds AND de ) END ``` **Upon testing:** ``` SELECT dbo.getNextDate('2016-04-23 00:00:00.000')-- returns NULL SELECT dbo.getNextDate('2016-04-21 00:00:00.000')-- returns 2016-04-23 00:00:00.000 ``` `SQL demo link`
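For anyone wondering why the grouping inside the function works: `CAST(ds AS INT) - ranking` is a standard gaps-and-islands trick; the difference stays constant across a run of consecutive dates and jumps when a gap appears. A sketch of the intermediate values for the sample holidays (Apr 20, 21, 22, 28, whose date serials are some n, n+1, n+2, n+8):

```sql
-- ds       ranking   CAST(ds AS INT) - ranking
-- Apr 20   1         n - 1   (
-- Apr 21   2         n - 1     one consecutive block
-- Apr 22   3         n - 1   )
-- Apr 28   4         n + 4   -- a new block
SELECT ds,
       ROW_NUMBER() OVER (ORDER BY ds) AS ranking,
       CAST(ds AS INT) - ROW_NUMBER() OVER (ORDER BY ds) AS grp
FROM holidays;
```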
Supposing a holidays table with this structure: ``` CREATE TABLE holidays ( [year] int, [ds] datetime, [de] datetime, [description] nvarchar(50) ) ``` You can create a function that iterates through dates until it finds the correct one ``` CREATE FUNCTION dbo.getNextDate(@dateToCheck datetime) RETURNS Datetime AS BEGIN DECLARE @tempDate datetime SET @tempDate=DATEADD(day,1,@dateToCheck) WHILE EXISTS(SELECT * FROM holidays WHERE @tempDate BETWEEN ds AND de) BEGIN SET @tempDate=DATEADD(day,1,@tempDate) END RETURN @tempDate END ``` This is a very rudimentary first approximation, but it should work.
Finding the next working day from SQL table
[ "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2" ]
I have two tables `[Price Range]` ``` ID | Price 1 | 10 2 | 50 3 | 100 ``` and `Product`: ``` ID | Name | Price 1 | Prod1 | 5 2 | Prod2 | 10 3 | Prod3 | 20 4 | Prod4 | 30 5 | Prod5 | 50 6 | Prod5 | 60 7 | Prod6 | 120 ``` I need to associate product with specific price range i.e. join both table by price range and require result set like this: ``` ProductPriceRange ID | Name | Price | PriceRangeID 1 | Prod1 | 5 | Null 2 | Prod2 | 10 | 1 3 | Prod3 | 20 | 1 4 | Prod4 | 30 | 1 5 | Prod5 | 50 | 2 6 | Prod5 | 60 | 2 7 | Prod6 | 120 | 3 ```
If the `Price Range` IDs are ascending with price, then you can simply: ``` SELECT p.ID, p.Name, MAX(pr.ID) as PriceRangeID FROM Product p LEFT JOIN PriceRange pr ON p.Price >= pr.Price GROUP BY p.ID, p.Name ``` Output: ``` ID Name PriceRangeID ----------- ----- ------------ 1 Prod1 NULL 2 Prod2 1 3 Prod3 1 4 Prod4 1 5 Prod5 2 6 Prod5 2 7 Prod6 3 Warning: Null value is eliminated by an aggregate or other SET operation. (7 row(s) affected) ``` Another way, with a new CTE: ``` ;WITH new_price_range AS ( SELECT pr1.ID, MAX(pr1.Price) as PriceB, MIN(ISNULL(pr2.Price-1,pr1.Price*10)) as PriceT FROM PriceRange pr1 LEFT JOIN PriceRange pr2 ON Pr1.Price < Pr2.Price GROUP BY pr1.ID) SELECT p.ID, p.Name, pr.ID as PriceRangeID FROM Product p LEFT JOIN new_price_range pr ON p.Price between pr.PriceB and pr.PriceT ``` In this CTE we generate this: ``` ID PriceB PriceT ----------- ----------- ----------- 1 10 49 2 50 99 3 100 1000 (3 row(s) affected) ``` Output: ``` ID Name PriceRangeID ----------- ----- ------------ 1 Prod1 NULL 2 Prod2 1 3 Prod3 1 4 Prod4 1 5 Prod5 2 6 Prod5 2 7 Prod6 3 (7 row(s) affected) ```
You can use this ``` select id, name, price, (select id from PriceRange where price = (select max(price) from PriceRange where price <= a.price) ) as PriceRangeID from Product a ```
Sql Query to return following result set
[ "sql", "sql-server" ]
I have a `product` table and every product might be `delivered`, `idle`, `shipping`, or `preparing`. I want to show a list with the counts of products for each state, and I can see how to query for that here: [How to get multiple counts with one SQL query?](https://stackoverflow.com/questions/12789396/how-to-get-multiple-counts-with-one-sql-query) However, what does this query return, and how do I assign the return value to, let's say, 4 integers called `deliveredCount`, `idleCount`, `shippingCount`, `preparingCount`? PS: For the record, I am using SQLite with OrmLite in Android with Java. EDIT: In this SO question people explain what query to use when you want to get multiple counts, but they don't tell us what that query returns and in what format. For example: ``` SELECT a.distributor_id, (SELECT COUNT(*) FROM myTable WHERE level='personal' and distributor_id = a.distributor_id) as PersonalCount, (SELECT COUNT(*) FROM myTable WHERE level='exec' and distributor_id = a.distributor_id) as ExecCount, (SELECT COUNT(*) FROM myTable WHERE distributor_id = a.distributor_id) as TotalCount FROM myTable a ; ``` **What is the return type of this and what is the format?** PS2: Someone was really quick to downvote my question because it lacked sufficient information. Then I edited it, but the downvote still remains :(
Hard to say for sure but sounds like you need to use a version of the top answer in the link you have provided. Something like; ``` SELECT ProductID, COUNT(*) AS Total, SUM(CASE WHEN pStatus = 'delivered' THEN 1 ELSE 0 END) DeliveredCount, SUM(CASE WHEN pStatus = 'idle' THEN 1 ELSE 0 END) IdleCount, SUM(CASE WHEN pStatus = 'shipping' THEN 1 ELSE 0 END) ShippingCount, SUM(CASE WHEN pStatus = 'preparing' THEN 1 ELSE 0 END) PreparingCount FROM ProductTable GROUP BY ProductID ``` This will return something like; ``` ProductID | DeliveredCount | IdleCount | ... 1 | 250 | 3250 | ... ```
You might want to try this. ``` SELECT SUM(CASE WHEN Prod = 'delivered' THEN 1 ELSE 0 END) as deliveredCount, SUM(CASE WHEN Prod = 'idle' THEN 1 ELSE 0 END) as idleCount, SUM(CASE WHEN Prod = 'shipping' THEN 1 ELSE 0 END) as shippingCount, SUM(CASE WHEN Prod = 'preparing' THEN 1 ELSE 0 END) as preparingCount FROM Product ```
What does a multiple count query in SQL return?
[ "sql", "sqlite" ]
So I have an SQL query I am currently working on that is like this: ``` SELECT * from tableA where ( status = NVL('','OPEN') or status = NVL('','CLOSED') ) and delete_flag != 'Y' ``` The above query works fine and gives me the result I want, but I was wondering if there is any way I can combine the two `status = NVL` conditions into one instead of using the `or` there. For example, I want to be able to do: ``` SELECT * from tableA where status IN NVL('','OPEN','CLOSED') and delete_flag != 'Y' ``` But the apostrophes are not working for me here. How can I work around it?
You are getting an input parameter from your application that can have the values "Open", "Closed", or null. You want to select status values that equal this input parameter when it is not null, and all statuses when it is null. To have null default to "all" for a filter, use COALESCE with the column you are filtering on, like this ``` SELECT * from tableA where COALESCE(parameter,status) = status and status in ('OPEN','CLOSED') -- see comments and delete_flag != 'Y' ``` In this case, if parameter is OPEN you will get all OPEN items, if parameter is CLOSED you will get all CLOSED items, and if it is null you will get all items. This is a very common pattern in web applications. --- Single-line version ``` SELECT * from tableA where COALESCE(parameter,CASE WHEN status in ('OPEN','CLOSED') then status ELSE '' END) = status and delete_flag != 'Y' ```
simply: ``` SELECT * from tableA where 1=1 and nvl(status, '---') IN ('OPEN','CLOSED') and delete_flag != 'Y' ```
sql query NVL string with apostrophes (')
[ "sql", "oracle", "toad" ]
I have a data set like this: ``` User Date Status Eric 1/1/2015 4 Eric 2/1/2015 2 Eric 3/1/2015 4 Mike 1/1/2015 4 Mike 2/1/2015 4 Mike 3/1/2015 2 ``` I'm trying to write a query in which I will retrieve users whose MOST RECENT transaction status is a 4. If it's not a 4, I don't want to see that user in the results. This dataset could have 2 potential results, one for Eric and one for Mike. However, Mike's most recent transaction was not a 4, therefore the return result would be: ``` User Date Status Eric 3/1/2015 4 ``` as this record is the only one whose latest transaction date carries a 4. Here's what I've tried so far: ``` SELECT user, MAX(date) as dates, status FROM orders GROUP BY status, user ``` This would get me to a unique record for every user for every status type. This would be a subquery, and the parent query would look like: ``` SELECT user, dates, status WHERE status = 4 GROUP BY user ``` However, this is clearly flawed, as I don't want status = 4 records IF their most recent record is not a 4. I only want status = 4 when the latest date is a 4. Any thoughts?
``` SELECT user, date , actualOrders.status FROM ( SELECT user, MAX(date) as date FROM orders GROUP BY user) AS lastOrderDates INNER JOIN orders AS actualOrders USING (user, date) WHERE actualOrders.status = 4 ; -- Since USING is being used, there is not a need to specify source of the -- user and date fields in the SELECT clause; however, if an ON clause was -- used instead, either table could be used as the source of those fields. ``` Also, you may want to rethink the field names used, if it is not too late, as `user` and `date` are both found [here](https://dev.mysql.com/doc/refman/5.5/en/keywords.html).
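If the server supports window functions (MySQL 8.0+), the same latest-row-per-user lookup can be written without the self-join; a sketch, assuming the `orders` table from the question:

```sql
-- Number each user's rows newest-first, keep the newest, then filter on status.
SELECT user, date, status
FROM (SELECT user, date, status,
             ROW_NUMBER() OVER (PARTITION BY user ORDER BY date DESC) AS rn
      FROM orders) ranked
WHERE rn = 1
  AND status = 4;
```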
``` SELECT user, date, status FROM ( SELECT user, MAX(date) as date, status FROM orders GROUP BY user ) WHERE status = 4 ```
mysql highly selective query
[ "mysql", "sql" ]
I have one external table like ``` CREATE EXTERNAL TABLE TAB(ID INT, NAME STRING) PARTITIONED BY(YEAR INT, MONTH STRING , DATES INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'; ``` I have data like ``` /user/input/2015/jan/1; /user/input/2015/jan/30 ``` and so on for the years 2000 to 2016, every year with 12 months and 30 days. ``` ALTER TABLE TAB ADD PARTITION(year = '2015', month = 'jan',dates = '5') LOCATION '/user/input/2015/jan/1'; ``` If I do it like this, I get only one day's data: ``` select * from TAB where (year = '2015', month = 'jan',dates = '5'); ``` and if I run ``` select * from TAB where (year = '2015', month = 'jan',dates = '6'); ``` I get no data at all. Please help me with how to alter the table for the above scenario.
``` create table tab(id int, name string, dt string) partitioned by (year string, month string); create table samp(id int, name string, dt string) row format delimited fields terminated by '\t'; load data inpath '/dir' into table samp; insert overwrite table tab partition (year, month) select id, name, dt, YEAR(dt), MONTH(dt) from samp; ```
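Note that when the partition values come from the `SELECT` list, as in the `INSERT OVERWRITE` above, Hive calls this a dynamic partition insert and it has to be enabled first; the settings usually required are:

```sql
-- Enable dynamic partitioning before running the INSERT OVERWRITE.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
```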
You are getting 1 day for `ALTER TABLE TAB ADD PARTITION(year = '2015', month = 'jan',dates = '5') LOCATION '/user/input/2015/jan/1';` because you are specifying a single day's directory as your location value. For 5 days, create the partition as below ``` ALTER TABLE TAB ADD PARTITION(dates <= '5') LOCATION '/user/input/2015/jan/'; ```
Hive partitions by date?
[ "sql", "apache-spark", "hive", "hiveql", "bigdata" ]
``` table A no date count 1 20160401 1 1 20160403 4 2 20160407 3 ``` ``` result no date count 1 20160401 1 1 20160402 0 1 20160403 4 1 20160404 0 . . . 2 20160405 0 2 20160406 0 2 20160407 3 . . . ``` I'm using Oracle and I want to write a query that returns rows for every date within a range based on table A. Is there some function in Oracle that can help me?
Try this: ``` with A as ( select 1 no, to_date('20160401', 'yyyymmdd') dat, 1 cnt from dual union all select 1 no, to_date('20160403', 'yyyymmdd') dat, 4 cnt from dual union all select 2 no, to_date('20160407', 'yyyymmdd') dat, 3 cnt from dual), B as (select min(dat) mindat, max(dat) maxdat from A t), C as (select level + mindat - 1 dat from B connect by level + mindat - 1 <= maxdat), D as (select distinct no from A), E as (select * from D,C) select E.no, E.dat, nvl(cnt, 0) cnt from E full outer join A on A.no = E.no and A.dat = E.dat order by 1, 2, 3 ```
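The heart of the answer above is the row generator in CTE `C`. Stripped to its essentials, `CONNECT BY LEVEL` against `dual` is the usual Oracle idiom for producing one row per date in a range; a minimal sketch with a hard-coded range:

```sql
-- Generate one row per date from 2016-04-01 through 2016-04-07 (7 rows).
SELECT DATE '2016-04-01' + LEVEL - 1 AS dat
FROM dual
CONNECT BY LEVEL <= DATE '2016-04-07' - DATE '2016-04-01' + 1;
```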
You can use `SEQUENCES`. First create a sequence ``` Create Sequence seq_name start with 20160401 maxvalue n; ``` where n is the maximum value you want to display. Then use the SQL ``` select seq_name.nextval, case when seq_name.nextval = date then count else 0 end from tableA; ``` Note: it's better not to use `date` and `count` as column names.
Fill in missing dates in date range from a table
[ "sql", "oracle", "date" ]
I have to provide a list of distinct sites that are active, that have one or more domains, and whose domains are all deleted. Here is my query so far.

```
SELECT DISTINCT *
FROM sites
JOIN domains ON domains.site = sites.id
WHERE domains.is_deleted = 1
  AND sites.is_deleted = 0
```

From my research, it seems like the best way to check whether a site has more than one domain is to use a `COUNT()` subquery. How can I use `COUNT()` to count the number of domains per site? Here is a **[SQL fiddle](http://sqlfiddle.com/#!9/5eeecf/2)**.
**Q: How can I use COUNT() to count the number of domains per site?**

(The answer to that question is at the bottom of this answer.)

---

There are several different queries that will return the specified result. I'd build the query like this, starting with "list of distinct sites that are active": we can get the rows from the sites table where the (non-null) is\_deleted column is 0...

```
SELECT s.id
     , s.name
     , s.company
     , s.association
     , s.is_supercharged
     , s.is_deleted
  FROM sites s
 WHERE NOT s.is_deleted
 ORDER BY s.id
```

"who have one or more domains": we can write a query that returns the list of values of the site (FK) column from the domains table

```
SELECT d.site
  FROM domains d
 GROUP BY d.site
```

"and whose domains are all deleted": we can write another query that returns a distinct list of values from the site column of rows in domains which have is\_deleted = 0

```
SELECT a.site
  FROM domains a
 WHERE NOT a.is_deleted
 GROUP BY a.site
```

And we can put those three queries together. We turn the last two queries into inline views (wrap them in parens and reference them as if they were tables). We use an inner join to (sites with at least one domain) to get only sites that have at least one domain, and an anti-join pattern against (sites that have an active domain) to exclude sites that do have an active domain. For example:

```
SELECT s.id
     , s.name
     , s.company
     , s.association
     , s.is_supercharged
     , s.is_deleted
  FROM sites s
  JOIN ( SELECT d.site AS site_id
           FROM domains d
          GROUP BY d.site
       ) r
    ON r.site_id = s.id
  LEFT JOIN ( SELECT a.site AS site_id
                FROM domains a
               WHERE NOT a.is_deleted
               GROUP BY a.site
            ) q
    ON q.site_id = s.id
 WHERE q.site_id IS NULL
   AND NOT s.is_deleted
 ORDER BY s.id
```

This is just one of several different query patterns that will return the specified result.
---

We could also write the query like this, to return a distinct list of sites where the "count" of domains is equal to the "count" of domains that are deleted

```
SELECT d.site
  FROM domains d
 GROUP BY d.site
HAVING SUM(IF(d.is_deleted,1,0)) = SUM(1)
```

With that, we could just use an inner join to sites...

```
SELECT s.id
     , s.name
     , s.company
     , s.association
     , s.is_supercharged
     , s.is_deleted
  FROM sites s
  JOIN ( SELECT d.site AS site_id
           FROM domains d
          GROUP BY d.site
         HAVING SUM(IF(d.is_deleted,1,0)) = SUM(1)
       ) q
    ON q.site_id = s.id
 WHERE NOT s.is_deleted
 ORDER BY s.id
```

There are several other patterns that return an equivalent result.

---

**Q: How can I use COUNT() to count the number of domains per site?**

To get a count of domains per site for sites that have at least one domain, without regard to whether the site itself is\_deleted:

```
SELECT d.site
     , COUNT(1) AS count_domains
  FROM domains d
 GROUP BY d.site
```

Similarly, to get a count of "is\_deleted" domains per site (for sites that have at least one domain)

```
SELECT d.site
     , COUNT(IF(d.is_deleted,1,NULL)) AS count_deleted_domains
  FROM domains d
 GROUP BY d.site
```

We can combine those queries and return both counts on a single row per site. Those queries omit "counts" of zero. To get the zero counts, and include only sites that are not is\_deleted:

```
SELECT s.id AS `site`
     , COUNT(d.site) AS `count_domains`
     , COUNT(IF(d.is_deleted,1,NULL)) AS `count_deleted_domains`
  FROM sites s
  LEFT JOIN domains d ON d.site = s.id
 WHERE NOT s.is_deleted
 GROUP BY s.id
 ORDER BY s.id
```

We can also reference the results from the COUNT() aggregates in a HAVING clause. This query will return all sites that don't have any "active" domain, including the sites that don't have *any* related domain.
```
SELECT s.id AS `site`
     , COUNT(d.site) AS `count_domains`
     , COUNT(IF(d.is_deleted,1,NULL)) AS `count_deleted_domains`
  FROM sites s
  LEFT JOIN domains d ON d.site = s.id
 WHERE NOT s.is_deleted
 GROUP BY s.id
HAVING COUNT(d.site) = COUNT(IF(d.is_deleted,1,NULL))
 ORDER BY s.id
```

We could easily modify that to return the specified list of sites: add references to columns from the sites table into the SELECT list and omit the COUNT() expressions. If we only want sites that have at least one domain, we can simply remove the LEFT keyword, making it an inner join rather than an outer join.
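As a quick sanity check of the "count of domains equals count of deleted domains" pattern, here is a minimal reproduction using Python's built-in `sqlite3` module. SQLite has no `IF()`, so plain `SUM`/`COUNT` stand in, and the sample rows are made up purely for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites(id INTEGER, name TEXT, is_deleted INTEGER);
CREATE TABLE domains(id INTEGER, site INTEGER, is_deleted INTEGER);
-- site 1: active, both domains deleted  -> should be returned
-- site 2: active, one domain still live -> excluded
-- site 3: active, no domains at all     -> excluded (inner join)
INSERT INTO sites VALUES (1, 'alpha', 0), (2, 'beta', 0), (3, 'gamma', 0);
INSERT INTO domains VALUES (1, 1, 1), (2, 1, 1), (3, 2, 1), (4, 2, 0);
""")
rows = conn.execute("""
SELECT s.id, s.name
  FROM sites s
  JOIN ( SELECT d.site
           FROM domains d
          GROUP BY d.site
         HAVING SUM(d.is_deleted) = COUNT(*)   -- all of this site's domains deleted
       ) q ON q.site = s.id
 WHERE s.is_deleted = 0
 ORDER BY s.id
""").fetchall()
print(rows)
# -> [(1, 'alpha')]
```

Only the site whose domains are all deleted comes back; the inner join keeps out sites with no domains at all.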
To get the count of domains for each site fulfilling the criteria:

```
SELECT sites.id, sites.name, COUNT(domains.id) AS DomainCount
FROM sites
INNER JOIN domains ON domains.site = sites.id
WHERE domains.is_deleted = 1
  AND sites.is_deleted = 0
GROUP BY sites.id, sites.name
```

(`DISTINCT` would be redundant here, since `GROUP BY` already yields one row per site.) And as karina says, the following query will show only sites with more than one domain:

```
SELECT sites.id, sites.name
FROM sites
INNER JOIN domains ON domains.site = sites.id
WHERE domains.is_deleted = 1
  AND sites.is_deleted = 0
GROUP BY sites.id, sites.name
HAVING COUNT(domains.id) > 1
```
count function per row
[ "mysql", "sql" ]
I have a problem. It seems easy to solve, but in fact I don't know why it is not working! I have two tables:

```
HOSTS(id, hostgroup_id)
HOSTGROUPS(id, name)
```

With these inserted rows:

```
HOSTS
________________________
id    |   hostgroup_id
________________________
1     |   1
2     |   1
3     |   2
4     |   NULL   -- a host can have no hostgroup
________________________

HOSTGROUPS
________________________
id    |   name
________________________
1     |   ARM
2     |   Spark
3     |   Pyro
________________________
```

With that, I just want to count the number of hosts in each hostgroup. Here is my SQL query:

```
SELECT HG.name, COUNT(H.id) AS count
FROM HOSTS H, HOSTGROUPS HG
WHERE H.hostgroup_id = HG.id
UNION DISTINCT
SELECT HG.name, 0 AS count
FROM HOSTS H, HOSTGROUPS HG
WHERE (H.hostgroup_id = HG.id) = FALSE
```

And here is my result:

```
_____________________
name    |   count
_____________________
ARM     |   2   -- OK
Spark   |   0   -- NOPE, EXPECTED 1
Pyro    |   0   -- OK
ARM     |   0   -- NOPE, DUPLICATED ROW
_____________________
```

And finally, here's the result I'm expecting:

```
_____________________
name    |   count
_____________________
ARM     |   2
Spark   |   1
Pyro    |   0
_____________________
```

Thanks for your answers guys! :)
Try this. A correlated subquery counts the hosts for each group, and naturally yields 0 for groups with no hosts:

```
SELECT hg.name,
       (SELECT COUNT(*)
          FROM HOSTS h
         WHERE h.hostgroup_id = hg.id) AS count
FROM HOSTGROUPS hg
```
You can use a `LEFT JOIN` with `GROUP BY`. Note that the join has to go from `HOSTGROUPS` to `HOSTS` (not the other way around) so that groups without any hosts still appear, and you need `COUNT(H.id)` rather than `COUNT(*)` so that those empty groups count as 0:

```
SELECT HG.name, COUNT(H.id) AS count
FROM HOSTGROUPS HG
LEFT JOIN HOSTS H ON H.hostgroup_id = HG.id
GROUP BY HG.id, HG.name;
```
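As a quick check, the schema from the question can be reproduced with Python's built-in `sqlite3` module (just a demo harness, not part of the answer's environment). Joining from `HOSTGROUPS` to `HOSTS` and counting `h.id` keeps the zero-host groups:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hosts(id INTEGER, hostgroup_id INTEGER);
CREATE TABLE hostgroups(id INTEGER, name TEXT);
INSERT INTO hosts VALUES (1, 1), (2, 1), (3, 2), (4, NULL);
INSERT INTO hostgroups VALUES (1, 'ARM'), (2, 'Spark'), (3, 'Pyro');
""")
rows = conn.execute("""
SELECT hg.name, COUNT(h.id) AS cnt     -- COUNT(h.id) skips NULLs from the outer join
  FROM hostgroups hg
  LEFT JOIN hosts h ON h.hostgroup_id = hg.id
 GROUP BY hg.id, hg.name
 ORDER BY cnt DESC, hg.name
""").fetchall()
print(rows)
# -> [('ARM', 2), ('Spark', 1), ('Pyro', 0)]
```

This matches the expected result in the question, including Pyro with a count of 0 and the unassigned host (NULL hostgroup) excluded.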
Counting rows from table
[ "mysql", "sql", "count", "rows" ]