Currently I have a table that has a user\_type column and will only show the row if the user matches that user\_type. I want to make it so there can be multiple user\_types set without duplicating the data and without creating another table. I thought that I could convert the column from an int to varchar and have it be a comma separated list of user type IDs. So far it has been working great. That is as long as I know what the user\_type is beforehand because then I can specifically use that when checking if I should display to the user: ``` SELECT * FROM perm WHERE user_type='50' OR user_type LIKE '50,%' OR user_type LIKE '%,50,%' OR user_type LIKE '%,50' ``` The issue comes when I am trying to join tables on the value. When I tried using `IN`: ``` SELECT p.* FROM perm p JOIN [user] u ON u.type IN (p.user_type) ``` I get the error: `Conversion failed when converting the varchar value '50,40,30' to data type int.` So I decided I would go back to the `LIKE` method which I have been using above: ``` SELECT p.* FROM perm p JOIN [user] u ON ( u.type LIKE p.user_type OR u.type LIKE (p.user_type + ',%') OR u.type LIKE ('%,' + p.user_type + ',%') OR u.type LIKE ('%,' + p.user_type) ) ``` This only returns results that have one user type value only. Is there a way to convert a comma separated list to be used with the `IN` command? Or is there any way to make a dynamic `LIKE` argument?
There are many examples of T-SQL code that split a delimited string into an "array" or table variable. For example, using this function: <https://stackoverflow.com/a/10914602/961695>, you can rewrite your query as: ``` SELECT p.* FROM perm p JOIN dbo.splitstring('50,40,30') s ON p.user_type = s.name ``` (and avoid dynamic SQL as well)
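To see the split-then-join shape in action without the linked helper, here is a minimal sketch that emulates `splitstring` with a recursive CTE (SQLite driven from Python; the table contents are invented for illustration, and note that SQL Server 2016+ has a built-in `STRING_SPLIT` for the same job):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE perm (id INTEGER, user_type TEXT)")
conn.execute("INSERT INTO perm VALUES (1, '50'), (2, '40'), (3, '99')")

# Split '50,40,30' into one row per value with a recursive CTE, then
# join -- the same shape as JOIN dbo.splitstring('50,40,30') s
# ON p.user_type = s.name
rows = conn.execute("""
    WITH RECURSIVE split(val, rest) AS (
        SELECT '', '50,40,30' || ','
        UNION ALL
        SELECT substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT p.id, p.user_type
    FROM perm p
    JOIN split s ON p.user_type = s.val
""").fetchall()
```

Only the rows whose `user_type` appears in the list survive the join.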
I'll give you the way that works with what you have now (horrible) and the right way to do it (as @paqogomez suggested). This way involves abusing `PARSENAME`, and it only works with SQL Server and only if perm.user\_type has 4 or fewer user types. There's another way that doesn't have the limit of 4--it involves abusing SQL Server's XML parsing--but it's more complex and slower, so I won't show that one. I also assume you just want to list the rows in Perm for a specific user, and that the `[user]` table has `id` as its primary key: ``` SELECT p.* FROM [user] u JOIN perm p ON u.type IN ( CAST((PARSENAME(REPLACE(p.user_type,',','.'),1)) AS INT), CAST((PARSENAME(REPLACE(p.user_type,',','.'),2)) AS INT), CAST((PARSENAME(REPLACE(p.user_type,',','.'),3)) AS INT), CAST((PARSENAME(REPLACE(p.user_type,',','.'),4)) AS INT) ) WHERE u.id = ? ``` The better way is the paqogomez way, where you use a relation table to store the user types for Perm (this assumes the primary key for Perm is `id`): ``` Perm_User_Type Perm_id -> Perm.id User_type -> [user].type ``` Then the much more efficient query would look like this: ``` SELECT p.* FROM [user] u JOIN Perm_User_Type put ON u.type = put.User_type JOIN perm p ON put.Perm_id = p.id WHERE u.id = ? ``` Of course there would be no limit on the number of user types in this case.
Using SQL, how do I convert a comma separated list to be used with the IN command?
[ "", "sql", "sql-server", "join", "sql-like", "sql-in", "" ]
Was not sure how to express this in the title. So here's the deal: I have a table storing information about currency pairs used in foreign exchange rates: ``` PAIR_ID | BASE_CURRENCY | TERM_CURRENCY | ATTRIBUTE1 | ATTRIBUTE2 ... ``` Ideally I should have another table to store the currency symbols (master data), say `CURRENCY_SYMBOLS` and foreign keys from `BASE_CURRENCY` and `TERM_CURRENCY` to this table. However I am confused about 2 possible approaches here. **Approach 1**: **CURRENCY\_PAIRS**: ``` PAIR_ID | BASE_CURRENCY_ID | TERM_CURRENCY_ID | ATTRIBUTE1 | ATTRIBUTE2 ... ``` **CURRENCY\_SYMBOLS**: ``` SYMBOL_ID | SYMBOL ``` with `BASE_CURRENCY_ID` & `TERM_CURRENCY_ID` referencing `SYMBOL_ID` Or **Approach 2**: rather than having a symbol\_id that really adds no value, just have: **CURRENCY\_PAIRS**: ``` PAIR_ID | BASE_CURRENCY | TERM_CURRENCY | ATTRIBUTE1 | ATTRIBUTE2 ... ``` **CURRENCY\_SYMBOLS**: ``` SYMBOL ``` with `BASE_CURRENCY` & `TERM_CURRENCY` referencing the `SYMBOL` directly. I am not sure which one is better, approach 1 seems ideal but really no advantage - in fact in all my queries an additional join will be needed to retrieve data. Approach 2 seems more efficient but somehow not correct. Any pointers on which one I should go with?
Approach 2 seems like a good idea at first, but there are a few problems with it. I'll list them all even though 1 and 2 don't really apply as much to you, since you're only using it with 3-character ISO codes: 1. **Foreign key references can take up more room.** Depending on how long you need to make your VARCHARs, they can take up more room as foreign keys than, say, a byte or a short. If you have zillions of objects which refer to these foreign keys then it adds up. Some DBs are smart about this and replace the VARCHARs with hash table references in the referring tables, but some don't. No DB is smart about it 100% of the time. 2. **You're necessarily exposing database keys (which should have no meaning, at least to end-users) as business keys.** What if the bosses want to replace "USD" with "$" or "Dollars"? You would need to add a lookup table in that case, negating a primary reason to use this approach in the first place. Otherwise you'd need to change the value in CURRENCY\_SYMBOLS, which can be tricky (see #3). 3. **It's hard to maintain.** Countries occasionally change. They change currencies as they enter/leave the Euro, have coups, etc. Sometimes just the name of the currency becomes politically incorrect. With this approach you would not only have to change the entry in CURRENCY\_SYMBOLS, but cascade that change to every object in the DB that refers to it. That could be incredibly slow. Also, since you have no constant keys, the keys the programmers are hard-wiring into their business logic are these same keys that have now changed. Good luck hunting through the entire code base to find them all. I often use a "hybrid" approach; that is, I use approach 1 but with a very short VARCHAR as the ID (3 or 4 characters max). That way, each entry can have a "SYMBOL" field which is exposed to end users and can be changed as needed by simply modifying the one table entry.
Also, developers have a slightly more meaningful ID than trying to remember that "14" is the Yen and "27" is the US Dollar. Since these keys are not exposed, they don't have to change so long as the developers remember that `YEN` was the currency before The Great Revolution. If a query is just for business logic, you may still be able to get away with not using a join. It's slower for some things but it's faster for others. YMMV.
In both cases you need a join, so you are not saving a join. Option 1 adds an ID. As the primary key, this ID will by default have a clustered index, meaning the data is sorted on disk with the lowest ID first and the highest ID at the end. This is a flexible option that will allow easy future development. Option 2 hard-codes the symbols into the Currency Pairs table. This means if at a later date you want to add another column to the symbols table, e.g. for grouping, you will need to create the symbol\_id field and update all your records in the currency pairs table. This increases maintenance costs. I always add int ID fields for this sort of table because the overhead is low and maintenance is easier. There are also indexing advantages to option 1.
Use the values of a column (instead of numeric ids) as foreign key reference
[ "", "sql", "database", "oracle", "" ]
I have the tables: ``` +------------+ | Ingredient | +------------+ | id | +------------+ | name | +------------+ +---------------+ | Relingredient | +---------------+ | id_ingredient | +---------------+ | id_recipe | +---------------+ +--------+ | Recipe | +--------+ | id | +--------+ | name | +--------+ ``` --- And **I need to select Recipes that have the ingredients that I want** (ALL the ingredients passed to them). I tried this: ``` SELECT R.id, R.nom FROM Recipe R, Relingredient RI, Ingredient I WHERE R.id = RI.id_recipe AND RI.id_ingredient = I.id AND I.name='onion' AND I.name='oil' GROUP BY R.name ``` but it returns zero rows. I also tried this: ``` SELECT R.id, R.nom FROM Recipe R, Relingredient RI, Ingredient I WHERE R.id = RI.id_recipe AND RI.id_ingredient = I.id AND (I.name='onion' or I.name='oil') GROUP BY R.name ``` But it selects all recipes that have onion or oil, not only the ones which have onion **AND** oil ... What can I do? (edit) Sample of what I want: for example I have the recipes: 1: grilled chicken (ingredients: chicken, onion, oil) 2: chinese soup (ingredients: pork, onion, oil, noodles) 3: vegetable sandwich (ingredients: bread, oil, tomato, salad) **The query should return just the recipes: grilled chicken and chinese soup** Thanks for helping me!!
Try something like this (the `DISTINCT` guards against a recipe listing the same ingredient twice): ``` SELECT R.id, R.name FROM Recipe R JOIN Relingredient RI ON R.Id = RI.Id_recipe JOIN Ingredient I ON RI.Id_ingredient = I.Id WHERE I.name = 'onion' OR I.name='oil' GROUP BY R.id, R.name HAVING COUNT(DISTINCT I.name) = 2 ```
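To see why the `HAVING COUNT(DISTINCT ...)` pattern returns only the recipes containing *all* requested ingredients, here is a small runnable sketch (SQLite via Python, sample data taken from the question's edit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Recipe (id INTEGER, name TEXT);
CREATE TABLE Ingredient (id INTEGER, name TEXT);
CREATE TABLE Relingredient (id_ingredient INTEGER, id_recipe INTEGER);
INSERT INTO Recipe VALUES (1,'grilled chicken'),(2,'chinese soup'),(3,'vegetable sandwich');
INSERT INTO Ingredient VALUES (1,'chicken'),(2,'onion'),(3,'oil'),(4,'pork'),
    (5,'noodles'),(6,'bread'),(7,'tomato'),(8,'salad');
INSERT INTO Relingredient VALUES (1,1),(2,1),(3,1),(4,2),(2,2),(3,2),(5,2),
    (6,3),(3,3),(7,3),(8,3);
""")
wanted = ['onion', 'oil']
# The distinct count must equal the number of wanted ingredients,
# so the criteria list can grow without rewriting the query
placeholders = ','.join('?' * len(wanted))
names = [r[0] for r in conn.execute(f"""
    SELECT R.name
    FROM Recipe R
    JOIN Relingredient RI ON R.id = RI.id_recipe
    JOIN Ingredient I ON RI.id_ingredient = I.id
    WHERE I.name IN ({placeholders})
    GROUP BY R.id, R.name
    HAVING COUNT(DISTINCT I.name) = ?
""", wanted + [len(wanted)])]
```

The vegetable sandwich has oil but not onion, so it is filtered out.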
Try this: ``` SELECT R.id, R.name FROM Recipe R Where R.Id in ( Select RI.Id_recipe From Relingredient as RI INNER JOIN Ingredient as I ON RI.Id_ingredient = I.Id WHERE I.name = 'onion' OR I.name='oil' ) ``` I got it working completely. New query: ``` SELECT * FROM dbo.Recipe WHERE Id IN ( SELECT id_recipe FROM dbo.Relingredient INNER JOIN dbo.Ingredient ON id_ingredient = Id WHERE Name = 'oil' ) AND Id IN ( SELECT id_recipe FROM dbo.Relingredient INNER JOIN dbo.Ingredient ON id_ingredient = Id WHERE Name = 'onion' ) ```
many-to-many select with complex condition
[ "", "sql", "select", "many-to-many", "" ]
I'd like to be able to print some debug information from sql script / function. Of course, I can do this by ``` RAISE NOTICE 'hello!' ``` But I also need to print a whole table's data. This doesn't work: ``` RAISE NOTICE '%' (SELECT * FROM table1) ``` Is it possible and how?
The most straightforward way would be to iterate over the rows in a `for loop` and use `RAISE NOTICE` containing each column you're interested in interpolated in it. i.e. something like: ``` FOR items IN SELECT * FROM table1 LOOP RAISE NOTICE 'col1: %, col2: %', quote_ident(items.col1), quote_ident(items.col2); END LOOP; ``` where items is declared as `RECORD`.
Since Postgres 9.3 you can use `to_json()` to convert a record into text suitable for a notice: ``` RAISE NOTICE '%', to_json(record1); ```
Raise notice to print a table's data
[ "", "sql", "postgresql", "postgresql-9.2", "" ]
I am writing the following query to fetch all details of the table `bill_details`. ``` select * from bill_details; ``` Along with the data, I also want to fetch the number of records in this table.
``` SELECT *, (SELECT COUNT(*) FROM bill_details) AS cnt FROM bill_details ``` Every row of the results will have an additional column with the row count.
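On engines with window functions (SQL Server 2005+, MySQL 8+, SQLite 3.25+), `COUNT(*) OVER ()` gives the same result without the correlated subquery; a minimal sketch using SQLite from Python, with an invented two-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bill_details (id INTEGER, amount REAL)")
conn.execute("INSERT INTO bill_details VALUES (1, 10.0), (2, 20.0), (3, 30.0)")

# COUNT(*) OVER () attaches the total row count to every row
# in a single pass over the table
rows = conn.execute(
    "SELECT *, COUNT(*) OVER () AS cnt FROM bill_details"
).fetchall()
```

Every returned row carries `cnt = 3` in its last column.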
``` mysql_num_rows(mysql_query("select * from bill_details")) ```
How to fetch all data and also the number of records in table
[ "", "mysql", "sql", "" ]
I have three tables and I am trying to use a left join and an inner join in the same query with the three tables. But it's not working the way I want. I have four tags A, B, C, D, and only two of them are used by agent\_id=22. Below is the query I am using: ``` SELECT MDT.tags,COUNT(MDTM.tag_id) AS tagCount FROM mp_desk_agent_added_customers MDAC INNER JOIN mp_desk_tags_match MDTM ON MDAC.u_cust_id=MDTM.u_cust_id INNER JOIN mp_desk_tags MDT ON MDTM.tag_id=MDT.tag_id WHERE MDAC.agent_id=22 GROUP BY MDT.tag_id ``` The result I am getting is just the two tag names and their counts which are present in the tag\_match table. But I want all four tag names, with a count of 0 for the tags which are not present in the tag\_match table. ## TABLES STRUCTURE **mp\_desk\_tags** tag, tag\_id **mp\_desk\_tags\_match** tag\_match\_id,tag\_id,u\_cust\_id **mp\_desk\_agent\_added\_customers** u\_cust\_id,agent\_id
There are four tags and you want four result records, one per tag. So select from the tags table. You get the count with a sub-select. ``` select tag_id, tag, ( select count(*) from mp_desk_tags_match dtm where dtm.tag_id = dt.tag_id and u_cust_id in ( select u_cust_id from mp_desk_agent_added_customers where agent_id = 22 ) ) as tag_count from mp_desk_tags dt; ``` Here is the same with joins (note the count is on `daac.u_cust_id` rather than `*`, so tags with no match get 0 instead of 1): ``` select dt.tag_id, dt.tag, count(daac.u_cust_id) from mp_desk_tags dt left join mp_desk_tags_match dtm on dtm.tag_id = dt.tag_id left join mp_desk_agent_added_customers daac on daac.u_cust_id = dtm.u_cust_id and daac.agent_id = 22 group by dt.tag_id; ```
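A minimal runnable check of the left-join approach (SQLite via Python; the data is invented to mirror the question: four tags, with agent 22's one customer tagged by only two of them). Counting the joined column instead of `*` is what makes the unmatched tags report 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mp_desk_tags (tag_id INTEGER, tag TEXT);
CREATE TABLE mp_desk_tags_match (tag_id INTEGER, u_cust_id INTEGER);
CREATE TABLE mp_desk_agent_added_customers (u_cust_id INTEGER, agent_id INTEGER);
INSERT INTO mp_desk_tags VALUES (1,'A'),(2,'B'),(3,'C'),(4,'D');
INSERT INTO mp_desk_tags_match VALUES (1,100),(2,100);
INSERT INTO mp_desk_agent_added_customers VALUES (100,22);
""")
# COUNT(daac.u_cust_id) ignores the NULLs produced by the left joins,
# so tags C and D come back with 0
result = dict(conn.execute("""
    SELECT dt.tag, COUNT(daac.u_cust_id)
    FROM mp_desk_tags dt
    LEFT JOIN mp_desk_tags_match dtm ON dtm.tag_id = dt.tag_id
    LEFT JOIN mp_desk_agent_added_customers daac
           ON daac.u_cust_id = dtm.u_cust_id AND daac.agent_id = 22
    GROUP BY dt.tag_id, dt.tag
""").fetchall())
```

All four tags appear in the result, two of them with a zero count.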
Looking at your table structure, your query should be: ``` SELECT MDT.tags,COUNT(MDTM.tag_id) AS tagCount FROM mp_desk_agent_added_customers MDAC INNER JOIN mp_desk_tags_match MDTM ON MDAC.u_cust_id=MDTM.u_cust_id LEFT JOIN mp_desk_tags MDT ON MDTM.tag_id=MDT.tag_id WHERE MDAC.agent_id=22 GROUP BY MDT.tag_id ``` This is assuming that the first table MDAC has records with no match in the MDT table, which would cause your totals to only display for the first 2 tag ids.
Left join and Inner join in same mysql query not working
[ "", "mysql", "sql", "join", "left-join", "" ]
I wish to take table A and create something like table B, but based on an arbitrary set of split dates contained in table C. For example, (note it is not always true that start\_date = inception\_date, and so inception\_date must be preserved rather than derived from start\_date; this actually represents hundreds of fields that belong with the period) ![enter image description here](https://i.stack.imgur.com/7IM1H.png) I'm working in SAS but I'd like to be able to write this using `PROC SQL`. I think one way to do this would be to create multiple tables for pairs of records from table C (including nulls at the end), and then union them together. Pseudo-code example: ``` for each record of table_c, concoct the pairs { (., 01-Jan-2012), (01-Jan-2012, 01-Jul-2012), (01-Jul-2012, 01-Jan-2013), (01-Jan-2013, .) } ``` The following query may require some null testing around `split_date1` and `split_date2`: ``` CREATE TABLE subquery1 AS SELECT a.customer_id ,max(a.start_date, x.split_date1) AS start_date ,min(a.end_date, x.split_date2 - 1) AS end_date ,a.inception_date FROM table_a AS a JOIN split_date AS x ; .... (do for each pair of split dates, and then union all these tables together with some WHERE querying to throw away the nonsensical rows) to produce table_b. 
``` The image above indicates which subquery would generate which rows in table\_b. **Please help me fill in the gaps, or suggest an alternative method.** table\_a: ``` customer_id start_date end_date inception_date aaa 18-Jun-11 17-Jun-12 18-Jun-11 aaa 18-Jun-12 17-Jun-13 18-Jun-12 bbb 13-Jul-11 12-Jul-12 13-Jul-11 ccc 14-May-11 13-Nov-11 14-Jul-11 ddd 21-Jun-11 20-Jun-12 21-Jun-11 ``` table\_b: ``` customer_id start_date end_date inception_date subquery aaa 18-Jun-11 31-Dec-11 18-Jun-11 (1) aaa 01-Jan-12 17-Jun-12 18-Jun-11 (2) aaa 18-Jun-12 30-Jun-12 18-Jun-12 (2) aaa 01-Jul-12 31-Dec-12 18-Jun-12 (3) aaa 01-Jan-13 17-Jun-13 18-Jun-12 (4) bbb 13-Jul-11 31-Dec-11 13-Jul-11 (1) bbb 01-Jan-12 30-Jun-12 13-Jul-11 (2) bbb 01-Jul-12 12-Jul-12 13-Jul-11 (3) ccc 14-May-11 13-Nov-11 14-May-11 (1) ddd 21-Jun-11 31-Dec-11 21-Jun-11 (1) ddd 01-Jan-12 20-Jun-12 21-Jun-11 (2) ``` table\_c: ``` split_dates 01-Jan-12 01-Jul-12 01-Jan-13 ```
Here's a hybrid SQL/datastep approach - but it is shorter! Input the data (taken from the answer given by @Joe):- ``` data table_a; informat start_date end_date date9.; format start_date end_date date9.; input customer_id $ start_date end_date; datalines; aaa 18JUN2011 17JUN2012 aaa 18JUN2012 17JUN2013 bbb 13JUL2011 12JUL2012 ccc 14MAY2011 13NOV2011 ddd 21JUN2011 20JUN2012 ;;;; run; data table_c; informat split_dates date9.; format split_dates date9.; input split_dates; datalines; 01JAN2012 01JUL2012 01JAN2013 ;;;; run; ``` The following copies the split dates to a macro variable (SQL!) and then loops through table\_a using this macro (datastep!):- ``` ** Output the split dates to a macro variable; proc sql noprint; select split_dates format=8. into: c_dates separated by ',' from table_c order by split_dates; quit; ** For each period in table_a, look to see if each split date is within it,; ** outputting a row if so; data final_out(drop=dt old_end_date); set table_a(rename=(end_date = old_end_date)); format start_date end_date inception_date date11.; inception_date = start_date; do dt = &c_dates; if start_date <= dt <= old_end_date then do; end_date = dt - 1; output; start_date = dt; end; end; ** For the last row per table_a entry; end_date = old_end_date; output; run; ``` And if you know the split dates beforehand, you could hard code them into the datastep and omit the SQL bit (not recommended mind - hard coding is seldom a good idea).
Data step solution. First, sample data (I left out the other date variable, I think it's unimportant to the solution although of course you'll want it in production): ``` data table_a; informat start_date end_date date9.; format start_date end_date date9.; input customer_id $ start_date end_date; datalines; aaa 18JUN2011 17JUN2012 aaa 18JUN2012 17JUN2013 bbb 13JUL2011 12JUL2012 ccc 14MAY2011 13NOV2011 ddd 21JUN2011 20JUN2012 ;;;; run; data table_c; informat split_dates date9.; format split_dates date9.; input split_dates; datalines; 01JAN2011 01JUL2011 01JAN2012 01JUL2012 01JAN2013 ;;;; run; ``` Now, the solution. First, we load the data from `table_c` into a temporary array; a hash table would also work (and might be faster if table c is very long, since this solution requires iterating over all of the array while a hash table would have a faster time just finding the few that match). Then we iterate over the array C was loaded into, check if it qualifies as a useful break point, if so assign the start/end dates, output, and re-assign the new start date. Here I use new start/end variables; if you want to keep the old variable names, just rename the original variables on input to some other variable name and then use the original variable names as the new ones and the renamed original variables as the old ones. 
``` data table_b; set table_a; format final_start final_end date9.; array split_date_list[100] _temporary_; *make sure this 100 is as big or bigger than table_c; if _n_=1 then do; do _t = 1 to nobsc; *load the contents of table_c into a temporary array; set table_c point=_t nobs=nobsc; split_date_list[_t]=split_dates; end; end; final_start=start_date; *You could reuse start_date here, I use new name for consistency; do _u= 1 to dim(split_date_list) until (final_end=end_date); if final_start le split_date_list[_u] le end_date then do; *if split date is in between start and end, split it; final_end=split_date_list[_u]-1; *But end_date does need a second variable, else it loses track of the actual end; output; *output a row; final_start=split_date_list[_u]; *fix the start date to the new value; end; else if split_date_list[_u] gt end_date then do; *if we have passed the end date; final_end=end_date; output; end; end; if end_date ne final_end then do; *if we never passed the end date, output the final row; final_end=end_date; output; end; run; ```
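Outside SAS, the period-splitting logic both answers implement is just a loop over the sorted split dates; a hypothetical Python sketch of the same algorithm (function name invented, sample period taken from customer `aaa` in the question):

```python
from datetime import date, timedelta

def split_period(start, end, split_dates):
    """Cut the inclusive period [start, end] at each split date
    strictly inside it, returning the resulting sub-periods."""
    rows = []
    for d in sorted(split_dates):
        if start < d <= end:
            # close the current sub-period the day before the split
            rows.append((start, d - timedelta(days=1)))
            start = d
    rows.append((start, end))  # remainder up to the original end date
    return rows

splits = [date(2012, 1, 1), date(2012, 7, 1), date(2013, 1, 1)]
parts = split_period(date(2011, 6, 18), date(2012, 6, 17), splits)
```

For `aaa`'s first period this yields the two rows shown for subqueries (1) and (2) in table\_b.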
Create table that splits records on specific dates using SAS
[ "", "sql", "date", "sas", "" ]
This is my select statement: ``` SELECT * FROM Persons p WHERE p.Name= ISNULL(@Name, p.Name) ``` If `@Name` is null, it only selects the rows where `Name` is not NULL, but not the ones with a `NULL` value. What has to be done to select the desired rows?
``` DECLARE @name varchar(290) ='Thomas' SELECT * FROM Persons P WHERE exists(select name intersect select coalesce(@name, name)) ```
When `@Name` is `NULL` the query becomes ``` SELECT * FROM Persons P WHERE p.Name = p.Name ``` and `NULL` is not equal to `NULL`, as it means an unknown value, and two unknown values are not equal. A way to get all the data is ``` SELECT * FROM Persons P WHERE COALESCE(p.Name, N'a') = COALESCE(@Name, p.Name, N'a') ``` so that when p.Name is `NULL` there is a default value to use. **Edit** As the string `'NULL'` as the last value of `COALESCE` can be confusing, I changed it to another constant literal.
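The NULL-comparison behavior is easy to reproduce; a small sketch (SQLite via Python, two invented rows) contrasting the coalesced equality with an explicitly null-tolerant predicate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (Name TEXT)")
conn.execute("INSERT INTO Persons VALUES ('Thomas'), (NULL)")

name = None  # plays the role of the @Name parameter

# Name = COALESCE(NULL, Name) collapses to Name = Name,
# which is NULL (not true) for the NULL row, so that row is dropped
eq_rows = conn.execute(
    "SELECT Name FROM Persons WHERE Name = COALESCE(?, Name)", (name,)
).fetchall()

# Handling the NULL parameter explicitly keeps every row
safe_rows = conn.execute(
    "SELECT Name FROM Persons WHERE ? IS NULL OR Name = ?", (name, name)
).fetchall()
```

With `name = None` the first query returns 1 row, the second returns both.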
Select rows where one cell value is equal to parameter or everything in ms sql
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am making a shopping cart project. I want to copy data from one table to another, but when I use this query ``` USE PRODUCTS SELECT NAME,PRICE,BOUGHT INTO CART FROM COMPUTERS ``` it only works the first time; after that it says the table is already created. What I want to do is append to the existing table, not create a new one. Can you give me some ideas, queries maybe?
``` insert into CART (NAME, PRICE, BOUGHT) select NAME, PRICE, BOUGHT from COMPUTERS ```
How about: ``` USE PRODUCTS SELECT NAME,PRICE,BOUGHT FROM COMPUTERS INSERT INTO CART ```
Sql copying a data from table to another table
[ "", "sql", "sql-server", "" ]
I don't know the proper syntax, but I hope you understand what I'm trying to achieve: ``` SELECT Count(BookCategory) AS TotalBookAC FROM Passenger WHERE BookCategory='AC'; SELECT Count(BookCategory) AS TotalBookGen FROM Passenger WHERE BookCategory='General'; ... ``` What `keyword` is needed to **join** two or more SQL statements into one? ## Edit *Passenger*'s table: ``` TicketID - PK TrainNumber - FK BookDate - As Date PassengerName - varchar Age - num Sex - varchar Address - varchar ReservationStatus - 'waiting' OR 'confirmed' BookCategory - 'CP' OR 'General' ``` *TrainList*'s table: ``` TrainNumber - PK TrainName - varchar Source - varchar (place) Destination - varchar (place) FareAC - num (currency) FareGeneral - num (currency) DayAvailable - varchar (weekdays) ``` Now I'm struggling with how to construct an SQL statement that will query a distinct *TrainList.TrainNumber* along with fields for the `Count` of *AC* and *General* in *Passenger.BookCategory*. And here's an example of the output I want to achieve: ``` +-------------+---------------+--------------+--------------+--------------+ | TrainNumber | TotalSeatsGen | TotalSeatsAC | TotalBookGen | TotalBookAC | +-------------+---------------+--------------+--------------+--------------+ | 1 | 10 | 12 | 11 | 13 | +-------------+---------------+--------------+--------------+--------------+ ^ . . . DISTINCT <-- specific records for this TrainNumber only --> ```
You want to select all records from table Passenger and count occurences. So you must scan the whole table and count conditionally. In standard SQL you would use CASE WHEN; in MS Access you'd use IIF. The counting is done with SUM; you add a 1 per match, so you are actually counting. ``` SELECT TrainNumber, SUM( IIF(BookCategory = 'General' and ReservationStatus = 'confirmed', 1, 0) ) AS TotalSeatGen, SUM( IIF(BookCategory = 'AC' and ReservationStatus = 'confirmed', 1, 0) ) AS TotalSeatAC, SUM( IIF(BookCategory = 'General', 1, 0) ) AS TotalBookGen, SUM( IIF(BookCategory = 'AC', 1, 0) ) AS TotalBookAC FROM Passenger GROUP BY TrainNumber ORDER BY TrainNumber; ```
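A portable `CASE WHEN` version of the same conditional counting (the standard-SQL form of Access's `IIF`) can be verified quickly; a minimal sketch with SQLite via Python and a few invented passenger rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Passenger
    (TrainNumber INTEGER, BookCategory TEXT, ReservationStatus TEXT)""")
conn.execute("""INSERT INTO Passenger VALUES
    (1,'AC','confirmed'), (1,'AC','waiting'),
    (1,'General','confirmed'), (1,'General','confirmed')""")

# One pass over the table: each SUM adds 1 only when its CASE matches,
# so every count comes from the same scan
row = conn.execute("""
    SELECT TrainNumber,
           SUM(CASE WHEN BookCategory = 'AC' THEN 1 ELSE 0 END)      AS TotalBookAC,
           SUM(CASE WHEN BookCategory = 'General' THEN 1 ELSE 0 END) AS TotalBookGen
    FROM Passenger
    GROUP BY TrainNumber
""").fetchone()
```

The `ReservationStatus = 'confirmed'` condition can be ANDed into each CASE exactly as in the IIF version.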
Create a separate query for each count you need. Let's say they will be called qryTotalSeatsGen, qryTotalSeatsAC, qryTotalBookGen, qryTotalBookAC. Each query will have two fields: `TrainNumber` and `Total`. Now create a final query with all the above-mentioned queries. Join them all on `TrainNumber` and include each one's total in the field list. ``` SELECT qryTotalSeatsGen.Total AS TotalSeatsGen, qryTotalSeatsAC.Total AS TotalSeatsAC, qryTotalBookGen.Total AS TotalBookGen, qryTotalBookAC.Total AS TotalBookAC FROM (qryTotalSeatsGen INNER JOIN qryTotalSeatsAC ON qryTotalSeatsGen.TrainNumber = qryTotalSeatsAC.TrainNumber) INNER JOIN .... ``` Use the MS Access query builder (I wrote the above query out by hand just to give you the idea, so it may not be exact).
Multiple SELECT statements with different WHERE condition in one query
[ "", "sql", "ms-access", "" ]
When inner joining two tables the results are essentially "or" driven. So for example if I had a parent and child table, and I wanted to know the children who have red or blond hair, I would write something like: ``` SELECT parent.parent_name FROM parent INNER JOIN child ON parent.parent_ID = child.parent_ID WHERE child.hair = 'blond' OR child.hair = 'red' ``` This would tell me all parents who have children with red OR blond hair. What would I write if I wanted to know parents who have at least one child with red hair AND at least one child with blond hair? Keep in mind that the criteria may change over time - tomorrow I might want to know black and red and yellow and blond and green hair, so writing a query for red and a query for blond and joining the results won't work, because sometimes it will be two ANDs, but sometimes more. I hope that makes sense.
This is a great example of when to use a having clause. You know you want at least one of each color, so a distinct will be necessary. The where clause already limits the rows to blond or red. We group by parent, so the having clause is only looking at each parent's children, not all kids; and if the distinct count is 2, then it must be because they have a child with blond hair and one with red hair. Since you know the criteria each time (red and blond, or red, green, yellow and blond), the count can vary as well via parameters! ``` SELECT parent.parent_name FROM parent INNER JOIN child ON parent.parent_ID = child.parent_ID WHERE child.hair = 'blond' OR child.hair = 'red' GROUP BY parent.parent_name HAVING COUNT(DISTINCT child.hair) = 2 ``` So as your where clause changes, so does the 2.
I would probably hit the child table twice, once for blond and once for red: ``` SELECT parent.parent_name FROM parent INNER JOIN child BlondChild ON parent.parent_ID = BlondChild.parent_ID AND BlondChild.hair = 'blond' INNER JOIN child RedChild ON parent.parent_ID = RedChild.parent_ID AND RedChild.hair = 'red' ```
SQL query joining multiple tables using AND criteria
[ "", "sql", "inner-join", "" ]
I have two tables, but IMO only the first is necessary for this question. ``` conversation_id user conv1 randomuser conv1 admin conv2 derp conv3 derp conv3 admin conv3 herp conv4 derp conv4 admin ``` Now I want to select the conversation\_id whose members are exactly `derp` and `admin`. The conversation\_id should then be `conv4`. I have tried many options, and the best-working one I found is: ``` SELECT chat_och_users_in_conversation.conversation_id AS conv_id FROM chat_och_users_in_conversation WHERE USER IN ('derp', 'admin') GROUP BY conversation_id HAVING (SELECT COUNT(*) FROM chat_och_users_in_conversation WHERE conversation_id = conv_id ) = 2 ``` The conv\_ids returned are `conv1` and `conv4`. I think I understand why this is returned: the `IN` works like an `OR` when matching rows. Note that this should work on many different database types, not only MySQL.
If you have only few values you can use `EXISTS`/`NOT EXISTS`: ``` SELECT DISTINCT c1.conversation_id AS conv_id FROM chat_och_users_in_conversation c1 WHERE EXISTS ( SELECT 1 FROM chat_och_users_in_conversation c2 WHERE c1.conversation_id = c2.conversation_id AND c2.user = 'derp' ) AND EXISTS ( SELECT 1 FROM chat_och_users_in_conversation c2 WHERE c1.conversation_id = c2.conversation_id AND c2.user = 'admin' ) AND NOT EXISTS ( SELECT 1 FROM chat_och_users_in_conversation c2 WHERE c1.conversation_id = c2.conversation_id AND c2.user NOT IN ('derp', 'admin') ) ``` It's verbose but simple(just copy-paste) and comprehensible. It can also be changed or extended easily. `Demo`
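An alternative, also portable, formulation uses conditional aggregation instead of the three correlated subqueries: group per conversation, then require that no member falls outside the wanted set and that both wanted users appear. A runnable sketch with the question's data (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conv (conversation_id TEXT, user TEXT)")
conn.executemany("INSERT INTO conv VALUES (?, ?)", [
    ('conv1', 'randomuser'), ('conv1', 'admin'),
    ('conv2', 'derp'),
    ('conv3', 'derp'), ('conv3', 'admin'), ('conv3', 'herp'),
    ('conv4', 'derp'), ('conv4', 'admin'),
])
# First HAVING term: zero members outside the wanted set.
# Second term: both wanted users are actually present.
rows = conn.execute("""
    SELECT conversation_id
    FROM conv
    GROUP BY conversation_id
    HAVING SUM(CASE WHEN user IN ('derp','admin') THEN 0 ELSE 1 END) = 0
       AND COUNT(DISTINCT CASE WHEN user IN ('derp','admin') THEN user END) = 2
""").fetchall()
```

`conv3` fails the first condition (it has `herp`), `conv2` fails the second, leaving only `conv4`.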
Not sure if you want all value: ``` SELECT conversation_id AS conv_id FROM chat_och_users_in_conversation WHERE USER IN ('derp', 'admin') GROUP BY conversation_id HAVING COUNT(DISTINCT USER) = 2 ``` Or only the last one? ``` SELECT MAX(conversation_id) AS conv_id FROM chat_och_users_in_conversation WHERE USER IN ('derp', 'admin') GROUP BY conversation_id HAVING COUNT(DISTINCT USER) = 2 ```
Find id by matching the values
[ "", "sql", "relational-division", "" ]
I have two tables (A and B) that contain IDs, however in table B some records have these IDs grouped together, e.g. the IDExec column may consist of a record that looks like 'id1 id2'. I'm trying to find the IDs in table A that do not appear in table B. I thought that by using something like: ``` SELECT * FROM A WHERE NOT EXISTS( SELECT * FROM B WHERE Contains(A.ExecID, B.ExecID)) ``` This isn't working, as CONTAINS needs the 2nd parameter to be string, text\_lex or variable. Do you guys have a solution to this problem? To shed more light on the above problem, the table structures are as follows: ``` Table A (IDExec, ProdName, BuySell, Quantity, Price, DateTime) Table B (IDExec, ClientAccountNo, Quantity) ``` The C# code I've created to manipulate the buysell data in Table A groups up all the buysells of the same product on a given day. The question now is how would you guys normalise this so I'm not bastardizing IDExec? Would it be better to create a new ID column in Table B called AllocID and link the two tables like that? So something like this: ``` Table A (IDExec, AllocID, ProdName, BuySell, Quantity, Price, DateTime) Table B (AllocID, ClientAccountNo, Quantity) ```
This data should be normalized, storing multiple values in one field is a bad idea. A workaround is using `LIKE`: ``` SELECT * FROM A WHERE NOT EXISTS( SELECT * FROM B WHERE ' '+B.ExecID+' ' LIKE '% '+A.ExecID+' %') ``` This is using space delimited values per your example.
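The delimiter-padding trick is worth seeing in action; a sketch using SQLite from Python (with `||` for concatenation in place of SQL Server's `+`; the IDs are invented). Padding both sides with a space is what stops `id1` from false-matching inside `id10`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (ExecID TEXT)")
conn.execute("CREATE TABLE B (ExecID TEXT)")
conn.execute("INSERT INTO A VALUES ('id1'), ('id2'), ('id3')")
conn.execute("INSERT INTO B VALUES ('id1 id2')")  # space-delimited group

# Wrap B.ExecID in spaces, then look for ' <id> ' as a whole token
missing = conn.execute("""
    SELECT ExecID FROM A
    WHERE NOT EXISTS (
        SELECT 1 FROM B
        WHERE ' ' || B.ExecID || ' ' LIKE '% ' || A.ExecID || ' %'
    )
""").fetchall()
```

Only `id3`, which appears in no group in B, comes back.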
This is kind of crude, but it will give you all of the entries in `A` that are not contained in `B`. ``` SELECT * FROM A WHERE A.ExecID not in (SELECT ExecID from B); ```
SQL Contains query
[ "", "sql", "sql-server", "" ]
I have this query: ``` INSERT INTO peekquick.file_storage (file_id, size, content, file_desc, files_set_id, content_type, file_name, answer_id) VALUES (file_id = 62745251829, size = 1295585, content = '', file_desc = '', files_set_id = '', content_type = 'image/jpeg', file_name = 'witryna.jpeg', answer_id = 176458); ``` and I get the error: ``` Duplicate entry '0' for key 'PRIMARY' ``` and I have no clue why this %$#@$^ doesn't work. Can anyone help?
> Make sure the column set as your PRIMARY KEY is set to AUTO\_INCREMENT Also note that **`INT`** has a [maximum signed value of 2147483647](http://dev.mysql.com/doc/refman/5.0/en/integer-types.html); any number greater than that (such as your `file_id` of 62745251829) will be truncated to that value, so use `BIGINT` for values that large. In SQL Server you would define the column like this... ``` FILE_ID [PrimaryID] [int] IDENTITY(1,1) NOT NULL ``` Then you can add a constraint making it the primary key, or alter the table like this: ``` ALTER TABLE MyTable ADD MytableID int NOT NULL IDENTITY (1,1), ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (MyTableID) ``` or ![enter image description here](https://i.stack.imgur.com/SBQvM.jpg) Now go to *Column Properties* below, scroll down to *Identity Specification*, expand it, and set *Is Identity* to *Yes*. Then set *Identity Increment* right below it to the value you want to increment by. ![enter image description here](https://i.stack.imgur.com/ueRYJ.jpg) See the [MSDN Documentation](http://msdn.microsoft.com/en-us/library/ms187742.aspx)
Remove the `FILE\_ID = ...`-style assignments after VALUES. Like this: ``` INSERT INTO peekquick.FILE_STORAGE (FILE_ID, SIZE, CONTENT, FILE_DESC, FILES_SET_ID, CONTENT_TYPE, FILE_NAME, ANSWER_ID) VALUES (62745251829, 1295585, '', '', '', 'image/jpeg', 'witryna.jpeg', 176458); ``` Make sure your primary key is inserted correctly; make it AUTO\_INCREMENT if you are not generating it manually. Also, there might be a UNIQUE column in this table; make sure the value you are inserting into that column is unique.
SQL error : duplicate entry for value'0' PRIMARY
[ "", "mysql", "sql", "" ]
I need to do an update query on a postcode field to insert a space in the middle. E.g. if there are 7 characters, e.g. `HP114GT`, I want to have `HP11 4GT`, or if there are 6, e.g. `HP14GT`, I want `HP1 4GT`. Any help would be great!!
``` UPDATE Table SET Column = CASE WHEN LEN(Column) = 6 THEN STUFF(Column, 4, 0, ' ') WHEN LEN(Column) = 7 THEN STUFF(Column, 5, 0, ' ') END WHERE CHARINDEX(' ', Column, 1) = 0 AND LEN(Column) BETWEEN 6 AND 7 ```
Please try below query for SQL Server: ``` UPDATE tbl SET Col=LEFT(Col, len(Col)-3)+' '+RIGHT(col, 3) WHERE LEN(Col)>3 AND CHARINDEX(' ', Col, 1)=0 ```
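Both answers implement the same underlying rule: the space in a UK postcode goes before the last three characters (the STUFF positions 4 and 5 above are just `LEN - 3 + 1`). A sketch of that rule in Python, with a hypothetical helper name:

```python
def format_postcode(pc: str) -> str:
    """Insert a space before the final three characters, skipping
    values that already contain one (hypothetical helper)."""
    if " " in pc or len(pc) <= 3:
        return pc
    return pc[:-3] + " " + pc[-3:]

print(format_postcode("HP114GT"))  # HP11 4GT
print(format_postcode("HP14GT"))   # HP1 4GT
```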
SQL Update, Space inbetween a Postcode
[ "", "sql", "sql-server", "string", "sql-update", "" ]
# What I Need I have a database with fields that can contain long phrases of words. I wanted the ability to quickly search for a keyword or phrase in these columns, but when searching a phrase, I want to be able to search the phrase like Google would, returning all rows that contain all of the specified words, but in no particular order or "nearness" to each other. Ranking the results by relevance is unnecessary at this point. After reading about SQL Server's [Full-Text Search](http://msdn.microsoft.com/en-us/library/ms142571.aspx), I thought it would be just what I needed: a searchable index based on each word in a text-based column. My end goal is to safely accept user input and turn it into a query that leverages the speed of Full-Text Search, while maintaining ease-of-use for the users. # The Problem: Full-Text Search functions don't search like Google I see the [`FREETEXT` function](http://msdn.microsoft.com/en-us/library/ms142583.aspx#OV_ft_predicates) can take an entire phrase, break it up into "useful" words (ignoring words like 'and', 'or', 'the', etc), and then return a list of matching rows very quickly, even with a complex search term. But when you try to use it, you may notice that instead of an `AND` search for each of the terms, it seems to only do an `OR` search. Maybe there's a way to change its behavior, but I haven't found anything useful. Then there's [`CONTAINS`](http://msdn.microsoft.com/en-us/library/ms142583.aspx#OV_ft_predicates), which can accept a boolean query phrase, but sometimes with odd results. 
Take a look at the following queries on this table: ## Data ``` PKID Name ----- ----- 1 James Kirk 2 James Cameron 3 Kirk Cameron 4 Kirk For Cameron ``` ## Queries ``` Q1: SELECT Name FROM tblName WHERE FREETEXT(Name, 'james') Q2: SELECT Name FROM tblName WHERE FREETEXT(Name, 'james kirk') Q3: SELECT Name FROM tblName WHERE FREETEXT(Name, 'kirk for cameron') Q4: SELECT Name FROM tblName WHERE CONTAINS(Name, 'james') Q5: SELECT Name FROM tblName WHERE CONTAINS(Name, '"james kirk"') Q6: SELECT Name FROM tblName WHERE CONTAINS(Name, '"kirk james"') Q7: SELECT Name FROM tblName WHERE CONTAINS(Name, 'james AND kirk') Q8: SELECT Name FROM tblName WHERE CONTAINS(Name, 'kirk AND for AND cameron') ``` ## Query 1: ``` SELECT Name FROM tblName WHERE FREETEXT(Name, 'james') ``` Returns "James Kirk" and "James Cameron". Alright, lets narrow it down... ## Query 2: ``` SELECT Name FROM tblName WHERE FREETEXT(Name, 'james kirk') ``` Guess what. Now you'll get "James Kirk", "James Cameron", and "Kirk For Cameron". Same thing happens for **Query 3**, so let's just skip that. ## Query 4: ``` SELECT Name FROM tblName WHERE CONTAINS(Name, 'james') ``` Same results as Query 1. Okay. Narrow the results maybe...? ## Query 5: ``` SELECT Name FROM tblName WHERE CONTAINS(Name, '"james kirk"') ``` After discovering that you need to enclose the string in double-quotes if there are spaces, I find that this query works great on this particular dataset for the results I desire! Only "James Kirk" is returned. Wonderful! Or is it... ## Query 6: ``` SELECT Name FROM tblName WHERE CONTAINS(Name, '"kirk james"') ``` Crap. No. It is matching that exact phrase. Hmmm... After checking the [syntax for T-SQL's CONTAINS function](http://msdn.microsoft.com/en-us/library/ms187787.aspx), I see that you can throw boolean keywords in there, and it looks like that might be the answer. Let's see... ## Query 7: ``` SELECT Name FROM tblName WHERE CONTAINS(Name, 'james AND kirk') ``` Neat. 
I get all three results, as expected. Now I just write a function to cram the word `AND` between all the words. Done, right? What now... ## Query 8: ``` SELECT Name FROM tblName WHERE CONTAINS(Name, 'kirk AND for AND cameron') ``` This query knows exactly what it's looking for, except for some reason, there are no results. Why? Well after reading about [Stopwords and Stoplists](http://msdn.microsoft.com/en-us/library/ms142551.aspx), I will make an educated guess and say that because I'm asking for the intersection of the index results for "kirk", "for", and "cameron", and the word "for" will not have any results (what with it being a stopword and all), then the result of any intersection with that result is also empty. Whether or not it actually functions like that is irrelevant to me, since that is the observable behavior of the `CONTAINS` function every time I do a boolean search with a stopword in there. So I need a new solution. # Here comes [`NEAR`](http://msdn.microsoft.com/en-us/library/ms142568.aspx) Looks promising. If I can take a user query and put commas between it, this will... wait this is the same thing as using boolean `AND` in `CONTAINS` queries. But does it ignore stopwords correctly? ``` SELECT Name FROM tblName WHERE CONTAINS(Name, 'NEAR(kirk, for, cameron)') ``` Nope. No results. Remove the word "for", and you get all three results again. :( # What now?
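For comparison, the semantics the question is after (every non-stopword term must appear, in any order) can be sketched in a few lines of Python. The stoplist below is illustrative, not SQL Server's actual list:

```python
STOPWORDS = {"for", "and", "or", "the", "a", "an"}  # illustrative stoplist

def matches_all(text: str, query: str) -> bool:
    """Google-style AND search: every non-stopword query term must appear."""
    terms = {t for t in query.lower().split() if t not in STOPWORDS}
    words = set(text.lower().split())
    return terms <= words

names = ["James Kirk", "James Cameron", "Kirk Cameron", "Kirk For Cameron"]
print([n for n in names if matches_all(n, "james kirk")])
# ['James Kirk']
print([n for n in names if matches_all(n, "kirk for cameron")])
# ['Kirk Cameron', 'Kirk For Cameron']
```

Note how dropping "for" *before* intersecting term results is what keeps the third query from returning nothing, which is exactly where the boolean `CONTAINS` approach breaks down.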
I found [another question on here](https://stackoverflow.com/questions/506034/) that deals with this same topic. In fact, the post detailing the method is even titled "[A Google-like Full Text Search](http://www.sqlservercentral.com/articles/Full-Text+Search+%282008%29/64248/)". It uses an open-source library called [Irony](https://irony.codeplex.com/) to parse a user-entered search string and turn it into a FTS-compatible query. Here is the [source code for the latest version](http://irony.codeplex.com/SourceControl/latest#Irony.Samples/FullTextSearchQueryConverter/SearchGrammar.cs) of the Google-like Full-Text Search.
Have you looked at using the Semantic Index functions in SQL Server 2012? They are built on full-text indexes but extend them to include details about word frequency. I used them just recently to build a word cloud and it was really good. There are some good articles to be found on the internet, and you can also search for words that are 'near' each other in documents. I set up the full-text index across 2 nvarchar columns and then enable semantic indexing. These links will get you started, but I think it will give you what you need. [Setting up semantic indexes](http://technet.microsoft.com/en-us/library/gg509116.aspx#HowToEnableAlter) [Some good info](https://www.simple-talk.com/sql/database-administration/exploring-semantic-search-key-term-relevance/)
How do you use T-SQL Full-Text Search to get results like Google?
[ "", "sql", "sql-server", "" ]
I have an XML block that I am sending to my stored procedure. ``` <vehicles> <licensePlate>ABC123</licensePlate> <vehicle> <model>Ford</model> <color>Blue</color> <carPool> <employee> <empID>111</empID> </employee> <employee> <empID>222</empID> </employee> <employee> <empID>333</empID> </employee> </carPool> </vehicle> </vehicles> ``` I then use a select statement to parse out the data that I need from this XML block. ``` INSERT INTO licensePlates (carColor, carModel, licensePlate, empID, dateAdded) SELECT ParamValues.x2.value('color[1]', 'VARCHAR(100)'), ParamValues.x2.value('model[1]', 'VARCHAR(100)'), ParamValues.x2.value('../licensePlate[1]', 'VARCHAR(100)'), @empID, GETDATE() FROM @xmlData.nodes('/vehicles/vehicle') AS ParamValues(x2) ``` I need to store the XML contained within the tag `<carPool>` into a column in this table. So I'm getting this XML block, and need a piece of that to not be parsed and just go directly to the table: ``` <carPool> <employee> <empID>111</empID> </employee> <employee> <empID>222</empID> </employee> <employee> <empID>333</empID> </employee> </carPool> ``` How can I go about doing this? This is an example of what the inserted record would look like. ![enter image description here](https://i.stack.imgur.com/W4qDQ.png)
[You can insert the node directly](https://meta.stackexchange.com/q/185681) ``` INSERT INTO licensePlates (carColor, carModel, licensePlate, empID, dateAdded, carPoolMembers) SELECT ParamValues.x2.value('color[1]', 'VARCHAR(100)'), ParamValues.x2.value('model[1]', 'VARCHAR(100)'), ParamValues.x2.value('../licensePlate[1]', 'VARCHAR(100)'), @empID, GETDATE(), ParamValues.x2.query('./carPool') FROM @xmlData.nodes('/vehicles/vehicle') AS ParamValues(x2) ```
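The `.query('./carPool')` call returns the subtree as XML rather than a scalar. The same extraction, sketched outside the database with Python's standard-library ElementTree, may help when checking the shape of the fragment:

```python
import xml.etree.ElementTree as ET

xml_data = """<vehicles><licensePlate>ABC123</licensePlate>
<vehicle><model>Ford</model><color>Blue</color>
<carPool><employee><empID>111</empID></employee>
<employee><empID>222</empID></employee></carPool>
</vehicle></vehicles>"""

root = ET.fromstring(xml_data)
carpool = root.find("./vehicle/carPool")  # same relative path as .query('./carPool')
fragment = ET.tostring(carpool, encoding="unicode")
print(fragment.startswith("<carPool>"))  # True
```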
Assuming your stored procedure has a parameter called `@Input XML`, you could use this code: ``` INSERT INTO dbo.YourTable(XmlColumn) SELECT @input.query('/vehicles/vehicle/carPool') ``` That should select the `<carPool>` XML tag and insert it into the XML column of your table.
TSQL XML Parsing
[ "", "sql", "sql-server", "xml", "t-sql", "" ]
I have the following table structure: `Value` (stores random integer values) and `Datetime` (stores purchased orders' datetimes). How would I get the average of all `Value` rows across a full day? I'm assuming the query would be something like the following: ``` SELECT count(*) / 1 FROM mytable WHERE DateTime = date(now(), -1 DAY) ```
Looks like a simple `AVG` task: ``` SELECT `datetime`,AVG(`Value`) as AvgValue FROM TableName GROUP BY `datetime` ``` To find average of a specific day: ``` SELECT `datetime`,AVG(`Value`) as AvgValue FROM TableName WHERE `datetime`=@MyDate GROUP BY `datetime` ``` Or Simply: ``` SELECT AVG(`Value`) as AvgValue FROM TableName WHERE `datetime`=@MyDate ``` **Explanation:** `AVG` is an aggregate function used to find the average of a column. Read more [**here**](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_avg).
You can `GROUP BY` the `DATE` part of `DATETIME` and use `AVG` aggregate function to find an average value for each group : ``` SELECT AVG(`Value`) , DATE(`Datetime`) FROM `mytable` GROUP BY DATE(`Datetime`) ```
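The group-by-date approach can be sketched end-to-end with Python's sqlite3; the table and sample values here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (value INTEGER, dt TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [
    (10, "2014-01-01 09:00"), (30, "2014-01-01 17:00"), (50, "2014-01-02 08:00")])
# date() truncates the datetime to its day, so AVG runs once per calendar day
rows = conn.execute(
    "SELECT date(dt), AVG(value) FROM mytable GROUP BY date(dt) ORDER BY 1").fetchall()
print(rows)  # [('2014-01-01', 20.0), ('2014-01-02', 50.0)]
```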
Calculate average column value per day
[ "", "mysql", "sql", "datetime", "" ]
I need to construct a query that returns rows that have certain empty fields. For example, I have 300 records that contain a `Name, Address and City`. If one or more of these fields are empty, the row needs to be returned. If I, for example, have a row that has an empty `City` and a row that has an empty `Address`, both need to be returned. What would be the best way to construct this query? The reason I need this is that I would like to construct a dashboard that shows incomplete records so this information can be added.
``` SELECT * FROM TABLE WHERE Name IS NULL OR Name = '' OR City IS NULL OR City = '' OR [Address] IS NULL OR [Address] = '' ```
Well, we have the `IS NULL OR` approach and `NULLIF`, so I'll take `COALESCE`: ``` SELECT Name , City , Address , ... FROM TABLE WHERE COALESCE(Name,'') = '' OR COALESCE(City,'') = '' OR COALESCE([Address],'') = '' ```
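A runnable sketch of the `COALESCE` variant using sqlite3, with made-up sample rows covering both a NULL and an empty string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (name TEXT, address TEXT, city TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?, ?)", [
    ("Ann", "1 Main St", "Oslo"),   # complete row
    ("Bob", None, "Bergen"),        # NULL address
    ("Cy", "2 High St", "")])       # empty-string city
# COALESCE folds NULL into '' so one comparison catches both cases
incomplete = conn.execute("""
    SELECT name FROM members
    WHERE COALESCE(name, '') = ''
       OR COALESCE(address, '') = ''
       OR COALESCE(city, '') = ''
    ORDER BY name""").fetchall()
print(incomplete)  # [('Bob',), ('Cy',)]
```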
SQL Server Query check empty fields for a number of columns
[ "", "sql", "sql-server", "ssms", "" ]
I'm having trouble building my query; I'm getting an error: Unknown Column j.id in on clause. Here's my query so far ``` SELECT j.id, j.title, venues.name as venueName, ja.completed FROM dbsivcmsnew.jobs AS j, venues LEFT JOIN dbsiv.job_applications AS ja ON ja.jobId = j.id WHERE j.venueId = venues.id AND j.closingDate > UNIX_TIMESTAMP() AND j.active = 1 ORDER BY j.closingDate DESC ``` From what I can see from [here](https://stackoverflow.com/a/13703501/993600) the syntax is correct. However, the JOIN might be unnecessary; what my query will need to return is ``` SELECT j.id, j.title, venues.name as venueName, ja.completed FROM jobs AS j, venues, dbsiv.job_applications AS ja WHERE j.venueId = venues.id AND j.closingDate > UNIX_TIMESTAMP() AND j.active = 1 AND ja.pin = $CurrentUsersId //This needs to be optional though; if no match, ja.completed should be 0 ORDER BY j.closingDate DESC ``` Is it possible to make that WHERE statement optional without using a join?
Well, you're mixing syntaxes for "table joining", which is... not a good idea. Just adding your additional clause in the LEFT JOIN should do the trick ``` SELECT j.id, j.title, v.name as venueName, ja.completed FROM jobs j inner join venues v on v.id = j.venueID left join dbsiv.job_applications ja on ja.jobId = j.id and ja.pin = $CurrentUsersId WHERE j.closingDate > UNIX_TIMESTAMP() AND j.active = 1 ORDER BY j.closingDate DESC ```
Change your select to: ``` SELECT j.id, j.title, venues.name as venueName, ja.completed FROM dbsivcmsnew.jobs AS j JOIN venues ON j.venueId = venues.id LEFT JOIN dbsiv.job_applications AS ja ON ja.jobId = j.id WHERE j.closingDate > UNIX_TIMESTAMP() AND j.active = 1 ORDER BY j.closingDate DESC ``` If you use a join, then use it for all tables in the from statement.
Unknown column error when joining from 2 different databases
[ "", "mysql", "sql", "" ]
I have 4 tables: tab\_1, tab\_2, tab\_3 and tab\_4. How can I get the count of all 4 tables using one single dynamic query? ``` expected result: count of tab_1 = count of tab_2 = count of tab_3 = count of tab_4 = ``` Thanks in advance
It sounds like you want to run four separate queries, not a single query, and you're describing something like ``` DECLARE TYPE tbl_list IS TABLE OF VARCHAR2(30); l_tables tbl_list := tbl_list( 'table_1', 'table_2', 'table_3', 'table_4' ); l_cnt pls_integer; BEGIN FOR i IN 1 .. l_tables.count LOOP EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || l_tables(i) INTO l_cnt; dbms_output.put_line( 'Count of ' || l_tables(i) || ' = ' || l_cnt ); END LOOP; END; ```
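The same loop-over-table-names idea, sketched client-side in Python with sqlite3. The key point carries over from `EXECUTE IMMEDIATE`: identifiers cannot be bound as parameters, so the statement text is built per table (safe here only because the names come from a fixed list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("tab_1", "tab_2", "tab_3", "tab_4"):
    conn.execute(f"CREATE TABLE {name} (x INTEGER)")
conn.executemany("INSERT INTO tab_1 VALUES (?)", [(1,), (2,), (3,)])

counts = {}
for name in ("tab_1", "tab_2", "tab_3", "tab_4"):
    # table names cannot be bound as '?' parameters, hence the string building
    counts[name] = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()[0]
    print(f"count of {name} = {counts[name]}")
```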
Please try: ``` SET SERVEROUTPUT ON DECLARE result1 NUMBER; result2 NUMBER; result3 NUMBER; result4 NUMBER; BEGIN select count(*) into result1 from tab_1; select count(*) into result2 from tab_2; select count(*) into result3 from tab_3; select count(*) into result4 from tab_4; DBMS_OUTPUT.PUT_LINE('count of tab_1=' || result1); DBMS_OUTPUT.PUT_LINE('count of tab_2=' || result2); DBMS_OUTPUT.PUT_LINE('count of tab_3=' || result3); DBMS_OUTPUT.PUT_LINE('count of tab_4=' || result4); END; ```
Dynamic SQL Query using loops
[ "", "sql", "oracle", "dynamic", "count", "" ]
Suppose I have the following table `table`: ``` +---------+---------+ | column1 | column2 | +---------+---------+ | A | B | | A | A | | A | A | | C | D | | B | B | | B | B | +---------+---------+ ``` I'm querying the DB in order to get all the duplicated rows in `table`: ``` SELECT * FROM table WHERE (column1, column2) IN ( SELECT column1, column2 FROM table GROUP BY column1, column2 HAVING COUNT (*) > 1); ``` and getting this: ``` +---------+---------+ | column1 | column2 | +---------+---------+ | A | A | | A | A | | B | B | | B | B | +---------+---------+ ``` Is there a way to isolate each group of duplicated rows, to save them, and then to get the next group? In the end I would have distinct results sets: ``` Result set #1: +---------+---------+ | column1 | column2 | +---------+---------+ | A | A | | A | A | +---------+---------+ Result set #2: +---------+---------+ | column1 | column2 | +---------+---------+ | B | B | | B | B | +---------+---------+ ``` My idea is to create a sequence in order to enumerate each group, something like this: ``` Result set #1: +---------+----------+ | column1 | column2 | +---------+----------+ | A | A1 | | A | A2 | +---------+----------+ Result set #2: +---------+----------+ | column1 | column2 | +---------+----------+ | B | B1 | | B | B2 | +---------+----------+ ``` Thank you.
Does this do what you want? ``` select groupnum, column1, column2, seqnum from (select t.*, count(*) over (partition by column1, column2) as cnt, dense_rank() over (order by column1, column2) as groupnum, row_number() over (partition by column1, column2 order by column1) as seqnum from table t ) t where cnt > 1 order by groupnum; ```
From the logical perspective... ``` A | A A | A ``` ...is the same thing as... ``` A | A | 2 ``` So why not just: ``` SELECT column1, column2, COUNT(*) FROM T GROUP BY column1, column2 HAVING COUNT(*) > 1 ``` ? You'll get a result such as... ``` A | A | 2 B | B | 2 ``` ...in other words: each row represents a whole group. You can then easily "expand" each group in the client code if desired.
Differentiate between groups of duplicated values
[ "", "sql", "database", "oracle", "" ]
I have 2 MySQL tables: ``` TABLE 1: PRODUCTID | BRAND | BASECOLOR | COLORNAME Table 2: PRODUCTID | BRAND | COLORNAME ``` In table 1 the field 'COLORNAME' is empty, and the fields 'PRODUCTID' and 'BRAND' must match between the two tables. I need to move the 'COLORNAME' values from table2 to table1. I've tried this SQL query: ``` INSERT INTO table1 (COLORNAME) SELECT COLORNAME FROM table2 WHERE table1.PRODUCTID = table2.PRODUCTID AND table1.BRAND = table2.BRAND ``` I got this error: *Unknown column 'table1.PRODUCTID' in 'where clause'* I'm new to SQL so I'm a bit lost; I would appreciate some help.
Try this: ``` update table1 tab1, table2 tab2 set tab1.colorname = tab2.colorname where tab1.productid = tab2.productid and tab1.brand = tab2.brand; ```
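The multi-table `UPDATE ... SET ... WHERE` form above is MySQL syntax. A portable alternative is a correlated subquery, sketched here with sqlite3 and made-up sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (productid INT, brand TEXT, basecolor TEXT, colorname TEXT)")
conn.execute("CREATE TABLE table2 (productid INT, brand TEXT, colorname TEXT)")
conn.execute("INSERT INTO table1 VALUES (1, 'Acme', 'blue', NULL)")
conn.execute("INSERT INTO table2 VALUES (1, 'Acme', 'Navy Blue')")
# Correlated subquery: for each table1 row, look up the matching table2 row
conn.execute("""
    UPDATE table1
    SET colorname = (SELECT colorname FROM table2
                     WHERE table2.productid = table1.productid
                       AND table2.brand = table1.brand)""")
result = conn.execute("SELECT colorname FROM table1").fetchone()[0]
print(result)  # Navy Blue
```

Note this form sets `colorname` to NULL for table1 rows with no match; add a `WHERE EXISTS (...)` clause if those rows should be left alone.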
``` INSERT INTO tablel (COLORNAME) (SELECT t2.COLORNAME FROM table2 t2,tablel t1 WHERE t1.PRODUCTID = t2.PRODUCTID AND t1.BRAND = t2.BRAND) ```
Migrating row from one table to another with conditions [MYSQL]
[ "", "mysql", "sql", "database", "" ]
I am using the following Select as part of a bigger query. Can someone here tell me how I can refer to the manually defined name "`amountUSD`" in my Case statement? I am always getting the following error when trying to save it this way: "`Invalid column name 'amountUSD'.`" A work-around would probably be to insert it into a temp table first, but I was hoping I could avoid that. ``` SELECT (CASE WHEN R.currency = 'USD' THEN '1' ELSE E.exchange_rate END) AS exchangeRate, (R.amount * E.exchange_rate) AS amountUSD, ( CASE WHEN amountUSD < 1000 THEN '18' WHEN amountUSD < 5000 THEN '25' WHEN amountUSD < 20000 THEN '27' WHEN amountUSD < 100000 THEN '28' WHEN amountUSD < 250000 THEN '29' WHEN amountUSD < 2000000 THEN '30' WHEN amountUSD < 5000000 THEN '31' END ) AS approvalLevel FROM Exchange_Rates E WHERE E.from_currency = R.currency AND E.to_currency = 'USD' FOR XML PATH(''), ELEMENTS, TYPE ``` Many thanks for any help with this, Tim.
Will this work? ``` SELECT (CASE WHEN R.currency = 'USD' THEN '1' ELSE E.exchange_rate END) AS exchangeRate, (R.amount * E.exchange_rate) AS amountUSD, ( CASE WHEN (R.amount * E.exchange_rate) < 1000 THEN '18' WHEN (R.amount * E.exchange_rate) < 5000 THEN '25' WHEN (R.amount * E.exchange_rate) < 20000 THEN '27' WHEN (R.amount * E.exchange_rate) < 100000 THEN '28' WHEN (R.amount * E.exchange_rate) < 250000 THEN '29' WHEN (R.amount * E.exchange_rate) < 2000000 THEN '30' WHEN (R.amount * E.exchange_rate) < 5000000 THEN '31' END ) AS approvalLevel FROM Exchange_Rates E WHERE E.from_currency = R.currency AND E.to_currency = 'USD' FOR XML PATH(''), ELEMENTS, TYPE ```
You cannot, because `amountUSD` does not exist at this point. It only exists in the output projection. You have 2 choices: * Not use `amountUSD`, or * Not use your table directly, but make a 2-step query: first project `amountUSD`, THEN select over that and put the case there (`approvalLevel`). This is not as hard as it sounds, as you can make a select over another select.
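The two-step query looks like this in practice; sketched with sqlite3, where the inner SELECT stands in for the real exchange-rate join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The inner SELECT projects amountUSD; the outer SELECT can then reference
# it by name in the CASE, because by that point it is a real column.
rows = conn.execute("""
    SELECT amountUSD,
           CASE WHEN amountUSD < 1000 THEN 18
                WHEN amountUSD < 5000 THEN 25
                ELSE 27 END AS approvalLevel
    FROM (SELECT 100 * 9.5 AS amountUSD)""").fetchall()
print(rows)  # [(950.0, 18)]
```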
SQL Server: how to use manually defined column name for case statement
[ "", "sql", "sql-server", "stored-procedures", "case", "" ]
I have the following statement: ``` SELECT DISTINCT COUNT(Z.TITLE) AS COUNT FROM QMFILES.MPRLRREQDP Y, QMFILES.MPRLRTYPP Z WHERE Y.REQUEST_TYPE = Z.ID AND Y.REQUEST_ID = 13033; ``` On this particular result set, if I remove `DISTINCT` and `COUNT()`, the query returns nine rows of the exact same data. If I add `DISTINCT`, I get one row. Adding `COUNT()`, I get a result of nine where I am expecting one. I am assuming the order of operations is affecting my result, but how can I fix this so I get the result I want? NOTE: This is a subselect within a larger SQL statement.
`SELECT DISTINCT COUNT(Z.TITLE)` counts the number of rows with a value for `Z.TITLE` (nine). The `DISTINCT` is superfluous, since the `COUNT` has already aggregated the rows. `SELECT COUNT(DISTINCT Z.TITLE)` counts the number of distinct values in `Z.TITLE` (one).
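The distinction is easy to demonstrate with sqlite3 and nine identical titles:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE z (title TEXT)")
conn.executemany("INSERT INTO z VALUES (?)", [("same title",)] * 9)
# COUNT(title): counts all nine rows; the outer DISTINCT has nothing to dedupe
count_all = conn.execute("SELECT COUNT(title) FROM z").fetchone()[0]
# COUNT(DISTINCT title): dedupes first, then counts
count_distinct = conn.execute("SELECT COUNT(DISTINCT title) FROM z").fetchone()[0]
print(count_all, count_distinct)  # 9 1
```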
The COUNT results in one new record which contains the number of items in the SELECT statement. DISTINCT is looking for the results of the COUNT, which is only that single record after all.
Why would DISTINCT COUNT() return 9 instead of 1?
[ "", "sql", "db2", "ibm-midrange", "" ]
I am new to MySQL development and I'm trying to set up a simple hotel reservation database. One part of the database is the list of tables below: * Hotels * Rooms The relationship is: one hotel contains many rooms, but one room belongs to one specific hotel (one-to-many relationship). So I designed the table like this: ``` CREATE TABLE Hotels ( HotelName CHAR(50) NOT NULL UNIQUE, HotelEmail CHAR(200) NOT NULL, Category CHAR(1) NOT NULL, Street CHAR(200), StreetNumber CHAR(5), PostalCode CHAR(10), City CHAR(20), Country CHAR(30), Phone CHAR(10), PRIMARY KEY (HotelName) ); ``` The `Rooms` table contains the rooms, with the primary key of the table combining `HotelName` and `RoomNumber`. At this point, I want the `auto_increment` (`RoomNumber`) to be reset every time I insert a new `HotelName`, i.e.: ``` Hotel RoomNumber ------ ---------- HotelA 1 HotelA 2 HotelA 3 HotelB 1 HotelB 2 HotelC 1 ``` So I designed the table `Rooms` like this: ``` CREATE TABLE Rooms ( HotelName CHAR(200) NOT NULL, RoomNumber int auto_increment, RoomType CHAR(30), NumOfBeds INT Unsigned, Description TEXT, PRIMARY KEY (HotelName, RoomNumber), FOREIGN KEY (HotelName) References hotels (HotelName) ON UPDATE cascade ); ``` However, when I try to execute the last SQL statement to create the table, I get the message: ``` Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key ``` My question is how to set the combined key on the table so I can insert `RoomNumber` every time I change the `HotelName`?
InnoDB tables do not support auto-increment that tracks different max values per hotel, in the way you describe. InnoDB tracks only one auto-increment value per table. MyISAM tables do support this feature, but that only works because MyISAM does table-level locking, so it is able to scan the table for the largest id value for the respective hotel. Also MyISAM does not support rollback, so it's a bit less likely to produce gaps in the sequence. But in spite of this, [I recommend against using MyISAM](https://stackoverflow.com/questions/20148/myisam-versus-innodb/17706717#17706717). In short: you can make a compound primary key, but you can't use auto-increment to populate it. You have to give specific integers in the INSERT statement.
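If per-hotel numbering is still wanted on InnoDB (or any engine), it has to be computed at insert time, as the last paragraph suggests. A sketch with sqlite3 and a hypothetical `add_room` helper; a real implementation needs a transaction or lock so two concurrent inserts cannot race for the same number:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE rooms (
    hotelname TEXT, roomnumber INTEGER,
    PRIMARY KEY (hotelname, roomnumber))""")

def add_room(hotel: str) -> None:
    """Insert a room with the next per-hotel number (hypothetical helper)."""
    conn.execute("""INSERT INTO rooms (hotelname, roomnumber)
                    SELECT ?, COALESCE(MAX(roomnumber), 0) + 1
                    FROM rooms WHERE hotelname = ?""", (hotel, hotel))

for h in ("HotelA", "HotelA", "HotelA", "HotelB", "HotelB", "HotelC"):
    add_room(h)

rows = conn.execute(
    "SELECT hotelname, roomnumber FROM rooms ORDER BY hotelname, roomnumber").fetchall()
print(rows)
# [('HotelA', 1), ('HotelA', 2), ('HotelA', 3), ('HotelB', 1), ('HotelB', 2), ('HotelC', 1)]
```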
Why is Room Number an auto inc? What about suites, what about hotels where room numbers have gaps? I personally would be tempted by a room\_id auto inc surrogate, but that's another issue and in general a highly opinionated one. Room number is a natural key, it should not be auto-incrementing.
Auto_Increment: how to auto_increment a combined-key (ERROR 1075)
[ "", "mysql", "sql", "" ]
I am building a report in SSRS, and I need a dataset column that calculates the number of leap years between two dates in T-SQL. I found a function that takes a single date parameter and says whether it is a leap year or not, but my requirement needs two date parameters, either as a function or as any T-SQL statement. Thanks.
I thought I would add this as another answer. ``` DECLARE @A DATE = '2008-03-23', @B DATE = '2012-04-20' DECLARE @AM INT, @AY INT, @BM INT, @BY INT SELECT @AM = DATEPART(MONTH,@A), --3 @AY = DATEPART(YEAR,@A), --2008 @BM = DATEPART(MONTH,@B), --4 @BY = DATEPART(YEAR,@B) --2012 DECLARE @Y INT = @AY, @COUNT INT = 0 WHILE (@Y <= @BY) BEGIN SET @COUNT = @COUNT + (CASE WHEN (@Y%4 = 0 AND @Y%100 != 0) OR @Y%400 = 0 THEN 1 ELSE 0 END) SET @Y = @Y + 1 END -- don't count the start year's 29th February if the range begins after it SET @COUNT = @COUNT + CASE WHEN @AM >= 3 AND ((@AY%4 = 0 AND @AY%100 != 0) OR @AY%400 = 0) THEN -1 ELSE 0 END SELECT @A BEGIN_DATE, @B END_DATE, @COUNT NO_OF_LEAP_YEARS ``` As I don't have an instance of SQL Server available right now, I did not test the code, but it should give you an idea of what I was trying to achieve. I declared @BM in case you want to do the same checking with the end month too.
Number of leap days between two dates. ``` DECLARE @StartDate DATETIME = '2000-02-28', @EndDate DATETIME = '2017-02-28' SELECT ((CONVERT(INT,@EndDate-58)) / 1461 - (CONVERT(INT,@StartDate-58)) / 1461) ``` -58 to start counting from 1st March 1900 and / 1461 being the number of days between 29th Februaries. NOTE: in Excel, the -58 would be -60 as 1st Jan 1900 in Excel is day 1 but in SQL is day zero and SQL doesn't recognise 29th Feb 1900 whereas Excel does. ALSO NOTE: This formula will go wrong at non-leap century years (2100, 2200, 2300, ...), since it assumes a leap year every four years. Hope this helps someone.
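As a cross-check for either T-SQL approach: if "leap years between two dates" means years whose 29th February falls inside the range, the expected values can be computed in Python:

```python
from calendar import isleap
from datetime import date

def leap_years_between(a: date, b: date) -> int:
    """Count years whose 29th February falls inside [a, b] (hypothetical helper)."""
    return sum(1 for y in range(a.year, b.year + 1)
               if isleap(y) and a <= date(y, 2, 29) <= b)

print(leap_years_between(date(2008, 3, 23), date(2012, 4, 20)))  # 1 (only 2012)
print(leap_years_between(date(2000, 2, 28), date(2017, 2, 28)))  # 5
```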
how to calculate number of leap years between two dates in t-sql?
[ "", "sql", "sql-server", "t-sql", "reporting-services", "" ]
The solution to this is eluding me. I have a table looking like this (besides other fields that have nothing to do with my question): NAME, CARDNUMBER, MEMBERTYPE. Now, I want a view that shows rows where the cardnumber AND membertype are identical. Both of these fields are integers; Name is VARCHAR. Name is not unique, and duplicate cardnumber/membertype pairs should show for the same name as well. I.e. if the following was the table: ``` JOHN | 324 | 2 PETER | 642 | 1 MARK | 324 | 2 DIANNA | 753 | 2 SPIDERMAN | 642 | 1 JAMIE FOXX | 235 | 6 ``` I would want: ``` JOHN | 324 | 2 MARK | 324 | 2 PETER | 642 | 1 SPIDERMAN | 642 | 1 ``` This could just be sorted by cardnumber to make it useful to humans. What's the most efficient way of doing this?
Since you mentioned names can be duplicated, and that a duplicate name still means a different person and should show up in the result set, we need to use a GROUP BY HAVING COUNT(\*) > 1 in order to truly detect dupes. Then join this back to the main table to get your full result list. Also, since from your comments it sounds like you are wrapping this into a view, you'll need to separate out the subquery. ``` CREATE VIEW DUP_CARDS AS SELECT CARDNUMBER, MEMBERTYPE FROM mytable t2 GROUP BY CARDNUMBER, MEMBERTYPE HAVING COUNT(*) > 1 CREATE VIEW DUP_ROWS AS SELECT t1.* FROM mytable AS t1 INNER JOIN DUP_CARDS AS DUP ON (T1.CARDNUMBER = DUP.CARDNUMBER AND T1.MEMBERTYPE = DUP.MEMBERTYPE ) ``` [SQL Fiddle Example](http://www.sqlfiddle.com/#!8/37b49)
> What's the most efficient way of doing this? I believe a `JOIN` will be more efficient than `EXISTS` ``` SELECT t1.* FROM myTable t1 JOIN ( SELECT cardnumber, membertype FROM myTable GROUP BY cardnumber, membertype HAVING COUNT(*) > 1 ) t2 ON t1.cardnumber = t2.cardnumber AND t1.membertype = t2.membertype ``` Query plan: <http://www.sqlfiddle.com/#!2/0abe3/1>
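The join-back pattern used by both answers, runnable against the question's sample data via sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (name TEXT, cardnumber INT, membertype INT)")
conn.executemany("INSERT INTO members VALUES (?, ?, ?)", [
    ("JOHN", 324, 2), ("PETER", 642, 1), ("MARK", 324, 2),
    ("DIANNA", 753, 2), ("SPIDERMAN", 642, 1), ("JAMIE FOXX", 235, 6)])
rows = conn.execute("""
    SELECT t1.name, t1.cardnumber, t1.membertype
    FROM members t1
    JOIN (SELECT cardnumber, membertype
          FROM members
          GROUP BY cardnumber, membertype
          HAVING COUNT(*) > 1) t2
      ON t1.cardnumber = t2.cardnumber AND t1.membertype = t2.membertype
    ORDER BY t1.cardnumber, t1.name""").fetchall()
print(rows)
# [('JOHN', 324, 2), ('MARK', 324, 2), ('PETER', 642, 1), ('SPIDERMAN', 642, 1)]
```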
SQL - select rows that have the same value in two columns
[ "", "mysql", "sql", "join", "" ]
As I understand it, I have a denormalized table. Here is part of the table's column list: ``` ... C, F, T, C1, F1, T1, .... C8, T8, F8..... ``` Is it possible to select those values as rows? Something like this: ``` C, F, T C1, F1, T1 ...... C8, F8, T8 ```
You can do it easily with a `union all`: ``` select C, F, T from table t union all select C1, F1, T1 from table t union all . . . select C8, F8, T8 from table t; ``` Note the use of `union all` instead of `union`. `union` does automatic duplicate elimination, so you might not get all your values with `union` (as well as it being a more expensive operation). This will generally result in the table being scanned 9 times. If you have a large table, there are other methods that are likely to be more efficient. EDIT: A more efficient method is likely to be a `cross join` and `case`. In DB2, I think this would be: ``` select (case n.n when 0 then C when 1 then C1 . . . when 8 then C8 end) as C, (case n.n when 0 then F when 1 then F1 . . . when 8 then F8 end) as F, (case n.n when 0 then T when 1 then T1 . . . when 8 then T8 end) as T from table t cross join (select 0 as n from sysibm.sysdummy1 union all select 1 from sysibm.sysdummy1 union all . . . select 8 from sysibm.sysdummy1 ) n; ``` This may seem like more work, but it should only be reading the bigger table once, with the rest of the work being in-memory operations.
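The `union all` unpivot is easy to verify on a toy table with sqlite3 (two column groups instead of nine, to keep the demo short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wide (c TEXT, f TEXT, t TEXT, c1 TEXT, f1 TEXT, t1 TEXT)")
conn.execute("INSERT INTO wide VALUES ('a', 'b', 'c', 'a1', 'b1', 'c1')")
# Each SELECT contributes one row per source row; UNION ALL stacks them
rows = conn.execute("""
    SELECT c, f, t FROM wide
    UNION ALL
    SELECT c1, f1, t1 FROM wide""").fetchall()
print(rows)  # [('a', 'b', 'c'), ('a1', 'b1', 'c1')]
```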
``` select c,f,t from table union all select c1,f1,t1 from table union all select c8,f8,t8 from table ``` Make sure to filter each SELECT statement with a WHERE clause.
Denormalized table. SQL Select
[ "", "sql", "db2", "" ]
I want to show a blank '' instead of a zero value in an expression field, where exp is of INT datatype. Whenever I try to use `(case when exp1 is 0 then '' else exp1 end as exp1)` it still gives 0 as output. Any help appreciated. Thanks
If `exp` is a numeric type you'll need to convert it to a string using [`CAST` or `CONVERT`](http://msdn.microsoft.com/en-us/library/ms187928.aspx). Also, I don't believe `exp1 is 0` will work; I think you're looking for `exp1 = 0` instead. Try something like this this: ``` (case when exp1 = 0 then '' else cast(exp1 as varchar(30)) end) as exp1 (case when exp1 = 0 then '' else convert(varchar(30), exp1) end) as exp1 ``` Or using a simple `CASE` expression, like this: ``` (case exp1 when 0 then '' else cast(exp1 as varchar(30)) end) as exp1 (case exp1 when 0 then '' else convert(varchar(30), exp1) end) as exp1 ``` Note: The default length for `varchar` and `nvarchar` in `CAST` and `CONVERT` is 30, so `cast(exp1 as varchar)` or `convert(varchar, exp1)` would work as well, but as a matter of practice it's best to specify the lengths of these types whenever you use them. --- However, if what you'd rather do is convert the value 0 to `null`, it's fairly easy. Just use [`NULLIF`](http://msdn.microsoft.com/en-us/library/ms177562.aspx): ``` nullif(exp1, 0) exp1 ``` This will return `NULL` if `exp1` evaluates to 0, otherwise it will return the value of `exp1`. When you're inserting this value into a table, make sure the column you are inserting it into is nullable. If you're not familiar with using null, see the [Wikipedia article](http://en.wikipedia.org/wiki/Null_%28SQL%29) on the topic for more information.
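Both the `CASE`/`CAST` and the `NULLIF` variants can be sketched with sqlite3, where `CAST(... AS TEXT)` plays the role of the `varchar` conversion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    SELECT v,
           CASE WHEN v = 0 THEN '' ELSE CAST(v AS TEXT) END AS display,
           NULLIF(v, 0) AS nulled
    FROM (SELECT 0 AS v UNION ALL SELECT 42)
    ORDER BY v""").fetchall()
print(rows)  # [(0, '', None), (42, '42', 42)]
```

Without the cast, the integer branch would force the whole column back to a numeric type, which is exactly why the question's original attempt kept producing 0.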
I used RegEx ``` { cast(trim( replace(to_char(value),'0','' )) as int) } ```
replace 0 with blank spaces
[ "", "sql", "sql-server", "t-sql", "" ]
I am having trouble with some logic here. I am trying to get a count of rows where S.ID is not in my subquery. ``` COUNT(CASE WHEN S.ID IN (SELECT DISTINCT S.ID FROM...) THEN 1 ELSE 0 END) ``` I am receiving the error: ``` Cannot perform an aggregate function on an expression containing an aggregate or a subquery. ``` How can I fix this, or is there an alternative?
Maybe something like this? ``` SELECT COUNT(*) FROM .... WHERE ID NOT IN (SELECT DISTINCT ID FROM ...) ```
Using `EXISTS`: ``` SELECT COUNT(*) FROM table1 t WHERE NOT EXISTS (SELECT * FROM table2 WHERE ID = t.ID) ```
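The `NOT EXISTS` counting pattern, sketched with sqlite3 on made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE perm (id INTEGER)")
conn.execute("CREATE TABLE sub (id INTEGER)")
conn.executemany("INSERT INTO perm VALUES (?)", [(1,), (2,), (3,), (4,)])
conn.executemany("INSERT INTO sub VALUES (?)", [(2,), (4,)])
# Count perm rows whose id has no match in the other table
missing, = conn.execute("""
    SELECT COUNT(*) FROM perm
    WHERE NOT EXISTS (SELECT 1 FROM sub WHERE sub.id = perm.id)""").fetchone()
print(missing)  # 2
```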
TSQL Count Case When In
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I'm looking at the documentation for the Play framework, and it appears as though [SQL is being written in the framework](https://github.com/playframework/playframework/blob/master/documentation/manual/scalaGuide/tutorials/todolist/ScalaTodoList.md#persist-the-tasks-in-a-database). Coming from Rails, I know this as a bad practice, mainly because developers may need to switch databases as they scale. What are the practices in Play that allow developers to implement conventions and to work with databases without having to hard-code SQL?
One of the feature/defects (depending on who you ask) of Play is that **there is no ORM**. (An ORM is an object-relational mapper/mapping; it is the part of Rails, Django, etc. that writes your SQL for you.) * Pro-ORM: You don't have to write any SQL. + Ease-of-use: Some developers unused to SQL will find this easier. + Code reuse: Tables are usually based on your classes; there is less duplication of code. + Portability: ORMs try to smooth over any differences between DBMS vendors. * No-ORM: You get to write your own SQL, and not rely on unseen ORM (black)magic. + Performance: I work for a company which produces high-traffic web applications. With millions of visitors, you need to know *exactly* what queries you are running, and *exactly* what indices you are using. Otherwise, the code that worked so well in dev will crash production. + Flexibility: ORMs often do not have the full range of expression that a domain-specific language like SQL does. Some more complex sub-selects and aggregation queries will be difficult/impossible to write with ORMs. + While you may think "developers require the need to switch databases as they scale", if you scale enough to change your DBMS, *changing query syntax will be the least of your scalability issues*. Often, the queries themselves will have to be rewritten to use **sharding**, etc., at which point the ORM is dead. It is a tradeoff; one that in my experience has often favored no ORM. See the [Anorm](http://www.playframework.com/documentation/2.0/ScalaAnorm) page for Play's justification of this decision: > **You don’t need another DSL to access relational databases** > > SQL is already the best DSL for accessing relational databases. We don’t need to invent something new. > > ... --- Play developers will typically write their own SQL (much the same way they will write in other languages, like HTML), use [Anorm](http://www.playframework.com/documentation/2.0/ScalaAnorm), and follow common SQL conventions. 
If portability is a requirement, use only [ANSI SQL](http://en.wikipedia.org/wiki/SQL-92) (no vendor-specific features). It is generally well supported. --- EDIT: Or if you are really open-minded, you might have a look at NoSQL databases, like Mongo. They are inherently object-based, so object-oriented Ruby/Python/Scala can be used as the API naturally.
In addition to Paul Draper's excellent answer, this post is meant to tell you about what Play developers usually do in practice. **TL;DR: [use](https://github.com/playframework/play-slick) [Slick](http://slick.typesafe.com/)** Play is less opinionated than Rails and gives the user many more choices for their data storage backend. Many people use Play as a web layer for very complex existing backend systems. Many people use Play with a NoSQL storage backend (e.g. MongoDB). Then there are also people using Play for traditional web-service-with-SQL applications. Generalizing a bit too much, one can recognize two kinds of people using Play with relational databases. **"Traditional web developers"** They are used to standard Java technologies or are part of an organization that uses them. The Java Persistence API and its implementations (Hibernate, EclipseLink, etc...) are their ORM. You can do so too. There also appear to be [Scala ORMs](http://sorm-framework.org/), but I'm less familiar with those. Note that Java/Scala ORMs are still different ORMs *in style* when compared to Rails' ActiveRecord. Ruby is a dynamic language that allows/promotes loads of monkey patching and `method_missing` stuff, so there is `MyObject.finder_that_I_didnt_necessarily_have_to_write_myself()`. This style of ORM is called the [active record pattern](http://en.wikipedia.org/wiki/Active_record_pattern). This style is impossible to accomplish in pure Java and discouraged in Scala (as it violates type safety), so you have to get used to writing a more traditional style using service layers and data access objects. **"Scala web developers"** Many Scala people think that ORMs are [a bad abstraction for database access](http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch). They also agree that using raw SQL is a bad idea, and that an abstraction is still in order. 
Luckily Scala is an expressive compiled language, and people have found a way to abstract database access that does not rely on the *object oriented language* paradigm as ORMs do, but mostly on the *functional language* paradigm. This is quite a similar idea to the [LINQ](http://en.wikipedia.org/wiki/LINQ) query language Microsoft has made for their .NET framework languages. The core idea is that you don't have an ORM to perform query magic, nor write queries in SQL, but write them in *Scala itself*. The advantages of this approach are twofold: 1. You get finer-grained control over what your queries actually execute when compared to ORMs. 2. Queries are checked for validity by the Scala compiler, so you don't get runtime errors for invalid SQL you wrote yourself. If it is valid Scala, it is translated to a valid SQL statement for you. Two major libraries exist for accomplishing this. The first is [Squeryl](http://squeryl.org/). The second is [Slick](http://slick.typesafe.com/). Slick appears to be the most popular one, and there are some examples floating around the web that show how you are supposed to make it work with Play. Also check out [this video](http://parleys.com/play/51c2e20de4b0d38b54f46243/) that serves as an introduction to Slick and which compares it to the ORM approach.
Writing SQL in Play with Scala?
[ "sql", "scala", "orm", "playframework" ]
I am trying to get the following result in SQL. Example: 23/05/2014 to 20140523, but I get this: ``` select convert(decimal, convert(datetime, '5/23/2014')) ``` result: 41780 Does anyone know how to get this format (if possible)? Regards and thanks
In many databases (including SQL Server), you can just do: ``` select 10000 * year(datetime) + 100 * month(datetime) + day(datetime) ``` Some databases don't support these functions, so you might need `extract(datepart from datetime)` or a similar function.
You can try this ``` SELECT CAST(CONVERT(VARCHAR(8), datetime, 112) AS DECIMAL) ```
convert datetime (mm/dd/YYYY) to decimal(YYYYmmDD)
[ "sql", "sql-server-2008" ]
I have a table with PO#, Days\_to\_travel, and Days\_in\_warehouse fields. I take the distinct Days\_in\_warehouse values in the table and insert them into a temp table. I want a script that will insert all of the values in the Days\_in\_warehouse field from the temp table into the Days\_in\_warehouse\_batch column in Table 1 by PO#, duplicating the PO records until all of the POs have a record per distinct value. Example: Temp table: (Contains only one field with all distinct values in table 1) ``` Days_in_warehouse 20 30 40 ``` Table 1 : ``` PO# Days_to_travel Days_in_warehouse Days_in_warehouse_batch 1 10 20 2 5 30 3 7 40 ``` Updated Table 1: ``` PO# Days_to_travel Days_in_warehouse Days_in_warehouse_batch 1 10 20 20 1 10 20 30 1 10 20 40 2 5 30 20 2 5 30 30 2 5 30 40 3 7 40 20 3 7 40 30 3 7 40 40 ``` Any ideas as to how I can update Table 1 to get the desired results?
One more way without a TEMP table and DELETE. ``` UPDATE T SET [Days_in_warehouse_batch] = [Days_in_warehouse]; INSERT INTO T ([PO], [Days_to_travel], [Days_in_warehouse], [Days_in_warehouse_batch]) SELECT T.PO, T.Days_to_travel, T.Days_in_warehouse, DAYS_Table.Days_in_warehouse FROM T CROSS JOIN (SELECT DISTINCT Days_in_warehouse FROM T) as DAYS_Table WHERE T.Days_in_warehouse <> DAYS_Table.Days_in_warehouse; ``` `SQLFiddle demo`
What you're looking for is the `cartesian product` between your two tables. ``` select t1.po, t1.daystotravel, t1.daysinwarehouse, temp.daysinwarehousebatch from table1 t1, temp ``` The easiest way I can think of updating table1 with these values is to insert them, and then delete the originals. ``` insert into table1 select t1.po, t1.daystotravel, t1.daysinwarehouse, temp.daysinwarehousebatch from table1 t1, temp ``` And then delete the originals: ``` delete from table1 where daysinwarehousebatch is null ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/ca254/1)
Insert rows based on number of distinct values in another table in SQL
[ "sql", "sql-server", "t-sql" ]
I have 2 tables, one called `TEMP` and the other called `MAIN`. All I am trying to do is check whether all records from the `TEMP` table are in the `MAIN` table. The logic should be that all records from the temp table must also be in the main table, but when I run the SQL query here it does not give me any records, and I know the main table is missing records that are in the temp table. What am I doing wrong here? ``` IF EXISTS(SELECT DISTINCT GRP_NM ,GRP_VAL FROM TEMP WHERE GRP_NM + GRP_VAL NOT IN (SELECT GRP_NM + GRP_VAL FROM MAIN) ) BEGIN INSERT INTO MAIN(GRP_NM, GRP_VAL ) SELECT GRP_NM ,GRP_VAL FROM MAIN WHERE GRP_NM + GRP_VAL NOT IN (SELECT GRP_NM + GRP_VAL FROM MAIN) END ```
I suspect your problem has something to do with NULLs. If either GRP\_NM or GRP\_VAL is null in either table, then GRP\_NM + GRP\_VAL will be null, and your IN and EXISTS statements get totally bollixed up. (Also note that your INSERT selects `FROM MAIN` rather than `FROM TEMP`, so it can never pick up the missing rows.) In any case, try this one out: ``` INSERT MAIN (GRP_NM, GRP_VAL) select GRP_NM, GRP_VAL from TEMP except select GRP_NM, GRP_VAL from MAIN ```
``` INSERT INTO MAIN(GRP_NM, GRP_VAL ) SELECT GRP_NM, GRP_VAL FROM TEMP WHERE NOT EXISTS (SELECT 1 FROM MAIN WHERE GRP_NM = TEMP.GRP_NM AND GRP_VAL = TEMP.GRP_VAL ) ```
issues using if exists statement in sql
[ "sql", "sql-server", "t-sql" ]
I have a strange bug here. I'm trying to limit the result from the query below to just ONE row/result: ``` SELECT userEmail FROM [LunchUsersAvailable] WHERE LunchGroupsID = '301' AND availableDate = '2015-01-01' AND userEmail != 'userEmail@company.com' EXCEPT ( SELECT DISTINCT (COALESCE([user1], [user2])) AS matchedEmail FROM ( SELECT [user1], [user2] FROM [LunchMatched] UNION ALL SELECT [user2], [user1] FROM [LunchMatched] ) AS tmp WHERE (user1 = 'userEmail@company.com' OR user2 = 'userEmail@company.com') ) ``` **OUTPUT:** test2@company.com test3@company.com test4@company.com --- I tried using **TOP 1**. Didn't work. No results at all. ``` SELECT TOP 1 userEmail FROM [LunchUsersAvailable] WHERE LunchGroupsID = '301' AND availableDate = '2015-01-01' AND userEmail != 'userEmail@company.com' EXCEPT ( SELECT DISTINCT (COALESCE([user1], [user2])) AS matchedEmail FROM ( SELECT [user1], [user2] FROM [LunchMatched] UNION ALL SELECT [user2], [user1] FROM [LunchMatched] ) AS tmp WHERE (user1 = 'userEmail@company.com' OR user2 = 'userEmail@company.com') ) ``` --- I tested with **MIN()** as well. No results. ``` SELECT MIN (userEmail) as email FROM [LunchUsersAvailable] WHERE LunchGroupsID = '301' AND availableDate = '2015-01-01' AND userEmail != 'userEmail@company.com' EXCEPT ( SELECT DISTINCT (COALESCE([user1], [user2])) AS matchedEmail FROM ( SELECT [user1], [user2] FROM [LunchMatched] UNION ALL SELECT [user2], [user1] FROM [LunchMatched] ) AS tmp WHERE (user1 = 'userEmail@company.com' OR user2 = 'userEmail@company.com') ) ``` --- I tested with **MAX()** just in case. 
**It works!** ``` SELECT MAX (userEmail) as email FROM [LunchUsersAvailable] WHERE LunchGroupsID = '301' AND availableDate = '2015-01-01' AND userEmail != 'userEmail@company.com' EXCEPT ( SELECT DISTINCT (COALESCE([user1], [user2])) AS matchedEmail FROM ( SELECT [user1], [user2] FROM [LunchMatched] UNION ALL SELECT [user2], [user1] FROM [LunchMatched] ) AS tmp WHERE (user1 = 'userEmail@company.com' OR user2 = 'userEmail@company.com') ) ``` **OUTPUT:** test4@company.com --- In summary, my question is: how do I get the **test2@company.com** as a result? --- **UPDATE WITH MORE TESTING:** I've done two different tests. One works, the other one doesn't. I'll show each one in detail. First, the test where everything works fine: I want to explain each SELECT individually and the result for each part individually: ``` -- this outer SELECT lists all the users who are available SELECT userEmail FROM [LunchUsersAvailable] WHERE LunchGroupsID = '301' AND availableDate = '2015-01-01' AND userEmail != 'userEmail@company.com' ``` **OUTPUT:** test1@company.com test2@company.com test3@company.com test4@company.com (4 row(s) affected) --- ``` -- this inner SELECT lists all the users who have been paired with userEmail@company.com before SELECT DISTINCT (COALESCE([user1], [user2])) AS matchedEmail FROM ( SELECT [user1], [user2] FROM [LunchMatched] UNION ALL SELECT [user2], [user1] FROM [LunchMatched] ) AS tmp WHERE (user1 = 'userEmail@company.com' OR user2 = 'userEmail@company.com') ``` **OUTPUT:** userEmail@company.com test2@company.com (2 row(s) affected) --- Running the full code, **without** the TOP 1, MIN() or MAX(): test1@company.com test3@company.com test4@company.com (3 row(s) affected) --- This is **all correct** so far. 
Now I need to get only the **test1@company.com** part so I change the first line of the code from SELECT userEmail FROM [LunchUsersAvailable] to SELECT TOP 1 userEmail FROM [LunchUsersAvailable] The result is: test1@company.com (1 row(s) affected) --- This is working **absolutely** fine. But now, if the *userEmail@company.com* has been previously matched with the **test1@company.com instead of the test2@company.com** ([LunchMatched] table) the result is different. The INNER, OUTER and the FULL code (without TOP 1, MIN or MAX) works the same way as explained above, which is fine. **BUT**, if I change the first line of the code from SELECT userEmail FROM [LunchUsersAvailable] to SELECT TOP 1 userEmail FROM [LunchUsersAvailable] The result is **null** and not the correct one: test2@company.com (1 row(s) affected)
I got it! The TOP 1 or MIN is processed **before** the EXCEPT, which means that it will first get the TOP 1 from the available users result and only then apply the EXCEPT. **Solution**: get the full list of available users, apply the EXCEPT, and only then get TOP 1 ``` SELECT TOP 1 useremail FROM ( SELECT userEmail FROM [LunchUsersAvailable] WHERE LunchGroupsID = '301' AND availableDate = '2015-01-01' AND userEmail != 'userEmail@company.com' EXCEPT -- from a specific cafeteria and date, EXCEPT ( SELECT DISTINCT (COALESCE([user1], [user2])) AS matchedEmail -- select all matched users in one column FROM ( SELECT [user1], [user2] FROM [LunchMatched] UNION ALL SELECT [user2], [user1] FROM [LunchMatched] ) AS tmp WHERE (user1 = 'userEmail@company.com' OR user2 = 'userEmail@company.com') -- who has been previously matched with the user who is registering now ) ) as tmp ```
This suggests that `userEmail` takes on multiple values, one of which is the empty string (`''` -- not `NULL`). The first row returned by `userEmail` has this empty value. The `min()` would capture it. The `max()` doesn't. Put in a check that `userEmail <> ''` or just use `max()`. If you want the `min()`, you could also use conditional aggregation with a `case`: ``` select min(case when userEmail > '' then userEmail end) ``` EDIT: I see the problem. The `top 1`, `min()` and `max()` refer to the first query before the `except`. I had missed the `except` because it is scrolled off the end of the line. `except` works like `union`, connecting queries together. It is not part of the `where` clause.
SQL - TOP 1 and MIN doesn't work but MAX works
[ "sql", "max", "min" ]
I need to copy a table into a new table on SQL server 2008. Also, add a new column into the new table. The values of the new column depends on the compare result between the new table and another table. Example, Table1: ``` col1 col2 col3 abc 346 6546 hth 549 974 ``` Expected Table1\_new: ``` col1 col2 col3 col4 abc 346 6546 1 hth 549 974 0 ``` Table2: ``` col1 abc sfsdf ``` If Table2's col1 appear in Table1 col1, mark col4 as 1 in Table1\_new, else mark as 0. The code does not work ``` SELECT *, ( SELECT 1 as col4 FROM Table2 as a INNER JOIN Table1 as b on b.col1 = a.col1 SELECT 0 as col4 FROM Table2 as a INNER JOIN Table1 as b on b.col1 <> a.col1 # I do not know how to do this !!! ) INTO table1_new FROM table1 ``` Any help would be appreciated.
You could use an outer join: ``` SELECT table1.col1, col2, col3, CASE WHEN table2.col1 IS NULL THEN 0 ELSE 1 END AS col4 INTO table1_new FROM table1 LEFT OUTER JOIN table2 ON table1.col1 = table2.col1 ```
You can do this in several ways. The following uses an `exists` clause in a `case` statement: ``` insert into table1_new(col1, col2, col3, col4) select col1, col2, col3, (case when exists (select 1 from table2 t2 where t2.col1 = t1.col1) then 1 else 0 end) from table1 t1; ``` You can also do this with a `left outer join`, but you run the risk of duplicates if `t2` has duplicates.
copy a table into a new table and add a new column on SQL server 2008
[ "sql", "sql-server", "sql-server-2008" ]
I'm trying to convert a number to a decimal with two decimals places. ``` SELECT CONVERT(DECIMAL(10,2),12345) ``` The above would return 12345.00 but I'm trying to achieve 123.45
You need something like this: ``` SELECT CONVERT(DECIMAL(15,2),12345/100.0) ```
``` SELECT CONVERT(DECIMAL(10,2),CAST(12345 as float)/CAST(100 as float)) ```
SQL - Convert number to decimal
[ "sql", "sql-server", "t-sql" ]
I have a table that looks like this: ``` memberno(int)|member_mouth (varchar)|Inspected_Date (varchar) ----------------------------------------------------------------------------- 12 |'1;2;3;4;5;6;7' |'12-01-01;12-02-02;12-03-03' [7 members] ``` So by looking at how this table has been structured (poorly, yes): The value in the `member_mouth` field is a string delimited by ";". The value in the `Inspected_Date` field is a string delimited by ";". So, for each delimited value in `member_mouth` there is a corresponding `Inspected_Date` value delimited inside the string. This table has about 4 million records; we have an application written in C# that normalizes the data and stores it in a separate table. The problem now is that, because of the size of the table, it takes a long time to process. (The example above is nothing compared to the actual table; it's much larger and has a couple of those string "array" fields.) My question is this: What would be the best and fastest way to normalize this data in an MSSQL proc, letting MSSQL do the work and not a C# app?
The best way is SQL itself. The approach in the code below is something that worked well for me with 2-3 lakh (200,000-300,000) rows. I am not sure how it behaves with 4 million rows, but it may help. ``` Declare @table table (memberno int, member_mouth varchar(100),Inspected_Date varchar(400)) Insert into @table Values (12,'1;2;3;4;5;6;7','12-01-01;12-02-02;12-03-03;12-04-04;12-05-05;12-07-07;12-08-08'), (14,'1','12-01-01'), (19,'1;5;8;9;10;11;19','12-01-01;12-02-02;12-03-03;12-04-04;12-07-07;12-10-10;12-12-12') Declare @tableDest table (memberno int, member_mouth varchar(100),Inspected_Date varchar(400)) ``` The table looks like this: ``` Select * from @table ``` ![enter image description here](https://i.stack.imgur.com/IDwxL.png) Here is the code: ``` ------------------------------------------ Declare @max_len int, @count int = 1 Set @max_len = (Select max(Len(member_mouth) - len(Replace(member_mouth,';','')) + 1) From @table) While @count <= @max_len begin Insert into @tableDest Select memberno, SUBSTRING(member_mouth,1,charindex(';',member_mouth)-1), SUBSTRING(Inspected_Date,1,charindex(';',Inspected_Date)-1) from @table Where charindex(';',member_mouth) > 0 union Select memberno, member_mouth, Inspected_Date from @table Where charindex(';',member_mouth) = 0 Delete from @table Where charindex(';',member_mouth) = 0 Update @table Set member_mouth = SUBSTRING(member_mouth,charindex(';',member_mouth)+1,len(member_mouth)), Inspected_Date = SUBSTRING(Inspected_Date,charindex(';',Inspected_Date)+1,len(Inspected_Date)) Where charindex(';',member_mouth) > 0 Set @count = @count + 1 End ------------------------------------------ Select * from @tableDest Order By memberno ------------------------------------------ ``` **Result.** ![enter image description here](https://i.stack.imgur.com/byy2K.png)
You can take a reference here. [Splitting delimited values in a SQL column into multiple rows](https://stackoverflow.com/questions/11018076/splitting-delimited-values-in-a-sql-column-into-multiple-rows)
comparable varchar "arrays" in separate fields but on same row
[ "sql", "sql-server", "stored-procedures", "database-normalization", "denormalization" ]
I want to select distinct based on two columns so that if either of the two columns has the same value, then it will only post that row once. If it helps you understand, I am doing a sender/receiver thing and I only want to pull one row (whether it be sender or receiver). Here is how I want the MySQL to work: ``` SELECT DISTINCT(sender AND receiver), message, timestamp FROM messages WHERE receiver=receiver_name OR sender=sender_name; ``` Example: Messages Table: ``` id| sender |receiver | message | timestamp 1 | Jeffrey | Michael | Hey man, what's up | 12:00 PM 2 | Michael | Jeffrey | Not much. How are you? | 12:02 PM ``` Result after SQL query: Latest message from Michael: Not much. How are you? Time: 12:02 PM I hope that makes some sense
From the OP's comment on Gordon Linoff's answer, it seems he wants to retrieve the last message of every conversation between himself and someone else; in that case either the sender or the receiver will be him ``` SET @myname = 'Michael' -- just put your name here SELECT m.* FROM messages m INNER JOIN (SELECT MAX(ID) ID FROM messages WHERE @myname IN (receiver, sender) GROUP BY COALESCE(NULLIF(receiver, @myname), sender) ) a ON m.ID = a.ID ``` The formula `COALESCE(NULLIF(receiver, @myname), sender)` will always resolve to the other name: * if the OP is the sender then `NULLIF(receiver, @myname)` will return the receiver, and so will the `COALESCE` * if the OP is the receiver then `NULLIF(receiver, @myname)` will return `NULL` and `COALESCE` will return the sender
You need to use the concat function. ``` SELECT distinct concat(sender,receiver) FROM messages WHERE receiver=receiver_name OR sender=sender_name; ``` See [here](http://www.mysql.com/about/contact/) for more info.
How do I use an AND statement with SELECT DISTINCT?
[ "mysql", "sql" ]
I want to pull data from table t ``` select (select round(avg(High),2) from t where High!=0)as High, (select round(avg(Med),2) from t where Med!=0) as Med, (select round(avg(Low),2) from t where Low!=0) as Low ``` and my table t is ``` select AAAA as High, BBBB as Med, CCCC as Low from TABLE1 where company like '%3m%' union all select (select round(XXX,2) from TABLE2 where company like '%3m%' and XXX!=0)as High, (select round(YYY,2) from TABLE2 where company like '%3m%' and YYY!=0) as Med, (select round(ZZZ,2) from TABLE2 where company like '%3m%' and ZZZ!=0) as Low ``` I wonder how I can use a query to pull data from table t without using the subqueries repetitively (originally I don't have t)? What I'm really trying to do is to get the average of two rows where one row is not zero. Thanks for any advice!
Try this query. It uses the CASE function. If the condition is not satisfied, then the value is assumed to be NULL, so that it would not be counted in the average. Since you do not have t to start with, you can create it using a CTE. ``` WITH t as ( select AAAA as High, BBBB as Med, CCCC as Low from TABLE1 where company like '%3m%' union all select (select round(XXX,2) from TABLE2 where company like '%3m%' and XXX!=0)as High, (select round(YYY,2) from TABLE2 where company like '%3m%' and YYY!=0) as Med, (select round(ZZZ,2) from TABLE2 where company like '%3m%' and ZZZ!=0) as Low ) SELECT ROUND(AVG(case when company like '%3m%'and ABC <> 0 then High else null end), 2) High, ROUND(AVG(case when company like '%3m%'and DEF <> 0 then Medium else null end), 2) Medium, ROUND(AVG(case when company like '%3m%'and GHI <> 0 then Low else null end), 2) Low from t; ```
You cannot, because each of the subqueries is different in its filter condition.
SQL using subquery table multiple times
[ "sql", "sql-server" ]
I have two databases: DataB1 with Table1 and DataB2 with Table2 ``` Table1 has : field1, field2, field3 Table2 has : fielda, fieldb, fieldc ``` Different field names but the same data types. Question: How can I write a stored procedure that opens both databases and copies from Table1 to Table2?
You can do this with an insert. In most databases, it would look like: ``` insert into datab1.table1(field1, field2, field3) select fielda, fieldb, fieldc from datab2.table2; ``` Some databases might have a three part naming convention if the schema name is also involved. EDIT: With a three-part naming convention, it might be something like: ``` insert into datab1.dbo.table1(field1, field2, field3) select fielda, fieldb, fieldc from datab2.dbo.table2; ```
You do not specify your database server, but most have `INSERT ... SELECT`: ``` INSERT INTO DataB2.Table2 (fielda,fieldb,fieldc) SELECT field1,field2,field3 FROM DataB1.Table1 ``` A better alternative is probably a `VIEW`, if you just want to easily see the data from another database. ``` CREATE VIEW DataB2.View2 AS SELECT field1 AS fielda, field2 AS fieldb, field3 AS fieldc FROM DataB1.Table1 ```
copy data from table in different database
[ "sql" ]
I have a DB that looks like ``` Year Name TR AST 1 Player 1 10 3 2 Player 1 23 "-" 3 Player 1 35 5 1 Player 2 32 1 2 Player 2 29 "-" 3 Player 2 50 2 1 Player 3 74 3 2 Player 3 23 1 3 Player 3 93 9 ``` I am trying to return the player's name and the amount for the MAX TR. So in this case it would be Player 3 with 93 TR. I am using this SQL query within SQLITE ``` SELECT MAX(TR), name FROM database WHERE TR != "-" ``` It returns the correct MAX(TR) but it's giving the wrong Player Name. ``` 93 Player 1 ``` I've run other queries like MAX(AST) and it also gives a wrong Player Name. The same Player Name is returned regardless of which MAX query I use, which is where the issue is. I've also tried this, trying to follow the order of SQL execution from here <http://www.bennadel.com/blog/70-sql-query-order-of-operations.htm> ``` FROM database WHERE TR != "-" SELECT MAX(TR), name ``` As well as ``` SELECT name, MAX(TR) FROM database GROUP BY name LIMIT 1 ``` And it returns the wrong TR but with the right corresponding name for that TR. In case it's an issue, I had to use ``` WHERE TR != "-" ``` otherwise it would return a player name and the TR "-". I'm not exactly sure why this is either. Is there any issue with the order of my query statement?
The easiest way to get what you want is using `order by` and `limit`: ``` SELECT name, TR FROM database WHERE TR <> '-' ORDER BY TR DESC LIMIT 1; ``` You don't even need the `max()`.
I would use a subquery, probably something like this: ``` SELECT name, TR FROM database WHERE TR IN ( SELECT MAX(TR) FROM database WHERE TR != "-" ) ``` Note that this can give you multiple results when there are multiple players the same TR, which is also the max TR.
SQL MAX Statement returning wrong column
[ "sql", "sqlite" ]
Hi, I have 2 tables with the same column names (Id, FName, LName, DOB, Type, Contact, Add). Now I want to get, as separate rows, all the data from both tables that has the same FName, LName and DOB, ordered by FName. I tried to use INTERSECT and UNION but it did not work for me. ``` Select * from #t1 where FirstName in (select FirstName from #t2 ) and LastName in (select LastName from #t2 ) and DateOfBirth in (select DateOfBirth from #t2 ) UNION Select * from #t2 where FirstName in (select FirstName from #t1 ) and LastName in (select LastName from #t1 ) and DateOfBirth in (select DateOfBirth from #t1 ) order by FirstName ,LastName ``` Thanks
This would do it I think but I'm sure there's a more succinct way... ``` SELECT T1.* FROM T1 INNER JOIN T2 ON T2.FirstName = T1.FirstName AND T2.LastName = T1.LastName AND T2.DateOfBirth = T1.DateOfBirth UNION ALL SELECT T2.* FROM T2 INNER JOIN T1 ON T1.FirstName = T2.FirstName AND T1.LastName = T2.LastName AND T1.DateOfBirth = T2.DateOfBirth ```
Try this ``` SELECT T1.Id, T1.FName, T1.LName, T1.DOB, T1.Type, T1.Contact, T1.Add FROM #t1 T1 INNER JOIN #t2 T2 ON T1.FName=T2.FName AND T1.LName=T2.LName AND T1.dob=T2.dob ```
Get common data from two different tables
[ "mysql", "sql", "sql-server", "sql-server-2008" ]
Take three layers of information: # Layer 1: Information This layer contains data with `UNIQUE` natural indexes and a surrogate key that is easily transferrable. ``` Table Surnames: +-----------------------------+--------------+ | ID (Auto Increment, PK) | Surname | +-----------------------------+--------------+ | 1 | Smith | | 2 | Edwards | | 3 | Brown | +-----------------------------+--------------+ Table FirstNames +-----------------------------+--------------+ | ID (Auto Increment, PK) | FirstName | +-----------------------------+--------------+ | 1 | John | | 2 | Bob | | 3 | Mary | | 4 | Kate | +-----------------------------+--------------+ ``` **Natural Keys** Alternatively, the two tables above can be without `ID` and utilize Surname and FirstName as Natural Primary Keys, as explained by Mike Sherrill. In this instance, assume the layer below references `varchar` rather than `int`. # Layer 2: People In this layer a composite index is used. This value can be `UNIQUE` or `PRIMARY`, depending on whether a surrogate key is utilized as the Primary Key. ``` +-----------------+--------------+ | FirstName | LastName | +-----------------+--------------+ | 1 | 2 | | 1 | 3 | | 2 | 3 | | 3 | 1 | | 4 | 2 | | ... | ... | +-----------------+--------------+ ``` # Layer 3: Parents In this layer, relationships between people are explored through a `ParentsOf` table. 
``` ParentsOf +-----------------+-----------------+ | Person | PersonParent | +-----------------+-----------------+ OR +-----------------+-----------------+-----------------+-----------------+ | PersonFirstName | PersonSurname | ParentFirstName | ParentSurname | +-----------------+-----------------+-----------------+-----------------+ ``` # The Question Assuming that referential integrity is VERY important to me at its very core, and I will have `FOREIGN KEYS` on these indexes so that I keep the database responsible for monitoring its own integrity on this front, and that, if I were to use an ORM, it would be one like [Doctrine](http://docs.doctrine-project.org/en/2.0.x/tutorials/composite-primary-keys.html) which has native support for Compound Primary Keys... Please help me to understand: * The list of trade-offs that take place with utilizing surrogate keys vs. natural keys on the 1st Layer. * The list of trade-offs that take place with utilizing compound keys vs. surrogate keys on the 2nd Layer which can be transferred over to the 3rd Layer. I am not interested in hearing which is better, because I understand that there are significant disagreements among professionals on this topic and it would be sparking a religious war. Instead, I am asking, very simply and as objectively as is humanly possible, what trade-offs will you be taking by passing surrogate keys to each Layer vs maintaining Primary keys (natural/composite, or surrogate/composite). Anyone will be able to find someone saying ***NEVER*** or ***ALWAYS*** use surrogate keys on SO and other websites. Instead, a reasoned analyses of trade-offs is what I will most appreciate in your answers. **EDIT:** It has been pointed out that a surname example is a poor example for a use of 6NF. For the sake of keeping the question intact, I am going to leave it be. If you are having trouble imagining the use case for this, a better one might be a list of "Grocery Items". 
AKA: ``` +-----------------------------+--------------+ | ID (Auto Increment, PK) | Grocery | +-----------------------------+--------------+ | 1 | Sponges | | 2 | Tomato Soup | | 3 | Ice Cream | | 4 | Lemons | | 5 | Strawberries | | 6 | Whipped Cream| +-----------------------------+--------------+ +-----------------------------+--------------+ | ID (Auto Increment, PK) | Brand | +-----------------------------+--------------+ | 1 | Bright | | 2 | Ben & Jerry's| | 3 | Store Brand | | 4 | Campbell's | | 5 | Cool Whip | +-----------------------------+--------------+ ``` Natural Composite Key Example: ``` +-----------------------------+--------------+ | Grocery | Brand | +-----------------------------+--------------+ | Sponges | Bright | | Ice Cream | Ben & Jerry's| | Ice Cream | Store Brand | | Tomato Soup | Campbell's | | Tomato Soup | Store Brand | | Lemons | Store Brand | | Whipped Cream | Cool Whip | +-----------------------------+--------------+ ``` Recommended Pairings ``` +-----------------+-----------------+-----------------+-----------------+ | Grocery1 | Brand1 | Grocery2 | Brand2 | +-----------------+-----------------+-----------------+-----------------+ ``` To reiterate, this is also just an example. This is not how I would recommend proceeding, but it should help to illustrate my question. There ARE shortfalls to this method. I'll reiterate that this question was to request walking through the benefits and drawbacks of each method below, not to highlight one as better than another. I believe most people were able to look past the questionable nature of this specific example to answer the core question. This edit is for those that cannot. There are some very good answers below and if you are curious about which direction to go, please read them. **END EDIT** Thank you!
Here are some trade-offs: ## Single surrogate (artificially created): * All child tables' foreign keys only need a single column to reference the primary key. * Very easy to update the natural keys in a table without needing to update every child table with foreign keys. * Smaller primary/foreign key indexes (i.e. not as wide). This can make the database run faster; for example, when a record is deleted in a parent table, the child tables need to be searched to make sure this will not create orphans. Narrow indexes are faster to scan (just slightly). * You will have more indexes, because you will most likely also want to index whatever natural keys exist in the data. ## Natural composite keyed tables: * Fewer indexes in the database. * Fewer columns in the database. * Easier/faster to insert a ton of records, as you will not need to grab the sequence generator. * Updating one of the keys in the compound requires that every child table also be updated. ## Then there is another category: artificial composite primary keys I've only found one instance where this makes sense: when you need to tag every record in every table for row-level security. For example, suppose you had a database which stored data for 50,000 clients and each client was not supposed to see other clients' data--very common in web application development. If each record is tagged with a `client_id` field, you are creating a row-level security environment. Most databases have the tools to enforce row-level security when set up correctly. The first thing to do is set up primary and foreign keys. Normally a table will have an `id` field as the primary key. By adding `client_id` the key is now a composite key, and it is necessary to carry `client_id` to all child tables. The composite key is based on 2 surrogate keys and is a bulletproof way to ensure data integrity among clients and within the database as a whole. 
After this you would create views (or if using Oracle EE set up Virtual Private Database) and other various structures to allow the database to enforce row level security (which is a topic all its own). Granted, this data structure is no longer normalized to the nth degree. The `client_id` field in each pk/fk denormalizes an otherwise normal model. The benefit of the model is the ease of enforcing row level security at the database level (which is what databases should do). Every select, insert, update, delete is restricted to whatever `client_id` your session is currently set to. The database has **session awareness**. ## Summary Surrogate keys are always the safe bet. They require a little more work to set up and require more storage. The biggest benefits in my opinion are: * Being able to update the PK in one table and all other child tables are instantaneously changed without ever being touched. * When data gets messed up--and it will at some point due to a programming mistake--surrogate keys make the cleanup much, much easier and in some cases only possible to do because there are surrogate keys. * Query performance is improved as the db is able to search attributes to locate the surrogate key and then join all child tables by a single numeric key. Natural keys, especially composite natural keys, make writing code a pain. When you need to join 4 tables the "where clause" will be much longer (and easier to mess up) than when single surrogate keys were used. Surrogate keys are the "safe" route. Natural keys are beneficial in a few places, I'd say around 1% of the tables in a db.
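The row-level-security-through-views idea described above can be sketched in miniature. This is only an illustration, using SQLite via Python: the `app_session` table stands in for real session context, and all table and column names here are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- one-row table standing in for "whatever client_id your session is set to"
CREATE TABLE app_session (client_id INTEGER);
INSERT INTO app_session VALUES (1);

-- every record tagged with client_id, as the answer describes
CREATE TABLE orders (id INTEGER PRIMARY KEY, client_id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 1, 10.0), (2, 2, 99.0), (3, 1, 5.0);

-- the view filters on the session's client_id; clients query the view, not the table
CREATE VIEW v_orders AS
    SELECT o.* FROM orders o
    WHERE o.client_id = (SELECT client_id FROM app_session);
""")
visible = [row[0] for row in con.execute("SELECT id FROM v_orders ORDER BY id")]
```

With the session set to client 1, only that client's rows (ids 1 and 3) come back through the view; a real implementation would also revoke direct access to the base table.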
First of all, your second layer can be expressed at least four different ways, and they're all relevant to your question. Below I'm using pseudo-SQL, mainly with PostgreSQL syntax. Certain kinds of queries will require recursion and more than one additional index regardless of the structure, so I won't say any more about that. Using a dbms that supports clustered indexes can affect some decisions here, but don't assume that six joins on clustered indexes will be faster than simply reading values from a single, covering index; test, test, test. Second, there really aren't many tradeoffs at the first layer. Foreign keys can reference a column declared `not null unique` in exactly the same way they can reference a column declared `primary key`. The surrogate key increases the width of the table by 4 bytes; that's trivial for most, but not all, database applications. Third, correct foreign keys and unique constraints will maintain referential integrity in all four of these designs. (But see below, "About Cascades".) **A. Foreign keys to surrogate keys** ``` create table people ( FirstName integer not null references FirstNames (ID), LastName integer not null references Surnames (ID), primary key (FirstName, LastName) ); ``` **B. Foreign keys to natural keys** ``` create table people ( FirstName varchar(n) not null references FirstNames (FirstName), LastName varchar(n) not null references Surnames (Surname), primary key (FirstName, LastName) ); ``` **C. Foreign keys to surrogate keys, additional surrogate key** ``` create table people ( ID serial primary key, FirstName integer not null references FirstNames (ID), LastName integer not null references Surnames (ID), unique (FirstName, LastName) ); ``` **D. 
Foreign keys to natural keys, additional surrogate key** ``` create table people ( ID serial primary key, FirstName varchar(n) not null references FirstNames (FirstName), LastName varchar(n) not null references Surnames (Surname), unique (FirstName, LastName) ); ``` Now let's look at the ParentsOf table. **A. Foreign keys to surrogate keys in A, above** ``` create table ParentsOf ( PersonFirstName integer not null, PersonSurname integer not null, foreign key (PersonFirstName, PersonSurname) references people (FirstName, LastName), ParentFirstName integer not null, ParentSurname integer not null, foreign key (ParentFirstName, ParentSurname) references people (FirstName, LastName), primary key (PersonFirstName, PersonSurname, ParentFirstName, ParentSurname) ); ``` To retrieve the names for a given row, you'll need four joins. You can join directly to the "FirstNames" and "Surnames" tables; you don't need to join *through* the "People" table to get the names. **B. Foreign keys to natural keys in B, above** ``` create table ParentsOf ( PersonFirstName varchar(n) not null, PersonSurname varchar(n) not null, foreign key (PersonFirstName, PersonSurname) references people (FirstName, LastName), ParentFirstName varchar(n) not null, ParentSurname varchar(n) not null, foreign key (ParentFirstName, ParentSurname) references people (FirstName, LastName), primary key (PersonFirstName, PersonSurname, ParentFirstName, ParentSurname) ); ``` This design needs zero joins to retrieve the names for a given row. Many SQL platforms won't need to read the table at all, because they can get all the data from the index on the primary key. **C. Foreign keys to surrogate keys, additional surrogate key in C, above** ``` create table ParentsOf ( Person integer not null references People (ID), PersonParent integer not null references People (ID), primary key (Person, PersonParent) ); ``` To retrieve names, you must join *through* the "people" table. You'll need a total of six joins. **D. 
Foreign keys to natural keys, additional surrogate key in D, above** This design has the same structure as in C immediately above. Because the "people" table in D, farther above, has natural keys referencing the tables "FirstNames" and "Surnames", you'll only need two joins to the table "people" to get the names. **About ORMs** ORMs don't build SQL the way a SQL developer writes SQL. If a SQL developer writes a SELECT statement that needs six joins to get the names, an ORM is liable to execute seven simpler queries to get the same data. This might be a problem; it might not. **About Cascades** Surrogate ID numbers make every foreign key reference an implicit, undeclared "ON UPDATE CASCADE". For example, if you run this update statement against your table of surnames . . . ``` update surnames set surname = 'Smythe' where surname = 'Smith'; ``` then all the Smiths will become Smythes. The only way to prevent that is to revoke update permissions on "surnames". Implicit, undeclared "ON UPDATE CASCADE" is not always a Good Thing. Revoking permissions solely to prevent unwanted implicit "cascades" is not always a Good Thing.
Composite vs Surrogate keys for Referential Integrity in 6NF
[ "", "sql", "database", "6nf", "" ]
``` ID Date1 Date2 Date3 158 5/3/13 15:11 2/20/13 11:38 2/20/13 11:38 ``` I want to get the latest date from this three columns.
``` SELECT CASE WHEN Date1 IS NOT NULL AND Date1>=COALESCE(Date2,CAST('0001-01-01 00:00' AS DATETIME2)) AND Date1>=COALESCE(Date3,CAST('0001-01-01 00:00' AS DATETIME2)) THEN Date1 WHEN Date2 IS NOT NULL AND Date2>=COALESCE(Date1,CAST('0001-01-01 00:00' AS DATETIME2)) AND Date2>=COALESCE(Date3,CAST('0001-01-01 00:00' AS DATETIME2)) THEN Date2 WHEN Date3 IS NOT NULL AND Date3>=COALESCE(Date1,CAST('0001-01-01 00:00' AS DATETIME2)) AND Date3>=COALESCE(Date2,CAST('0001-01-01 00:00' AS DATETIME2)) THEN Date3 END AS latest FROM t1 ``` [**Example**](http://sqlfiddle.com/#!3/475f7/1)
Try using `CASE`: ``` SELECT ID, CASE WHEN Date1>=Date2 AND Date1>=Date3 THEN Date1 WHEN Date2>=Date1 AND Date2>=Date3 THEN Date2 WHEN Date3>=Date1 AND Date3>=Date2 THEN Date3 END AS GreatestDate FROM TableName ```
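Both answers above pick the greatest of the three columns; the `COALESCE` trick in the first one additionally handles NULLs by substituting a floor date that can never win the comparison. The same logic can be mirrored client-side; this Python sketch (the helper name is invented) shows the intended behaviour:

```python
from datetime import datetime

def latest(*dates):
    """Return the latest of the given dates, ignoring NULL (None) values.

    Mirrors COALESCE(date, '0001-01-01'): a missing value never wins.
    Returns None only if every input is None.
    """
    present = [d for d in dates if d is not None]
    return max(present) if present else None

# the sample row from the question (ID 158)
row = (datetime(2013, 5, 3, 15, 11),
       datetime(2013, 2, 20, 11, 38),
       datetime(2013, 2, 20, 11, 38))
```

Note that the plain `CASE` version in the second answer returns NULL for the whole expression as soon as any one column is NULL, which is exactly what the `COALESCE` calls guard against.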
Get the Latest date from the Three columns
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I need to find all rows whose one column belongs to the same value and also satisfy a condition on SQL server 2008. Table1: ``` col1 col2 100 0 100 1 100 1 200 0 200 0 200 0 300 0 300 0 200 0 ``` I need to get the rows that all its col2 = 0 for the same col1. Here, I expect ``` col1 col2 200 0 200 0 200 0 300 0 300 0 300 0 ``` Because all col2 is 0 for **col1 = 200 and col1 = 300** ``` SELECT * FROM table1 as a where a.col2 = 0 # but, how to say col1 belong to the same value ? ``` Thx!
Since you need the result for a *group of rows*, you could use grouping, e.g. like this: ``` SELECT col1 FROM table1 GROUP BY col1 HAVING SUM(col2) = 0 ; ``` That would give you all [distinct] `col1` values whose `col2` totals are 0. That assumes, of course, that `SUM` is applicable in this situation – in particular, that `col2` is not `bit` and that `col2` can't have negative values. If you really mean to check for 0 as a specific value for the rows to have (or not to have), you could use a different aggregate function: ``` SELECT col1 FROM table1 GROUP BY col1 HAVING COUNT(NULLIF(col2, 0)) = 0 ; ``` The above query would count non-0 values of `col2` in every group of `col1` and return only those `col1` values where the counts are 0. Finally, if you really need to return detail rows rather than distinct `col1` values, you could use the above query's result as a derived table and filter the source table on it: ``` SELECT * FROM table1 INNER JOIN ( SELECT col1 FROM table1 GROUP BY col1 HAVING COUNT(NULLIF(col2, 0)) = 0 ) AS filter ON table1.col1 = filter.col1 ; ``` However, there's another, potentially more efficient, way to do the same with the help of *window aggregation*, like this: ``` SELECT col1, col2, ... FROM ( SELECT *, cnt = COUNT(NULLIF(col2, 0)) OVER (PARTITION BY col1) FROM table1 ) AS s WHERE cnt = 0 ; ``` The counts in the above query would be returned *alongside detail data*, and the outer query would just be filtering on them to ultimately produce only rows that had the counts of 0. The difference is, this method references the source table just once, which may result in a more efficient query plan than the previous method.
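The `HAVING COUNT(NULLIF(col2, 0)) = 0` approach above can be checked directly against the question's sample data. A quick sketch using SQLite via Python (SQLite supports `NULLIF` and `HAVING` with the same semantics used here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (col1 INTEGER, col2 INTEGER)")
# the nine sample rows from the question
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(100, 0), (100, 1), (100, 1), (200, 0), (200, 0),
                 (200, 0), (300, 0), (300, 0), (200, 0)])

# NULLIF turns 0 into NULL; COUNT skips NULLs, so the count of
# non-zero col2 values per group must be 0
groups = [row[0] for row in con.execute("""
    SELECT col1
    FROM table1
    GROUP BY col1
    HAVING COUNT(NULLIF(col2, 0)) = 0
    ORDER BY col1
""")]
```

As expected, only `col1` values 200 and 300 survive, matching the result the question asks for.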
Here is the query: ``` select col1,col2 from samp a where not exists (select 1 from samp where col1 = a.col1 and col2 <> a.col2) ``` Hope this will help you!
find all rows whose one column belongs to the same value and also another column satisfy a condition on SQL server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Please refer to the below table: ``` +----------------------------------------------------------+ | +-----------+--------------------+---------------------+ | | | Company | GL Account | Amount | | | +-----------+--------------------+---------------------+ | | | Company 1 | Cash at Bank ANZ | $500,452.22 | | | +-----------+--------------------+---------------------+ | | | | Westpac Investment | $443,233.32 | | | +-----------+--------------------+---------------------+ | | | | NAB Cheque | $9,833.22 | | | +-----------+--------------------+---------------------+ | | | Company 2 | Cash at Bank ANZ | $938.22 | | | +-----------+--------------------+---------------------+ | | | | Investment Online | $940,404,400.20 | | | +-----------+--------------------+---------------------+ | | | Company 3 | Online Advantage | $93,393.00 | | | +-----------+--------------------+---------------------+ | | | | Direct Access | $30.30 | | | +-----------+--------------------+---------------------+ | | | | BAR Invest | $192,330,303,300.10 | | | +-----------+--------------------+---------------------+ | | | TOTAL | | $193,271,755,580.58 | | | +-----------+--------------------+---------------------+ | +----------------------------------------------------------+ ``` The above table is what my SSRS report currently looks like. As you can see, I have a group of companies. Each company has GL Accounts with multiple transactions; however, I am only displaying the total for each GL account. For example, the top amount field "$500,452.22" is a total of ALL transactions for "Cash at Bank ANZ" for "Company 1". Furthermore, I have a total at the bottom, which is the total of ALL amount totals. I want to add an additional total field which shows the total for ALL amounts for each company. 
Basically, the report should look like this: ``` +----------------------------------------------------------+ | +-----------+--------------------+---------------------+ | | | Company | GL Account | Amount | | | +-----------+--------------------+---------------------+ | | | Company 1 | Cash at Bank ANZ | $500,452.22 | | | +-----------+--------------------+---------------------+ | | | | Westpac Investment | $443,233.32 | | | +-----------+--------------------+---------------------+ | | | | NAB Cheque | $9,833.22 | | | +-----------+--------------------+---------------------+ | | | | TOTAL | $953,518.76 | | | +-----------+--------------------+---------------------+ | | | Company 2 | Cash at Bank ANZ | $938.22 | | | +-----------+--------------------+---------------------+ | | | | Investment Online | $940,404,400.20 | | | +-----------+--------------------+---------------------+ | | | | TOTAL | $940,405,338.42 | | | +-----------+--------------------+---------------------+ | | | Company 3 | Online Advantage | $93,393.00 | | | +-----------+--------------------+---------------------+ | | | | Direct Access | $30.30 | | | +-----------+--------------------+---------------------+ | | | | BAR Invest | $192,330,303,300.10 | | | +-----------+--------------------+---------------------+ | | | | TOTAL | $192,330,396,723.40 | | | +-----------+--------------------+---------------------+ | | | TOTAL | | $193,271,755,580.58 | | | +-----------+--------------------+---------------------+ | +----------------------------------------------------------+ ``` Every time I try to create this total field, the report just displays duplicates of each GL Account total and not the total of all amounts per company. Can anyone explain how to add the field that I want? 
FYI: This is the SQL Query that I used: ``` select GLAC.GLCo as Company, HQCO.Name as 'Company Name', GLAC.Description as 'GL Description', GLDT.Amount as Amount from GLAC LEFT JOIN HQCO ON GLAC.GLCo = HQCO.HQCo LEFT JOIN GLDT ON GLAC.GLCo = GLDT.GLCo and GLAC.GLAcct = GLDT.GLAcct where udCategory = 'Cash At Bank' and Active = 'Y' and (GLAC.GLCo = 1 or GLAC.GLCo = 5 or GLAC.GLCo = 6 or GLAC.GLCo = 7) ```
Grouping in SSRS is certainly not intuitive. I managed to achieve what I wanted by right clicking in the "Amount" field and then selecting "Insert Row" --> "Outside Group (Below)". I then put the sum in the new cell and everything was calculated how I desired.
Managing groups went from really simple in SSRS 2005 to very difficult and arcane in SSRS 2008. Adding a footer to a group is the least intuitive UX ever. To create a header or footer after the group has been created, go to the bottom of the design window where the `Row Groups` panel is. This will show your groups. Drop down the arrow on the `Detail` group (not the actual group you want to add the footer to but the group inside that) and you will see `Add Total` with a menu of `Before` or `After`. `Before` adds a header and `After` adds a footer to the group that encloses the item you are adding the total to. Once the footer has been created put the expression `=SUM(Fields!Amount.Value)` into the column and it will sum the amount for your group.
In SSRS, how can I SUM by multiple groups?
[ "", "sql", "reporting-services", "sum", "" ]
I am learning SQL through a GalaXQL tutorial. I can't figure out the following question (Exercise 12): > Generate a list of stars with star ids below 100 with columns > "starname", "startemp", "planetname", and "planettemp". The list > should have all stars, with the unknown data filled out with NULL. > These values are, as usual, fictional. Calculate the temperature for a > star with ((class+7)\*intensity)\*1000000, and a planet's temperature is > calculated from the star's temperature minus 50 times orbit distance. What is the syntax to write a LEFT OUTER JOIN query when you have sub-query items "AS" that you need to join together? Here is what I have: ``` SELECT stars.name AS starname, startemp, planets.name AS planetname, planettemp FROM stars, planets LEFT OUTER JOIN (SELECT ((stars.class + 7) * stars.intensity) * 1000000 AS startemp FROM stars) ON stars.starid < 100 = planets.planetid LEFT OUTER JOIN (SELECT (startemp - 50 * planets.orbitdistance) AS planettemp FROM planets) ON stars.starid < 100 ``` Here is the database schema (sorry, can't post the image file due to low rep): ``` CREATE TABLE stars (starid INTEGER PRIMARY KEY, name TEXT, x DOUBLE NOT NULL, y DOUBLE NOT NULL, z DOUBLE NOT NULL, class INTEGER NOT NULL, intensity DOUBLE NOT NULL); CREATE TABLE hilight (starid INTEGER UNIQUE); CREATE TABLE planets (planetid INTEGER PRIMARY KEY, starid INTEGER NOT NULL, orbitdistance DOUBLE NOT NULL, name TEXT, color INTEGER NOT NULL, radius DOUBLE NOT NULL); CREATE TABLE moons (moonid INTEGER PRIMARY KEY, planetid INTEGER NOT NULL, orbitdistance DOUBLE NOT NULL, name TEXT, color INTEGER NOT NULL, radius DOUBLE NOT NULL); CREATE INDEX planets_starid ON planets (starid); CREATE INDEX moons_planetid ON moons (planetid); ```
Let's build this up slowly. First, let's see about getting just the information about stars: ``` SELECT name AS starName, (class + 7) * intensity * 1000000 AS starTemp FROM Stars WHERE starId < 100 ``` (this should look mighty familiar!) We get a list of all stars whose `starId` is less than 100 (the `WHERE` clause), grabbing the name and calculating temperature. At this point, we don't need a disambiguating reference to source. Next, we need to add planet information. What about an `INNER JOIN` (note that the actual keyword `INNER` is optional)? ``` SELECT Stars.name as starName, (Stars.class + 7) * Stars.intensity * 1000000 AS starTemp, Planets.name as planetName FROM Stars INNER JOIN Planets ON Planets.starId = Stars.starId WHERE Stars.starId < 100 ``` The `ON` clause is using an `=` (equals) condition to link planets to the star they orbit; otherwise, we'd be saying they were orbiting more than one star, which is very unusual! Each star is listed once for every planet it has, but that's expected. ...Except now we have a problem: Some of our stars from the first query disappeared! The `(INNER) JOIN` is causing *only* stars **with at least one planet** to be reported. But we still need to report stars without any planets! So what about a `LEFT (OUTER) JOIN`? ``` SELECT Stars.name as starName, (Stars.class + 7) * Stars.intensity * 1000000 AS starTemp, Planets.name as planetName FROM Stars LEFT JOIN Planets ON Planets.starId = Stars.starId WHERE Stars.starId < 100 ``` ... And we have all the stars back, with `planetName` being `null` (and only appearing once) if there are no planets for that star. Good so far! Now we need to add the planet temperature. 
Should be simple: ``` SELECT Stars.name as starName, (Stars.class + 7) * Stars.intensity * 1000000 AS starTemp, Planets.name as planetName, starTemp - (50 * Planets.orbitDistance) as planetTemp FROM Stars LEFT JOIN Planets ON Planets.starId = Stars.starId WHERE Stars.starId < 100 ``` ...except that on most RDBMSs, you'll get a syntax error stating the system can't find `starTemp`. What's going on? The problem is that the new column alias (name) isn't (usually) available until **after** the `SELECT` part of the statement runs. Which means we need to put in the calculation again: ``` SELECT Stars.name as starName, (Stars.class + 7) * Stars.intensity * 1000000 AS starTemp, Planets.name as planetName, ((Stars.class + 7) * Stars.intensity * 1000000) - (50 * Planets.orbitDistance) as planetTemp FROM Stars LEFT JOIN Planets ON Planets.starId = Stars.starId WHERE Stars.starId < 100 ``` (note that the db *may* actually be smart enough to perform the `starTemp` calculation only once per-line, but when writing you have to mention it twice in this context). Well, that's slightly messy, but it works. Hopefully, you'll remember to change both references if that's necessary... Thankfully, we can move the `Stars` portion of this into a subquery. We'll only have to list the calculation for `starTemp` once! ``` SELECT Stars.starName, Stars.starTemp, Planets.name as planetName, Stars.starTemp - (50 * Planets.orbitDistance) as planetTemp FROM (SELECT starId, name AS starName, (class + 7) * intensity * 1000000 AS starTemp FROM Stars WHERE starId < 100) Stars LEFT JOIN Planets ON Planets.starId = Stars.starId ``` Yeah, that looks like how I'd write it. Should work on essentially any RDBMS. Note that the parenthesis in `Stars.starTemp - (50 * Planets.orbitDistance)` is only there for clarity *for the reader*, the meaning of the math would remain unchanged if they were removed. Regardless of how well you know operator-precedence rules, always put in parenthesis when mixing operations. 
This becomes especially beneficial when dealing with `OR`s and `AND`s in `JOIN` and `WHERE` conditions - many people lose track of what's going to be affected. Also note that the implicit-join syntax (the comma-separated `FROM` clause) is considered bad practice in general, or outright deprecated on some platforms (queries will still run, but the db may scold you). It also makes certain things - like `LEFT JOIN`s - difficult to do, and increases the possibility of accidentally sabotaging yourself. So please, avoid it.
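The final subquery form from the walkthrough can be exercised end-to-end on a toy schema. This sketch uses SQLite via Python with made-up star data (GalaXQL is, as far as I know, SQLite-based, so the syntax carries over); only the columns the query needs are created:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stars (starid INTEGER PRIMARY KEY, name TEXT,
                    class INTEGER, intensity DOUBLE);
CREATE TABLE planets (planetid INTEGER PRIMARY KEY, starid INTEGER,
                      orbitdistance DOUBLE, name TEXT);
INSERT INTO stars VALUES (1, 'Sol', 2, 1.0), (2, 'Lonely', 3, 0.5);
INSERT INTO planets VALUES (1, 1, 2.0, 'Terra');
""")
rows = con.execute("""
    SELECT s.starName, s.starTemp, p.name,
           s.starTemp - (50 * p.orbitdistance)
    FROM (SELECT starid, name AS starName,
                 (class + 7) * intensity * 1000000 AS starTemp
          FROM stars
          WHERE starid < 100) AS s
    LEFT JOIN planets p ON p.starid = s.starid
    ORDER BY s.starid
""").fetchall()
```

'Sol' comes back with its planet and both temperatures, while planet-less 'Lonely' still appears, padded with NULLs, which is exactly what the exercise asks for.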
``` SELECT * FROM (SELECT [...]) as Alias1 LEFT OUTER JOIN (SELECT [...]) as Alias2 ON Alias1.id = Alias2.id ```
LEFT OUTER JOIN with subquery syntax
[ "", "sql", "subquery", "outer-join", "" ]
**I need to delete all the data from some tables**. `DELETE FROM TableTwo`, `DELETE FROM TableOne` etc. work (it doesn't reseed but I can learn to live with that). **I would like to do this by truncating the tables** (because it's faster and I'm certain that when I'm done emptying these tables the data integrity will be fine), however TableOne is a dependency of TableTwo, so a naive approach would give me errors "cannot truncate... FOREIGN KEY constraint". I looked in the constraints folder and tried this: ``` ALTER TABLE [TableOne] DROP CONSTRAINT [DF__Blahblah__38EE7070] GO TRUNCATE TABLE [TableOne] GO ALTER TABLE [TableOne] ADD DEFAULT ((0)) FOR [Something] GO ``` However it gives the same error. Also 38EE7070 is unknown to me unless I manually check, so I would have run into problems actually using this code anyway. I wonder if I'm leaving other unknown constraints untouched, but, confusingly enough, when I try to check `EXEC sp_fkeys 'TableOne'` it shows empty results. Given the limitations of both SQL Server and Azure, a lot of the solutions in other questions don't seem to be workable. Does anyone know how I could proceed? I.e. how do I drop and recreate these keys?
Nathan, There are a couple of things I should point out... 1) The example code you posted is dropping and creating a DEFAULT CONSTRAINT (i.e. something that sets the default value for a column if you do not specify a value during an insert). A default constraint has no effect on the ability to truncate a table (as you rightly point out, a table that has Foreign Key constraints cannot be truncated). 2) I think sp\_fkeys was for SQL 2000 and therefore may not work anymore in later versions of SQL Server (even though it still exists). Below I have attached a script I use to identify all Foreign Key constraints on a table, and generate Create, Drop and Check statements. Although I haven't ever used it on SQL Azure, I have used it many times in a Production environment for SQL Server 2008 R2. I hope it helps. Let me know if you have any questions. Ash ``` CREATE FUNCTION [utils].[uf_ForeignKeyScripts] ( @PrimaryKeyTable varchar(128), @PrimaryKeyTableSchema varchar(32) ) RETURNS @Scripts TABLE ( ForeignKeyName varchar(128) , IfExistsStatement varchar(1000) , DropStatement varchar(1000) , IfNotExistsStatement varchar(1000) , CreateStatement varchar(1000) , CheckStatement varchar(1000) , NoCheckStatement varchar(1000) ) AS /* This function returns statements used to create, drop, and check all Foreign Key constraints that reference a given table. These statements can then be added to T-SQL scripts. Example usage (ensure selection of the Results to Text option in SSMS): 1) To create statements to check all foreign keys SELECT IfExistsStatement + CHAR(13) + CHAR(9) + CheckStatement + CHAR(13) FROM utils.uf_ForeignKeyScripts('t_Dim_Date','dbo') ; This will return a formatted statement to check the existence of a foreign key and if it exists, check that data does not violate the key. 
*/ BEGIN INSERT INTO @Scripts ( ForeignKeyName , IfExistsStatement , DropStatement , IfNotExistsStatement , CreateStatement , CheckStatement , NoCheckStatement ) SELECT FK.name AS ForeignKeyName , 'IF EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N''' + SFK.name + '.' + FK.name + ''') ' + 'AND parent_object_id = OBJECT_ID(N''' + SFK.name + '.' + OBJECT_NAME(FK.parent_object_id) + '''))' AS IfExistsStatement , 'ALTER TABLE ' + SFK.name + '.' + OBJECT_NAME(FK.parent_object_id) + ' ' + 'DROP CONSTRAINT ' + FK.name + CHAR(13) + ';' AS DropStatement , 'IF NOT EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N''' + SFK.name + '.' + FK.name + ''') ' + 'AND parent_object_id = OBJECT_ID(N''' + SFK.name + '.' + OBJECT_NAME(FK.parent_object_id) + '''))' AS IfNotExistsStatement , 'ALTER TABLE ' + SFK.name + '.' + OBJECT_NAME(FK.parent_object_id) + ' ' + 'WITH CHECK ADD CONSTRAINT ' + FK.name + ' ' + 'FOREIGN KEY (' + C.FKColumns + ') ' + 'REFERENCES ' + ST.name + '.' + OBJECT_NAME(fk.referenced_object_id) + ' ' + '(' + C.FKColumns + ')' + CHAR(13) + ';' AS CreateStatement , 'ALTER TABLE ' + SFK.name + '.' + OBJECT_NAME(FK.parent_object_id) + ' ' + 'CHECK CONSTRAINT ' + FK.name + CHAR(13) + ';' AS CheckStatement , 'ALTER TABLE ' + SFK.name + '.' 
+ OBJECT_NAME(FK.parent_object_id) + ' ' + 'NOCHECK CONSTRAINT ' + FK.name + CHAR(13) + ';' AS NoCheckStatement FROM sys.foreign_keys AS FK INNER JOIN sys.schemas AS SFK -- schema of foreign key table ON FK.schema_id = SFK.schema_id INNER JOIN sys.tables AS T -- primary key table ON FK.referenced_object_id = T.object_id INNER JOIN sys.schemas AS ST -- schema of primary key table ON T.schema_id = ST.schema_id CROSS APPLY ( /* Get all columns to handle composite keys */ SELECT SFKC.constraint_object_id , utils.uf_ConcatanateStringWithDelimiter(COL_NAME(SFKC.referenced_object_id, SFKC.referenced_column_id),', ') AS FKColumns FROM sys.foreign_key_columns AS SFKC WHERE SFKC.constraint_object_id = FK.object_id GROUP BY SFKC.constraint_object_id ) AS C WHERE OBJECT_NAME(T.object_id) = @PrimaryKeyTable AND ST.name = @PrimaryKeyTableSchema ; RETURN END ```
The only way is to drop the foreign key, truncate the table, and then re-create the foreign key. For an in-house solution, you would probably take this approach in order to account for the transaction log’s size and performance. In SQL Azure, however, you’re not concerned with the transaction logs.
How to clear data from tables with foreign key contraints in SQL Azure
[ "", "sql", "sql-server", "azure", "azure-sql-database", "" ]
``` ID | NAME | REFERENCE | STATUS ------------------------------- 1 | name1 | 123 | 0 2 | name1 | 123 | 1 3 | name2 | 111 | 0 4 | name3 | 222 | 0 5 | name5 | 555 | 1 ``` Let's say I have this table TABLE1. I need a select statement that will only find records with STATUS=0, but not if a "same" record (same NAME and REFERENCE) exists with STATUS=1 (like those records with ID 1 and 2). So, the query must find only the third and fourth records.
Based on your suggested results, I am reading your question as "find only records with status = 0 where the same name/reference combination doesn't have a record with status = 1". One way you can do this is with a `not exists` clause: ``` select t.* from table t where status = 0 and not exists (select 1 from table t2 where t2.name = t.name and t2.reference = t.reference and t2.status = 1 ); ```
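The `not exists` query above can be verified against the question's sample rows. A quick sketch using SQLite via Python (the table name `samp` is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samp (id INTEGER, name TEXT, reference TEXT, status INTEGER)")
con.executemany("INSERT INTO samp VALUES (?, ?, ?, ?)",
                [(1, 'name1', '123', 0), (2, 'name1', '123', 1),
                 (3, 'name2', '111', 0), (4, 'name3', '222', 0),
                 (5, 'name5', '555', 1)])
ids = [row[0] for row in con.execute("""
    SELECT t.id
    FROM samp t
    WHERE t.status = 0
      AND NOT EXISTS (SELECT 1 FROM samp t2
                      WHERE t2.name = t.name
                        AND t2.reference = t.reference
                        AND t2.status = 1)
    ORDER BY t.id
""")]
```

Row 1 is excluded because name1/123 also exists with status 1, and rows 2 and 5 fail the `status = 0` filter, leaving exactly rows 3 and 4.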
`MINUS` can be used to get the Name/Reference pairs that have Status 0 but not Status 1. ``` SELECT ID, Name, Reference, Status FROM Table1 WHERE (Name, Reference) IN (SELECT Name, Reference FROM Table1 WHERE Status = 0 MINUS SELECT Name, Reference FROM Table1 WHERE Status = 1) ```
ORACLE - remove from result set if two or more "same" records found
[ "", "sql", "oracle", "" ]
I have given an example of a data table like the one below. key\_value is unique and is used to get search results. What I do currently is query with key\_value and then parse the content. ``` +-----+-------------+----------------+ | id | key_value | content | +-----+-------------+----------------+ | 1 | 001 | 100SW | | 2 | 157 | 80SW/20CO | | 3 | 222 | 50EA/50CMD | | 4 | 275 | 1EA/29LI/70BW | +-----+-------------+----------------+ ``` What I currently have: (example for key\_value: 275) ``` +-----+-------------+----------------+ | id | key_value | content | +-----+-------------+----------------+ | 4 | 275 | 1EA/29LI/70BW | +-----+-------------+----------------+ ``` What I want to achieve is something like this. (example for key\_value: 275) ``` +-------------+----------+----------+ | key_value | percent | content | +-------------+----------+----------+ | 275 | 1 | EA | | 275 | 29 | LI | | 275 | 70 | BW | +-------------+----------+----------+ ``` Is it possible to achieve this on the server side with queries? P.S.: Currently using SQL Server 2005 if it matters.
First you can create function ``` CREATE FUNCTION [dbo].func_split_string ( @input as varchar(max) ) RETURNS @result TABLE ( content VARCHAR(20), [percent] VARCHAR(20) ) AS BEGIN DECLARE @name VARCHAR(255) DECLARE @content VARCHAR(20) DECLARE @percent VARCHAR(20) DECLARE @pos INT SET @input = @input + '/' WHILE CHARINDEX('/', @input) > 0 BEGIN SELECT @pos = CHARINDEX('/', @input) SELECT @name = SUBSTRING(@input, 1, @pos-1) SELECT @percent = LEFT (@name, PATINDEX('%[a-zA-Z]%', @name)-1) SELECT @content = RIGHT (@name, LEN(@name)-PATINDEX('%[a-zA-Z]%', @name)+1) INSERT INTO @result ([percent], content) SELECT @percent, @content SELECT @input = SUBSTRING(@input, @pos+1, LEN(@input)-@pos) END RETURN END ``` and then you can run it like this ``` SELECT t.key_value, fS.[percent], fS.content FROM yourTableHere as t CROSS APPLY [dbo].func_split_string(t.[content]) as fS ```
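For completeness, the same percent/code tokenisation can also be done client-side once the rows are fetched. This Python helper (the function name is invented, not part of the T-SQL above) mirrors what the `PATINDEX` logic does: it pairs each run of digits with the letters that follow it:

```python
import re

def split_content(content):
    """Split e.g. '1EA/29LI/70BW' into [(1, 'EA'), (29, 'LI'), (70, 'BW')].

    Each token is digits (the percent) followed by letters (the code);
    the '/' separators fall out of the match naturally.
    """
    return [(int(m.group(1)), m.group(2))
            for m in re.finditer(r"(\d+)([A-Za-z]+)", content)]
```

Doing it server-side, as the function above does, keeps the logic reusable from any query; the client-side version is handy when you can't (or don't want to) deploy database objects.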
I am not sure if this works for SQL Server 2005; it does, however, for SQL Server 2008. What you do first is a recursive CTE, chopping the content at the `/` character (`with...`). You can then split the result into numeric and alphanumeric parts. For your test data I have used `@t`, as you can observe. Replace `@t` with the name of your table in the `with... select...`. ``` declare @t table (id int, key_value varchar(3), content varchar(max)) insert into @t values (1, '001', '100SW') insert into @t values (2, '157', '80SW/20CO') insert into @t values (3, '222', '50EA/50CMD') insert into @t values (4, '275', '1EA/29LI/70BW') ;with data as ( select key_value , case when CHARINDEX('/', content) > 0 then SUBSTRING(content,1,CHARINDEX('/', content)-1) else content end as a , case when CHARINDEX('/', content) > 0 then substring(content,CHARINDEX('/', content)+1, LEN(content)-CHARINDEX('/', content)) else null end as b from @t union all select key_value , case when CHARINDEX('/', b) > 0 then SUBSTRING(b,1,CHARINDEX('/', b)-1) else b end as a , case when CHARINDEX('/', b) > 0 then substring(b,CHARINDEX('/', b)+1, LEN(b)-CHARINDEX('/', b)) else null end as b from data where b is not null ) select key_value , left(a,PATINDEX('%[a-zA-Z]%',a)-1) as [percent] , SUBSTRING(a,PATINDEX('%[a-zA-Z]%',a), LEN(a)-PATINDEX('%[a-zA-Z]%',a)+1) as content from data ```
Parse result text during SQL query or on Server side
[ "", "sql", "sql-server", "sql-function", "" ]
I am having a problem with creating a view. Saying: Expression must be of same data type. This occurs at name. ``` CREATE VIEW V_DETAILS_BY_CATEGORY AS SELECT category_id, name FROM category UNION ALL SELECT DISTINCT(category_id), COUNT(film_id) FROM film_category GROUP BY category_id; ``` EDIT: The exact proplem I have is I want to Join the resutls from the two select Statment using category\_id.
The count(film\_id) is a number, while name is (probably) some sort of string. You probably want to do a JOIN, not union (union takes rows that have the same structure from two different tables). Also, the distinct is not needed, since you're doing a group by. You probably want something like ``` SELECT category_id, category_name, COUNT(film_id) FROM film_category JOIN category USING (category_id) GROUP BY category_id, category_name; ```
Unions must contain the same types of data. If you convert your count to whatever data type the "name" field is, it might work. ``` SELECT category_id, name FROM category UNION ALL SELECT category_id, TO_CHAR(COUNT(film_id)) FROM film_category GROUP BY category_id; ``` But, I agree with okaram, you might actually be looking for a JOIN, not a UNION ALL, which is why I asked about the purpose.
UNION Error when creating a View
[ "", "sql", "oracle", "" ]
I have the following query which takes about 10 minutes. I need this to be much quicker. Any ideas what can be done to tune this query? The `r_pos_transaction_head` table has a little under 500,000 records, and the `r_pos_transaction_detl` has a little under 900,000 records. I have created indexes where I thought appropriate (you can see these in use in the plan). ``` truncate table t_retail_history insert into t_retail_history select h.source_db as legacy_source_db, h.company as legacy_company, h.store_code as legacy_store_code, h.register as legacy_register, cast(h.register as char(1)) + '/' + cast(h.transaction_no as varchar(10)) as legacy_transaction_no, t_warehouse.store_number as store_number, h.transaction_no as reference, convert(varchar(10),dbo.datefromdays(h.date),103) as transaction_date, convert(varchar(5),dateadd(second,h.time,cast(cast(getdate() as date) as datetime)), 108) as transaction_time, d.product_code as legacy_product_code, coalesce(d.colour_no,0) as legacy_colour_no, coalesce(g_colour_name_replacement.new_colour_name,s.colour_name,'') as legacy_colour_name, coalesce(d.size_no,0) as legacy_size_no, coalesce(s.size_code,'') as legacy_size_code, d.price_inc_tax as legacy_price_inc_tax, d.sku_no as legacy_sku_no, null as barcode, d.quantity as qty, d.nett_total as sales_total, null as person_code, t_warehouse.destination_busdiv_prefix from svi.r_pos_transaction_head h inner join svi.r_pos_transaction_detl d on d.company = h.company and d.store_code = h.store_code and d.register = h.register and d.tx_code = h.transaction_no inner join svi.g_skus s on s.company = h.company and s.product_code = d.product_code and ( s.colour_position = d.colour_no or s.colour_position is null and d.colour_no = 0 ) and ( s.size_position = d.size_no or s.size_position is null and d.size_no = 0 ) left outer join g_colour_name_replacement on g_colour_name_replacement.product_code = d.product_code and g_colour_name_replacement.old_colour_name = s.colour_name left outer 
join t_warehouse on t_warehouse.legacy_svi_code = right('000' + cast(h.store_code as nvarchar(5)),3) where d.quantity <> 0 and d.nett_total <> 0 ``` ![Explain Plan](https://i.stack.imgur.com/HsO7g.png) Any help appreciated!
The query is correctly written; as everyone suggests, try adding some indexes on the joined fields. For me the bad part of the query is: ``` and ( s.colour_position = d.colour_no or s.colour_position is null and d.colour_no = 0 ) and ( s.size_position = d.size_no or s.size_position is null and d.size_no = 0 ) ``` An `OR` condition in an `INNER JOIN` is a **performance killer**. There are many tricks to avoid it (such as doing a separate LEFT JOIN for each condition and then filtering out the NULL rows in the WHERE clause). I did some more research and found this [**post on stackoverflow**](https://stackoverflow.com/questions/5901791/is-having-an-or-in-an-inner-join-condition-a-bad-idea) that proposes an approach. You should try the UNION option so you do not have to rebuild the whole SELECT field list. I don't have time to rewrite the whole query for you; keep me informed.
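The UNION rewrite that the linked post suggests can be sketched on a toy schema standing in for the real tables (SQLite; table and column names here are simplified stand-ins, not the question's full schema). Each UNION branch carries one equality-style condition, and the results match the OR join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detl (product_code TEXT, colour_no INTEGER);
CREATE TABLE skus (product_code TEXT, colour_position INTEGER);
INSERT INTO detl VALUES ('A', 1), ('B', 0);
INSERT INTO skus VALUES ('A', 1), ('B', NULL);
""")
# Original style: OR inside the join condition.
or_join = conn.execute("""
    SELECT d.product_code FROM detl d
    JOIN skus s ON s.product_code = d.product_code
      AND (s.colour_position = d.colour_no
           OR (s.colour_position IS NULL AND d.colour_no = 0))
""").fetchall()
# Rewrite: one join per condition, UNION deduplicates.
unioned = conn.execute("""
    SELECT d.product_code FROM detl d
    JOIN skus s ON s.product_code = d.product_code
      AND s.colour_position = d.colour_no
    UNION
    SELECT d.product_code FROM detl d
    JOIN skus s ON s.product_code = d.product_code
      AND s.colour_position IS NULL AND d.colour_no = 0
""").fetchall()
print(sorted(or_join) == sorted(unioned))  # True
```

The optimizer can often use an index seek for each branch where the OR forced a scan.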
You can use indexed views to get better join performance. Improve the indexing so that the plan can use an index seek instead of an index scan. Also, the percentages in the plan image do not sum to 100%; where are the others?
How can I improve the performance of this slow query
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Yesterday in a job interview session I was asked this question and I had no clue about it. Suppose I have a word "Manhattan " I want to display only the letters 'M','A','N','H','T' in SQL. How to do it? Any help is appreciated.
Well, here is my solution ([sqlfiddle](http://sqlfiddle.com/#!3/d41d8/34733)) - it aims to use a "Relational SQL" operations, which *may* have been what the interviewer was going for conceptually. Most of the work done is simply to turn the string into a set of (pos, letter) records as the relevant final applied [DQL](http://www.takveen.com/z-art_comp_ora_sql_ch1_p3.html) is a mere SELECT with a grouping and ordering applied. ``` select letter from ( -- All of this just to get a set of (pos, letter) select ns.n as pos, substring(ss.s, ns.n, 1) as letter from (select 'MANHATTAN' as s) as ss cross join ( -- Or use another form to create a "numbers table" select n from (values (1),(2),(3),(4),(5),(6),(7),(8),(9)) as X(n) ) as ns ) as pairs group by letter -- guarantees distinctness order by min(pos) -- ensure output is ordered MANHT ``` The above query works in SQL Server 2008, but the ["Numbers Table"](https://dba.stackexchange.com/questions/11506/why-are-numbers-tables-invaluable) may have to be altered for other vendors. Otherwise, there is nothing used that is vendor specific - no CTE, or cross application of a function, or procedural language code .. That being said, the above is to show a conceptual approach - SQL is designed for use with sets and relations and multiplicity across records; the above example is, in some sense, merely a perversion of such. --- Examining the intermediate relation, ``` select ns.n as pos, substring(ss.s, ns.n, 1) as letter from (select 'MANHATTAN' as s) as ss cross join ( select n from (values (1),(2),(3),(4),(5),(6),(7),(8),(9)) as X(n) ) as ns ``` uses a [cross join](http://en.wikipedia.org/wiki/Join_(SQL)#Cross_join) to generate the Cartesian product of the string (1 row) with the numbers (9 rows); the `substring` function is then applied with the string and *each* number to obtain each character in accordance with its position. The resulting set contains the records- ``` POS LETTER 1 M 2 A 3 N .. 
9 N ``` Then the outer select groups the records by letter, and the resulting groups are ordered by the minimum (first) occurrence position of the letter that established each grouping. (Without the order by, the letters would still be distinct but the final order would not be guaranteed.)
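The first-occurrence ordering can be checked quickly on SQLite, which also supports recursive CTEs. A sketch of the group-by-minimum-position idea, with a bound string parameter standing in for the hard-coded literal:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Explode the string into (pos, char) rows, then keep one row per
# character, ordered by its first position.
letters = [r[0] for r in conn.execute("""
    WITH RECURSIVE chars(pos, c) AS (
        SELECT 1, substr(:s, 1, 1)
        UNION ALL
        SELECT pos + 1, substr(:s, pos + 1, 1)
        FROM chars WHERE pos < length(:s)
    )
    SELECT c FROM chars
    GROUP BY c
    ORDER BY MIN(pos)
""", {"s": "MANHATTAN"})]
print("".join(letters))  # MANHT
```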
One way (if using SQL Server) is with a [recursive CTE (Common Table Expression)](http://technet.microsoft.com/en-us/library/ms186243%28v=sql.105%29.aspx). ``` DECLARE @source nvarchar(100) = 'MANHATTAN' ; WITH cte AS ( SELECT SUBSTRING(@source, 1, 1) AS c1, 1 as Pos WHERE LEN(@source) > 0 UNION ALL SELECT SUBSTRING(@source, Pos + 1, 1) AS c1, Pos + 1 as Pos FROM cte WHERE Pos < LEN(@source) ) SELECT DISTINCT c1 from cte ``` [SqlFiddle for this is here](http://sqlfiddle.com/#!3/d41d8/34747). I had to inline the `@source` for SqlFiddle, but the code above works fine in Sql Server. The first `SELECT` generates the initial row (in this case 'M', 1). The second `SELECT` is the recursive part that generates the subsequent rows, with the `Pos` column getting incremented each time until the termination condition `WHERE Pos < LEN(@source)` is finally met. The final select removes the duplicates. Internally, `SELECT DISTINCT` sorts the rows in order to facilitate the removal of duplicates, which is why the final output happens to be in alphabetic order. Since you didn't specify order as a requirement, I left it as-is. But you could modify it to use a `GROUP BY` instead, ordered on `MIN(Pos)`, if you needed the output in the characters' original order. This same technique can be used for things like generating all the Bigrams for a string, with just a small change to the general structure above.
How to split and display distinct letters from a word in SQL?
[ "", "sql", "" ]
I would like to search for records that occurred in a specific time of day and between a date/time range. **Example:** My table: ``` ID | EmpID | AuthorizationTime ------------------------------- 1 | 21455 | '23/01/2012 12:44' 2 | 22311 | '23/01/2012 18:15' 3 | 21455 | '23/01/2012 23:04' 4 | 10222 | '24/01/2012 03:31' 5 | 21456 | '24/01/2012 09:00' 6 | 53271 | '25/01/2012 12:15' 7 | 10222 | '26/01/2012 18:30' 8 | 76221 | '27/01/2012 09:00' ``` Sample SP input parameters: ``` @from: 22/01/2012 08:00 @to: 24/01/2012 23:00 @fromtime: 18:30 @totime: 08:00 ``` Expected Output: ``` EntryID EmployeeID AuthorisationTime 3 21455 '23/01/2012 23:04' 4 10222 '24/01/2012 03:31' ``` I've tried the following select statements in the SP: ``` ... Select @wAuthorizationTime=' AuthorizationTime between ''' + CONVERT(nvarchar(30), @from )+ ''' and ''' + convert(nvarchar(50),@to )+ ''' ' Select @Where = @wAuthorizationTime; Declare @wHours nvarchar(1000)=''; if (ISNULL(@fromtime,'')<>'' and ISNULL(@ToTime,'')<> '') begin Select @wHours= ' (Cast(AuthorizationTime as time) between ''' + @fromTime + ''' and '''+ @ToTime +''')' end if (@wHours <> '') Select @Where=@Where + ' and ' + @wHours ... ``` The problem with this statement is that I'm not getting any results if the end time is lower than the start time (e.g. 23:00 to 03:00). It does work if I use a time frame that doesn't overlap (e.g. 18:00 to 23:59). What I need to do to get above results?
This should give you what you want: ``` select * from Times where AuthorizationTime >= @from and AuthorizationTime <= @to and ( (@fromtime > @totime and ((cast(AuthorizationTime as time) between '00:00:00' and @totime) or (cast(AuthorizationTime as time) between @fromtime and '23:59:59.999') ) ) or (@fromtime <= @totime and cast(AuthorizationTime as time) between @fromtime and @totime) ) ``` [SQL Fiddle](http://sqlfiddle.com/#!3/694e31/2/1)
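The wrap-around test in the answer above boils down to one predicate. A Python sketch of the same logic, with times as 'HH:MM' strings (which compare correctly lexicographically):

```python
def in_window(t, start, end):
    """True if time t falls in [start, end], where the window may
    wrap past midnight (e.g. 18:30 -> 08:00)."""
    if start <= end:
        # Normal window within a single day.
        return start <= t <= end
    # Window wraps past midnight: match either side of it.
    return t >= start or t <= end

print(in_window("23:04", "18:30", "08:00"))  # True
print(in_window("03:31", "18:30", "08:00"))  # True
print(in_window("12:44", "18:30", "08:00"))  # False
```

The SQL version expresses exactly this: one branch for `@fromtime > @totime` (wrapped) and one for the normal case.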
Add a check to see if `@fromtime > @totime`. If that is the case, compare the TIME-casted value of AuthorizationTime like this: ``` (cast(AuthorizationTime as time) between '00:00:00' and @totime) or (cast(AuthorizationTime as time) between @fromtime and '23:59:59.999') ```
T-SQL Overlapping time range parameter in stored procedure
[ "", "sql", "t-sql", "stored-procedures", "sql-server-2008-r2", "" ]
I am trying to write the single search statement to search the Product Views in multiple ways. My requirements are as follow: 1. If @SearchType is not provided, show all products 2. If @SearchType is **ProductID** (Integer), search by ProductID 3. If @SearchType is **Code** (VARCHAR(50)), search by Code Here is my TSQL script. It works fine if I used, the ProductID which is Integer. ``` DECLARE @SearchType VARCHAR(20) = 'Code' DECLARE @Value1 VARCHAR(50) = 'XYZ123' SELECT * FROM dbo.vProductsCurrent WHERE (@SearchType IS NULL OR (@SearchType = 'ProductID' AND ProductID = @Value1 ) OR (@SearchType = 'Code' AND Code = @Value1 ) ) ``` But when I changed it to search by **Code**, I got the Conversion Failed Error. I don't understand why it is executing ProductID check even though @SearchType is **Code**. AFAIK, if the first condition returns false, it doesn't evaluate the remaining part of the query in "AND" conditions. ``` Msg 245, Level 16, State 1, Line 6 Conversion failed when converting the varchar value 'XYZ123' to data type int. ``` So, I forced the ProductID to convert to Varchar(10) before doing the search ``` OR (@SearchType = 'ProductID' AND CAST(ProductID AS VARCHAR(10)) = @Value1 ) ``` Since the data type of the ProductID is changed, the Indexing benefit of the ProductID no longer exists. Is there any better way to do that? PS. I don't want to use multiple If / Else .... Actually my query has about 10 possible search conditions, and not only ProductID and Code. I removed those for the question.
You need to pass each parameter with its corresponding type, or NULL if it is not needed. ``` DECLARE @Value1 INT = 11 DECLARE @Value2 VARCHAR(50) = 'XYZ123' DECLARE @Value3 DATETIME = '20140505' SELECT * FROM dbo.vProductsCurrent WHERE ( ProductID = @Value1 OR @Value1 IS NULL) AND (Code = @Value2 OR @Value2 IS NULL ) AND (DateFrom >= @Value3 OR @Value3 IS NULL ) -- AND ... ``` Inspired by a previous comment from @Aaron Bertrand.
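The `(col = @p OR @p IS NULL)` pattern is easy to demonstrate with placeholders; passing NULL simply disables that filter. A SQLite sketch with toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (ProductID INTEGER, Code TEXT);
INSERT INTO products VALUES (11, 'XYZ123'), (12, 'ABC999');
""")
sql = """
    SELECT ProductID FROM products
    WHERE (ProductID = :pid OR :pid IS NULL)
      AND (Code = :code OR :code IS NULL)
    ORDER BY ProductID
"""
# NULL ProductID filter, active Code filter.
with_code = conn.execute(sql, {"pid": None, "code": "XYZ123"}).fetchall()
# Both filters disabled: all rows come back.
no_filters = conn.execute(sql, {"pid": None, "code": None}).fetchall()
print(with_code)   # [(11,)]
print(no_filters)  # [(11,), (12,)]
```

On SQL Server this "kitchen sink" pattern is often paired with `OPTION (RECOMPILE)` so one cached plan does not serve both the filtered and unfiltered cases poorly.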
Here you can use dynamic SQL to construct the query based on your conditions. ``` Eg: DECLARE @SelectStatement NVARCHAR(2000); DECLARE @WhereClause NVARCHAR(2000); DECLARE @FullStatement NVARCHAR(4000); SET @SelectStatement = 'SELECT * FROM TableName' IF (cond1 is true) SET @WhereClause = '@SearchType = ''ProductID'' AND ProductID = @Value1'; ELSE SET @WhereClause = '@SearchType = ''Code'' AND Code = @Value1'; SET @FullStatement = @SelectStatement + ISNULL(' WHERE ' + @WhereClause,'') PRINT @FullStatement EXECUTE sp_executesql @FullStatement ```
TSQL Intelligence Search, Type Conversion Error, SQL 2008
[ "", "sql", "sql-server-2008", "t-sql", "search", "" ]
``` Table: users =============== id | name | =============== 1 | steve | 2 | peter | ================ Table: relation_favorites ======================= | idFavorite | idUser | ======================= | 1 | 1 | | 2 | 1 | | 4 | 1 | | 5 | 1 | | 1 | 2 | | 3 | 2 | | 4 | 2 | | 5 | 2 | ======================= ``` I want to select all the users with favorites 1 AND 4 AND 5. I think I need to use a LEFT JOIN but I can't see how to do it. This doesn't work: ``` $query = "SELECT rf.id, u.id idUser FROM users u, relation_favorites rf WHERE rf.idUser = u.id AND rf.idFavorite = 1 AND rf.idFavorite = 4 AND rf.idFavorite = 5"; ``` This works, but it isn't what I want: ``` $query = "SELECT rf.id, u.id idUser FROM users u, relation_favorites rf WHERE rf.idUser = u.id AND rf.idFavorite = 1"; ```
Using a JOIN and COUNT:- ``` SELECT u.id AS idUser, COUNT(rf.idFavorite) AS favourite_count FROM users u INNER JOIN relation_favorites rf ON rf.idUser = u.id WHERE rf.idFavorite IN (1, 4, 5) GROUP BY idUser HAVING favourite_count = 3 ``` You could also use multiple joins:- ``` SELECT u.id AS idUser FROM users u INNER JOIN relation_favorites rf1 ON rf1.idUser = u.id AND rf1.idFavorite = 1 INNER JOIN relation_favorites rf2 ON rf2.idUser = u.id AND rf2.idFavorite = 4 INNER JOIN relation_favorites rf3 ON rf3.idUser = u.id AND rf3.idFavorite = 5 ```
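With the sample rows from the question, the count-based query returns both users, since each of them has favourites 1, 4 and 5. A SQLite check (with `COUNT(DISTINCT ...)` added as a small guard against duplicate favourite rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, name TEXT);
CREATE TABLE relation_favorites (idFavorite INTEGER, idUser INTEGER);
INSERT INTO users VALUES (1, 'steve'), (2, 'peter');
INSERT INTO relation_favorites VALUES
  (1,1),(2,1),(4,1),(5,1),(1,2),(3,2),(4,2),(5,2);
""")
# Relational division: keep users matching all 3 required favourites.
rows = conn.execute("""
    SELECT u.id, u.name
    FROM users u
    JOIN relation_favorites rf ON rf.idUser = u.id
    WHERE rf.idFavorite IN (1, 4, 5)
    GROUP BY u.id, u.name
    HAVING COUNT(DISTINCT rf.idFavorite) = 3
    ORDER BY u.id
""").fetchall()
print(rows)  # [(1, 'steve'), (2, 'peter')]
```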
``` Select name from user where id in (Select distinct idUser from relation_favorites where idFavorite in (1,4,5)) ``` Try this subquery; maybe it works for you. Or you may try this; both queries give the same output: ``` Select distinct name from user x join relation_favorites y on x.Id=y.IdUser where y.idFavorite in(1,4,5) ``` EDIT : Try this query: ``` Select x.Name from Users x join relation_favorites y on x.Id=y.IdUser where y.idFavorite in (1,4,5) Group By x.Name Having Count(y.IdUser)=3 ```
query using two tables
[ "", "mysql", "sql", "" ]
I have the following situation (table schema: `ID, first name, last name, value`): `table 1`: ``` ID | first name | last name | value -------------------------------------- 1 | John | Goodman | 5 2 | Peter | Snow | 6 3 | Mike | Walker | 7 4 | John | Goodman | 8 ``` `table 2`: ``` ID | first name | last name | value -------------------------------------- 1 | Peter | Snow | 2 2 | Bobby | White | 1 3 | Mike | Walker | 1 4 | Brad | West | 2 5 | Peter | Snow | 3 ``` I want to write a full outer join that sums the 4th column, but each name should appear only once in the result (joined) table, like this: result table: ``` ID | first name | last name | value.table1 | value.table2 ----------------------------------------------------------- 1 | John | Goodman | 5 | 0 2 | Peter | Snow | 6 | 5 3 | Mike | Walker | 7 | 1 4 | Bobby | White | 0 | 1 5 | Brad | West | 0 | 2 ``` How can I achieve this?
Something like this should work (if I ignore the `ID` column for now): ``` SELECT COALESCE(t1.FirstName,t2.FirstName) as FirstName, COALESCE(t1.LastName,t2.LastName) as LastName, COALESCE(t1.value,0) as t1value, COALESCE(t2.value,0) as t2value FROM (select FirstName,LastName,SUM(value) as value from table1 group by FirstName,LastName) t1 full outer join (select FirstName,LastName,SUM(value) as value from table2 group by FirstName,LastName) t2 on t1.FirstName= t2.FirstName and t1.LastName= t2.LastName ```
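Since not every engine has FULL OUTER JOIN (MySQL and older SQLite do not), the same result can be obtained portably with conditional aggregation over a UNION ALL. A SQLite sketch using the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (firstname TEXT, lastname TEXT, value INTEGER);
CREATE TABLE t2 (firstname TEXT, lastname TEXT, value INTEGER);
INSERT INTO t1 VALUES ('John','Goodman',5),('Peter','Snow',6),
                      ('Mike','Walker',7),('John','Goodman',8);
INSERT INTO t2 VALUES ('Peter','Snow',2),('Bobby','White',1),
                      ('Mike','Walker',1),('Brad','West',2),('Peter','Snow',3);
""")
# Tag each row with its source table, stack them, then pivot the sums
# back into one column per source.
rows = conn.execute("""
    SELECT firstname, lastname,
           SUM(CASE WHEN src = 1 THEN value ELSE 0 END) AS value_t1,
           SUM(CASE WHEN src = 2 THEN value ELSE 0 END) AS value_t2
    FROM (SELECT firstname, lastname, value, 1 AS src FROM t1
          UNION ALL
          SELECT firstname, lastname, value, 2 AS src FROM t2)
    GROUP BY firstname, lastname
    ORDER BY firstname
""").fetchall()
for r in rows:
    print(r)
```

Names missing from one side naturally get 0 in that side's column, which is exactly the COALESCE behaviour of the full-outer-join version.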
``` With s as (Select firstname, lastname , value From table1 Union all Select firstname, lastname , value From table2) Select firstname, lastname , sum(value) From s Group by firstname , lastname ```
Full outer join
[ "", "sql", "sql-server", "database", "t-sql", "" ]
This works fine: ``` select case (1+2) -- (or_some_more_complicated_formula_yielding_a_numeric_result) when 200 then '200' when 100 then '100' else 'other' end hi_med_low from dual ; ``` But I need to do something more like this: ``` select case (1+2) -- (or_some_more_complicated_formula_yielding_a_numeric_result) when greater than 200 then 'high' when less than 100 then 'low' else 'medium' end hi_med_low from dual ; ``` Suggestions?
`case` supports a syntax to evaluate boolean conditions. It's not as clean as you'd like as you need to re-write each expression, but it gets the job done: ``` select case when (1+2) > 200 then 'high' when (1+2) < 100 then 'low' else 'medium' end hi_med_low from dual ; ``` One possible mitigation could be to use a subquery for the formula, so you only have to write it once: ``` select case when formula > 200 then 'high' when formula < 100 then 'low' else 'medium' end hi_med_low from (select (1+2) AS formula from dual); ```
Yes, use `CASE WHEN`: ``` CASE WHEN some_formula > 200 THEN 'High' WHEN some_formula < 100 THEN 'Low' ELSE 'Medium' END ```
How do you test inequality with Oracle Case Statement
[ "", "sql", "oracle", "case-statement", "" ]
I have a table that is event driven, i.e. when an event occurs it gets updated. When the event 'start' comes in it records a person's location; when 'end' comes in it does not. I want to count the number of Ends but report their corresponding locations, which are recorded in their 'Start' events. Note: there are other types of events which I want to ignore. ``` Table drop table Events; CREATE TABLE Events ( EventName VARCHAR(10) NOT NULL, EventPersonName VARCHAR(50) NOT NULL, EventPersonLocation VARCHAR(50) NULL, EventDate DATETIME2(0) NULL ); INSERT Events SELECT 'end', 'bob', 'Null', '2014-05-27 08:00' UNION ALL SELECT 'end', 'sally', 'null', '2014-05-27 07:00' UNION ALL SELECT 'Start', 'sally', 'Sydney', '2014-05-27 06:30' UNION ALL SELECT 'start', 'bob', 'Belfast', '2014-05-27 06:00' UNION ALL SELECT 'end', 'sally', 'null', '2014-05-27 05:00' UNION ALL SELECT 'start', 'jack', 'London', '2014-05-27 04:00' UNION ALL SELECT 'end', 'john', 'null', '2014-05-27 03:00' UNION ALL SELECT 'start', 'sally', 'New Yourk', '2014-05-27 02:00' UNION ALL SELECT 'start', 'john', 'Dublin', '2014-05-27 01:00'; ``` How can I find which events completed since 2014/05/27 00:30, where the result would be: ``` John, Dublin Sally, New York Sally, Sydney Bob, Belfast ``` I suspect I have to join the table to itself, which will give me one line for each start and end pair, so I can simply take the details I need. But what about starts with no ends, and ends with no starts (due to the time filter)?
Try this (it shows only those persons who have finished their events: SUM of events = 0): **Updated solution:** ``` DECLARE @Events TABLE ( EventName VARCHAR(10) NOT NULL, EventPersonName VARCHAR(50) NOT NULL, EventPersonLocation VARCHAR(50) NULL, EventDate DATETIME2(0) NULL ); INSERT @Events SELECT 'end', 'bob', null, '2014-05-27 08:00' UNION ALL SELECT 'end', 'sally', null, '2014-05-27 07:00' UNION ALL SELECT 'Start', 'sally', 'Sydney', '2014-05-27 06:30' UNION ALL SELECT 'start', 'bob', 'Belfast', '2014-05-27 06:00' UNION ALL SELECT 'end', 'sally', null, '2014-05-27 05:00' UNION ALL SELECT 'start', 'jack', 'London', '2014-05-27 04:00' UNION ALL SELECT 'end', 'john', null, '2014-05-27 03:00' UNION ALL SELECT 'start', 'sally', 'New Yourk', '2014-05-27 02:00' UNION ALL SELECT 'start', 'john', 'Dublin', '2014-05-27 01:00'; SELECT y.EventPersonName, y.EventNum, MIN(y.EventDate) AS StartDate, MAX(y.EventDate) AS EndDate, MAX(y.EventPersonLocation) AS EventPersonLocation FROM ( SELECT x.EventPersonName, x.EventDate, x.EventPersonLocation, SUM(CASE WHEN x.EventName = 'start' THEN +1 WHEN x.EventName = 'end' THEN -1 ELSE 1/0 END) OVER(PARTITION BY x.EventPersonName) AS SumOfEvents, (ROW_NUMBER() OVER(PARTITION BY x.EventPersonName ORDER BY x.EventDate ASC) + 1) / 2 AS EventNum FROM @Events x ) y WHERE y.SumOfEvents = 0 -- Only finished events GROUP BY y.EventPersonName, y.EventNum ORDER BY EventPersonName, y.EventNum; ``` Output: ``` EventPersonName EventNum StartDate EndDate EventPersonLocation --------------- -------- ---------------------- ---------------------- ------------------- bob 1 2014-05-27 06:00:00 2014-05-27 08:00:00 Belfast john 1 2014-05-27 01:00:00 2014-05-27 03:00:00 Dublin sally 1 2014-05-27 02:00:00 2014-05-27 05:00:00 New Yourk sally 2 2014-05-27 06:30:00 2014-05-27 07:00:00 Sydney ``` If you want to show only the names of persons then you could use: ``` SELECT y.EventPersonName FROM ( SELECT x.EventPersonName, EventWithSign = CASE WHEN x.EventName = 
'start' THEN +1 WHEN x.EventName = 'end' THEN -1 ELSE 1/0 END FROM @Events x ) y GROUP BY y.EventPersonName HAVING SUM(y.EventWithSign) = 0 ```
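The window-function pairing also runs on SQLite 3.25+. A sketch with shortened, assumed column names (`person`, `loc`, `d`); note `lower()` guards against the mixed-case 'Start' in the sample data, and the `1/0` trick is dropped because SQLite returns NULL on division by zero instead of raising:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (name TEXT, person TEXT, loc TEXT, d TEXT);
INSERT INTO events VALUES
  ('end','bob',NULL,'2014-05-27 08:00'), ('end','sally',NULL,'2014-05-27 07:00'),
  ('Start','sally','Sydney','2014-05-27 06:30'), ('start','bob','Belfast','2014-05-27 06:00'),
  ('end','sally',NULL,'2014-05-27 05:00'), ('start','jack','London','2014-05-27 04:00'),
  ('end','john',NULL,'2014-05-27 03:00'), ('start','sally','New Yourk','2014-05-27 02:00'),
  ('start','john','Dublin','2014-05-27 01:00');
""")
# Number each person's events in time order; (rn + 1) / 2 pairs every
# start with its following end, and the running balance filters out
# people with an unmatched start or end.
rows = conn.execute("""
    SELECT person, MIN(d) AS start_d, MAX(d) AS end_d, MAX(loc) AS loc
    FROM (
        SELECT person, d, loc,
               SUM(CASE WHEN lower(name) = 'start' THEN 1 ELSE -1 END)
                   OVER (PARTITION BY person) AS balance,
               (ROW_NUMBER() OVER (PARTITION BY person ORDER BY d) + 1) / 2 AS pair_no
        FROM events
    )
    WHERE balance = 0
    GROUP BY person, pair_no
    ORDER BY person, pair_no
""").fetchall()
for r in rows:
    print(r)
```

jack drops out (a start with no end), and sally yields two completed pairs.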
This query gives you the results you want: ``` SELECT s.eventPersonName, s.eventPersonLocation, s.eventDate AS startDate, e.eventDate AS endDate FROM events e JOIN events s ON s.eventPersonName=e.eventPersonName AND s.eventName ='start' AND s.eventDate = ( SELECT MAX(p.eventDate) FROM events p WHERE p.eventPersonName=e.eventPersonName AND p.eventDate<e.eventDate) WHERE e.eventName='end'; ``` I have tested it on [SQLFiddle](http://sqlfiddle.com/#!3/8636b/3). ## Considerations: This query will consider only those events that respect the start-end expected sequence. So if for some person you have partial data (like start-end-end-start) it will ignore ends immediately preceded by ends and starts immediately followed by starts. It can be made to behave differently, but this seems to me like a good enough approach. This query can do some strange things if you have events for the same person with the same datetime. It contains a JOIN on MAX(eventDate) and this can produce multiple rows in such a case.
Count occurrences of row couples
[ "", "sql", "sql-server-2008", "join", "group-by", "" ]
I'm making this database with superheroes for one of my school projects and I have a superheroes table (obviously) and an enemies table. So the enemies table has two foreign keys: ``` bad_superhero_id ``` and ``` good_superhero_id ``` The purpose of this table is to link the good superheroes with the bad superheroes (their enemies) from the characters table (superheroes). Both foreign keys take their values from the id of the superheroes table. The problem is that my teacher doesn't like this and I don't know why. I mean, I saw this example in a book called ***Beginning PHP5, Apache, and MySQL Web Development*** and I also asked my coworkers, who have good experience in creating database structures. They said it's not a problem, but my teacher wanted me to give her an example where this is used, because she doesn't think it's a good relationship and wants me to create a stupid workaround that she thought of. I still think this is not a bad way to create this kind of relationship, so I wanted to ask here to get a third opinion on this problem. I will be grateful if you give your opinion so that I can understand whether using a relationship like this is good practice, bad practice, or doesn't matter. **EDIT:** ``` CREATE TABLE superhero ( id INT NOT NULL AUTO_INCREMENT, nick_name VARCHAR, align ENUM ('good', 'bad'), PRIMARY KEY(id) ) ENGINE=INNODB; CREATE TABLE enemies_link ( id INT NOT NULL AUTO_INCREMENT, good_sh_id INT NOT NULL, bad_sh_id INT NOT NULL, PRIMARY KEY (id), FOREIGN KEY (good_sh_id, bad_sh_id) REFERENCES superheroes(id) ON UPDATE CASCADE ON DELETE RESTRICT ) ENGINE=INNODB; ``` ![my database relationship](https://i.stack.imgur.com/KlAvF.png) **EDIT2:** Yes, I forgot to add that I want that n-to-n connection. Let's say Spider-Man has Venom and Green Goblin as his enemies and, on the other hand, Venom has some other good superheroes as enemies, and so on.
Your design is not intrinsically a bad design but it needs work. You are using a cross/intersection table which defines an n-to-n relationship. These are used all the time in production databases, like the relationship between Student and Course where a student can be taking several courses and a course will have many students signed up. Yours is just referring both sides to the same table. That's fine too. A table of parts, for example, can contain both components and modules with a component used to make many modules and a module made up of many components. In your particular instance, you have a flag which designates if the superhero is bad or good. That's good (although the concept of "bad hero" is somewhat jolting -- wouldn't "superbeing" be a better designation?), but the flag and the id must be defined together in a unique constraint/index. That may seem superfluous, since the id is a primary key and therefore unique all by itself. But a foreign key can only refer to a unique field or set of fields. As for the cross table, you really don't need a separate id field. In fact, that opens up a possible chink in data integrity. When modeling, always try to make data integrity a prime factor. Make it as close to impossible as you can to get bogus data into the table. The table key will be all the foreign key fields as one big composite key. If a separate key is required by foolish design standards, then be sure to define the foreign key fields together in a unique index. Then you must enforce the values of the good/bad flags to ensure the 'good' FK can only point to a 'good' superhero, and so forth.
``` CREATE TABLE superhero( id INT NOT NULL AUTO_INCREMENT, nick_name VARCHAR( 20 ), align ENUM( 'good', 'bad' ) not null default 'good', PRIMARY KEY( id ), CONSTRAINT id_align_uq UNIQUE( id, align ) ) ENGINE=INNODB; CREATE TABLE enemies_link( good_sh_id INT NOT NULL, good_align enum( 'good', 'bad' ) not null check( good_align = 'good' ), bad_sh_id INT NOT NULL, bad_align enum( 'good', 'bad' ) not null check( bad_align = 'bad' ), PRIMARY KEY( good_sh_id, good_align, bad_sh_id, bad_align ), FOREIGN KEY( good_sh_id, good_align ) REFERENCES superhero( id, align ) ON UPDATE CASCADE ON DELETE RESTRICT, FOREIGN KEY( bad_sh_id, bad_align ) REFERENCES superhero( id, align ) ON UPDATE CASCADE ON DELETE RESTRICT ) ENGINE=INNODB; ```
Your teacher may be right: you very likely should define both superhero and enemy ids as separate foreign keys: ``` FOREIGN KEY (good_sh_id) REFERENCES superheroes(id), FOREIGN KEY (bad_sh_id) REFERENCES superheroes(id) ``` The syntax you specified would instead declare the superhero references as [a composite foreign key](https://stackoverflow.com/questions/9780163/composite-key-as-foreign-key-sql). I have to admit I'm not sure what this even means. The only way composite foreign keys make sense to me is when you use them to reference a composite primary key.
Table with 2 foreign keys coming up from one primary
[ "", "mysql", "sql", "database", "foreign-keys", "" ]
**I have** `big_table` with about a hundred million rows, and `correlation_table` with a few thousand rows. `correlation_table` correlates the rows of `big_table` to one another by `dep_id` field value. **I want** to select all ids of correlating rows from `big_table`. **Example:** ``` big_table id name dep_id <other_data> 341 "vehicle" 6 ..... 342 "byce" 19 ..... 343 "ferrari" 6 ..... correlation_table dep_id1 dep_id2 6 19 expected result: id1 id2 341 342 343 342 ``` **Question:** *how to do that in the most effective way in MySQL?*
You can join the big table twice to the correlation table by giving it two different aliases, like this: ``` SELECT b1.id AS id1, b2.id AS id2 FROM big_table b1 INNER JOIN correlation_table c ON c.dep_id1 = b1.dep_id INNER JOIN big_table b2 ON b2.dep_id = c.dep_id2 ``` But if you want to select all these records from a hundred million row table, this might take a while, and I don't think there's a more effective way than this. Indexes *might* help, but hardly if you are going to select all records. The order of the joins might help a little, so you can start with the correlation table and see if that is faster: ``` SELECT b1.id AS id1, b2.id AS id2 FROM correlation_table c INNER JOIN big_table b1 ON b1.dep_id = c.dep_id1 INNER JOIN big_table b2 ON b2.dep_id = c.dep_id2 ``` It depends on the exact numbers. Are all rows in the big\_table connected or not, are you going to select all of them or not, how many 'groups' (dep\_ids) are there, etcetera.
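The double join can be checked against the question's sample rows. Note that reproducing the expected output requires joining through `dep_id` rather than `id`; a SQLite sketch (aliases assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE big_table (id INTEGER, name TEXT, dep_id INTEGER);
CREATE TABLE correlation_table (dep_id1 INTEGER, dep_id2 INTEGER);
INSERT INTO big_table VALUES (341,'vehicle',6),(342,'byce',19),(343,'ferrari',6);
INSERT INTO correlation_table VALUES (6, 19);
""")
# b1 matches the left side of each correlation, b2 the right side.
rows = conn.execute("""
    SELECT b1.id, b2.id
    FROM correlation_table c
    JOIN big_table b1 ON b1.dep_id = c.dep_id1
    JOIN big_table b2 ON b2.dep_id = c.dep_id2
""").fetchall()
print(sorted(rows))  # [(341, 342), (343, 342)]
```

An index on `big_table(dep_id)` is what makes each of the two lookups cheap at the real table size.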
Try this: ``` select b1.id, b2.id from big_table b1 join correlation_table on b1.dep_id = dep_id1 join big_table b2 on dep_id2 = b2.dep_id ``` It should join the tables together.
MySQL: select correlating rows
[ "", "mysql", "sql", "" ]
I am designing a *JasperReports* report with *iReport* and my query is like this: ``` select * from TABLE_NAME where COLUMN_NAME = $P{PARAMETER_NAME} ``` My problem starts when I try to make it generic, i.e. when the parameter is not passed I want the query to bring back all the data from the table without filtering. I tried to give the parameter a default value of `"all(PARAMETER_NAME)"`, but that only works when referring to the parameter with `$P!{PARAMETER_NAME}`, which means the parameter value is treated as part of the query, so the query becomes ``` select * from TABLE_NAME where COLUMN_NAME = $P!{PARAMETER_NAME} ``` but in this case the parameter must either not be passed or be passed as an SQL fragment, which is not a wise option when sending the parameter from a *Java* application, for many reasons. Waiting for your suggestions.
I found that I can do it with the following: ``` select * from TABLE_NAME where COLUMN_NAME = $P{PARAMETER_NAME} or $P{PARAMETER_NAME} is NULL ``` If there are better suggestions, please provide them.
Try this: it is specific to Oracle, but works for the situation you mentioned. `SELECT * FROM TABLE_NAME TM WHERE (CASE WHEN $P!{PARAMETER_NAME} IS NULL THEN 'TRUE' ELSE (DECODE(TM.COLUMN_NAME, '1001', 'TRUE', 'FALSE')) END) = 'TRUE';` The CASE statement decides whether the parameter is to be checked. If `$P!{PARAMETER_NAME}` is NULL, it returns 'TRUE', which matches the 'TRUE' after the equals sign and returns all records; otherwise the DECODE compares the column value and only matching rows are returned. You may use a CASE statement instead of DECODE: `SELECT * FROM TABLE_NAME TM WHERE (CASE WHEN '1001' IS NULL THEN 'TRUE' ELSE (CASE WHEN TM.COLUMN_NAME = '1001' THEN 'TRUE' ELSE 'FALSE' END) END) = 'TRUE';`
How can I disable the where clause at the query when the parameter is not passed to the the report?
[ "", "sql", "jasper-reports", "" ]
I have an SQL command similar to the one below. ``` select * from table1 where table1.col1 in (select columnA from table2 where table2.keyColumn=3) or table1.col2 in (select columnA from table2 where table2.keyColumn=3) ``` Its performance is really bad; how can I change this command? (Please note that the two SQL subqueries in the parentheses are exactly the same.)
Try ``` select distinct t1.* from table1 t1 inner join table2 t2 ON (t1.col1 = t2.columnA OR t1.col2 = t2.columnA) AND t2.keyColumn = 3 ```
This is your query: ``` select * from table1 where table1.col1 in (select columnA from table2 where table2.keyColumn = 3) or table1.col2 in (select columnA from table2 where table2.keyColumn = 3); ``` Probably the best approach is to build an index on `table2(keyColumn, columnA)`. It is also possible that `in` has poor performance characteristics. So, you can try rewriting this as an `exists` query: ``` select * from table1 t1 where exists (select 1 from table2 t2 where t2.columnA = t1.col1 and t2.keyColumn = 3) or exists (select 1 from table2 t2 where t2.columnA = t1.col2 and t2.keyColumn = 3); ``` In this case, the appropriate index is `table2(columnA, keyColumn)`.
SQL command usage of in / or
[ "", "sql", "foxpro", "" ]
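The `IN`-to-`EXISTS` rewrite above can be sanity-checked end to end. Below is a minimal sketch using SQLite via Python as a stand-in engine; the table contents are invented for illustration, and only the column names follow the question. It shows both forms returning the same rows:

```python
import sqlite3

# Hypothetical tables/values, named after the question's schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (col1 INTEGER, col2 INTEGER);
CREATE TABLE table2 (columnA INTEGER, keyColumn INTEGER);
INSERT INTO table1 VALUES (10, 99), (98, 20), (5, 6);
INSERT INTO table2 VALUES (10, 3), (20, 3), (30, 7);
""")

# The original IN-based form.
in_rows = cur.execute("""
    SELECT * FROM table1
    WHERE col1 IN (SELECT columnA FROM table2 WHERE keyColumn = 3)
       OR col2 IN (SELECT columnA FROM table2 WHERE keyColumn = 3)
""").fetchall()

# The EXISTS rewrite suggested in the answer.
exists_rows = cur.execute("""
    SELECT * FROM table1 t1
    WHERE EXISTS (SELECT 1 FROM table2 t2
                  WHERE t2.columnA = t1.col1 AND t2.keyColumn = 3)
       OR EXISTS (SELECT 1 FROM table2 t2
                  WHERE t2.columnA = t1.col2 AND t2.keyColumn = 3)
""").fetchall()
```

On real data the query planner, not the syntax, usually decides performance, so measure both forms on the actual engine.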
I have these tables on my SQL Server 1. view\_items - itemcode, itemname, description, length, width, height, weight 2. items - itemid, itemname, itemcode, description 3. items\_specs - itemspecsid, itemid, length, width, height, weight I tried to import data from my view (**view\_items**) to the **items** table using **SSIS** with **OLEDB Source** and **OLEDB Destination**. I managed to import it successfully, but my problem is: how can I use the inserted **itemid** as the foreign key in the **items\_specs** table?
You would need 2 Data Flow Tasks. The first data flow would populate the items table as you have done. The second data flow task would use view\_items as the source, and then you could use a Lookup transformation. ![Lookup ItemId](https://i.stack.imgur.com/jNV47.png) In the columns section you can map based on the uniqueness of the record. For example, if your itemname is unique, map it as below and check the itemid column as the lookup column. ![enter image description here](https://i.stack.imgur.com/2hxwa.png)
1) This is how I would start: turn identity\_insert ON for the items table. Find the max id from items and increment it by 1. This is your item id for the current row of information retrieved from view\_items. You can use a lookup table to store the max id. This item id can now be inserted into both items and items\_specs. Once done, turn identity\_insert OFF.
SSIS Populating multiple tables
[ "", "sql", "ssis", "" ]
I am having trouble displaying the average age alongside other data. I know it should display more than one field, but it only displays one field. ``` SELECT EmpName, Age,AVG(Age) FROM Employee WHERE Age > ( SELECT AVG(Age) FROM Employee ) ```
``` SELECT EmpName, Age, (SELECT AVG(Age) FROM Employee) AverageAge FROM Employee WHERE Age > (SELECT AVG(Age) FROM Employee); ``` Here is the corrected code. You can't use AVG(Age) directly in the SELECT list because it is an aggregate, not an existing column.
You cannot select the individual rows *and* the overall average in the same ungrouped query. Try joining them: ``` SELECT e.EmpName , e.Age , a.AverageAge FROM Employee e CROSS JOIN ( SELECT AVG(Age) AverageAge FROM Employee ) a WHERE Age > AverageAge ```
SQL AVG can't get it to work
[ "", "sql", "average", "" ]
When we need to change data in a MySQL table, we use this command: `Replace into table_name`. But for SQL Server I have no idea how to use a similar command. Could anyone help me, please? I would like to replace this: ``` ("replace into products" + " (date, hour, IdProduct, Ref1, Ref2, Ref3) values (?,?,?,?,?,?)"); ``` Thank you all for your help. Greetings
In SQL Server you can use the UPDATE command to change data in a table: ``` UPDATE products SET date = new_value_1, hour = new_value_2, ... ... WHERE (where_clause) ```
Without REPLACE INTO or INSERT ON DUPLICATE KEY UPDATE, you need to find out which records should be updated or inserted. I had to do something like this in cases where there was no composite primary key, but it would work as a substitute for REPLACE INTO. * Create a temporary table that has the columns to be INSERTED or UPDATED. Also in this temp table you need a "found" column of type BIT * Put the rows that are to be inserted or updated into this temp table. Leave "found" as 0. * Do UPDATE temporary table SET found = 1 and join to your destination table. * Update your destination table JOINed to the temporary table on the primary key (or unique key) fields. * INSERT into the destination table everything from the temporary table where found = 0. SQL coming as soon as the page will let me post it, having technical issues here. For now, added a picture. [![enter image description here](https://i.stack.imgur.com/L5ZLC.png)](https://i.stack.imgur.com/L5ZLC.png) EDIT: The SQL in the picture, in TEXT format: ``` DROP TABLE IF EXISTS #tmp; CREATE TABLE #tmp ( col_1, col_2, col_n, found bit NOT NULL DEFAULT 0, PRIMARY KEY CLUSTERED (pkey_column1, pkey_column2, pkey_column_n) ); INSERT INTO #tmp (col_1, col_2, col_n) SELECT col_1, col_2, col_n FROM source_table; UPDATE a SET a.found = 1 FROM #tmp a INNER JOIN destination_table b ON a.pkey_column_1 = b.pkey_column_1 AND a.pkey_column_2 = b.pkey_column_2 AND a.pkey_column_n = b.pkey_column_n; UPDATE b SET b.col_1 = a.col_1, b.col_2 = a.col_2, b.col_n = a.col_n FROM #tmp a INNER JOIN destination_table b ON a.pkey_column_1 = b.pkey_column_1 AND a.pkey_column_2 = b.pkey_column_2 AND a.pkey_column_n = b.pkey_column_n; INSERT INTO destination_table (col_1, col_2, col_n) SELECT col_1, col_2, col_n FROM #tmp WHERE found = 0 ```
How do I do a MySQL REPLACE INTO command in SQL Server?
[ "", "mysql", "sql", "" ]
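For reference, MySQL's `REPLACE INTO` is a delete-then-insert keyed on the primary or unique key; SQL Server's closest built-in is `MERGE`, which the answers above emulate manually. The semantics being emulated can be sketched with SQLite via Python, whose `REPLACE INTO` matches MySQL's; the table and values here are hypothetical:

```python
import sqlite3

# Hypothetical products table; SQLite's REPLACE INTO behaves like MySQL's:
# on a primary-key conflict the old row is deleted and the new one inserted.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (IdProduct INTEGER PRIMARY KEY, Ref1 TEXT)")
cur.execute("INSERT INTO products VALUES (1, 'old')")

cur.execute("REPLACE INTO products VALUES (1, 'new')")    # replaces row 1
cur.execute("REPLACE INTO products VALUES (2, 'fresh')")  # plain insert

rows = cur.execute(
    "SELECT IdProduct, Ref1 FROM products ORDER BY IdProduct").fetchall()
```

Any manual SQL Server emulation should reproduce exactly this outcome: the conflicting row ends up with the new values, and non-conflicting rows are inserted.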
So, I've been stuck on this problem for the last couple of days and I still couldn't come up with a solution. I want to group a given month into weeks, which is fairly easy, but the (horrible) business requirement is to consider even a single day as a week if it falls on any day between Monday and Sunday. The end day of the week is going to be Sunday. For example, I'll take the month of August for demonstration. According to the business requirement, this is how the data should be displayed for the given month ``` First week - August 1st to August 2nd, 2015 Second week - August 3rd to August 9th, 2015 Third week - August 10th to August 16th, 2015 Fourth week - August 17th to August 23rd, 2015 Fifth week - August 24th to August 30th, 2015 Sixth week - August 31st, 2015 ``` I'm completely clueless about how to proceed with the problem due to the sixth week occurrence. I came across this query on AskTom which displays 5 weeks but resets back to 1 on the 31st of August. Moreover, the query doesn't look like an elegant solution. ``` select dt, to_char( dt+1, 'w' ) from ( select to_date('1-aug-2015')+rownum dt from all_objects where rownum < 31 ); ``` Looking for suggestions/insights on the problem. Thanks
``` WITH x (dt) AS ( SELECT DATE '2015-08-01' + LEVEL - 1 dt FROM DUAL CONNECT BY DATE '2015-08-01' + LEVEL - 1 < DATE '2015-09-01') SELECT dt, SUM ( CASE WHEN TO_CHAR (dt, 'd') = '2' --if the day is monday OR TO_CHAR (dt, 'fmdd') = '1' --or if its the first day of the month, assign 1. THEN 1 ELSE 0 END) OVER (ORDER BY dt) wk_nr FROM x; ``` 1. First generate all days for the given month. 2. Identify the beginning of each week and the start of the month by marking it as 1. Mark rest of the days as 0. Here to\_char(dt,'d') gives 2 for monday. But may change based on NLS territory of the session. 3. Now that you have beginning of each week, use SUM to calculate the cumulative sum. This gives you the desired week number. Sample [fiddle](http://sqlfiddle.com/#!4/d41d8/30358). --- ***UPDATE*** Looks like 10g doesn't support [column alias](http://docs.oracle.com/cd/B10501_01/server.920/a96540/statements_103a.htm#2065648) with the CTE name. Remove it and try. ``` WITH x AS (SELECT .... ```
Try this. I did it for SYSDATE: ``` SELECT DT, CASE WHEN TO_CHAR(DT+1, 'W')='1' AND SUBSTR(DT,1,2)>'24' THEN '6' ELSE TO_CHAR(DT+1, 'W') END FROM (SELECT TO_DATE(SYSDATE)+ROWNUM DT FROM ALL_OBJECTS ); ``` The query from your example: ``` SELECT DT, CASE WHEN TO_CHAR( DT+1, 'w' )='1' AND SUBSTR( DT,1,2)>'24' THEN '6' ELSE TO_CHAR( DT+1, 'w' ) END from ( select to_date('1-aug-2015')+rownum dt FROM ALL_OBJECTS WHERE ROWNUM < 31 ); ```
Show number of weeks in a given month
[ "", "sql", "oracle", "oracle10g", "" ]
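The accepted answer's running-sum trick (start a new week on each Monday or on day 1 of the month) can be mirrored in plain Python to check the expected August 2015 numbering. This sketch is illustrative only and is not part of the original answer:

```python
from datetime import date, timedelta

def week_numbers(year: int, month: int) -> dict:
    """Assign a business 'week number' to every day of a month:
    a new week starts on each Monday or on the 1st of the month."""
    d = date(year, month, 1)
    weeks, wk = {}, 0
    while d.month == month:
        if d.weekday() == 0 or d.day == 1:  # Monday, or month start
            wk += 1
        weeks[d] = wk
        d += timedelta(days=1)
    return weeks

w = week_numbers(2015, 8)  # August 2015, as in the question
```

August 1st, 2015 was a Saturday, so days 1 and 2 form week 1, each subsequent Monday opens a new week, and the lone Monday the 31st becomes week 6, matching the required output.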
Is there a way to find all the tables that have an X column name within the Y database? So if X.Column exists in Y.Database, print all tables with X.Column. Thanks
Most, but not all, databases support the `information_schema` tables. If so, you can do: ``` select table_name from information_schema.columns t where column_name = YOURCOLUMNNAME; ``` If your database doesn't support the `information_schema` views, then any reasonable database has an alternative method for getting this information. You may need to specify the database name, but that depends on the database. It could be: ``` select table_name from YOURDATABASENAME.information_schema.columns t where column_name = YOURCOLUMNNAME; ``` or ``` select table_name from YOURDATABASENAME.information_schema.columns t where column_name = YOURCOLUMNNAME and schema_name = YOURDATABASENAME; ```
For sql server try this... ``` SELECT t.name FROM sys.tables t INNER JOIN sys.columns c ON t.[object_id] = c.[object_id] WHERE c.name LIKE '%ColumnName%' ```
Find all tables that have X column name
[ "", "sql", "sql-server", "" ]
I have a table `user_access` with the following columns in MySQL database: ``` id int user int access_time datetime ``` I am trying to run a query that gives me the number of times a `user` accesses the system (`access_time`) in 15-minute intervals within a given set of timestamps. My current query is: ``` select user, count(user) as users from user_access where (access_time between '2013-05-28 02:00:00' and '2013-05-28 10:00:00') group by user ``` The results I'm trying to achieve would look like: ``` Time User No of Times -------------------------------------------------- 8:00am - 8:15am user1 20 8:00am - 8:15am user2 5 8:15am - 8:30am user1 15 8:15am - 8:30am user2 23 ```
First of all, you have a subtle error in your `WHERE` clause. You need: ``` where access_time >= '2013-05-28 02:00:00' and access_time < '2013-05-28 10:00:00' ``` because your quarter-hour ranges run from a particular time until the moment *before* another particular time. You need `<`, not `<=`, for the end of your time range. Then, you need an expression that can take an arbitrary `DATETIME` expression and convert it to the `DATETIME` of the beginning of the quarter-hour in which it occurs. This will do that. ``` DATE_FORMAT(datestamp,'%Y-%m-%d %H:00:00') + INTERVAL (MINUTE(datestamp) - MINUTE(datestamp) MOD 15) MINUTE ``` It turns, for example `'2014-05-07 14:53:22'`, into `'2014-05-07 14:45:00'`. You can define it as a stored function like this if you like: ``` DELIMITER $$ DROP FUNCTION IF EXISTS `TRUNC_15_MINUTES`$$ CREATE FUNCTION `TRUNC_15_MINUTES`(datestamp DATETIME) RETURNS DATETIME NO SQL DETERMINISTIC RETURN DATE_FORMAT(datestamp,'%Y-%m-%d %H:00:00') + INTERVAL (MINUTE(datestamp) - MINUTE(datestamp) MOD 15) MINUTE$$ DELIMITER ; ``` You can then write your query like this: ``` select TRUNC_15_MINUTES(access_time) AS period_starting, user, count(user) as users from user_access where access_time >= '2013-05-28 02:00:00' and access_time < '2013-05-28 10:00:00' group by TRUNC_15_MINUTES(access_time), user order by TRUNC_15_MINUTES(access_time), user ``` This is written up here. <http://www.plumislandmedia.net/mysql/sql-reporting-time-intervals/>
What you need to create is a formula that will turn each access\_time into the start of its quarter-hour. Something like: ``` CONCAT(Hour(access_time), ":", FLOOR(Minute(access_time)/15)*15) ``` Then you can put that formula both in your select statement and your group by statement. You can modify this pseudocode statement to change your formatting, add the end time of each quarter hour, etc.
How to group datebase records into 15-minute time intervals
[ "", "mysql", "sql", "time-series", "aggregate-functions", "" ]
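The quarter-hour truncation idea from the answers can be prototyped outside MySQL too. Here is a sketch using SQLite via Python as a stand-in (sample rows are invented; SQLite's `strftime` plays the role of MySQL's `DATE_FORMAT`/`MINUTE`):

```python
import sqlite3

# Invented sample rows shaped like the question's user_access table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE user_access (username TEXT, access_time TEXT);
INSERT INTO user_access VALUES
 ('user1', '2013-05-28 08:01:00'), ('user1', '2013-05-28 08:14:59'),
 ('user1', '2013-05-28 08:16:00'), ('user2', '2013-05-28 08:20:00');
""")

# Truncate each timestamp to the start of its quarter hour, then group.
rows = cur.execute("""
    SELECT strftime('%Y-%m-%d %H:', access_time)
           || printf('%02d', (strftime('%M', access_time) / 15) * 15) AS slot,
           username,
           COUNT(*) AS hits
    FROM user_access
    GROUP BY slot, username
    ORDER BY slot, username
""").fetchall()
```

The key step is the same in both engines: map each timestamp to its bucket start with `floor(minute / 15) * 15`, then group by that bucket and the user.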
I was just running SQL and wrote a query with a sub-query, but it has returned a null value when it's not supposed to. I'm not sure if anyone can help without the database, but here is my query. \*This is the exercise question: 'Use an SQL statement with a sub-query to list the employee ID and full name of those employees who are managers. Use the column headings Manager ID and Manager.' ``` SELECT ReportsTo As 'Manager ID', CONCAT(FirstName,' ',Lastname) As 'Manager' FROM Employees WHERE EmployeeID IN (SELECT ReportsTo FROM Employees); ```
Assuming I'm understanding your question, you want to return those employees that are marked as managers (denoted by the reportto column). If so, you could use `EXISTS`: ``` select employeeid, concat(firstname,' ', lastname) fullname from employees e where exists ( select 1 from employees e2 where e.employeeid = e2.reportsto) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!8/c6b7e/1) This will return a list of employees with their id and full name whose id exists in the `reportsto` field. If you prefer the `in` syntax, then this should work the same: ``` select employeeid, concat(firstname,' ', lastname) fullname from employees e where employeeid in ( select reportsto from employees ) ```
First, the subquery appears incorrect. You are trying to say "*Grab me ReportsTo and people's names from the Employees table where the Employee ID is equal to any of the ReportsTo fields in the same table*". You would need to change the ReportsTo in the subquery to EmployeeID, i.e. ``` SELECT ReportsTo As 'Manager ID', CONCAT(FirstName,' ',Lastname) As 'Manager' FROM Employees WHERE EmployeeID IN (SELECT EmployeeID --Here is where i've changed the value FROM Employees); ``` However, this query in itself is no different from simply writing: ``` SELECT ReportsTo As 'Manager ID', CONCAT(FirstName,' ',Lastname) As 'Manager' FROM Employees ``` If you are trying to filter your results somehow, which "Employees" records do you want to filter out? Please give more info.
SQL Returning a null value
[ "", "mysql", "sql", "sql-server", "" ]
With the following MySQL table containing debit or credit "actions" with associated amounts, how is it possible to select all CLIENT\_IDs with a non-zero "balance"? I have tried joining the table to itself in order to calculate all debit and credit totals, but something isn't working correctly. ``` CLIENT_ID ACTION_TYPE ACTION_AMOUNT 1 debit 1000 1 credit 100 1 credit 500 2 debit 1000 2 credit 1200 3 debit 1000 3 credit 1000 4 debit 1000 ``` My MySQL query that doesn't work: ``` SELECT client_id, SUM(t_debits) AS debits, SUM(t_credits) AS credits, SUM(t_debits)-SUM(t_credits) AS balance FROM table_name AS t_debits LEFT JOIN table_name AS t_credits ON t_credits.client_id=t_debits.client_id WHERE t_debits.action_type='debit' AND t_credits.action_type='credit' AND balance!=0 GROUP BY t_debits.client_id, t_credits.client_id; ``` The result I am expecting is something like: ``` CLIENT_ID DEBITS CREDITS BALANCE 1 1000 600 400 2 1000 1200 -200 4 1000 0 1000 ``` I have no idea what else to try. Any help would be great.
``` DROP TABLE IF EXISTS my_table; CREATE TABLE my_table (transaction_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,client_id INT NOT NULL ,action_type VARCHAR(12) NOT NULL ,action_amount INT NOT NULL ); INSERT INTO my_table(client_id,action_type,action_amount) VALUES (1 ,'debit', 1000), (1 ,'credit', 100), (1 ,'credit', 500), (2 ,'debit', 1000), (2 ,'credit', 1200), (3 ,'debit', 1000), (3 ,'credit', 1000), (4 ,'debit', 1000); SELECT client_id , SUM(COALESCE(CASE WHEN action_type = 'debit' THEN action_amount END,0)) total_debits , SUM(COALESCE(CASE WHEN action_type = 'credit' THEN action_amount END,0)) total_credits , SUM(COALESCE(CASE WHEN action_type = 'debit' THEN action_amount END,0)) - SUM(COALESCE(CASE WHEN action_type = 'credit' THEN action_amount END,0)) balance FROM my_table GROUP BY client_id HAVING balance <> 0; +-----------+--------------+---------------+---------+ | client_id | total_debits | total_credits | balance | +-----------+--------------+---------------+---------+ | 1 | 1000 | 600 | 400 | | 2 | 1000 | 1200 | -200 | | 4 | 1000 | 0 | 1000 | +-----------+--------------+---------------+---------+ ```
You need to use a `case` statement: ``` select client_id, debits, credits, debits-credits as balance from (SELECT client_id, SUM(case when ACTION_TYPE='debit' then action_amount else 0 end) AS debits, SUM(case when ACTION_TYPE='credit' then action_amount else 0 end) AS credits FROM categories GROUP BY client_id) a where debits-credits<>0; ``` ## [Fiddle](http://www.sqlfiddle.com/#!2/c74711/9)
Using MySQL to calculate balances from debits and credits in a single table
[ "", "mysql", "sql", "" ]
I am trying to insert values into a temp table from within a cursor loop. It is hard to explain so I will just show what I have so far. ``` declare @sqlStatement varchar(max) declare @tmpTable table ( Table_Name varchar(max) ,Count int ) declare cur CURSOR FAST_FORWARD FOR Select 'Select ''' + TABLE_NAME + ''' [Table_Name],COUNT(*) [Count] From [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']' from information_schema.tables Where TABLE_TYPE = 'BASE TABLE' OPEN cur FETCH NEXT FROM cur INTO @sqlStatement WHILE @@FETCH_STATUS = 0 BEGIN declare @tmp varchar(max) = 'INSERT INTO @tmpTable ' + @sqlStatement exec @tmp Select * From @tmpTable fetch next from cur into @sqlStatement END CLOSE cur DEALLOCATE cur ``` And I am getting the error ``` The name 'INSERT INTO @tmpTable Select 'table' [Table_Name],COUNT(*) [Count] From [dbo].[table]' is not a valid identifier. ``` But I don't see anything wrong with that statement. I think it may have to do with the escape characters in within the string?
The format for the `exec` call must include parentheses: ``` exec(@tmp) ``` You will also need to change to a `#temp` table as a table variable will not be within the scope of the `exec`'d batch.
There are a couple of issues. Your while loop is missing 'FETCH NEXT FROM cur INTO @sqlStatement', which would lead to an infinite loop. But this is not really the problem here: use exec() with parentheses and it should solve the problem. ## Or, if I were you, I would try to achieve this using a set-based statement like the following: ``` declare @tmpTable table ( Table_Name varchar(max), CountX int ) insert into @tmpTable SELECT sc.name +'.'+ ta.name TableName ,SUM(pa.rows) CountX FROM sys.tables ta INNER JOIN sys.partitions pa ON pa.OBJECT_ID = ta.OBJECT_ID INNER JOIN sys.schemas sc ON ta.schema_id = sc.schema_id WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) GROUP BY sc.name,ta.name ORDER BY SUM(pa.rows) DESC select * from @tmpTable ```
executing an insert into within a cursor loop
[ "", "sql", "ssms", "sql-insert", "database-cursor", "" ]
I want to use SUM on a time(n) column so that I can calculate the overall total time, but SQL Server says it can't add the time(n) column. I have a cast column which contains the difference of two dates, cast as time(n) by me. Now I want to sum that column to find how much time I used in total: how many hours, minutes and seconds. So I tried ``` select Sum(cast ((date1-date2) as Time(0))) from ABC_tbl ``` where date1 is the reaching time and date2 is the starting time, both in Date format, and I want the total of all hours.
Convert the time to an integer value before you sum it (for example, seconds): ``` SELECT SUM( datediff(second, '00:00:00', [TimeCol]) ) FROM ... ``` Replace [TimeCol] with the name of the Time(n) column. This gives you the total time in seconds, which you can then easily convert to minutes, hours, etc...
``` select Sum(DATEDIFF(Minute,date1,date2)) AS TIME from ABC_tbl ``` You have to calculate the date difference with the DATEDIFF function, then use the SUM function to calculate your total time. You can change Minute to Second, Hour, Month, etc.
How to Sum (Time(n)) in Sql Server?
[ "", "sql", "sql-server", "" ]
I have data that looks something like: ``` Date UserID Visits 2012-01-01 2 5 ... ``` I would like to output a list of users who have > x visits on at least y dates (e.g., the users who have >5 visits for at least 3 dates from January 3 to January 10).
Try this: ``` SELECT SUB.UserId, COUNT(*) FROM ( SELECT VL.UserId FROM VisitLog VL WHERE VL.Visits > 5 AND VL.Date BETWEEN '2014-01-03' AND '2014-01-10') SUB GROUP BY SUB.UserId HAVING COUNT(*) >= 3 ``` The sub query returns all rows where the number of `Visits > 5` between your sample date range. The results of this are then counted to return only users where this condition has been matched at least 3 times. You don't give much information but if you have multiple records per date per user then use this query (exactly the same principal, just an inner grouping to sum by user and date): ``` SELECT SUB.UserId, COUNT(*) FROM ( SELECT VL.UserId, VL.Date FROM VisitLog VL WHERE VL.Date BETWEEN '2014-01-03' AND '2014-01-10' GROUP BY VL.UserId, VL.Date HAVING SUM(VL.Visits) > 5) SUB GROUP BY SUB.UserId HAVING COUNT(*) >= 3 ```
Try this: ``` select * from users where id in ( select UserID from userVisits where date between '2014-01-03' and '2014-01-10' and visits >= 5 group by userid having count(*) >= 3) ```
SQL: List of users that meet a condition N times
[ "", "sql", "postgresql", "" ]
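The accepted answer's GROUP BY/HAVING shape works on any engine. Below is a sketch using SQLite via Python with invented visit rows (the column names are shortened and only illustrate the question's schema): user 1 has three qualifying dates, users 2 and 3 fewer.

```python
import sqlite3

# Invented rows: user 1 has three dates with > 5 visits, users 2 and 3 fewer.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE visits (d TEXT, user_id INT, n INT);
INSERT INTO visits VALUES
 ('2014-01-03',1,6),('2014-01-04',1,7),('2014-01-05',1,9),
 ('2014-01-03',2,6),('2014-01-04',2,2),
 ('2014-01-06',3,8),('2014-01-07',3,8),('2014-01-08',3,1);
""")

users = cur.execute("""
    SELECT user_id
    FROM visits
    WHERE n > 5 AND d BETWEEN '2014-01-03' AND '2014-01-10'
    GROUP BY user_id
    HAVING COUNT(*) >= 3
""").fetchall()
```

The `WHERE` clause keeps only rows exceeding the visit threshold, and `HAVING COUNT(*)` enforces the minimum number of qualifying dates per user.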
I have a set of SQL queries where each is an insert statement that gets its values from other tables: ``` insert into table1 values ( (select tableID from Table1 where Name = 'Name1'), (select tableID from Table2 where Name = 'Name2'), (select tableID from Table3 where Name = 'Name3') ) ``` If the select statements can't find the value in the table (i.e., if there is no 'Name1' in Table1), then that field becomes NULL. Aside from changing the table design to not allow NULLs, is there a way I can modify my SQL to fail rather than insert NULL, if the nested select statements can't find the specified value?
What I would do is store the values in variables initially: ``` DECLARE @tableID1 INT = (SELECT tableID FROM Table1 WHERE Name = 'Name1') ``` Then you can do an ``` IF @tableID1 IS NOT NULL AND @tableID2 IS NOT NULL... INSERT... ``` or alternatively an `INSERT INTO table1 SELECT @tableID1 WHERE @tableID1 IS NOT NULL`. If for some reason you wanted to do it in a single messy statement you could do: ``` INSERT INTO table1 SELECT (select tableID from Table1 where Name = 'Name1'), (select tableID from Table2 where Name = 'Name2'), (select tableID from Table3 where Name = 'Name3') WHERE EXISTS (select 1 from Table1 where Name = 'Name1') AND EXISTS (select 1 from Table2 where Name = 'Name2') AND EXISTS (select 1 from Table3 where Name = 'Name3') ```
The construct you are looking for is `insert . . . select`. This allows you to put a `where` clause on the query. Because your subqueries have to return one row, you can do a `cross join`, which will return at most one row: ``` insert into table1(id1, id2, id3) select t1.tableID, t2.tableId, t3.tableId from (select tableID from Table1 where Name = 'Name1') t1 cross join (select tableID from Table2 where Name = 'Name2') t2 cross join (select tableID from Table3 where Name = 'Name3') t3 where t1.tableID is not null and t2.tableId is not null and t3.tableId is not null; ``` My guess is that the `where` clause is unnecessary. `NULL` in your case is probably not the value of the table id, but represents that no row was found. A `cross join` on a table with no rows returns no rows, so nothing will be inserted.
How to modify SQL INSERT query to fail if nested SELECT returns NULL
[ "", "sql", "sql-server", "" ]
After looking at [how to count the occurrences of distinct values](https://stackoverflow.com/questions/1346345/mysql-count-occurrences-of-distinct-values) in a field, I am wondering how to count the occurrences of each distinct value if the distinct values are known (or enumerated). For example, if I have a simple table - ``` TrafficLight Colour ------------ ------ 1 Red 2 Amber 3 Red 4 Red 5 Green 6 Green ``` where one column (in this case Colour) has known (or enumerated) distinct values, how could I return the count for each colour as a separate value, rather than as an array, as in the linked example. To return an array with a count of each colour (using the same method as in the linked example), the query would be something like `SELECT Colour COUNT(*) AS ColourCount FROM TrafficLights GROUP BY Colour`, and return an array - ``` Colour ColourCount ------ ----------- Red 3 Amber 1 Green 2 ``` What I would like to do is to return the count for each Colour AS a separate total (e.g. RedCount). How can I do this?
For MySQL you can do so by using an expression inside `sum(expr)`; the comparison evaluates to a boolean (1 or 0), so you can count the occurrences for each colour individually: ``` SELECT SUM(Colour = 'Red') AS RedCount, SUM(Colour = 'Amber') AS AmberCount, SUM(Colour = 'Green') AS GreenCount FROM t ``` ## [Demo](http://sqlfiddle.com/#!2/ab029/1)
Try this query: ``` SELECT (SELECT COUNT(*) AS ColourCount FROM tableA GROUP BY colour HAVING colour = 'Red') AS red_lights, (SELECT COUNT(*) AS ColourCount FROM tableA GROUP BY colour HAVING colour = 'Green') AS green_lights, (SELECT COUNT(*) AS ColourCount FROM tableA GROUP BY colour HAVING colour = 'Amber') AS amber_lights FROM tableA ``` [Here is the Fiddle](http://sqlfiddle.com/#!2/2b8113/7)
MySQL: Count occurrences of known (or enumerated) distinct values
[ "", "mysql", "sql", "" ]
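SQLite, like MySQL, evaluates a comparison to 0 or 1, so the `SUM(Colour = 'Red')` trick from the accepted answer can be verified directly. A sketch via Python, with sample data mirroring the question:

```python
import sqlite3

# The question's traffic-light data; comparisons evaluate to 0/1 in SQLite
# just as in MySQL, so SUM over them counts matching rows.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE lights (id INT, colour TEXT);
INSERT INTO lights VALUES (1,'Red'),(2,'Amber'),(3,'Red'),
                          (4,'Red'),(5,'Green'),(6,'Green');
""")

row = cur.execute("""
    SELECT SUM(colour = 'Red'), SUM(colour = 'Amber'), SUM(colour = 'Green')
    FROM lights
""").fetchone()
```

Note this shortcut is MySQL/SQLite-specific; engines like SQL Server require the equivalent `SUM(CASE WHEN colour = 'Red' THEN 1 ELSE 0 END)` form.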
I have a table `LOGPROCESS` that stores the start/end of each process, and I would like to do a select that returns only one row, using something like 'as DateStart' and 'as DateEnd'. This is how the logs are saved for each process (ThreadName could be different): ``` |ThreadName |LEVEL |Who | TYPE | QUERY_STRING | DATE |DevTest |INFO |Tests.Run | [RECURRING_START] | clientId=300&campId=3130 | 2014-06-27 15:42:57.803 |62013c67-886e-455b-b8aa-ef9139fcc1d4|INFO |Tests.Send | [RECURRING_END] | clientId=300&campId=3130&totalSends=0 | 2014-06-27 15:43:58.701 ``` The only value indicating that a START and an END row belong to the same process is the `campId` part of the query string. I'm trying to perform a select like this (the ThreadName of the END row is the only one of interest): ``` select ThreadName, (SELECT Date from logprocess where <what goes here>) as DateStart (SELECT Date from logprocess where <what goes here>) as DateEnd from logprocess where ``` But I can't figure out what should go between <>. My desired output is something like this (the ThreadName from the END process, the Date from the START process and the Date from the END process): ``` |ThreadName | STARTDATE | ENDDATE |62013c67-886e-455b-b8aa-ef9139fcc1d4 | 2014-06-27 15:42:57.803 | 2014-06-27 15:43:58.701 ``` Any help? Thank you in advance.
Your challenge is extracting `CampID` from your query string to identify your rows. You can do this using CHARINDEX and SUBSTRING as follows (I have separated out the logic into APPLY to hopefully make it more clear what is being done): ``` SELECT CampID = SUBSTRING(QUERY_STRING, StartIndex, EndIndex - StartIndex), t.* FROM T -- FIND POSITION OF '&CampID=' IN QUERY_STRING CROSS APPLY (SELECT StartIndex = CHARINDEX('&campID=', '&' + QUERY_STRING) + 7) si -- FIND POSITION OF '&' AFTER '$CampID=' IN QUERY_STRING CROSS APPLY (SELECT EndIndex = CHARINDEX('&', QUERY_STRING + '&', StartIndex + 1)) ei; ``` Once you have your CampID, it is then fairly simple to get your data out using conditional aggregates: ``` SELECT CampID = SUBSTRING(QUERY_STRING, StartIndex, EndIndex - StartIndex), ThreadName = MAX(CASE WHEN [TYPE] = '[RECURRING_END]' THEN ThreadName END), StartDate = MAX(CASE WHEN [TYPE] = '[RECURRING_START]' THEN [DATE] END), EndDate = MAX(CASE WHEN [TYPE] = '[RECURRING_END]' THEN [DATE] END) FROM T t CROSS APPLY (SELECT StartIndex = CHARINDEX('&campID=', '&' + QUERY_STRING) + 7) si CROSS APPLY (SELECT EndIndex = CHARINDEX('&', QUERY_STRING + '&', StartIndex + 1)) ei GROUP BY SUBSTRING(QUERY_STRING, StartIndex, EndIndex - StartIndex); ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/5f885/1)**
Well, it's not pretty: ``` SELECT b.ThreadName ,a.[date] as StartDate ,b.[date] as EndDate FROM logprocess a LEFT JOIN logprocess b ON SUBSTRING(a.query_string ,PATINDEX('%campId=%', a.query_string)+7 ,LEN(a.query_string)-(PATINDEX('%campId=%', a.query_string)+6)) = SUBSTRING(b.query_string ,PATINDEX('%campId=%', a.query_string)+7 ,LEN(a.query_string)-(PATINDEX('%campId=%', a.query_string)+6)) WHERE a.type = '[RECURRING_START]' AND b.type = '[RECURRING_END]' ``` [SQLFiddle](http://sqlfiddle.com/#!3/36bc3/17) This should work as long as the the `campId` is always the last variable in the `[RECURRING_START]` row, and then appears in the same position in the `[RECURRING_END]` row. I would imagine it's incredibly slow.
SQL Server - Retrieve two rows in a unique row using alias
[ "", "sql", "sql-server", "" ]
I'm creating a DB project and one thing is not so clear to me. Have a look at this: ![part of database model](https://i.stack.imgur.com/nbnuc.png) Now I'm not sure what's the best way to make the physical model. TOOL and RESOURCE don't need any fields other than ITEM's. Only RECIPE needs a new column (for). But I have to somehow distinguish recipes from tools and resources. OK, I can create two identical tables with different names for resources and tools, and one for recipes, but what do I do with items then? Create it as a view? Then it won't really be subclassing. Or I could create something like this: create an ITEM table with (id, name, description) and then a RESOURCE table with (item_id) as a foreign key, the same for tools, and add a trigger so that when I insert into resources it really adds to ITEM and then just adds the foreign key in resources? That seems like the most sensible option, but is it possible?
This is a classic "It depends" answer. The answer really depends on the potential relationships and whether an item can belong to more than one type. If there will be relations that all items can be in, then a single items table would be preferable since you can have a single foreign key column in your relation table. If there will be relations that only specific types of item can have, then you will need to use [Class Table Inheritance](http://martinfowler.com/eaaCatalog/classTableInheritance.html), so as you have said for Tool for example, you would have another table called `Tools`, with a single primary key column `ItemID` that references Item. This way you can create foreign keys to `Tools.ItemID` limiting relationships to a certain type of item. If an item can belong to more than one type then you should almost certainly use Class Table Inheritance. If you don't need to limit relations, and an item can only belong to one type, then you can just identify item types by having an `ItemType` column in the Items table, no need to use table inheritance at all. If there won't be relations that more than one itemtype can be in, and there will be relations that only one can appear in then it may be simpler, and less maintenance to use 3 tables, and use a view to combine them, it should still provide all the functionality needed. In summary, Inheritance will require more tables, and more management, but is more flexible, so it depends.
Based on just the requirements that you've presented, I would suggest physically implementing only a single table (item) with a **partitioning attribute**, e.g. `item_type`. You can use check constraints to ensure that the extra columns that apply only to recipes are null for other item types and not-null for recipes. If it makes sense for your application, for example if you have to coerce an ORM into working the way you want it to, then you could create views for the three different subtypes. If your needs expand to include many distinct predicates (columns) per sub-type, then you might want to consider splitting out the distinct predicates into sub-type tables, leaving the common predicates at the super-type level. You would still want to have a partitioning attribute in either case. There is no hard and fast rule for when to implement physical sub-types. It is a matter of trade-offs and to some extent, personal preference.
Database subclassing design
[ "", "sql", "database", "database-design", "" ]
**Mock up table:** ![enter image description here](https://i.stack.imgur.com/sU2iM.jpg) I need to eliminate duplicate records from the first table. In case we have a duplicate address\_num such as 100, we will go with the row whose AddressType is "Home". Please note that address\_num does not come in duplicates of 3 or more; it either appears once or twice in the table. For example, address\_num 200 appeared once, therefore we just grabbed it as is. The ID column is the primary key. The second table with the right check mark is the results table. How can I write an Oracle SQL query that prints out the second table?
You can do this with `row_number()` and an intelligent use of `order by`: ``` select t.* from (select t.*, row_number() over (partition by address_num order by (case when AddressType = 'Home' then 1 else 0 end) desc ) as seqnum from table t ) t where seqnum = 1; ```
Something like: ``` SELECT id, address_num, ... FROM ( SELECT id, address_num, ... , row_number() over (partition by address_num order by addresstype) as rn FROM T ) AS X WHERE rn = 1 ```
Pick a Single Row Out of Each Set of Duplicates
[ "sql", "oracle", "oracle-sqldeveloper" ]
Postgresql Database ``` Table User ----------- ID | Name 1 | John 2 | Bob 3 | Sarah Table Photo ------------- ID | Caption 1 | Vacation 2 | Birthday 3 | Christmas Table Comment -------------- ID | User ID | Photo ID| Text 1 | 1 | 1 | Mexico Looks Great 2 | 2 | 1 | Sure Does 3 | 3 | 1 | Too Hot 4 | 1 | 2 | Look at that cake 5 | 3 | 2 | No ice cream? 6 | 1 | 3 | So Happy ``` Desire: I want to get all the photos that ONLY John (1) and Sarah (3) commented on. How do I build a SQL query that looks for photos that only have comments from user #1 and user #3? I want to EXCLUDE results where more (or fewer) than those two commented.
The clearest and most readable way is to select the `Photo`s containing comments by: User 1 `INTERSECT` User 3 `EXCEPT` any other user. This [SQL Fiddle](http://sqlfiddle.com/#!15/dc786/3) and query will return that: ``` SELECT * FROM Photo WHERE ID IN ( SELECT "Photo ID" FROM Comment WHERE "User ID" = 1 INTERSECT SELECT "Photo ID" FROM Comment WHERE "User ID" = 3 EXCEPT SELECT "Photo ID" FROM Comment WHERE "User ID" NOT IN (1, 3) ) ```
lets do three joins, one for john, one for sara, one for everyone else. Then we'll limit what we get back with the where clause. ``` select p.* from photo p left join comment john on john.photo_id=p.photo_id and john.user_id=1 left join comment sara on sara.photo_id=p.photo_id and sara.user_id=3 left join comment everyone_else on everyone_else.photo_id=p.photo_id and everyone_else.user_id<>3 and everyone_else.user_id<>1 where everyone_else.id is null and john.id is not null and sara.id is not null ```
Postgres Many to many mapping sql query
[ "sql", "postgresql", "relational-division" ]
I need to select details from a table according to the language present. That is, I have a primary language culture and a secondary culture. I need to select rows that match the primary culture if any exist; otherwise I need to select rows matching the secondary culture. I have more than 5 different tables to join; all will have the same conditions. ## My table ``` ID int Name nvarchar(50) MasterKey tinyint Culture nvarchar(50) ``` ## Required Output ``` Name Culture ----- ------- @#@$ en-AR ``` ## Query ``` declare @Primary nvarchar(20) = 'en-AR' ,@Secondary nvarchar(20) = 'en-US' SELECT LangTable1.Name AS Name ,LangTable1.Culture AS Culture FROM LangTable1 WHERE LangTable1.Culture=case WHEN LangTable1.Culture=@Primary THEN @Primary ELSE @Secondary ``` ## result ``` Name Culture ----- ------- Minhaj en-US @#@$ en-AR Nann en-US ```
Here are two other ways to get that, using a `UNION`. The first query removes the values of the secondary language that are duplicates of values of the primary language, checking with an `EXISTS`: ``` DECLARE @primary nvarchar(20) = 'en-AR' DECLARE @secondary nvarchar(20) = 'en-US' SELECT Name, Culture FROM LangTable1 WHERE Culture = @primary UNION ALL SELECT Name, Culture FROM LangTable1 a WHERE Culture = @secondary AND NOT EXISTS (SELECT 1 FROM LangTable1 b WHERE a.Name = b.Name AND b.Culture = @primary) ``` The second query removes the values of the secondary language that are duplicates of values of the primary language using `EXCEPT`: ``` DECLARE @primary nvarchar(20) = 'en-AR' DECLARE @secondary nvarchar(20) = 'en-US' SELECT Name, Culture FROM LangTable1 WHERE Culture = @secondary EXCEPT SELECT Name, @secondary FROM LangTable1 WHERE Culture = @primary UNION ALL SELECT Name, Culture FROM LangTable1 WHERE Culture = @primary ```
This might give you what you want: ``` ;with cte(Name, Culture, CultureRank) as ( select t.Name, t.Culture, case when t.Culture = @primary then 1 when t.culture = @secondary then 2 else 3 end CultureRank from LangTable1 t ) select c.Name, c.Culture from cte c where c.CultureRank < 3 and c.CultureRank = (select MIN(CultureRank) from cte c2 where c2.Name = c.Name) ``` [SQL Fiddle](http://sqlfiddle.com/#!3/3b0e2/2/0)
select columns with conditions with priority
[ "sql", "sql-server-2008", "t-sql" ]
Let's say I have three tables with these columns: ``` Players - id, name Events - id, name Games - first_player_id, second_player_id, event_id. ``` I need the details of the `players` who are playing in a `game` which is happening in an `event`, and I could write a query like: ``` SELECT players.id, events.id as event_id, (SELECT name as player_one_name from players where id = games.first_player_id), (SELECT name as player_two_name from players where id = games.second_player_id), games.id as game_id FROM events INNER JOIN games on events.id = games.event_id INNER JOIN players on games.first_player_id = players.id; ``` Here I am using two sub queries to fetch the players' names. **And it gives correct results**. Can this query be optimized? For example, can I remove any `subquery` or `innerjoin`? FYI, I use a `PostgreSQL` database. Thanks.
If you do not want sub queries in your select statement then you must provide a join for each subset. Since your database is set-oriented, the two INNER JOINs would prove more efficient. ``` SELECT player_one.id, events.id as event_id, player_one.name as player_one_name, player_two.name as player_two_name FROM events INNER JOIN games on events.id = games.event_id INNER JOIN players player_one on games.first_player_id = player_one.id INNER JOIN players player_two on games.second_player_id = player_two.id ```
You must do a join for each foreign key ``` SELECT players_a.id, events.id as event_id, players_a.name as player_one_name, players_b.name as player_two_name, games.id as game_id FROM events INNER JOIN games on events.id = games.event_id INNER JOIN players players_a on games.first_player_id = players.id INNER JOIN players players_b on games.first_player_id = players.id ```
SQL Query using multiple JOINS without sub query
[ "sql", "postgresql" ]
I have a SQL query which returns a list of students who are enrolled in a conference, and their preferences for each session. When pulling the data from my database, each user session selection shows in its own row as below: ``` **userid question answer** 1 S1 choose: a1, b1, c1 a1 1 S2 choose: a2, b2, c2 b2 1 S3 choose: a3, b3, c3 b3 2 S1 choose: a1, b1, c1 b1 2 S2 choose: a2, b2, c2 c2 2 S3 choose: a3, b3, c3 a3 3 S1 choose: a1, b1, c1 a1 3 S2 choose: a2, b2, c2 b2 3 S3 choose: a3, b3, c3 b3 ``` I would like to make each session a column, so that each userid with the questions and answers is shown in one row. Like so: ``` user1 question1 answer1 question2 answer2 question3 answer3 user2 question1 answer1 question2 answer2 question3 answer3 user3 question1 answer1 question2 answer2 question3 answer3 ``` My SQL query knowledge is limited, so I would greatly appreciate your help... how can I achieve the above results? Thanks in advance
In response to your comment, assuming you know the number of potential questions, one option is to use `max` with `case`: ``` select userid, max(case when questionid = 1 then question end) question1, max(case when questionid = 1 then answer end) answer1, max(case when questionid = 2 then question end) question2, max(case when questionid = 2 then answer end) answer2, ... from yourtable group by userid ``` This assumes you have a `questionid` available. If not, you could use the `question` field or create a `ROW_NUMBER` and it would work the same way: ``` select userid, max(case when rn = 1 then question end) question1, max(case when rn = 1 then answer end) answer1, max(case when rn = 2 then question end) question2, max(case when rn = 2 then answer end) answer2, ... from ( select *, row_number() over (partition by userid order by question) rn from yourtable ) t group by userid ``` --- Edit, if you need a dynamic solution for this, since you are trying to `pivot` multiple columns, you first need to unpivot your results. One option for doing this is to use `CROSS APPLY`. Then you can `PIVOT` the results back: ``` DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @cols = STUFF((SELECT ',' + QUOTENAME('Question:' + question) +',' + QUOTENAME('Answer:' + question) from yourtable group by question FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = 'select userid, ' + @cols + ' from ( select userid, col+question new_col, value from yourtable cross apply ( VALUES (question, ''Question:''), (answer, ''Answer:'') ) x (value, col) ) src pivot ( max(value) for new_col in (' + @cols + ') ) piv ' execute(@query) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/c6391/1)
Use dynamic SQL. Select a distinct list of sessions into a temp table that you've created with an identity column. ``` DECLARE @SQL varchar(1000), @Count int, @Counter int CREATE TABLE #report (userid int) CREATE TABLE #questions (MyIdx int IDENTITY(1,1), question varchar(50)) INSERT INTO #report (UserID) SELECT DISTINCT userid from mytable INSERT INTO #questions (question) SELECT DISTINCT question FROM mytable ORDER BY question SELECT @Count = COUNT(*) FROM #questions, @Counter = 0 WHILE @Counter < @Count BEGIN SET @Counter = @Counter + 1 SET @SQL = 'ALTER TABLE #report ADD Q' + CONVERT(varchar, @Counter) + ' varchar(50)' EXEC (@SQL) SET @SQL = 'ALTER TABLE #report ADD A' + CONVERT(varchar, @Counter) + ' varchar(50)' EXEC (@SQL) SET @SQL = 'UPDATE #report SET Q' + CONVERT(varchar, @Counter) + ' = b.question FROM #report a INNER JOIN mytable b ON a.userid = b.userid INNER JOIN #questions c ON b.question = c.question WHERE c.MyIdx = ' + CONVERT(varchar, @Counter) EXEC (@SQL) SET @SQL = 'UPDATE #report SET A' + CONVERT(varchar, @Counter) + ' = b.answer FROM #report a INNER JOIN mytable b ON a.userid = b.userid AND a.question = b.Q' + CONVERT(varchar, @Counter) EXEC (@SQL) END SELECT * FROM #report ORDER BY userid DROP TABLE #report DROP TABLE #questions ``` That is, if you have an unknown number of sessions. Otherwise, go with the other answer.
Return 1 row per unique id instead of multiple rows
[ "sql", "sql-server" ]
I am writing a query like this... ``` select consumer_name,count(select * from consumer where product='TTT') from consumer; ``` I want a filtered count... What would be the correct query for this?
Maybe something like this: ``` select consumer_name, ( SELECT COUNT(*) FROM consumer AS tbl WHERE tbl.product='TTT' ) from consumer; ```
Try ``` select consumer_name, COUNT(your_column_name which you want to count) from consumer WHERE tbl.product='TTT' group by colusumer_id; ```
How to select count of particular rows in Oracle?
[ "sql", "database", "oracle" ]
I have a **view** that takes columns from the question table and the answer table. The question table has an id and question text, while the answer table has an id, a question\_id (FK from the question table) and answer text. ``` SELECT QuestionID, QuestionTxt, [1], [2], [3], [4] FROM (SELECT ROW_NUMBER() OVER (PARTITION BY QuestionID ORDER BY newid()) AnswerInQuestionID, a.AnswerTxt, q.QuestionTxt, q.QuestionId FROM dbo.TblQuestion q JOIN dbo.TblAnswer a ON q.QuestionId = a.answer_question_id) A PIVOT (MAX(a.AnswerTxt) FOR AnswerInQuestionID IN ([1], [2], [3], [4])) AS piv ``` With this SQL, I am able to randomize the answers, but it only shows the text. I want the answer text together with its id.
It's hard to tell from your question where you are hoping to get the AnswerID. If you want it in a single column, you will need to use @StayPuft's answer. If you want it as part of the text column, you can use the SQL below: ``` SELECT questionText, [1], [2], [3] FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY QuestionID ORDER BY newid()) AnswerInQuestionID, CAST(AnswerID AS VARCHAR(20)) + ' - ' + answerTxt AS answerTxt, QuestionText FROM questions q JOIN answers a ON q.QuestionID=a.answer_question_id ) A PIVOT ( MAX(answerTxt) FOR AnswerInQuestionID IN ([1], [2], [3] ) ) as piv ``` If you want them as three separate columns, you can use the SQL below: ``` SELECT questionText, LEFT([1], 1) AS AnswerID1, RIGHT([1], LEN([1]) - 1) AS AnswerText1, LEFT([2], 1) AS AnswerID2, RIGHT([2], LEN([2]) - 1) AS AnswerText2, LEFT([3], 1) AS AnswerID3, RIGHT([3], LEN([3]) - 1) AS AnswerText3 FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY QuestionID ORDER BY newid()) AnswerInQuestionID, CAST(AnswerID AS VARCHAR(20)) + answerTxt AS answerTxt, QuestionText FROM questions q JOIN answers a ON q.QuestionID=a.answer_question_id ) A PIVOT ( MAX(answerTxt) FOR AnswerInQuestionID IN ([1], [2], [3] ) ) as piv ``` EDIT: Added CHARINDEX() function and pipes ('|') to allow for varying number length. 
There are probably cleaner ways of doing this, but this worked for me: ``` SELECT questionText, LEFT([1], CHARINDEX('|', [1]) -1) AS AnswerID1, RIGHT([1], LEN([1]) - CHARINDEX('|', [1])) AS AnswerText1, LEFT([2], CHARINDEX('|', [2]) -1) AS AnswerID2, RIGHT([2], LEN([2]) - CHARINDEX('|', [2])) AS AnswerText2, LEFT([3], CHARINDEX('|', [3]) -1) AS AnswerID3, RIGHT([3], LEN([3]) - CHARINDEX('|', [3])) AS AnswerText3 FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY QuestionID ORDER BY newid()) AnswerInQuestionID, CAST(AnswerID AS VARCHAR(20)) +'|'+ answerTxt AS answerTxt, QuestionText FROM questions q JOIN answers a ON q.QuestionID=a.answer_question_id ) A PIVOT ( MAX(answerTxt) FOR AnswerInQuestionID IN ([1], [2], [3] ) ) as piv ```
First let me say that SQL Fiddle is awesome! And if you wanted to display the answer ID - this is what I did: ``` SELECT questionText, [1], [2], [3], answerID FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY QuestionID ORDER BY newid()) AnswerInQuestionID, answerTxt, QuestionText, answerID FROM questions q JOIN answers a ON q.QuestionID=a.answer_question_id ) A PIVOT ( MAX(answerTxt) FOR AnswerInQuestionID IN ([1], [2], [3] ) ) as piv ```
select id with text from table into view
[ "sql", "sql-server" ]
I have a problem with the new LEAD OLAP function in SQL Server 2012. ``` CREATE TABLE Atext (id int, bez varchar(10), von date); GO INSERT INTO Atext VALUES (1, 't1', '2001-01-01'), (1, 't2', '2012-01-01'), (2, 'a1', '2020-01-01'), (2,'a1' , '2030-01-01'), (2, 'b', '2040-05-01'), (2, 'a3', '2989-05-01'); GO SELECT id, bez, von, lead(von,1,0) over (partition by id ORDER BY von) -1 as bis FROM Atext order by id, Von ``` The select query throws an error: ``` Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date ``` Why is there a restriction in terms of the datetime data type? I know a workaround, but it is not very nice: ``` SELECT id, bez, CAST(vonChar AS DATE) AS Von, CASE WHEN bisChar <> '0' THEN (DATEADD(DAY,-1,(CAST(( CASE WHEN bisChar <> '0' THEN vonChar ELSE NULL END)AS DATE)) )) ELSE NULL /*'9999-12-31'*/ END AS Bis FROM ( SELECT id, bez, vonChar , lead(vonChar,1,0) over (partition BY id ORDER BY vonChar) AS bisChar FROM ( SELECT id, bez, CAST(von AS VARCHAR(10)) vonChar FROM Atext) tab ) tab2 ORDER BY id, Von ``` Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)
You can use DATEADD() in this query. Also, the default value of LEAD can't be `0` for this data type, so I've changed it to NULL; you could also use any DATE constant. ``` SELECT id, bez, von, DATEADD(DAY,-1,lead(von,1,NULL) over (partition by id ORDER BY von)) as bis FROM Atext order by id, Von ``` `SQLFiddle demo`
Same as all the other answers but with `DATEADD` as an argument of lead: ``` SELECT id, bez, von, lead(DATEADD(DAY,-1, von),1, null) over (partition by id order by von) from Atext order by id, Von ``` [SQLFiddle](http://sqlfiddle.com/#!6/ed02a/20) example
sql server lead - problems with date
[ "sql", "sql-server", "date", "lead" ]
I have a table 'DEMO' like below, and I need a specific result in my select query: ``` PID Is_Viewed 1 False 1 False 1 False 1 False 2 True 2 False ``` Now I need to select the records so that if a PID has even a single 'True' value the result will be 1, else 0, in the select statement. I have tried a lot but am not getting the desired result... My query is: ``` /// select statement SELECT distinct PId,CASE WHEN EXISTS(Select Is_Viewed from DEMO where Is_viewed=1) then 1 else 0 end as Is_viewed FROM DEMO ``` I want the result like: ``` PId Is_Viewed 1 0 2 1 ```
Maybe something like this: ``` SELECT PId, MAX(CASE WHEN Is_Viewed='True' THEN 1 ELSE 0 END) AS Is_Viewed FROM DEMO GROUP BY PId ``` **Edit** Considering the data that you have supplied. ``` DECLARE @tbl TABLE(PID INT,Is_Viewed VARCHAR(10)) INSERT INTO @tbl VALUES (1,'False'), (1,'False'), (1,'False'), (1,'False'), (2,'True'), (2,'False') ``` With this query ``` SELECT PId, MAX(CASE WHEN Is_Viewed='True' THEN 1 ELSE 0 END) AS Is_Viewed FROM @tbl AS tbl GROUP BY PId ``` The output is this: ``` PId Is_Viewed 1 0 2 1 ```
I guess that `Is_Viewed` is defined as a `bit` type since SQL server doesn't have a BOOLEAN type. Then try this: ``` SELECT PID, MAX(CAST(Is_Viewed as INT)) as Is_Viewed FROM T GROUP BY PID ``` `SQLFiddle demo`
How to get only specified values record from multiple values of one Record in sql server
[ "sql", "sql-server", "database", "stored-procedures" ]
Good afternoon! I'm having trouble listing the last two records for each idmicro. Ex: ``` idhist idmicro idother room unit Dtmov 100 1102 0 8 coa 2009-10-23 10:40:00.000 101 1102 0 1 coa 2009-10-28 10:40:00.000 102 1102 0 2 dib 2008-10-24 10:40:00.000 103 1201 0 6 diraf 2008-10-23 10:40:00.000 104 1201 0 7 diraf 2009-10-21 10:40:00.000 105 1201 0 4 dimel 2008-10-22 10:40:00.000 ``` The result would look like this: ``` idhist idmicro idoutros room unit Dtmov 101 1102 0 1 coa 2009-10-28 10:40:00.000 102 1102 0 2 dib 2008-10-24 10:40:00.000 103 1201 0 6 diraf 2008-10-22 10:40:00.000 104 1201 0 7 diraf 2009-10-21 10:40:00.000 ``` I'm starting to delve into SQL and am having trouble finding this solution. Sorry, and thank you. EDIT: I am using SQL Server, and I have not written a query yet. Yes! It is based on the date and time.
You can do the same thing with a nested `SELECT` statement. ``` SELECT * FROM ( SELECT row_number() OVER ( PARTITION BY idmicro ORDER BY idhist ) AS ind ,* FROM data ) AS initialResultSet WHERE initialResultSet.ind < 3 ``` Here is a sample [SQLFiddle](http://sqlfiddle.com/#!3/d3e79/5) showing how this query works.
``` WITH etc AS ( SELECT * ,row_number() OVER ( PARTITION BY idmicro ORDER BY idhist ) AS r ,count(*) OVER ( PARTITION BY idmicro ORDER BY idhist ) AS c FROM TABLE ) SELECT * FROM etc WHERE r > c - 2 ```
List the last two records for each id
[ "sql", "sql-server" ]
I have three temp tables in SQL: Temp1 ``` UserID Score_1 123 100 456 200 ``` Temp2 ``` UserID Score_2 456 300 ``` Temp3 ``` UserID Score_3 123 400 789 500 ``` I want to combine all three tables into one like this: ``` UserID Score_1 Score_2 Score_3 123 100 NULL 400 456 200 300 NULL 789 NULL NULL 500 ``` Is there any way to achieve this through SQL? I tried various joins and even some combinations of unions and joins, but couldn't get the format right. ``` select P.Score_1 , U.Score_2 , A.Score_3 from #mytemp1 P join #mytemp2 U on P.UserID = U.UserID join #mytemp3 A on P.UserID = A.UserID ``` ``` select P.Score_1 , U.Score_2 , A.Score_3 from #mytemp1 P left join #mytemp2 U on P.UserID = U.UserID left join #mytemp3 A on P.UserID = A.UserID ``` ``` select P.Score_1 , U.Score_2 , A.Score_3 from #mytemp1 P right join #mytemp2 U on P.UserID = U.UserID right join #mytemp3 A on P.UserID = A.UserID ``` ``` select allData.UserID, P.Score_1 , U.Score_2 , A.Score_3 from (select UserID from #mytemp1 union select UserID from #mytemp2 union select UserID from #mytemp3 ) allData join #mytemp1 t1 ON allData.UserID = t1.UserID join #mytemp2 t2 ON allData.UserID = t2.UserID join #mytemp3 t3 ON allData.UserID = t3.UserID ``` Each table may contain a different number of rows; the row count can even be 0. Thanks in advance, Jonathon
What you are looking for is a FULL OUTER JOIN: ``` SELECT COALESCE(t1.UserID, t2.UserID, t3.UserID) as UserID, t1.Score_1, t2.Score_2, t3.Score_3 FROM Temp1 t1 FULL OUTER JOIN Temp2 t2 ON t1.UserID = t2.UserID FULL OUTER JOIN Temp3 t3 ON COALESCE(t1.UserID, t2.UserID) = t3.UserID ``` SQL fiddle [here](http://www.sqlfiddle.com/#!3/5d685/1) The difference between a FULL OUTER JOIN and a LEFT (or RIGHT) OUTER JOIN, is that the FULL join will permit some missing keys (`UserID` in this case) in *both* tables and still return results, while the LEFT and RIGHT variants require at least one of the tables to contain all the keys (which table depends on if you use LEFT or RIGHT). Note the use of COALESCE on the second join - this is required because we won't know which table (if any) may have a match for Temp3. Leaving this out would require guessing which one - and if a match actually occurs only in the other table, you'll end up with duplicate rows. Using COALESCE here avoids the need to make that guess.
Try this: ``` SELECT COALESCE(T1.UserID,T2.UserID,T3.UserID) as UserID,T1.Score_1,T2.Score_2,T3.Score_3 FROM Temp1 T1 FULL OUTER JOIN Temp2 T2 ON T1.UserID=T2.UserID FULL OUTER JOIN Temp3 T3 ON T1.UserID=T3.UserID ``` Result: ``` USERID SCORE_1 SCORE_2 SCORE_3 123 100 (null) 400 456 200 300 (null) 789 (null) (null) 500 ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/c6b0d/1). If you want to replace `NULL` values with 0 you can use `ISNULL`: ``` SELECT COALESCE(T1.UserID,T2.UserID,T3.UserID) as UserID,ISNULL(T1.Score_1,0) as Score_1,ISNULL(T2.Score_2,0) as Score_2,ISNULL(T3.Score_3,0) as Score_3 FROM Temp1 T1 FULL OUTER JOIN Temp2 T2 ON T1.UserID=T2.UserID FULL OUTER JOIN Temp3 T3 ON T1.UserID=T3.UserID ``` Example in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/c6b0d/4).
Combine columns of three tables
[ "sql", "sql-server", "sql-server-2008", "sql-server-2012" ]
Just a quick question I need to ask: my MySQL table is using the MyISAM storage engine, and looking at some previous questions, they say queries under this engine are automatically committed (no transactions allowed, only auto-commit). Now, does that mean that if I do the following query: ``` UPDATE `ExampleTable` SET `ExampleField` += '50' WHERE ...; UPDATE `ExampleTable2` SET `ExampleField2` -= '50' WHERE ...; ``` it will either succeed (and autocommit/update both) or fail and update neither? Or is my definition of a query incorrect, and MyISAM will only autocommit one command at a time? So I cannot do the above query reliably under MyISAM? (Bonus question) If so, I have heard of the InnoDB engine, which supports transactions. Could I use that instead? How much speed would I be losing in return for reliable queries? Thanks for any help.
> Basically what i'm asking: Is the code above one query which MYISAM will autocommit, or is that SQL treated as two queries? In MyISAM, the code you show is treated as two queries. Any concurrent client could see the data changed in progress. If you want *transaction isolation* so that no concurrent thread can see the data until you commit, then you need to use InnoDB. I always recommend InnoDB as a default choice over MyISAM. InnoDB has been the default storage engine since MySQL 5.5 (circa 2010). There are a few cases where MyISAM still wins, but they are fewer and fewer. InnoDB when tuned well performs *better* than MyISAM. Even [this blog from 2007](http://www.mysqlperformanceblog.com/2007/01/08/innodb-vs-myisam-vs-falcon-benchmarks-part-1/) shows benchmark results that InnoDB is on par with, or faster than, MyISAM under most workloads. InnoDB has continued to be improved since then, while MyISAM has been stagnant and [is gradually being phased out](http://www.tocker.ca/2014/05/22/mysql-soon-to-store-system-tables-in-innodb.html). Performance is important, but to me, *not corrupting your data* is even more important. MyISAM is susceptible to losing data in a crash. InnoDB has automatic crash recovery. MyISAM also fails to support atomic changes. For example, if you run an UPDATE that takes 5 seconds, and 2.5 seconds into it you kill the query, about half of your rows have been changed and the rest have not. With InnoDB, this will never happen. P.S. The operators `+=` and `-=` are [not supported in MySQL](http://dev.mysql.com/doc/refman/5.6/en/non-typed-operators.html) (neither are they supported in any other SQL database I know of). So strictly speaking, neither of the queries you show would do anything anyway, except return an error.
Yes, you are right if you do this: ``` UPDATE `ExampleTable` SET `ExampleField` += '50' WHERE ...; UPDATE `ExampleTable2` SET `ExampleField2` -= '50' WHERE ...; ``` it is something like: ``` START TRANSACTION; UPDATE `ExampleTable` SET `ExampleField` += '50' WHERE ...; COMMIT; START TRANSACTION; UPDATE `ExampleTable2` SET `ExampleField2` -= '50' WHERE ...; COMMIT; ``` So the first query may be executed succesfully and the second one may return some error. If you need to execute both queries or nothing you should choose some engine that support transactions (The most popular is InnoDB) and send this: ``` START TRANSACTION; UPDATE `ExampleTable` SET `ExampleField` += '50' WHERE ...; UPDATE `ExampleTable2` SET `ExampleField2` -= '50' WHERE ...; COMMIT; ``` Or another option is join both queries into one and in this case you can use MyISAM engine and you will not lost any data.
MySQL Transaction/Commit Query
[ "mysql", "sql", "myisam" ]
I've been struggling with altering my query for days now and I just can't get the result I want. I'm kind of hoping someone can help me out with how to change my query. I have the following query now: ``` SELECT t.tId, t.tName, a.aId, a.aName, u.uId, u.uName FROM tableT t LEFT JOIN t_a ta USING (tId) LEFT JOIN t_u tu USING (tId) LEFT JOIN tableA a ON a.aId = ta.aId LEFT JOIN tableU u ON u.uId = tu.uId ``` It returns rows from tableT with matched rows in tableA and tableU. Now I have to change it to get a result where all rows from tableA are returned as well. If I'm not mistaken, I want a FULL OUTER JOIN with tableA and a LEFT JOIN with tableU. (I'm aware that FULL OUTER JOIN doesn't exist in MySQL, which makes it even more confusing for me.) Table t\_a contains many-to-many relations and table t\_u contains many-to-one relations from tableT's perspective. In short, when I search for a value, I want the following rows to be returned: Any tableT row containing the value **and** matched rows from tableU **and** matched rows from tableA. Any other (unmatched) tableA row containing the value. No other (unmatched) rows from tableU. Values needed before: ``` tableT tableA tableU ANY ANY ANY ANY NULL ANY ANY NULL NULL ``` Values needed now: ``` tableT tableA tableU ANY ANY ANY ANY NULL ANY ANY NULL NULL **NULL** **ANY** **NULL** ``` My apologies if I'm asking for help under the wrong circumstances. Thanks in advance for any help. I have a fiddle here: <http://sqlfiddle.com/#!2/f6301/11> It returns all desired rows except for the row containing: ``` TID TNAME AID ANAME UID UNAME NULL NULL 4 aName4 NULL NULL ```
If I'm not mistaken the query I need is as follows: ``` SELECT t.tId, t.tName, a.aId, a.aName, u.uId, u.uName FROM tableT t LEFT JOIN t_a ta USING (tId) LEFT JOIN t_u ti USING (tId) LEFT JOIN tableA a USING (aId) LEFT JOIN tableU u USING (uId) UNION SELECT t.tId, t.tName, a.aId, a.aName, NULL, NULL FROM tableT t RIGHT JOIN t_a ta USING (tId) RIGHT JOIN tableA a USING (aId) WHERE t.tId IS NULL ``` But I'm sure it can be optimized or different. It will/should now return all rows from tableT with possible related tableA data and possible related tableU data and in addition any other tableA rows.
I think what you want is to start with all the `tid`s from `tableT` and `t_a`: ``` FROM ((SELECT t.tID from tableT) UNION (SELECT ta.tID from tableA t_a) ) ids left outer join tableT t USING (tId) LEFT JOIN t_i ti USING (tId) LEFT JOIN tableA a ON a.aId = ta.aId LEFT JOIN tableU u ON u.uId = ti.uId ```
mysql combine multiple joins
[ "mysql", "sql", "join" ]
I have data in **table A** as shown below. I want to group memberID as one set and sort the Date\_Time within each set, as shown in the snapshot below. ``` Date_Time PersonID Status 4/2/14 10:15 AM ghi Closed 4/1/14 9:15 AM ghi Cancelled 4/1/14 11:00 AM abc Cancelled 4/2/14 8:12 AM def Closed 4/1/14 9:17 AM def Hold 4/3/14 2:17 PM abc Hold 4/2/14 8:30 AM abc Open 4/3/14 8:16 AM jkl Closed 4/1/14 12:10 PM jkl Open 4/1/14 11:30 AM abc Hold ``` In the final example snapshot attached below, you will see that **memberID *ghi***'s first-row **date time *‘4/1/2014 9:15:00 AM’*** is earlier than **memberID *def***'s first-row **Date\_Time *‘4/1/2014 9:17:00 AM’***, highlighted in yellow. That is the main reason the **memberID *ghi*** set sorts first, followed by the **memberID *def*** set and then the **memberID *abc*, *jkl*** sets, etc... ![enter image description here](https://i.stack.imgur.com/10tiV.jpg) Could someone please help me write the MS SQL query to achieve the final result? Thank you so much for your help.
If I'm understanding your question correctly, you need to `join` the table back to itself using the `min` aggregate to establish a sort order: ``` select t.* from yourtable t join ( select personid, min(date_time) min_date_time from yourtable group by personid ) t2 on t.personid = t2.personid order by t2.min_date_time, t.date_time ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/144b8/3)
Here's [another way](http://sqlfiddle.com/#!3/a0717/1): ``` SELECT Date_Time, PersonID, Status FROM dbo.atable ORDER BY MIN(Date_Time) OVER (PARTITION BY PersonID), PersonID, Date_Time ; ``` The approach, though, is same as in [sgeddes's answer](https://stackoverflow.com/a/23925821/297408), it just has different syntax. It calculates minimum `Date_Time` values using `MIN(...) OVER (...)`. That makes a join to a derived table completely unnecessary. One other little difference is that I have added `PersonID` to the ORDER BY clause to make sure that people with identical minimum `Date_Time` values do not have their rows mixed up.
How to write a sql query to Sort or Group the dataset
[ "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2012" ]
Specifically, can I do this? ``` CREATE PROC AutoDestructiveStoredProcedure AS DROP PROC AutoDestructiveStoredProcedure PRINT 'Still alive.' GO ``` Is it a bad practice? What is the expected behavior? Does it change based on implementation? What would be the difference between executing this in SQL Server, MySQL and Oracle?
Yes, this will work -- at least in SQL Server 2008 R2. It continues executing until the end of the procedure and after that the procedure is gone. Is it bad practice? I think so. In my mind, the main reason is that it mixes DDL with DML, imposing unexpected side effects on what is normally a well-understood operation (calling a stored procedure). Unfortunately, I can't answer your question with respect to how it works on MySQL or Oracle.
Yes you can do this not sure why you would want to ``` CREATE PROC AutoDestructiveStoredProcedure AS PRINT 'Being killed' DROP PROC AutoDestructiveStoredProcedure PRINT 'Still alive.' exec createAutoDestruct print 'Alive Again' GO create proc createAutoDestruct as Exec sp_executesql N'CREATE PROC AutoDestructiveStoredProcedure AS PRINT ''Being killed'' DROP PROC AutoDestructiveStoredProcedure PRINT ''Still alive.'' exec createAutoDestruct print ''Alive Again'' ' GO AutoDestructiveStoredProcedure AutoDestructiveStoredProcedure ```
Can a SQL Stored Procedure drop itself and continue execution?
[ "sql" ]
I have a query where I am looking into three different tables, and for the purposes of this post I only need to see three columns: RecordID, FieldType and TranID. A RecordID can have multiple field types, and each field type will have a distinct TranID. ![enter image description here](https://i.stack.imgur.com/m61z0.png) What I am trying to do is grab the entire set of data if any of the FieldTypes = 'CO': ``` SELECT Header.RecordID, Detail.FieldType, Header.TranID FROM Header INNER JOIN (select * from Detail where fieldtype = 'CO') as Detail ON Header.RecordID = Detail.RecordID INNER JOIN TranDef ON Header.TranID = TranDef.TranID WHERE (Header.CalendarDate BETWEEN GETDATE() - 10 AND GETDATE()) ``` But this is not working. I'm only getting one row of data back, whereas, as I said, I want all the data for the record, not just the one row. What am I doing wrong here?
Your query does not work because it limits the rows selected from `Detail` to those with `fieldtype = 'CO'`, and inner joins to it. It looks like you are looking for a `WHERE EXISTS` query: ``` SELECT h.RecordID, d.FieldType, h.TranID FROM Branch b INNER JOIN Header h ON b.BranchID = h.BranchID INNER JOIN Detail d ON h.RecordID = d.RecordID WHERE (h.CalendarDate BETWEEN GETDATE() - 10 AND GETDATE()) AND EXISTS ( SELECT * FROM Detail dd WHERE dd.RecordID = h.RecordID AND dd.fieldtype = 'CO' ) ``` The idea is to join to all `Detail` records, and then filter the result based on the existence or non-existence of `'CO'` records among the `Detail` rows linked to the corresponding `Header` row.
You can try including the below in the WHERE criteria: ``` SELECT RecordID, FieldType, TranID FROM Branch br, Header hr, Region rgn WHERE br.FieldType = 'CO' and hr.FieldType = 'CO' and rgn.FieldType = 'CO' ```
Grab all rows for a Records where a single instance occurs in a different row for the record
[ "sql", "sql-server-2008", "t-sql" ]
I get the error Invalid column name '@ColumnNames' on the last line (the insert clause). Any idea why? ``` Declare @ColumnNames varchar(2000) Declare @OrderId int set @OrderId = 110077 select @ColumnNames = COALESCE(@ColumnNames + ', ', '') + COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME='OrderItems' Insert into dbo.OrderHistory(@ColumnNames) select * from dbo.[Order] where ID= @OrderId ```
`@ColumnNames` is a string of text, not a list of columns. As a result, when you try to use it as a list of column names in the insert query, it fails. You can use dynamic SQL to do what you desire, like so: ``` declare @insertquery nvarchar(1000) set @insertquery = N'insert into dbo.orderhistory(' + @ColumnNames + ') select * from dbo.[Order] where ID=' + cast(@OrderId as nvarchar(10)) exec sp_executesql @insertquery ```
You should use dynamic SQL. And don't forget to perform data casting when constructing the query string! ``` Declare @ColumnNames varchar(2000) Declare @OrderId int set @OrderId = 110077 select @ColumnNames = COALESCE(@ColumnNames + ', ', '') + COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME='OrderItems' Declare @DynSqlStatement varchar(max); set @DynSqlStatement = 'Insert into dbo.OrderHistory('+ @ColumnNames + ') select * from dbo.[Order] where ID= ' + cast(@OrderId as varchar(10)); exec( @DynSqlStatement ); ```
Selecting column name dynamically in an insert query
[ "sql", "sql-server", "t-sql", "dynamic-sql" ]
Why do I get this message if the query seems right: **"Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated."** When I run the subquery on its own it works fine, so what could be happening? This is the query: ``` UPDATE llantas_dictamen_scrap SET clave_operador = (SELECT REPLACE(scrap, '-', '') as clave_operador FROM (SELECT RIGHT(clave_operador, CHARINDEX('-',REVERSE(clave_operador), 1)) as scrap FROM llantas_dictamen_scrap) t ) ``` Of course I'm returning more than one value; what is that supposed to mean?
``` UPDATE llantas_dictamen_scrap SET clave_operador = RIGHT(clave_operador, CHARINDEX('-',REVERSE(clave_operador), 1) -1) WHERE CHARINDEX('-',clave_operador, 1) > 0 ``` This one adjusts the position passed to RIGHT, excluding the '-' we found. There's no need to call REPLACE, because once we know the position of the last '-', we can just exclude it by taking the right-most characters (its position minus 1). As for the error: you can't return multiple values for a single row. You are updating multiple rows, yes, but for each row you must give clave_operador a single value.
You are setting a single column value equal to a subquery that returns more than one value. If you want to update using the same query you're using, you have to specify which row you want to use. If you select top 1 ... with an order by, that should give you a usable single subquery result for your update statement.
Subquery returned more than one value... in sql query, What that means?, and what is wrong?
[ "sql", "sql-server" ]
My db table is called transactions and is like this: ``` Name | Date (DateTime) | Type | Stock | Volume | Price | Total Tom 2014-05-24 12:00:00 Sell Barclays 100 2.2 220.0 Bob 2014-04-13 15:00:00 Buy Coca-Cola 10 12.0 120.0 varchar DateTime varchar varchar int float float ``` My initial problem was to remove from the table ALL the transactions that belong to a user whose first transaction is later than a certain threshold. My query was: ``` DELETE FROM transactions WHERE name NOT IN (SELECT name FROM transactions2 WHERE date < CAST('2014-01-01 12:00:00.000' as DateTime)); Query OK, 35850 rows affected (3 hours 5 min 28.88 sec) ``` I think this is a poor solution: I had to duplicate the table to avoid deleting from the same table from which I am reading, and the execution took quite a long time (3 hours for a table containing ~170k rows). Now I am trying to delete ALL the transactions that belong to a user whose latest transaction happened before a certain threshold date. ``` DELETE FROM transactions WHERE name IN (SELECT name FROM transactions HAVING max(date) < CAST('2015-01-01 12:00:00.000' as DateTime) ); ``` Sadly, the subquery finds only one result: ``` SELECT name FROM transactions HAVING max(date) < CAST('2015-01-01 12:00:00.000' as DateTime); +------------+ | name | +------------+ | david | +------------+ ``` I guess I am getting only one result because of the max() function. I am not an expert in SQL but I understand quite well what I need in terms of sets and logic. I would be really happy to have suggestions on how to rewrite my query. EDIT: Here is a sqlfiddle with the schema and some data: <http://sqlfiddle.com/#!2/389ede/2> I need to remove ALL the entries for alex, because his last transaction happened before a certain threshold (let's say 1 Jan 2013). I don't need to delete tom's transactions because his latest one is later than 1 Jan 2013.
Your first query can be formulated as: *delete users from transactions for whom there does not exist a transaction before ?*. This is easy to transform to SQL: ``` delete from transactions t1 where not exists ( select 1 from transactions t2 where t1.name = t2.name and t2.date < ? ) ``` MySQL still does not support (AFAIK) deleting from a table that is referenced in a subquery, so we need to rewrite it as: ``` delete t1.* from transactions t1 left join transactions t2 on t1.name = t2.name and t2.date < ? where t2.name is null ``` date is a reserved word so you will have to quote it. Your second query can be solved the same way: delete from transactions where there does not exist a transaction after a certain date. I'll leave it as an exercise.
Alvin, here is a simplified scenario from your fiddle with dates: ``` CREATE TABLE transactions ( id int(11) NOT NULL AUTO_INCREMENT , name varchar(30) NOT NULL , value datetime NOT NULL , PRIMARY KEY (id) ) ENGINE=InnoDB; INSERT INTO transactions (name, value) VALUES ('alex', '2011-01-01 12:00:00') , ('alex', '2012-06-01 12:00:00'); ``` Let's investigate what happens in: ``` SELECT t1.name as t1_name, t1.value as t1_value , t2.name as t2_name, t2.value as t2_value FROM transactions t1 LEFT JOIN transactions t2 ON t1.name = t2.name T1_NAME T1_VALUE T2_NAME T2_VALUE alex January, 01 2011 12:00:00+0000 alex January, 01 2011 12:00:00+0000 alex January, 01 2011 12:00:00+0000 alex June, 01 2012 12:00:00+0000 alex June, 01 2012 12:00:00+0000 alex January, 01 2011 12:00:00+0000 alex June, 01 2012 12:00:00+0000 alex June, 01 2012 12:00:00+0000 ``` I.e. 4 rows. If we now add the join predicate: ``` SELECT t1.name as t1_name, t1.value as t1_value , t2.name as t2_name, t2.value as t2_value FROM transactions t1 LEFT JOIN transactions t2 ON t1.name = t2.name AND t2.value > CAST('2011-06-01 12:00:00.000' as DateTime) ``` this leaves us with two rows. If we change the time to '2012-06-01 12:00:00.000' we still have two rows due to the left join, but the t2 columns will be null. If we now add the WHERE clause: ``` SELECT t1.name as t1_name, t1.value as t1_value , t2.name as t2_name, t2.value as t2_value FROM transactions t1 LEFT JOIN transactions t2 ON t1.name = t2.name AND t2.value > CAST('2012-06-01 12:00:00.000' as DateTime) WHERE t2.name is null; ``` we still have two rows. With CAST('2011-06-01 12:00:00.000' as DateTime) there are no rows. Remember that the construction is equivalent to: ``` SELECT t1.name as t1_name, t1.value as t1_value FROM transactions t1 WHERE NOT EXISTS ( SELECT 1 FROM transactions t2 WHERE t1.name = t2.name AND t2.value > CAST('2012-06-01 12:00:00.000' as DateTime) ); ``` So, if no row exists for that name with value > '2012-06-01 12:00:00.000', we have a match. Does that clarify?
SQL query based on subquery. Retrieve transactions with data > threshold
[ "mysql", "sql", "datetime", "subquery", "conditional-statements" ]
I have a Postgres function in which I am appending values to a query. I have: ``` DECLARE clause text = ''; ``` and after appending I have something like: ``` clause = "and name='john' and age='24' and location ='New York';" ``` I append the above to the WHERE clause of the query I already have. While executing the query, the "`and`" that ends up right after "`where`" results in an error. How can I use `regexp_replace` to remove the first "and" from clause before appending it to the query?
You do not need regex: ``` clause = substr(clause, 5, 10000); clause = substr(clause, 5, length(clause)- 4); -- version for formalists ```
Instead of fixing `clause` after the fact, you could avoid the problem by using [`concat_ws`](http://www.postgresql.org/docs/9.1/static/functions-string.html) (concatenate with separator): ``` clause = concat_ws(' and ', $$name='john'$$, $$age='24'$$, $$location ='New York'$$) ``` will make `clause` equal to ``` "name='john' and age='24' and location ='New York'" ```
How to use regex replace in Postgres function?
[ "sql", "regex", "postgresql", "postgresql-9.1", "plpgsql" ]