I have two tables, clinical\_study and location\_countries, which both have an NCT\_ID (clinical trial) field in common. I have to pull data from both tables, applying a condition on the first table (Gender) and also checking for a country match in the second table. I managed to run the query below, but under the country field I am getting the UK *and* other countries: ``` select clinical_study.NCT_ID, clinical_study.BRIEF_SUMMARY, clinical_study.STUDY_TYPE, clinical_study.GENDER, location_countries.COUNTRY from clinical_study inner join location_countries ON clinical_study.NCT_ID=location_countries.NCT_ID where clinical_study.GENDER LIKE 'Male' or clinical_study.GENDER like 'Both' and location_countries.COUNTRY ='United Kingdom' ``` NCT\_ID.....BRIEF\_SUMMARY.....STUDY\_TYPE.....GENDER.....COUNTRY xys........xyz...............xyz............Both.......United Kingdom xys........xyz...............xyz............Male.......France xys........xyz...............xyz............Male.......United Kingdom xys........xyz...............xyz............Male.......Sweden Could you please advise if I am missing a trick here?
It looks like you need to add some brackets: ``` select clinical_study.NCT_ID, clinical_study.BRIEF_SUMMARY, clinical_study.STUDY_TYPE, clinical_study.GENDER, location_countries.COUNTRY from clinical_study inner join location_countries ON clinical_study.NCT_ID=location_countries.NCT_ID where (clinical_study.GENDER LIKE 'Male' or clinical_study.GENDER like 'Both') and location_countries.COUNTRY ='United Kingdom' ```
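The precedence issue is easy to reproduce in any SQL engine. Here is a minimal sketch using SQLite via Python's `sqlite3` module, with made-up rows mirroring the sample output (table and column names are shortened for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trials (gender TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO trials VALUES (?, ?)",
    [("Both", "United Kingdom"), ("Male", "France"),
     ("Male", "United Kingdom"), ("Male", "Sweden")],
)

# Without parentheses: AND binds tighter than OR, so this means
#   gender = 'Male' OR (gender = 'Both' AND country = 'United Kingdom')
unbracketed = conn.execute(
    "SELECT * FROM trials WHERE gender = 'Male' "
    "OR gender = 'Both' AND country = 'United Kingdom'"
).fetchall()

# With parentheses: only United Kingdom rows survive
bracketed = conn.execute(
    "SELECT * FROM trials WHERE (gender = 'Male' OR gender = 'Both') "
    "AND country = 'United Kingdom'"
).fetchall()

print(len(unbracketed), len(bracketed))  # 4 2
```

The unbracketed query returns every Male row regardless of country (four rows here), while the bracketed one keeps only the two United Kingdom rows.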
AND has precedence over OR, so it is evaluated first. You can use brackets to force the precedence you want: ``` where (clinical_study.GENDER LIKE 'Male' or clinical_study.GENDER like 'Both') and location_countries.COUNTRY ='United Kingdom' ```
SQL Server Join two tables with where condition on both tables
[ "sql", "sql-server", "join" ]
Context: the accesoirescondition field consists of three fixed condition values, which represent the state of the accessories of a product. I GROUP BY product.productid and want to see only the most serious accesoirescondition for each product. Desired situation: ``` PRODUCTID PRODUCTNAME ACCESOIRESID ACCESOIRESCONDITION 1 product1 2 defect 2 product2 3 working 3 product3 6 working ```
I would add a table, rather than a redundant field, for the conditioncode, like this: ``` CREATE TABLE product ( productid int, productname varchar(20) ); CREATE TABLE cond ( conditionid int, conditionname varchar(20) ); CREATE TABLE accesoires ( accesoiresid int, productid int, accesoiresname varchar(20), accesoirescondition int ); INSERT INTO product VALUES (1, 'product1'), (2, 'product2'), (3, 'product3'); INSERT INTO cond VALUES (1, 'defect'), (2, 'obsolete'), (3, 'working'); INSERT INTO accesoires VALUES (1, 1, 'accesoires1', 3), (2, 1, 'accesoires2', 1), (3, 1, 'accesoires3', 2), (4, 2, 'accesoires4', 3), (5, 3, 'accesoires5', 3), (6, 3, 'accesoires6', 2); ``` Then you can query like this: ``` SELECT p.productid, p.productname, a.accesoiresid, c.conditionname FROM product p JOIN accesoires a on p.productId = a.productId JOIN cond c on c.conditionid = a.accesoirescondition WHERE a.accesoirescondition = (SELECT MIN(accesoirescondition) FROM accesoires WHERE productId = p.productId ) ``` The result is: ``` PRODUCTID PRODUCTNAME ACCESOIRESID CONDITIONNAME 1 product1 2 defect 2 product2 4 working 3 product3 6 obsolete ``` Which looks a lot like your desired output, except for the AccesoiresId in line 2. But that can *never* be 3 with the data you provided (there *is* no record like that in Accesoires for productid = 2). <http://sqlfiddle.com/#!2/a1db44/4>
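The correlated-subquery approach above can be checked quickly outside a full RDBMS; here is a sketch running the same schema, sample data, and query in SQLite via Python's `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (productid INT, productname TEXT);
CREATE TABLE cond (conditionid INT, conditionname TEXT);
CREATE TABLE accesoires (accesoiresid INT, productid INT,
                         accesoiresname TEXT, accesoirescondition INT);
INSERT INTO product VALUES (1,'product1'),(2,'product2'),(3,'product3');
INSERT INTO cond VALUES (1,'defect'),(2,'obsolete'),(3,'working');
INSERT INTO accesoires VALUES
  (1,1,'accesoires1',3),(2,1,'accesoires2',1),(3,1,'accesoires3',2),
  (4,2,'accesoires4',3),(5,3,'accesoires5',3),(6,3,'accesoires6',2);
""")

# For each product, keep only the accessory whose condition id equals
# the minimum (most serious) condition id for that product.
rows = conn.execute("""
SELECT p.productid, p.productname, a.accesoiresid, c.conditionname
FROM product p
JOIN accesoires a ON p.productid = a.productid
JOIN cond c ON c.conditionid = a.accesoirescondition
WHERE a.accesoirescondition = (SELECT MIN(accesoirescondition)
                               FROM accesoires
                               WHERE productid = p.productid)
ORDER BY p.productid
""").fetchall()

for row in rows:
    print(row)
# (1, 'product1', 2, 'defect')
# (2, 'product2', 4, 'working')
# (3, 'product3', 6, 'obsolete')
```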
Try this query: ``` select a.productid , a.productname , b.accesoiresid , b.accesoirescondition from product a left join accesoires b on a.productid=b.productid inner join ( select d.productid, min(case when d.accesoirescondition = 'defect' then 1 when d.accesoirescondition = 'obsolete' then 2 when d.accesoirescondition = 'working' then 3 end) as severity from accesoires d group by d.productid ) c on b.productid = c.productid and c.severity = case when b.accesoirescondition = 'defect' then 1 when b.accesoirescondition = 'obsolete' then 2 when b.accesoirescondition = 'working' then 3 end ``` This will give you the most severe result for each of the products. `sqlfiddle demo`
Order values from join with group by
[ "sql" ]
Scenario: A company has many branches in many states, and a state may have more than one branch. Whenever an employee is transferred from one branch to another, an entry is made to a table like the following ``` | EID | DT | BRANCH | STATE | |-----|-------------|--------|-------| | 1 | 01-JAN-2000 | A | AA | | 1 | 01-JAN-2001 | B | AA | | 1 | 01-JAN-2002 | C | AA | | 1 | 01-JAN-2003 | D | AA | | 1 | 01-JAN-2004 | E | BB | | 1 | 01-JAN-2005 | F | BB | | 1 | 01-JAN-2006 | G | BB | | 1 | 01-JAN-2007 | H | BB | | 1 | 01-JAN-2008 | A | AA | | 1 | 01-JAN-2009 | B | AA | | 1 | 01-JAN-2010 | C | AA | | 1 | 01-JAN-2011 | D | AA | ``` The requirement is to find the duration for which an employee has been in a certain state. The output should be something like this ``` | STATE | MIN | MAX | Duration | |-------|-------------|-------------|-------------| | AA | 01-JAN-2000 | 01-JAN-2003 | 3 | | BB | 01-JAN-2004 | 01-JAN-2007 | 3 | | AA | 01-JAN-2008 | 01-JAN-2011 | 3 | ``` I can't seem to figure out how to do it in PL/SQL. The long way would be to use a loop to traverse each row and compute the duration, but is there a way to do it in PL/SQL without using loops? Here's a SQLFiddle `Demo`
Here is one of the approaches to get it done: ``` select max(z.state) as state , min(z.dt) as min_date /* main query */ , max(z.dt) as max_date , trunc((max(z.dt) - min(z.dt)) / 365) as duration from (select q.eid , q.dt /* query # 2*/ , state , sum(grp) over(order by q.dt) as grp from (select eid , dt , state /* query # 1*/ , case when state <> lag(state) over(order by dt) then 1 end as grp from t1 ) q ) z group by z.grp ``` Result: ``` STATE MIN_DATE MAX_DATE DURATION ----- ----------- ----------- ---------- AA 01-JAN-00 01-JAN-03 3 BB 01-JAN-04 01-JAN-07 3 AA 01-JAN-08 01-JAN-11 3 ``` [**SQLFiddle Demo**](http://www.sqlfiddle.com/#!4/e4442d/39) --- **Addendum #1**: Explanation of the query. To get the minimum and maximum dates we would simply apply a `group by` clause, but we can't, because there is a logical difference between the `AA` state before `BB` and the one after `BB`. So we have to separate them into different logical groups, and that's what the inner-most query (`/* query # 1*/`) and `/* query # 2*/` do. Query #1 finds the moments when state changes (comparing the current row's `state` with the previous one; the `lag() over()` function is used to reference the previous row in the data set), and query #2 forms the logical groups by calculating a running total of `grp` (the `sum() over()` analytic function is responsible for that).
Query #1 gives us: ``` EID DT STATE GRP ---------- ----------- ----- ---------- 1 01-JAN-2000 AA 1 01-JAN-2001 AA 1 01-JAN-2002 AA 1 01-JAN-2003 AA 1 01-JAN-2004 BB 1 --<-- moment when state changes 1 01-JAN-2005 BB 1 01-JAN-2006 BB 1 01-JAN-2007 BB 1 01-JAN-2008 AA 1 --<-- moment when state changes 1 01-JAN-2009 AA 1 01-JAN-2010 AA 1 01-JAN-2011 AA ``` Query #2 forms logical groups: ``` EID DT STATE GRP ---------- ----------- ----- ---------- 1 01-JAN-2000 AA 1 01-JAN-2001 AA 1 01-JAN-2002 AA 1 01-JAN-2003 AA 1 01-JAN-2004 BB 1 1 01-JAN-2005 BB 1 1 01-JAN-2006 BB 1 1 01-JAN-2007 BB 1 1 01-JAN-2008 AA 2 1 01-JAN-2009 AA 2 1 01-JAN-2010 AA 2 1 01-JAN-2011 AA 2 ``` Then, in main query, we are simply grouping by `GRP` to produce final output.
[SQL Fiddle](http://sqlfiddle.com/#!4/e4442d/86) ``` WITH groups AS ( SELECT t1.*, ROW_NUMBER() OVER ( ORDER BY dt ) - ROW_NUMBER() OVER ( PARTITION BY state ORDER BY dt ) AS grp FROM t1 ) SELECT state, MIN( dt ) AS first_date, MAX( dt ) AS last_date, TRUNC( ( MAX( dt ) - MIN( dt ) ) / 365 ) AS duration FROM groups GROUP BY state, grp ORDER BY first_date ``` **[Results](http://sqlfiddle.com/#!4/e4442d/86/0)**: ``` | STATE | FIRST_DATE | LAST_DATE | DURATION | |-------|--------------------------------|--------------------------------|----------| | AA | January, 01 2000 00:00:00+0000 | January, 01 2003 00:00:00+0000 | 3 | | BB | January, 01 2004 00:00:00+0000 | January, 01 2007 00:00:00+0000 | 3 | | AA | January, 01 2008 00:00:00+0000 | January, 01 2011 00:00:00+0000 | 3 | ``` As for how it works: * The `groups` sub-query selects each row and allocates it to a group by subtracting the number of rows there have been of the row's `state` from the total number of rows of any `state` - the result is that: + Any sequential series of rows with the same state will have the same group number; and + For any given state, as the date increases then each group of rows will have an increasing group number (this does not necessarily apply when comparing groups of different states but this does not matter given the grouping used in the final bit). * The final query then groups everything on `state` and `grp` and finds the `min`, `max` and `difference` for the dates within each group.
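The double `ROW_NUMBER()` trick above is portable to any engine with window functions. Here is a sketch of the same gaps-and-islands query in SQLite (3.25 or later) via Python's `sqlite3` module, using ISO date strings and `julianday()` in place of Oracle's native date arithmetic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (eid INT, dt TEXT, branch TEXT, state TEXT)")

# Same sample data as the question: 4 years AA, 4 years BB, 4 years AA
dates = [f"{y}-01-01" for y in range(2000, 2012)]
states = ["AA"] * 4 + ["BB"] * 4 + ["AA"] * 4
branches = list("ABCDEFGHABCD")
conn.executemany("INSERT INTO t1 VALUES (1, ?, ?, ?)",
                 zip(dates, branches, states))

rows = conn.execute("""
WITH groups AS (
  SELECT t1.*,
         ROW_NUMBER() OVER (ORDER BY dt)
       - ROW_NUMBER() OVER (PARTITION BY state ORDER BY dt) AS grp
  FROM t1
)
SELECT state, MIN(dt), MAX(dt),
       CAST((julianday(MAX(dt)) - julianday(MIN(dt))) / 365 AS INT) AS duration
FROM groups
GROUP BY state, grp
ORDER BY MIN(dt)
""").fetchall()

for row in rows:
    print(row)
# ('AA', '2000-01-01', '2003-01-01', 3)
# ('BB', '2004-01-01', '2007-01-01', 3)
# ('AA', '2008-01-01', '2011-01-01', 3)
```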
How to get min max date per row type
[ "sql", "oracle", "plsql" ]
I have a problem with SQL: for rows that share the same product\_id, I want to select the one with the bigger warehouse\_product\_id. I have tried but couldn't get it to work; please help. Table: <http://postimg.org/image/mkavxnmg9/> What I want: <http://postimg.org/image/8msomu47l/>
You seem to be looking for a greatest-n-per-group solution. For a given `product_id` and `warehouse_id` pair you need to get the max `warehouse_product_id`: ``` SELECT t1.* FROM aTable t1 JOIN ( SELECT product_id, warehouse_id, MAX(warehouse_product_id) maxVal FROM aTable GROUP BY product_id, warehouse_id ) t2 ON t1.product_id = t2.product_id AND t1.warehouse_id = t2.warehouse_id AND t1.warehouse_product_id = t2.maxVal ``` Alternatively: ``` SELECT t1.* FROM aTable t1 LEFT JOIN aTable t2 ON t1.product_id = t2.product_id AND t1.warehouse_id = t2.warehouse_id AND t1.warehouse_product_id < t2.warehouse_product_id WHERE t2.warehouse_product_id IS NULL ``` These solutions should work on most DBMS.
``` SELECT warehouse_product_id FROM [your table] WHERE product_id = 'value' ``` unless I missed what you're getting at, which is entirely possible
Want to select rows with the same product_id but the bigger id
[ "sql" ]
Is there any difference between `INT PRIMARY KEY` and `INTEGER PRIMARY KEY` when defining the schema for a table? When INT PRIMARY KEY is used, I get a `sqlite_autoindex` *thing* generated; with INTEGER PRIMARY KEY, I get a `sqlite_sequence` table generated. What's the difference, and what side effects can the first and second variants have?
Yes, there is a difference: `INTEGER` is a special case in SQLite, when the database does *not* create a separate primary key, but reuses the `ROWID` column instead. When you use `INT` (or any other type that "maps" to `INTEGER` internally) a separate primary key is created. That is why you see `sqlite_autoindex` created for the `INT` primary key, and no index created for the one of type `INTEGER`: SQLite reuses a built-in indexing structure for the integer primary key, rendering the autoindex unnecessary. That is why the `INTEGER` primary key is more economical, both in terms of storage and in terms of performance. See [this link for details](http://www.sqlite.org/lang_createtable.html#rowid).
UPDATE: SQLite's ROWID column is now a [64-bit integer](https://www.sqlite.org/autoinc.html): > In SQLite, a column with type INTEGER PRIMARY KEY is an alias for the ROWID (except in WITHOUT ROWID tables) which is always a 64-bit signed integer. It is all explained in [SQLite 3 documentation](https://sqlite.org/datatype3.html): > **2.0 The INTEGER PRIMARY KEY** > > One exception to the typelessness of SQLite is a column whose type is INTEGER PRIMARY KEY. (And you must use "INTEGER" not "INT". A column of type INT PRIMARY KEY is typeless just like any other.) INTEGER PRIMARY KEY columns must contain a 32-bit signed integer. Any attempt to insert non-integer data will result in an error. > > INTEGER PRIMARY KEY columns can be used to implement the equivalent of AUTOINCREMENT. If you try to insert a NULL into an INTEGER PRIMARY KEY column, the column will actually be filled with an integer that is one greater than the largest key already in the table. Or if the largest key is 2147483647, then the column will be filled with a random integer. Either way, the INTEGER PRIMARY KEY column will be assigned a unique integer. You can retrieve this integer using the sqlite\_last\_insert\_rowid() API function or using the last\_insert\_rowid() SQL function in a subsequent SELECT statement.
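The difference described above is easy to observe from Python's built-in `sqlite3` module; a minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_int (id INT PRIMARY KEY, v TEXT)")
conn.execute("CREATE TABLE t_integer (id INTEGER PRIMARY KEY, v TEXT)")

# INT PRIMARY KEY gets a separate automatic index;
# INTEGER PRIMARY KEY reuses the rowid and needs none.
indexes = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
print(indexes)  # ['sqlite_autoindex_t_int_1']

# INTEGER PRIMARY KEY is an alias for rowid, so an omitted id is auto-filled...
conn.execute("INSERT INTO t_integer (v) VALUES ('a')")
print(conn.execute("SELECT id, rowid FROM t_integer").fetchone())  # (1, 1)

# ...while an omitted INT PRIMARY KEY simply stays NULL (a well-known
# SQLite quirk: NULLs are allowed in non-INTEGER primary key columns).
conn.execute("INSERT INTO t_int (v) VALUES ('a')")
print(conn.execute("SELECT id, rowid FROM t_int").fetchone())  # (None, 1)
```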
Difference between INT PRIMARY KEY and INTEGER PRIMARY KEY SQLite
[ "sql", "sqlite" ]
I'm trying to get an SQL query to select all records from last month. I have this which, from looking in numerous places, should be exactly what I need: ``` SELECT * FROM orders WHERE DATEPART(yy,DateOrdered) = DATEPART(yy,DATEADD(m,-1,GETDATE())) AND DATEPART(m,DateOrdered) = DATEPART(m,DATEADD(m,-1,GETDATE())) ``` However I keep getting the error: ``` #1305 - FUNCTION retail.DATEPART does not exist ``` The query I'm using is word for word from other answers on here, yet I'm getting this error. Thank you for any help -Tom
DATEPART is a Transact-SQL function, usable with Microsoft SQL Server. From the question tags, I assume you are using MySQL as your Database Management System. Take a look at [MySQL DATEDIFF](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_datediff)
That would not work in MySQL. To translate it to MySQL you could do: ``` SELECT * FROM orders WHERE YEAR(DateOrdered) = YEAR(DATE_SUB(CURDATE(), INTERVAL 1 MONTH)) AND MONTH(DateOrdered) = MONTH(DATE_SUB(CURDATE(), INTERVAL 1 MONTH)) ``` Note that `DATE_SUB(..., INTERVAL 1 MONTH)` subtracts one month, matching `DATEADD(m,-1,...)` in T-SQL (using `INTERVAL -1 MONTH` with `DATE_SUB` would *add* a month). See [here](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html) for the date functions available in MySQL.
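For comparison, the same year-plus-month filter can be sketched in SQLite (which, like MySQL, has no `DATEPART`) via Python's `sqlite3` module. The reference date is fixed here instead of using `now` so the result is reproducible; the table and values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, DateOrdered TEXT)")
conn.executemany(
    "INSERT INTO orders (DateOrdered) VALUES (?)",
    [("2013-10-05",), ("2013-10-28",), ("2013-11-02",), ("2012-10-15",)],
)

# Compare each order's year-month against the first day of the month
# before the reference date ('start of month' avoids end-of-month drift).
rows = conn.execute(
    "SELECT DateOrdered FROM orders "
    "WHERE strftime('%Y-%m', DateOrdered) = "
    "      strftime('%Y-%m', ?, 'start of month', '-1 month')",
    ("2013-11-27",),
).fetchall()
print(rows)  # [('2013-10-05',), ('2013-10-28',)]
```

Only the two October 2013 orders match; the October 2012 one is excluded because the year is compared as well.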
Error: DATEPART does not exist, for SQL to select last month's records
[ "mysql", "sql", "function", "datepart" ]
I have a table test5 with 10,000 records. There are two columns in the table, es\_id and cac\_id. I want these columns to match exactly; where they don't match, cac\_id should be updated with es\_id. For example: table **`test5`** ``` es_id cac_id xxx xxx yyy yyy zzz abc ddd null bbb zzz ``` The output should be: ``` es_id cac_id xxx xxx yyy yyy zzz zzz ddd ddd bbb bbb ``` With that many records, updating the table manually is very difficult. Please suggest a better way to do this. Thanks!
You can simply do: ``` UPDATE test5 SET es_id = CASE WHEN es_id IS NULL THEN cac_id ELSE es_id END, cac_id = CASE WHEN cac_id <> es_id OR cac_id IS NULL THEN es_id ELSE cac_id END WHERE cac_id <> es_id OR cac_id IS NULL OR es_id IS NULL; ``` This assumes that when es\_id is null, cac\_id is not, and the other way around. `sqlfiddle demo`
How about ``` UPDATE test5 SET cac_id = es_id; ``` 100K records isn't all that much, and you want them all the same anyway, so why check for the condition? Assuming this isn't a recurring process, of course.
Compare two columns of a table and, if not matched, update in Oracle SQL
[ "sql", "oracle" ]
I have a difficulty: when comparing two fields in a subquery, although the fields are identical (they both hold NULL values), the comparison returns FALSE. In other words, NULL = NULL is returning FALSE. Now I know that NULLs are supposed to be compared with the IS operator, but when I compare two fields, how am I supposed to know they contain NULL? I need to compare two fields for identical data whether the values are NULL or not. Consider this SQL: ``` SELECT * FROM fts.fts_customers_data_50360001 WHERE fts.fts_customers_data_50360001.record_type = 15 AND fts.fts_customers_data_50360001.mid = 103650360001 AND NOT EXISTS ( SELECT fts.temp_fees_50360001.record_type FROM fts.temp_fees_50360001 WHERE fts.temp_fees_50360001.record_type = fts.fts_customers_data_50360001.record_type AND fts.temp_fees_50360001.merch_id = fts.fts_customers_data_50360001.mid AND fts.temp_fees_50360001.fee_curr = fts.fts_customers_data_50360001.currency AND fts.temp_fees_50360001.card_scheme = fts.fts_customers_data_50360001.card_scheme AND fts.temp_fees_50360001.tran_type = fts.fts_customers_data_50360001.fee_type AND fts.temp_fees_50360001.area = fts.fts_customers_data_50360001.region AND fts.temp_fees_50360001.srvc_type = fts.fts_customers_data_50360001.card_type ); ``` In the query above, fts.temp\_fees\_50360001.card\_scheme and fts.fts\_customers\_data\_50360001.card\_scheme both contain NULL values, but the comparison returns false. Any ideas would be much appreciated.
As the others have pointed out, `NULL` cannot be compared with `NULL`. In Postgres you can shorten your expressions by using the operator `IS DISTINCT FROM` which is a null-safe replacement for `<>`. In your case you'd need to use `IS NOT DISTINCT FROM` to compare for equality (looks a bit the wrong way round but unfortunately there is no corresponding `IS EQUAL TO` defined in the SQL standard). [From the manual](http://www.postgresql.org/docs/current/static/functions-comparison.html): > *Ordinary comparison operators yield null (signifying "unknown"), not true or false, when either input is null. For example, 7 = NULL yields null, as does 7 <> NULL. When this behavior is not suitable, use the IS [ NOT ] DISTINCT FROM constructs:* So, instead of ``` (fts.temp_fees_50360001.record_type = fts.fts_customers_data_50360001.record_type OR (fts.temp_fees_50360001.record_type IS NULL AND fts.fts_customers_data_50360001.record_type IS NULL) ) ``` you can use: ``` (fts.temp_fees_50360001.record_type IS NOT DISTINCT FROM fts.fts_customers_data_50360001.record_type) ``` to handle NULL values automatically. The condition looks a bit strange if you want to compare for equality but it still is quite short.
First of all, use aliases for your tables, your query will be MUCH more readable: ``` select * from fts.fts_customers_data_50360001 as d where d.record_type = 15 and d.mid = 103650360001 and not exists ( select * from fts.temp_fees_50360001 as f where f.record_type = d.record_type and f.merch_id = d.mid and f.fee_curr = d.currency and f.card_scheme = d.card_scheme and f.tran_type = d.fee_type and f.area = d.region and f.srvc_type = d.card_type ) ``` As for your question, there's several ways to do this, for example, you can use syntax like this: ``` ... ( f.card_scheme is null and d.card_scheme is null or f.card_scheme = d.card_scheme ) ... ``` Or use `coalesce` with some value that couldn't be stored in your column: ``` ... coalesce(f.card_scheme, -1) = coalesce(d.card_scheme, -1) ... ``` Recently I also like using `exists` with `intersect` for this type of comparisons: ``` ... exists (select f.card_scheme, f.tran_type intersect select d.card_scheme, d.tran_type) ... ``` Just a side note - you have to be careful when writing queries like this and check query plans to be sure your indexes are used.
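Other engines spell the null-safe comparison differently: MySQL has the `<=>` operator, and SQLite uses the bare `IS` operator between expressions. A sketch of the behaviour using SQLite via Python's `sqlite3` module, with made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Ordinary equality yields NULL (unknown), which filters the row out
print(conn.execute("SELECT NULL = NULL").fetchone()[0])   # None

# SQLite's IS is null-safe, like Postgres's IS NOT DISTINCT FROM
print(conn.execute("SELECT NULL IS NULL").fetchone()[0])  # 1

# Applied to a join condition: rows match even when both sides are NULL
conn.execute("CREATE TABLE f (card_scheme TEXT)")
conn.execute("CREATE TABLE d (card_scheme TEXT)")
conn.execute("INSERT INTO f VALUES (NULL), ('VISA')")
conn.execute("INSERT INTO d VALUES (NULL)")
matches = conn.execute(
    "SELECT COUNT(*) FROM f JOIN d ON f.card_scheme IS d.card_scheme"
).fetchone()[0]
print(matches)  # 1
```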
SQL: field = other_field returns false even if they are identical (NULL values)
[ "sql", "postgresql", "null", "comparison" ]
I have an email dispatch table that needs to hold in excess of 4 million rows. A particular report also needs to reference this table to produce some stats, and it's taking longer than I would like to run (Currently around 30 seconds). I have checked the estimated execution plan, which shows that 94% of the cost is incurred by a single predicate (If I am interpreting this correctly). Note the following example shows small snippets to keep it brief. Other indexes are in place targeted at other queries. ![enter image description here](https://i.stack.imgur.com/LY1ZV.png) Predicate: ``` [EmsDb].[dbo].[MailDispatchPending].[MailCampaignId]=[@MailCampaignId] OR [@MailCampaignId] IS NULL ``` This I believe is pointing to the following SQL: ``` WHERE @MailCampaignId IS NULL OR MailCampaignId = @MailCampaignId -- Restrict to mail campaign where parameter passed ``` I have attempted to improve performance by testing the following indexes. Neither influence the execution plan output, or improve query speed. ``` /****** Object: Index [IX_MailCampaignId] Script Date: 11/27/2013 11:21:00 ******/ CREATE NONCLUSTERED INDEX [IX_MailCampaignId] ON [dbo].[MailDispatchPending] ( [MailCampaignId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO /****** Object: Index [IX_MailCampaignId] Script Date: 11/27/2013 11:21:00 ******/ CREATE NONCLUSTERED INDEX [IX_MailCampaignId] ON [dbo].[MailDispatchPending] ( [Id] ASC, [MailCampaignId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO ``` Can anyone suggest a better type of index or different strategy to help me improve performance here?
Don't use this approach: ``` WHERE @MailCampaignId IS NULL OR MailCampaignId = @MailCampaignId ``` Instead use: ``` IF @MailCampaignId IS NULL BEGIN SELECT .. FROM ... END ELSE BEGIN SELECT .. FROM ... WHERE MailCampaignId = @MailCampaignId END ``` It might feel like more work, but SQL Server uses cached execution plans, and unless you force a recompile it will use the same execution plan whether the parameter is null or not. With the approach above you guarantee the correct plan is used depending on whether or not the parameter is null.
The `IS NULL OR` part is what's hurting your performance: if the input is NULL you're asking for every row, which forces the scan. I'd remove that part if possible and see if that helps; if that's not possible, I'd rethink when you actually need all the rows and consider an alternative approach, for example multiple queries instead of one, or even storage changes.
SQL Server Query Optimisation - What is the optimal index here?
[ "sql", "sql-server", "indexing" ]
I am using MySQL. In my database I have the following tables: 1. A `student` table. The primary key of this table is `sid`. 2. A `high_school` table. Primary key is `hid`. 3. A `university` table. Primary key is `uid`. Then: * There is a **joint table** for `student` & `high_school`, named `joint_table_A`; it has two columns, `sid` and `hid`, which link to the rows in `student` & `high_school` respectively. * There is another **joint table** for `student` & `university`, named `joint_table_B`; it also has two columns, `sid` and `uid`, which link to the rows in `student` & `university` respectively. My problem & question: I want the students linked by `sid` in `joint_table_A` where `hid = 3` to also be linked via `sid` in `joint_table_B`, with the value of `uid` being `1`. What is the SQL query for this operation? **===== UPDATE ======** Currently I have 3 answers below: one using `JOIN`, one using `LEFT JOIN`, and the last one using `INNER JOIN`. I get confused by those `JOINs` (I am still learning SQL). If you could add some explanations, that would help me understand your answers and the differences among them. Thanks!
``` SELECT s.* FROM students s INNER JOIN joint_table_A a ON s.sid = a.sid INNER JOIN joint_table_B b ON s.sid = b.sid WHERE a.hid = 3 AND b.uid = 1 ``` To try and explain the difference between `LEFT JOIN` and `INNER JOIN` for this query: Imagine you have 1000 records in `students` but there is only 50 records in `joint_table_A`, giving the hid's of 50 students highschools, and another 50 in `joint_table_B`. When you `LEFT JOIN`, all rows in the left table (`students`) are kept even if they can't be joined to a record in the right table. When you `INNER JOIN` only rows that can be joined are kept - where the `sid` exists in both left and right tables. **Using LEFT JOIN** 1. 1000 `student` join to 50 `joint_table_A` records - after the join 950 records have `hid = NULL`. 2. 1000 records from the previous join are joined to `joint_table_B` - 950 of these have `uid = NULL`. 3. Now all 1000 records are checked for `WHERE a.hid = 3 AND b.uid = 1` **Using INNER JOIN** 1. 1000 `student` join to 50 `joint_table_A` records - after the join only the 50 records that could join remain. 2. 50 records from the previous join are joined to `joint_table_B` - 20 of these can be joined (not all the same students who had their high school recorded also had university recorded). 3. Now 20 records are checked for `WHERE a.hid = 3 AND b.uid = 1` You can see why `INNER JOIN` is the one to use in this situation.
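To make the LEFT vs INNER difference concrete, here is a sketch using SQLite via Python's `sqlite3` module with a tiny made-up data set (only some students have a high school or university on record):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (sid INT, name TEXT);
CREATE TABLE joint_table_A (sid INT, hid INT);
CREATE TABLE joint_table_B (sid INT, uid INT);
INSERT INTO student VALUES (1,'Ann'),(2,'Bob'),(3,'Cid');
INSERT INTO joint_table_A VALUES (1,3),(2,5);
INSERT INTO joint_table_B VALUES (1,1);
""")

# LEFT JOIN keeps every student; unmatched rows get NULL hid/uid
left_rows = conn.execute("""
SELECT s.sid, a.hid, b.uid FROM student s
LEFT JOIN joint_table_A a ON s.sid = a.sid
LEFT JOIN joint_table_B b ON s.sid = b.sid
ORDER BY s.sid
""").fetchall()
print(left_rows)   # [(1, 3, 1), (2, 5, None), (3, None, None)]

# INNER JOIN keeps only students present in BOTH joint tables
inner_rows = conn.execute("""
SELECT s.sid FROM student s
INNER JOIN joint_table_A a ON s.sid = a.sid
INNER JOIN joint_table_B b ON s.sid = b.sid
WHERE a.hid = 3 AND b.uid = 1
ORDER BY s.sid
""").fetchall()
print(inner_rows)  # [(1,)]
```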
``` SELECT student.* FROM student JOIN joint_table_A USING (sid) JOIN joint_table_B USING (sid) WHERE joint_table_A.hid = 3 AND joint_table_B.uid = 1 ```
SQL Query for : rows linked by one joint table link to another joint table
[ "mysql", "sql" ]
I'm a SAS beginner and I'm curious whether the following task can be done more simply than the way I currently have in mind. I have the following (simplified) data in a table named user\_date\_money: User - Date - Money, with various users and dates for every calendar day (for the last 4 years). The data is ordered by User ASC and Date ASC; sample data looks like this: ``` User | Date | Money Anna 23.10.2013 5 Anna 24.10.2013 1 Anna 25.10.2013 12 .... Aron 23.10.2013 5 Aron 24.10.2013 12 Aron 25.10.2013 4 .... Zoe 23.10.2013 1 Zoe 24.10.2013 1 Zoe 25.10.2013 0 ``` I now want to calculate a five-day moving average of Money. I started with the popular approach using the lag() function, like this: ``` data cma; set user_date_money; if missing(money) then do; OBS = 0; money = 0.0; end; else OBS = 1; money5 = lag5(money); OBS5= lag5(obs); if missing(money5) then money5= 0.0; if missing(obs5) then obs5= 0; if _N_ = 1 then do; SUM = 0.0; N = 0; end; else; sum = sum + money-money5; n = n + obs-obs5; MEAN = sum / n ; retain sum n; run; ``` As you see, the problem with this method occurs when the data step runs into a new user: Aron would get some lagged values from Anna, which of course should not happen. Now my question: I am pretty sure the user switch can be handled by adding some extra fields (such as a lagged user) and resetting the N, Sum and Mean variables when such a switch is noticed, but can this be done in an easier way? Perhaps using the BY clause somehow? Thanks for your ideas and help! Best regards
I think the easiest way is to use PROC EXPAND: ``` PROC EXPAND data=user_date_money out=cma; ID date; BY user; CONVERT money=MEAN / transformin=(setmiss 0) transformout=(movave 5); RUN; ``` And as mentioned in John's comment, it's important to remember about missing values (and about beginning and ending observations as well). I've added SETMISS option to the code, as you made it clear that you want to 'zerofy' missing values, not ignore them (default MOVAVE behaviour). And if you want to exclude first 4 observations for each user (since they don't have enough pre-history to calculate moving average 5), you can use option 'TRIMLEFT 4' inside TRANSFORMOUT=().
If your particular need is simple enough, you can calculate it using PROC MEANS and a multilabel format. ``` data mydata; do id = 1 to 5; datevar = '01JAN2010'd-1; do month = 0 to 4; datevar=intnx('MONTH',datevar,1,'b'); sales = floor(500*rand('normal',7))+1500; output; end; end; run; proc format; value movingavg (multilabel notsorted) '01JAN2010'd-'31MAR2010'd = 'JAN-MAR 2010' '01FEB2010'd-'30APR2010'd = 'FEB-APR 2010' '01MAR2010'd-'31MAY2010'd = 'MAR-MAY 2010' /* ... more of these ... */ ; quit; proc means data=mydata; class id datevar/mlf order=data; types id*datevar; format datevar movingavg.; var sales; run; ``` The PROC FORMAT can be done programatically by use of the CNTLIN dataset, see SAS documentation for PROC FORMAT for more information.
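Outside SAS, the same per-group moving average (resetting at every BY-group boundary and zeroing missing values, like the SETMISS 0 option) can be sketched in plain Python. At the start of each group this averages over however many observations exist so far, which I believe matches PROC EXPAND's default behaviour at a series start, though that is an assumption worth verifying:

```python
from collections import deque

def grouped_moving_average(rows, window=5):
    """rows: (user, date, money) tuples sorted by user then date.
    Missing money (None) is treated as 0."""
    out, buf, current = [], deque(maxlen=window), object()
    for user, date, money in rows:
        if user != current:               # BY-group boundary: reset window
            buf = deque(maxlen=window)
            current = user
        buf.append(0.0 if money is None else money)
        out.append((user, date, sum(buf) / len(buf)))
    return out

data = [("Anna", "2013-10-23", 5), ("Anna", "2013-10-24", 1),
        ("Anna", "2013-10-25", 12), ("Aron", "2013-10-23", None)]
for row in grouped_moving_average(data):
    print(row)
# ('Anna', '2013-10-23', 5.0)
# ('Anna', '2013-10-24', 3.0)
# ('Anna', '2013-10-25', 6.0)
# ('Aron', '2013-10-23', 0.0)
```

Note how Aron's average restarts at 0.0 instead of inheriting Anna's window, which is exactly the behaviour the lag5() approach in the question fails to provide.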
sas - calculate moving average for grouped data with BY statement
[ "sql", "sas", "datastep" ]
I want to select the country with the highest population. This is my query: ``` SELECT continent, name, population FROM country HAVING population = MAX(population); ``` Somehow it returns 0 rows. If I use a nested query it works: ``` SELECT continent, name, population FROM country WHERE population = (SELECT MAX(population) FROM country); ``` So my question is: what is wrong with the first query? PS: You can download the database here: <http://dev.mysql.com/doc/index-other.html> --- Alright, I think I finally figured out the whole process. Here is how it works, and why the alternative suggestions/solutions didn't work. (First of all, as "Dan Bracuk" said in his answer, we have to combine GROUP BY with HAVING and put the aggregate function in the SELECT statement.) So let's go step by step and try this: ``` SELECT continent, name, population, MAX(population) FROM country; ``` This yields the first row with MAX(pop) appended at the end: **"North America", "Aruba", "103000", "1277558000"** So MAX(population) is just one value, which restricts the result to one row, and because I added the continent, name, population columns, MySQL just selects the first row of the table. So if I now write: ``` SELECT continent, name, population, MAX(population) FROM country HAVING population = MAX(population); ``` I get 0 rows, because 103000 is not equal to 1277558000. If I use: ``` SELECT continent, name, population, MAX(population) FROM country GROUP BY name; ``` for instance, I get a list of all countries where, on each row, MAX(population) = population. So additionally adding "HAVING population = MAX(population)" has no effect since it is already true. I hope I understood it correctly and could clarify things for others who wondered why the other solutions didn't work.
``` Select continent, name, sum(population) As sumPop From country Group By continent Order By sumPop Desc Limit 1; ``` Wow, after lots of trial and error, I see what the issue is now. First, your initial query seems wrong. With the database table you are using, you can have multiple regions within the continent. Therefore, your query will only return (if working correctly) the continent with the single largest region instead of a sum of all regions. Secondly, you absolutely can use the `max` function within the `having` clause, such as in this example: ``` Select continent, name, max(population) As sumPop From country Group By continent Having sumPop = max(population) ``` (If you do not group by Continent, you will only get one row back.) However, this is really of no use to you because at this point, the max population is whatever the population is for that row. Not for every row. This is because the `having` clause is only looking at the values for that one row (for however many rows you have). No matter what, you will need a sub query if you want to use the `having` clause in your case (or you can use my initial query), such as below. ``` Select continent, name, sum(population) As sumPop, maxPop From country Left Join (Select sum(population) As maxPop From country Group By continent Order By maxPop Desc Limit 1) as TmpTable On (maxPop > 0) Group By continent Having sumPop = maxPop ``` `http://sqlfiddle.com/#!2/3ab95/26` (This sqlfiddle does not contain all table rows, so the result may be different from your own.)
There are a couple of things wrong with it. In fact, in some db engines it would throw an error. First, the having keyword is used with aggregate results, not fields. Here is an example. ``` select field1, count(*) records from sometable group by field1 having records > 10 ``` Note that some databases will not allow you use alias names in the having clause. Next, you say this: ``` having population = max(population) ``` max(population) hasn't really been defined. You are not selecting it from anywhere.
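The working subquery form from the question is engine-independent; a sketch via Python's `sqlite3` module with a few made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country (continent TEXT, name TEXT, population INT)")
conn.executemany("INSERT INTO country VALUES (?,?,?)",
                 [("North America", "Aruba", 103000),
                  ("Asia", "China", 1277558000),
                  ("Europe", "Germany", 82164700)])

# The subquery is evaluated over the whole table first, producing a
# single scalar, and then each row is compared against that value.
rows = conn.execute(
    "SELECT continent, name, population FROM country "
    "WHERE population = (SELECT MAX(population) FROM country)"
).fetchall()
print(rows)  # [('Asia', 'China', 1277558000)]

# An equivalent without a subquery: sort and take the first row
# (note this returns exactly one row even if two countries tie).
top = conn.execute(
    "SELECT continent, name, population FROM country "
    "ORDER BY population DESC LIMIT 1"
).fetchone()
print(top)   # ('Asia', 'China', 1277558000)
```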
mysql world database: select country with max population
[ "", "mysql", "sql", "" ]
I have a table that looks like this in an Oracle DB: ``` TransactionID Customer_id Sequence Activity ---------- ------------- ---------- ----------- 1 85 1 Forms 2 51 2 Factory 3 51 1 Forms 4 51 3 Listing 5 321 1 Forms 6 321 2 Forms 7 28 1 Text 8 74 1 Escalate ``` And I want to be able to sort out all rows where `sequence` is the highest for each `customer_id`. Is there a `MAX()` function I could use on sequence, but based on `customer_id` somehow? I would like the result of the query to look like this: ``` TransactionID Customer_id Sequence Activity ---------- ------------- ---------- ----------- 1 85 1 Forms 4 51 3 Listing 6 321 2 Forms 7 28 1 Text 8 74 1 Escalate ```
``` select t1.* from your_table t1 inner join ( select customer_id, max(Sequence) mseq from your_table group by customer_id ) t2 on t1.customer_id = t2.customer_id and t1.sequence = t2.mseq ```
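The join-against-aggregate pattern in this answer is easy to sanity-check. Below is a minimal sketch in Python using an in-memory SQLite database as a stand-in for the Oracle DB in the question; the data comes from the question and `your_table` is the answer's placeholder name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE your_table
                (TransactionID INTEGER, Customer_id INTEGER,
                 Sequence INTEGER, Activity TEXT)""")
conn.executemany("INSERT INTO your_table VALUES (?, ?, ?, ?)",
                 [(1, 85, 1, 'Forms'), (2, 51, 2, 'Factory'),
                  (3, 51, 1, 'Forms'), (4, 51, 3, 'Listing'),
                  (5, 321, 1, 'Forms'), (6, 321, 2, 'Forms'),
                  (7, 28, 1, 'Text'), (8, 74, 1, 'Escalate')])

# Join each row to its customer's maximum Sequence; only the rows that
# carry that maximum survive the join.
rows = conn.execute("""
    SELECT t1.TransactionID, t1.Customer_id, t1.Sequence, t1.Activity
    FROM your_table t1
    INNER JOIN (SELECT Customer_id, MAX(Sequence) AS mseq
                FROM your_table
                GROUP BY Customer_id) t2
        ON t1.Customer_id = t2.Customer_id AND t1.Sequence = t2.mseq
    ORDER BY t1.TransactionID""").fetchall()
print(rows)
```

The inner query collapses the table to one (customer, max sequence) pair per customer, and the join keeps exactly the rows the question asked for.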
Please Try it ``` with cte as ( select Customer_id,MAX(Sequence) as p from Tablename group by Customer_id ) select b.* from cte a join Tablename b on a.p = b.Sequence where a.p = b.Sequence and a.Customer_id=b.Customer_id order by b.TransactionID ```
Sorting by max value
[ "", "sql", "oracle", "" ]
This does not compile in SQL server 2012: ``` with q as ( select row_number() over (order by ActionName) as rn, * from [xxx].[dbo].[Action] a ) ``` It says 'Incorrect syntax near the keyword 'as'. Should this compile and if not how to fix this?
Start your statement with `;` (in case a preceding statement is unterminated), and note that a CTE must be immediately followed by a SELECT (or INSERT/UPDATE/DELETE) statement that uses it; a CTE definition on its own doesn't compile. Try this ``` ;with q as ( select row_number() over (order by ActionName) as rn, * from [xxx].[dbo].[Action] a ) SELECT * FROM q ```
``` ;with q as ( select row_number() over (order by a.ActionName) as rn, a.* from [xxx].[dbo].[Action] a ) ```
invalid syntax in SQL server query
[ "", "sql", "sql-server", "" ]
Is there any existing (non-proprietary) solution for mirroring the current day's data onto a separate disk with minimal delay (under a second), so that I can use a smaller, non-redundant but more expensive disk for a high rate of reads? (Specifically for SQL Server 2012.) More background: * We have a transactional DB that needs to be redundant; a few tens of thousands of new rows arrive daily (handled without any problem), but the main performance impact on the DB comes from reads that happen a few times per second. Reads need fresh data (at most 1 second old), but they are only interested in the current day's data. All writes need to go to the disk array, also with very small delay. * I can use a small, fast but non-redundant SSD to optimize reads. There are a few in-house options - e.g. send each update to two separate unrelated DBs: one slower and redundant, the other faster, for reading. However, is there any existing solution for this - how to mirror all inserts to a secondary DB (and purge it each midnight)?
You can use an indexed view to store the current days data on a separate dataspace. You'll have to build a new view each day so you may want to use a synonym to do the actual switching so as to avoid locks on insert while creating the view. Make sure you end date the filter on the column. This is how i would do it using to agent jobs: ``` Job 1 (10 Minutes to midnight) create view my_view_yyyymmdd (yyyymmdd = today + 1) with schema_binding as select from mytable where datecol >=day+1 and datecol < today+2 --dateliterals; create clustered index on my_view_yyyymmdd (yyyymmdd = today + 1) Job2 (midnight + 1 second) drop synonym mysyn; create synonym mysyn for my_view_yyyymmdd (yyyymmdd = today ) drop view my_view_yyyymmdd my_view_yyyymmdd (yyyymmdd = today - 1) ``` You don't mention updates so a clustered index on date shouldn't fragment enough to worry about. You may want to set the fillfactor on the index to 100 which will reduce the size on disk by 20 % over the default. This will help further reduce the IO hit.
You say in the comments that the data to be queried fits in RAM on the SQL Server box. This means that SQL Server will never perform even a single physical IO to read data. It all comes from its buffer pool. Read latencies will be extremely low. Disk performance will not matter at all. If you properly structure and index the data read queries will execute blindingly fast. Usually, having enough RAM to fit all data into memory is the single most beneficial thing you can do to performance. You can use any high availability solution you like. It will only impact writes, not reads. We already concluded in the comments that writes do not matter much here. In other words, I don't think you have a problem.
SQL Server : mirror/cache current day data for performance reasons
[ "", "sql", "sql-server", "performance", "caching", "mirroring", "" ]
Hi, I want to update all records in my table, given the id of a specific record, with the following SQL: ``` Update table1 SET field2 = field2 + 1 Where id = 129; Update table1 SET field2 = field2 - 1 where field2 > (SELECT field2 from table1 WHERE id = 129); ``` But MySQL says "You can't specify target table 'table1' for update in FROM clause". Is there a way to update a column based on a value from the same table?
Try this, wrapping the subquery in a derived table with its own alias, so MySQL no longer sees the update's target table directly in the FROM clause ``` Update table1 SET field2 = field2 - 1 where field2 > (SELECT t.field2 FROM (SELECT field2 from table1 WHERE id = 129) t) ```
Try this: ``` SET @var = (SELECT field2 from table1 WHERE id = 129); Update table1 SET field2 = field2 - 1 where field2 > @var; ```
Update all records below the current record, given its id
[ "", "mysql", "sql", "sql-update", "" ]
I want to limit a column so that it can only have a value when another column has a value. Example: (this doesn't work) ``` create table testConstraint ( col1 int not null identity(1, 1) primary key, col2 int, col3 int check (col2 is not null), col4 int) ``` This is not possible because a column-level CHECK constraint cannot reference another column. Error: > Column CHECK constraint for column 'col3' references another column, > table 'testConstraint'. Another try was: (also doesn't work) ``` create table testConstraint ( col1 int not null identity(1, 1) primary key, col2 int, col3 int, col4 int) GO alter table testConstraint add constraint ck_columnNotNull check (case when col2 is null then col3 is null end) GO ``` Anyone have an idea how this would be possible with a constraint?
You can write a trigger. Also, you can try this (1) ``` ALTER TABLE TestConstraint ADD CONSTRAINT CK_TestConstraint CHECK (NOT ( (col3 is not null) and (col2 is null) )) GO ``` or this (2) ``` ALTER TABLE TestConstraint ADD CONSTRAINT CK_TestConstraint CHECK ( ((col3 is not null) and (col2 is not null)) or ((col3 is null) and (col2 is null)) ) GO ``` depending on what exactly you need. I just tested it and it works OK, I think. ``` insert into TestConstraint (col2, col3, col4) values (null, 1, 2) -- ERROR insert into TestConstraint (col2, col3, col4) values (1, 1, 2) -- OK ```
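Variant (1) is easy to verify. Here is a quick sketch using Python with SQLite (which also supports table-level CHECK constraints), with the question's `testConstraint` table minus the T-SQL-specific IDENTITY column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Variant (1) from the answer, ported to SQLite:
# col3 may only be non-NULL when col2 is also non-NULL.
conn.execute("""
    CREATE TABLE testConstraint (
        col1 INTEGER PRIMARY KEY,
        col2 INTEGER,
        col3 INTEGER,
        col4 INTEGER,
        CONSTRAINT CK_TestConstraint
            CHECK (NOT (col3 IS NOT NULL AND col2 IS NULL)))""")

conn.execute("INSERT INTO testConstraint (col2, col3, col4) VALUES (1, 1, 2)")
ok = True
try:
    conn.execute(
        "INSERT INTO testConstraint (col2, col3, col4) VALUES (NULL, 1, 2)")
    ok = False                 # should never get here
except sqlite3.IntegrityError:
    pass                       # the constraint rejected the row, as expected
print(ok)
```

The first insert succeeds and the second is rejected, matching the ERROR/OK pair at the end of the answer.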
Only simple logic is required, plus it needs (as per your second attempt) to be a table check constraint, so you can't declare it inline with the declaration of `col3`: ``` create table testConstraint ( col1 int not null identity(1, 1) primary key, col2 int, col3 int, col4 int) GO alter table testConstraint add constraint ck_columnNotNull check ( col3 is null or col2 is not null ) GO ``` If `col3` is `null`, then we don't *care* what the value of `col2` is. Conversely, if it's *not* `NULL`, then we do want to enforce the `col2` *isn't* null. That's what the two sides of the `or` effectively give us.
SQL Check constraint on column referencing other columns
[ "", "sql", "sql-server", "constraints", "" ]
I have a table and I want to group rows that have at most x difference in col2. For example, ``` col1 col2 abg 3 abw 4 abc 5 abd 6 abe 20 abf 21 ``` After the query I want to get groups such that ``` group 1: abg 3 abw 4 abc 5 abd 6 group 2: abe 20 abf 21 ``` In this example the difference is 1. How can I write such a query?
For Oracle (or anything that supports window functions) this will work: ``` select col1, col2, sum(group_gen) over (order by col2) as grp from ( select col1, col2, case when col2 - lag(col2) over (order by col2) > 1 then 1 else 0 end as group_gen from some_table ) ``` Check it on [SQLFiddle](http://sqlfiddle.com/#!4/3f541/1).
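The window-function version can be tried directly in SQLite (3.25 or later), which supports `LAG` and a running `SUM` just like Oracle. Data and the `some_table` placeholder name are taken from the question and the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # needs SQLite >= 3.25 for window functions
conn.execute("CREATE TABLE some_table (col1 TEXT, col2 INTEGER)")
conn.executemany("INSERT INTO some_table VALUES (?, ?)",
                 [('abg', 3), ('abw', 4), ('abc', 5), ('abd', 6),
                  ('abe', 20), ('abf', 21)])

# Flag rows whose gap to the previous value exceeds 1, then take a running
# sum of the flags: every island of close values gets one group number.
rows = conn.execute("""
    SELECT col1, col2, SUM(group_gen) OVER (ORDER BY col2) AS grp
    FROM (SELECT col1, col2,
                 CASE WHEN col2 - LAG(col2) OVER (ORDER BY col2) > 1
                      THEN 1 ELSE 0 END AS group_gen
          FROM some_table)""").fetchall()
print(rows)
```

Rows 3-6 land in group 0 and rows 20-21 in group 1, exactly the two groups the question asked for.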
This should get what you need, and changing the gap to that of 5, or any other number is a single change at the @lastVal +1 (vs whatever other difference). The prequery "PreSorted" is required to make sure the data is being processed sequentially so you don't get out-of-order entries. As each current row is processed, it's column 2 value is stored in the @lastVal for test comparison of the next row, but remains as a valid column "Col2". There is no "group by" as you are just wanting a column to identify where each group is associated vs any aggregation. ``` select @grp := if( PreSorted.col2 > @lastVal +1, @grp +1, @grp ) as GapGroup, PreSorted.col1, @lastVal := PreSorted.col2 as Col2 from ( select YT.col1, YT.col2 from YourTable YT order by YT.col2 ) PreSorted, ( select @grp := 1, @lastVal := -1 ) sqlvars ```
SQL - Group by numbers according to their difference
[ "", "sql", "database", "oracle", "group-by", "" ]
I have usernames which are manually entered by different end users. There is no validation when the username is input, so there can be spelling variations. For example, users can enter Tina or Teena for the same person. I need to return Teena when I input Tina, and vice versa. More concretely, ``` SELECT * FROM tbl where username='tina' ``` should return Tina, Teena and so on
You could investigate the `SOUNDEX` function in T-SQL. It would match Teena to Tina for instance (and visa-versa). See <http://technet.microsoft.com/en-us/library/ms187384.aspx> As the function name suggests, it matches words that sound the same. So, although it would match Tina to Teena, it wouldn't match Tina to Sina (as they don't 'sound' the same).
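Many engines outside SQL Server don't ship a SOUNDEX function, so for illustration here is a small pure-Python re-implementation of the classic 4-character Soundex code. It is a simplified sketch (it skips the full algorithm's special H/W rule, which doesn't matter for these names):

```python
def soundex(name: str) -> str:
    """Simplified 4-character Soundex, same idea as T-SQL's SOUNDEX()."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""                     # vowels and h, w, y carry no code

    name = name.lower()
    out, prev = name[0].upper(), code(name[0])
    for ch in name[1:]:
        digit = code(ch)
        if digit and digit != prev:   # collapse repeats of the same code
            out += digit
        prev = digit
    return (out + "000")[:4]          # pad / truncate to 4 characters

print(soundex("Tina"), soundex("Teena"), soundex("Sina"))
```

Tina and Teena both come out as T500 and therefore match, while Sina maps to S500 and does not, mirroring the "sounds the same" behaviour the answer describes.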
``` SELECT name FROM (SELECT 'tina' AS name UNION ALL SELECT 'teena' AS name UNION ALL SELECT 'tena' AS name)tmp WHERE Soundex(name) LIKE '%' + Soundex('tina') + '%' ```
matching similar text in sql server
[ "", "sql", "sql-server", "pattern-matching", "" ]
Hey guys, I'm using this query to get the maximum possible value of each table's auto-increment column ``` SELECT table_schema, table_name, data_type, ( CASE data_type WHEN 'tinyint' THEN 255 WHEN 'smallint' THEN 65535 WHEN 'mediumint' THEN 16777215 WHEN 'int' THEN 4294967295 WHEN 'bigint' THEN 18446744073709551615 end >> IF(Locate('unsigned', column_type) > 0, 0, 1) ) AS MAX_VALUE FROM information_schema.columns WHERE table_schema NOT IN ( 'MYSQL', 'INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA' ) AND extra = 'auto_increment' ``` I'm having a hard time understanding what the IF statement is doing. ``` end >> IF(Locate('unsigned', column_type) > 0, 0, 1) ) AS MAX_VALUE ``` Any ideas??
The `>>` operator is a bitwise right-shift. An `INT UNSIGNED` for example has a max value of 4294967295, which is 232-1, or 11111111111111111111111111111111 in base 2. But the signed `INT` uses one of those bits for a sign bit, so the max value of a signed integer is 2147483647, which is 231-1, or 011111111111111111111111111111111 in base 2. Converting 11111111111111111111111111111111 to 011111111111111111111111111111111 can be done by a right-shift of 1 bit. The leftmost bit of the 32-bit word is filled in with a zero. Now for the `IF`: MySQL names the data type either as "int" or "int unsigned". So if the word "unsigned" is found by the string function `LOCATE()`, then the max value is the full range of 4294967295, so bit-shift it by 0 bits. Else "unsigned" does not appear in the data type name, and the int is signed, so bit-shift it by 1 bit. Another way of stating this: * Max `INT UNSIGNED` = 4294967295 = 11111111111111111111111111111111 * Max `INT` = 2147483647 = 011111111111111111111111111111111 = **4294967295 >> 1** You may also be interested in a similar script I wrote: <https://github.com/billkarwin/bk-tools/blob/master/pk-full-ratio.sql>
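The shift trick is easy to reproduce outside MySQL. This little Python sketch mirrors the CASE/IF logic of the query (the type-to-max mapping copies the literal values from the SQL; the function name is just for illustration):

```python
# Max values per MySQL integer type when unsigned (all bits available).
UNSIGNED_MAX = {
    "tinyint": 255,
    "smallint": 65535,
    "mediumint": 16777215,
    "int": 4294967295,
    "bigint": 18446744073709551615,
}

def max_value(data_type: str, column_type: str) -> int:
    # Mirrors the SQL: shift right by one bit (giving up half the range
    # to the sign bit) unless the column type contains "unsigned".
    shift = 0 if "unsigned" in column_type else 1
    return UNSIGNED_MAX[data_type] >> shift

print(max_value("int", "int unsigned"))  # full 32-bit range
print(max_value("int", "int"))           # one bit given up to the sign
```

4294967295 >> 1 is 2147483647, which is exactly the signed INT maximum described above.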
The double angle brackets (greater than signs) are a binary shift operator. If the column type is an unsigned integer, for example, it will shift 1 position (which essentially squares the number), and if it is a regular signed integer, it will not shift at all. A signed integer uses one bit to indicate whether the number of positive or negative. An unsigned integer uses that bit as part of the value, so it can hold a much higher max value, but cannot hold negative values at all. Make sense?
Mysql Query to get max value of tables using auto increment
[ "", "mysql", "sql", "linux", "bash", "shell", "" ]
This is my table: ``` +-----------+----------------+-----------+ | id | date | meter | ------------+----------------+-----------+ | 1 | 2013-11-01 | 5 | | 2 | 2013-11-10 | 8 | | 4 | 2013-11-14 | 10 | | 6 | 2013-11-20 | 18 | | 7 | 2013-11-25 | 25 | | 10 | 2013-11-29 | 30 | +-----------+----------------+-----------+ ``` How do I get the meter usage between each pair of consecutive readings, like below? ``` +----------------+----------------+-------+-----+--------+ | date1 | date2 | start | end | amount | +----------------+----------------+-------+-----+--------+ | 2013-11-01 | 2013-11-10 | 5 | 8 | 3 | | 2013-11-10 | 2013-11-14 | 8 | 10 | 2 | | 2013-11-14 | 2013-11-20 | 10 | 18 | 8 | | 2013-11-20 | 2013-11-25 | 18 | 25 | 7 | | 2013-11-25 | 2013-11-29 | 25 | 30 | 5 | +----------------+----------------+-------+-----+--------+ ```
Edit: I got it: ``` select meters1.date as date1, min(meters2.date) as date2, meters1.meter as start, meters2.meter as end, (meters2.meter - meters1.meter) as amount from meters meters1, meters meters2 where meters1.date < meters2.date group by date1; ``` Outputs: ``` +------------+------------+-------+-----+--------+ | date1 | date2 | start | end | amount | +------------+------------+-------+-----+--------+ | 2013-11-01 | 2013-11-10 | 5 | 8 | 3 | | 2013-11-10 | 2013-11-14 | 8 | 10 | 2 | | 2013-11-14 | 2013-11-20 | 10 | 18 | 8 | | 2013-11-20 | 2013-11-25 | 18 | 25 | 7 | | 2013-11-25 | 2013-11-29 | 25 | 30 | 5 | +------------+------------+-------+-----+--------+ ``` Original Post: This is most of the way there: ``` select meters1.date as date1, meters2.date as date2, meters1.meter as start, meters2.meter as end, (meters2.meter - meters1.meter) as amount from meters meters1, meters meters2 having date1 < date2 order by date1; ``` It outputs: ``` +------------+------------+-------+-----+--------+ | date1 | date2 | start | end | amount | +------------+------------+-------+-----+--------+ | 2013-11-01 | 2013-11-10 | 5 | 8 | 3 | | 2013-11-01 | 2013-11-20 | 5 | 18 | 13 | | 2013-11-01 | 2013-11-29 | 5 | 30 | 25 | | 2013-11-01 | 2013-11-14 | 5 | 10 | 5 | | 2013-11-01 | 2013-11-25 | 5 | 25 | 20 | | 2013-11-10 | 2013-11-20 | 8 | 18 | 10 | | 2013-11-10 | 2013-11-29 | 8 | 30 | 22 | | 2013-11-10 | 2013-11-14 | 8 | 10 | 2 | | 2013-11-10 | 2013-11-25 | 8 | 25 | 17 | | 2013-11-14 | 2013-11-25 | 10 | 25 | 15 | | 2013-11-14 | 2013-11-20 | 10 | 18 | 8 | | 2013-11-14 | 2013-11-29 | 10 | 30 | 20 | | 2013-11-20 | 2013-11-25 | 18 | 25 | 7 | | 2013-11-20 | 2013-11-29 | 18 | 30 | 12 | | 2013-11-25 | 2013-11-29 | 25 | 30 | 5 | +------------+------------+-------+-----+--------+ ```
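The accepted min-of-self-join query happens to run as-is on SQLite too (which guarantees that bare columns come from the row holding a lone MIN()), so it can be checked from Python; `end` is quoted here because it is a keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meters (id INTEGER, date TEXT, meter INTEGER)")
conn.executemany("INSERT INTO meters VALUES (?, ?, ?)",
                 [(1, '2013-11-01', 5), (2, '2013-11-10', 8),
                  (4, '2013-11-14', 10), (6, '2013-11-20', 18),
                  (7, '2013-11-25', 25), (10, '2013-11-29', 30)])

# MIN(m2.date) picks the closest later reading for each row; SQLite's
# bare-column rule makes m2.meter come from that same row.
rows = conn.execute("""
    SELECT m1.date AS date1, MIN(m2.date) AS date2,
           m1.meter AS start, m2.meter AS "end",
           m2.meter - m1.meter AS amount
    FROM meters m1, meters m2
    WHERE m1.date < m2.date
    GROUP BY date1
    ORDER BY date1""").fetchall()
print(rows)
```

The five output rows match the expected table in the question, one row per pair of consecutive readings.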
If it's SQL server try it this way ``` WITH cte AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY date) rnum FROM table1 ) SELECT c.date date1, p.date date2, c.meter [start], p.meter [end], p.meter - c.meter amount FROM cte c JOIN cte p ON c.rnum = p.rnum - 1 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!3/ad552/6)** demo --- If it's MySQL then you can do ``` SELECT date1, date2, meter1, meter2, meter2 - meter1 amount FROM ( SELECT @d date2, date date1, @m meter2, meter meter1, @d := date, @m := meter FROM table1 CROSS JOIN (SELECT @d := NULL, @m := NULL) i ORDER BY date DESC ) q WHERE date2 IS NOT NULL ORDER BY date1 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/18dd8/6)** demo Output in both cases: ``` | DATE1 | DATE2 | START | END | AMOUNT | |------------|------------|-------|-----|--------| | 2103-11-01 | 2103-11-10 | 5 | 8 | 3 | | 2103-11-10 | 2103-11-14 | 8 | 10 | 2 | | 2103-11-14 | 2103-11-20 | 10 | 18 | 8 | | 2103-11-20 | 2103-11-25 | 18 | 25 | 7 | | 2103-11-25 | 2103-11-29 | 25 | 30 | 5 | ```
get amount between range
[ "", "mysql", "sql", "sql-server", "" ]
Here's the test table for my question : ``` CREATE TABLE document ( id integer NOT NULL, name character varying(120) NOT NULL, owner_id bigint DEFAULT 0 NOT NULL, doc_type_id smallint DEFAULT 1 NOT NULL, archived boolean DEFAULT false NOT NULL, insert_date timestamp without time zone DEFAULT now() NOT NULL, modify_date timestamp without time zone DEFAULT now() NOT NULL, last_writer_id bigint ); ``` Modify\_date determines the last time someone edited a document. In order to make some statistics, I need to get the time between the creation(insert\_date) and modify\_date. And then to display a bar chart, I need to get a count of document where this time interval is, for example, between 0 and 5 days, 6 and 10 days, etc. So ranges must be calculated in the query I guess. The result expected (or kind of...) is : ``` Age Count 0-5 2 6-10 5 11-15 9 ... ... ``` Of course the age could be on a scale where 0-5 == 0, 6-10 == 1. I'll prepare the data to display them. I found a post quite similar but I couldn't apply it to my case. ([Select data for 15 minute windows - PostgreSQL](https://stackoverflow.com/questions/17516000/select-data-for-15-minute-windows-postgresql?rq=1)) Thanks for any answer you could bring to me. EDIT 1: The ranges needs to be dynamically generated from the minimum and maximum age I can get from the table.
``` with cte_ages as ( select extract(day from (modify_date - insert_date))::int as age from document ), cte_groups as ( select case when g.age = 1 then 0 else g.age end as gr_start, g.age + 4 as gr_end from generate_series(1, (select max(age) from cte_ages), 5) as g(age) ) select g.gr_start::text || '-' || g.gr_end::text, count(a.age) from cte_groups as g left outer join cte_ages as a on a.age between g.gr_start and g.gr_end group by g.gr_start, g.gr_end order by g.gr_start ``` **`sql fiddle demo`**
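Postgres' generate_series() isn't available in stock SQLite, so in this sketch a recursive CTE stands in for it; otherwise the bucket-building and LEFT JOIN logic follows the answer (the ages are made-up sample values, since the question's table holds timestamps rather than precomputed ages):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ages (age INTEGER)")
conn.executemany("INSERT INTO ages VALUES (?)",
                 [(0,), (2,), (4,), (7,), (9,), (13,), (13,)])

# g emulates generate_series(1, max(age), 5); grp turns each series value
# into a [gr_start, gr_end] bucket, and the LEFT JOIN counts ages per bucket.
rows = conn.execute("""
    WITH RECURSIVE g(age) AS (
        SELECT 1
        UNION ALL
        SELECT age + 5 FROM g WHERE age + 5 <= (SELECT MAX(age) FROM ages)
    ),
    grp(gr_start, gr_end) AS (
        SELECT CASE WHEN age = 1 THEN 0 ELSE age END, age + 4 FROM g
    )
    SELECT gr_start || '-' || gr_end, COUNT(a.age)
    FROM grp LEFT JOIN ages a ON a.age BETWEEN gr_start AND gr_end
    GROUP BY gr_start, gr_end
    ORDER BY gr_start""").fetchall()
print(rows)
```

Because the buckets come from the series, not from the data, empty ranges would still show up with a count of 0, which is what you want for a bar chart.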
I'm sorry i do not know postgres but if you can convert this to postgres, it will work: ``` SELECT count(*) FROM document WHERE (TO_DAYS(modify_date) - TO_DAYS(insert_date)) < 11 AND (TO_DAYS(modify_date) - TO_DAYS(insert_date)) > 5 ``` This query counts the number of documents modified between 6 and 10 days. But you have to run and modify the query to modify the range of days.
Select a count of rows, order by dynamic ranges of time intervals in PostgreSQL
[ "", "sql", "postgresql", "count", "range", "intervals", "" ]
Say I have three tables: ``` TABLE A idA variable 1 Number of hats 2 Number of scarves 3 Number of mittens TABLE B idB name 1 Andy 2 Betty 3 Cedric 4 Daphne TABLE C idA idB value 1 1 15 1 2 2 1 3 89 2 1 10 2 3 3 2 4 1504 3 2 12 3 3 4 3 4 1 ``` Looking at the table, it's relatively simple to work out: for Betty we know how many hats (2) and mittens (12) she owns, but not how many scarves. Likewise, for Daphne we know how many scarves (1504) and mittens (1) she owns, but not the number of hats. However, I'd like a list of fields that there ISN'T information for - I would get a returned result looking something like this (if I asked for Andy): ``` idA variable 3 Number of mittens ``` Any idea how I do that? :)
The following query works: ``` SELECT B.name, A.variable FROM B CROSS JOIN A LEFT JOIN C ON C.idA = A.idA AND C.idB = B.idB WHERE C.value IS NULL ``` Its the `CROSS JOIN` that is key, it says `JOIN` every record in `B` to every record in `A`. Once you've done that you can easily check which combinations of `idA` and `idB` don't have a corresponding record in `C`. [Tested on SQLFiddle](http://sqlfiddle.com/#!2/7181f/9/0) **Result:** ``` NAME UNKNOWN VARIABLE ------------------------------- Andy Number of mittens Betty Number of scarves Daphne Number of hats ```
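A quick way to see the CROSS JOIN + LEFT JOIN trick in action, using the question's three tables in an in-memory SQLite database from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (idA INTEGER, variable TEXT);
    CREATE TABLE B (idB INTEGER, name TEXT);
    CREATE TABLE C (idA INTEGER, idB INTEGER, value INTEGER);
    INSERT INTO A VALUES (1, 'Number of hats'), (2, 'Number of scarves'),
                         (3, 'Number of mittens');
    INSERT INTO B VALUES (1, 'Andy'), (2, 'Betty'), (3, 'Cedric'), (4, 'Daphne');
    INSERT INTO C VALUES (1, 1, 15), (1, 2, 2), (1, 3, 89), (2, 1, 10),
                         (2, 3, 3), (2, 4, 1504), (3, 2, 12), (3, 3, 4),
                         (3, 4, 1);
""")

# CROSS JOIN builds every (person, variable) pair; the LEFT JOIN then
# exposes the pairs with no matching row in C.
rows = conn.execute("""
    SELECT B.name, A.variable
    FROM B CROSS JOIN A
    LEFT JOIN C ON C.idA = A.idA AND C.idB = B.idB
    WHERE C.value IS NULL
    ORDER BY B.name""").fetchall()
print(rows)
```

Cedric has a row in C for every variable, so only Andy, Betty and Daphne appear, matching the answer's result table.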
``` select idA, variable from a where idA not in (select idA from c where idB = 1) ```
SQL Query to match unlinked data
[ "", "mysql", "sql", "" ]
In my SQL Server 2008 I've got two tables. 1. Table: All kinds of Users with unique ID's 2. Table: Blacklisted Users with ID's Now I'd like to get all Users that are not on the blacklist. Just doesn't work like I want it to ``` SELECT A.ID, B.ID FROM Users AS A INNER JOIN Blacklist AS B ON A.ID != B.ID ``` Can someone help?
If you expect it to not to be in Blacklist, you won't have any data to select from blacklist in select statement ``` SELECT A.* FROM Users A Where A.ID NOT IN (Select Id From Blacklist ) ``` If you wish, read more about [Subqueries with NOT IN](http://technet.microsoft.com/en-us/library/ms189062%28v=sql.105%29.aspx)
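The NOT IN form can be sketched like this (SQLite from Python here, purely for convenience):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (ID INTEGER)")
conn.execute("CREATE TABLE Blacklist (ID INTEGER)")
conn.executemany("INSERT INTO Users VALUES (?)", [(1,), (2,), (3,), (4,)])
conn.executemany("INSERT INTO Blacklist VALUES (?)", [(2,), (4,)])

# Keep only the users whose ID is absent from the blacklist.
rows = conn.execute("""
    SELECT A.ID
    FROM Users A
    WHERE A.ID NOT IN (SELECT ID FROM Blacklist)
    ORDER BY A.ID""").fetchall()
print(rows)
```

One caveat worth knowing: if `Blacklist.ID` can contain NULL, `NOT IN` returns no rows at all; a `NOT EXISTS` subquery (or the `LEFT JOIN ... IS NULL` anti-join from the other answer) avoids that surprise.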
What you want is an anti-join, something like this: ``` SELECT A.ID, B.ID FROM Users AS A LEFT JOIN Blacklist AS B ON A.ID = B.ID WHERE B.ID IS NULL ``` That is, we perform the join, and then in the `WHERE` clause we apply a filter which eliminates rows where the join was successful. Your original query doesn't work (assuming that there is more than one row in `Blacklist` and that they have different `ID` values) because, for *any* `ID` value in `A`, we can find *a* row in `B` which doesn't match it - even if there's *also* a row which does match it.
Table without ID's in other Table
[ "", "sql", "sql-server", "select", "" ]
I'm currently using GUIDs as a `NONCLUSTERED PRIMARY KEY` alongside an `INT IDENTITY` column. The GUIDs are required to allow offline creation of data and synchronisation - which is how the entire database is populated. I'm aware of the implications of using a GUID as a clustered primary key, hence the integer clustered index but does using a GUID as a primary key and therefore foreign keys on other tables have significant performance implications? Would a better option to use an integer primary/foreign key, and use the GUID as a client ID which has a `UNIQUE INDEX` on each table? - My concern there is that entity framework would require loading the navigation properties in order to get the GUID of the related entity without significant alteration to the existing code. The database/hardware in question is SQL Azure.
Generally speaking, it is preferable to use INT for Primary Key / Foreign Key fields, whether or not these fields are the leading field in Clustered indexes. The issue has to do with JOIN performance: even if you make the UNIQUEIDENTIFIER index NonClustered, and even if you use NEWSEQUENTIALID() to reduce fragmentation, as the tables get larger it will be more scalable to JOIN between INT fields. (Please note that I am *not* saying that PK / FK fields should always be INT as sometimes there are perfectly valid natural keys to use). In your case, given the concern about Entity Framework and generating the GUIDs in the app and not in the DB, go with your alternate suggestion of using INT as the PK / FK fields, **but** rather than have the UNIQUEIDENTIFIER in all tables, only put it in the main user / customer info table. I would think that you should be able to do a one-time lookup of the customer INT identifier based on the GUID, cache that value, and then use the INT value for all remaining operations. And yes, be sure there is a UNIQUE, NONCLUSTERED index on the GUID field. That all being said, if your tables will never (and I mean NEVER as opposed to just not in the first 2 years) grow beyond maybe 100,000 rows each, then using UNIQUEIDENTIFIER is less of a concern as small volumes of rows generally perform ok (given moderately decent hardware that is not overburdened with other processes or low on memory). Obviously, the point at which JOIN performance degrades due to using UNIQUEIDENTIFIER will greatly depend on the specifics of the system: hardware as well as what types of queries, how the queries are written, and how much load on the system.
You can also create foreign keys against unique key constraints, which then gives you the option to foreign key to the `ID` identity as an alternative to the Guid. i.e. ``` Create Table SomeTable ( UUID UNIQUEIDENTIFIER NOT NULL, ID INT IDENTITY(1,1) NOT NULL, CONSTRAINT PK PRIMARY KEY NONCLUSTERED (UUID), CONSTRAINT UQ UNIQUE (ID) ) GO Create Table AnotherTable ( SomeTableID INT, FOREIGN KEY (SomeTableID) REFERENCES SomeTable(ID) ) GO ``` **Edit** Assuming that your centralized database is a Mart, and that only batch ETL is done from the source databases, if you do your ETL directly to the central database (i.e. not via `Entity Framework`), given that all your tables have UUID FK's after re-population from the distributed databases, you'll need to either map the INT UKCs during ETL or fix them up after the import (which would require a temporary NOCHECK constraint step on the INT FK's). Once ETL is loaded and INT keys are mapped, I would suggest you ignore / remove the UUID's from your ORM model - you would need to regenerate your EF navigation on the INT keys. A different solution would be required if you update the central database directly or do continual ETL and do use EF for the ETL itself. In this case, it might be less total I/O just to leave the PK GUID as FKs for RI, drop the INT FK's altogether, and choose other suitable columns for clustering (minimizing page reads).
SQL Guid Primary Key Join Performance
[ "", "sql", "sql-server", "database-design", "azure-sql-database", "" ]
Just trying to insert data from 5 cells via a VBA script, into a column on an SQL server 08 database. So basically I have 1 table with 4 columns, I want to insert multiple sets of data into the columns at once which would insert data into the DB with the below result.. ``` Server Name Middleware Version License TEST6 Testing 1 1 TEST6 Testing1 1 1 TEST6 Testing2 1 1 TEST6 Testing3 1 1 ``` I know the values are not correct on the below code, but I get the error message (below the vba code) when the VBA script is executed. ``` Dim val1 As String, val2 As String, val3 As String, val4 As String val1 = Range("B126").Value val2 = Range("C126").Value val3 = Range("C127").Value val4 = Range("D126").Value conn.Open sConnString Dim item As String item4 = "INSERT INTO [IndustrialComp].[dbo].[Middleware](" item4 = item4 & " [server_name],[middleware],[middlware],[version]" item4 = item4 & " )Values(" item4 = item4 & " '" & val1 & "', '" & val2 & "', '" & val3 & "','" & val4 & "')" conn.Execute item4 End Sub ``` Msg 264, Level 16, State 1, Line 1 The column name 'middleware' is specified more than once in the SET clause. A column cannot be assigned more than one value in the same SET clause. Modify the SET clause to make sure that a column is updated only once. If the SET clause updates columns of a view, then the column name 'middleware' may appear twice in the view definition.
I believe the columns you specify in your INSERT statement are duplicated and therefore not correct. Try: ``` item4 = item4 & " [server_name],[middleware],[version],[license]" ``` Update: Your SQL statement should look like this: ``` INSERT INTO [IndustrialComp].[dbo].[Middleware]([server_name],[middleware],[version],[license]) VALUES ('TEST6','Testing',1,1) ,('TEST6','Testing1',1,1) ,('TEST6','Testing2',1,1) ,('TEST6','Testing3',1,1) ``` So you have to repeat the block between parenthesis for every row you want to insert. **However**, you now only have 4 variables that hold 4 different values in your solution, so you will never be able to insert those 4 different rows because you only select values in cells B126, C126, C127 and D126. That will likely be the first row that you want to insert? Or do you want to add the 1,2,3 to `Testing` yourself and repeat the other values? Please explain and update your answer accordingly.
I am assuming the data is in Excel. If so, just loop through the rows. Also, it's seems that your val1, val2 etc. don't match the example. Maybe you meant val3 to be D126 and val4 to be E126. I will assume that. Here is the corrected code: ``` Dim sSQL as string Dim i as long i=0 while ActiveSheet.Range("B126").offset(i,0).value <> "" 'stop when there is a blank cell i=i+1 conn.Open sConnString sSQL = "INSERT INTO [IndustrialComp].[dbo].[Middleware](" sSQL = sSQL & " [server_name],[middleware],[version],[license]" sSQL = sSQL & " )Values (" sSQL = sSQL & " '" & ActiveSheet.Range("B126").offset(i,0).Value & "', " sSQL = sSQL & " '" & ActiveSheet.Range("C126").offset(i,0).Value & "', " sSQL = sSQL & " '" & ActiveSheet.Range("D126").offset(i,0).Value & "', " sSQL = sSQL & " '" & ActiveSheet.Range("E126").offset(i,0).Value & "' " sSQL = sSQL & ")" conn.Execute sSQL wend ``` Code not tested but it compiles.
Inserting multiple values into a SQL database from EXCEL through VBA script
[ "", "sql", "sql-server", "vba", "sql-server-2008", "excel", "" ]
I have a MySQL table like the following: ``` Events +----+------+--------------------------------+ | id | name | sites_id | created | +----+------+--------------------------------+ | 1 | test | 1 | 2013-11-01 00:00:00 | | 2 | test | 1 | 2013-11-02 00:00:00 | | 3 | test | 2 | 2013-11-13 00:00:00 | | 4 | test | 3 | 2013-11-14 00:00:00 | | 5 | test | 4 | 2013-11-25 00:00:00 | +----+------+----------+---------------------+ ``` What I want is to select events that were created within 48 hours of each other and have the same sites_id (in this example I would expect ids 1 and 2). Any help at all would be appreciated, as I have drawn a blank on how to do this solely in SQL. Thanks
Try something like this: ``` SELECT DISTINCT e1.* FROM events e1 INNER JOIN events e2 ON e1.sites_id = e2.sites_id AND e1.id <> e2.id WHERE ABS(datediff(e1.created, e2.created)) <= 2; ``` `sqlfiddle demo` This gives you the result: ``` ID NAME SITES_ID CREATED 2 test 1 November, 02 2013 00:00:00+0000 1 test 1 November, 01 2013 00:00:00+0000 ```
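MySQL's datediff() isn't available everywhere, so this quick check (SQLite via Python) replaces it with a julianday() difference; everything else, including the self-join on sites_id and the `id <> id` exclusion, follows the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events
                (id INTEGER, name TEXT, sites_id INTEGER, created TEXT)""")
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)",
                 [(1, 'test', 1, '2013-11-01 00:00:00'),
                  (2, 'test', 1, '2013-11-02 00:00:00'),
                  (3, 'test', 2, '2013-11-13 00:00:00'),
                  (4, 'test', 3, '2013-11-14 00:00:00'),
                  (5, 'test', 4, '2013-11-25 00:00:00')])

# julianday() returns fractional days, so <= 2 means "within 48 hours".
rows = conn.execute("""
    SELECT DISTINCT e1.id
    FROM events e1
    INNER JOIN events e2
        ON e1.sites_id = e2.sites_id AND e1.id <> e2.id
    WHERE ABS(julianday(e1.created) - julianday(e2.created)) <= 2
    ORDER BY e1.id""").fetchall()
print(rows)
```

Only events 1 and 2 share a site and fall within two days of each other, which is the pair the question expected.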
I've not syntax checked this (so see it as more of a guideline): ``` SELECT DISTINCT t1.id , t1.name , t1.sites_id , t1.created FROM Events t1 INNER JOIN Events t2 ON (t2.created BETWEEN DATE_ADD(t1.created, INTERVAL -2 DAY) AND DATEADD(DAY, INTERVAL 2 DAY)) AND t1.sites_id = t2.sites_id AND t1.id <> t2.id ```
MySQL: Select rows that are in a date range of each other
[ "", "mysql", "sql", "database", "" ]
I'm new to Vertica. I need to write a trigger that inserts into a history table whenever a table update occurs. This is straightforward in MSSQL, and I need the equivalent here. Can anybody suggest some direct links or detail the steps? I have little time to investigate this myself and my searches haven't been productive. Thanks
Vertica doesn't support triggers. This may seem strange if you're coming from another database platform, but the fact is Vertica was designed for analytical reporting applications, while triggers are more about transaction processing. It is my impression that most Vertica users (including my company) process transactional updates to data in a more traditional Relation Database or other system and periodically batch-load Vertica with any updates. Any business rules or data validation requiring a trigger or other procedural logic should occur on this other, primary database system.
I think you should look into external UDFs. The sky's the limit!
how to write vertica trigger
[ "", "asp.net", "sql", "triggers", "vertica", "" ]
What is the fastest way, performance-wise, to check whether an integer column contains a specific value? I have a table with 10 million rows in PostgreSQL 8.4. I need to do at least 10000 checks per second. Currently I am running the query `SELECT id FROM table WHERE id = my_value` and then checking whether the `DataReader` has rows. But it is quite slow. Is there any way to speed this up without loading the whole column into memory?
You can select `COUNT` instead: ``` SELECT COUNT(*) FROM table WHERE id = my_value ``` It will return just one integer value - number of rows matching your select condition.
You need two things, As Marcin pointed out, you want to use the `COUNT(*)` if all you need is to know how many. You also need an index on that column. The index will have the answer pretty much right at hand. Without the index, Postgresql would still have to go through the entire table to count that one number. ``` CREATE INDEX id_idx ON table (id) ASC NULLS LAST; ``` Something of the sort should get you there. Whether it is enough to run the query 10,000/sec. will depend on your hardware...
Does column of integers contain value
[ "", "sql", "performance", "postgresql", "" ]
I have a query: ``` select count(distinct RID) from MASTER_MOVIEVOD as M inner join RID_GENRE_MOVIEVOD as RG inner join GENRE_MOVIEVOD as G on M.RID=RG.RID and RG.GENRE_SR_NO=G.GENRE_SR_NO where M.UPDATE_PRESENT=1 and M.CLIP_TYPE=220 and M.PCAT=2 and G.GENRE_NAME!='Drama'; ``` It gives me error > Error: ambiguous column name: RID
This is because SQL doesn't know which `RID` to select, `M.RID` or `RG.RID`; that is where the ambiguity comes from. You have to qualify the column with a table alias instead of a bare `RID`: ``` select count(distinct RG.RID) from MASTER_MOVIEVOD as M .. ```
Add table identifier to `count`, because 2 of your tables has column `RID` you have to specify which one you want to use in your query ``` select count(distinct M.RID) from MASTER_MOVIEVOD as M inner join RID_GENRE_MOVIEVOD as RG inner join GENRE_MOVIEVOD as G on M.RID=RG.RID and RG.GENRE_SR_NO=G.GENRE_SR_NO where M.UPDATE_PRESENT=1 and M.CLIP_TYPE=220 and M.PCAT=2 and G.GENRE_NAME!='Drama'; ```
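A minimal reproduction of the error and the fix, sketched with SQLite via Python (the two-table schema here is a made-up stand-in for the movie tables above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (rid INTEGER)")
conn.execute("CREATE TABLE rg (rid INTEGER)")
conn.execute("INSERT INTO m VALUES (1)")
conn.execute("INSERT INTO rg VALUES (1)")

# An unqualified column that exists in both joined tables raises the error
err = None
try:
    conn.execute("SELECT COUNT(DISTINCT rid) FROM m JOIN rg ON m.rid = rg.rid")
except sqlite3.OperationalError as e:
    err = str(e)
print(err)  # e.g. "ambiguous column name: rid"

# Qualifying the column with a table alias resolves it
n, = conn.execute("SELECT COUNT(DISTINCT m.rid) FROM m JOIN rg ON m.rid = rg.rid").fetchone()
print(n)  # 1
```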
SQL Error: ambiguous column name
[ "", "sql", "c", "sqlite", "" ]
So I have 2 tables Table 1: Languages ``` | language_id | language | ------------------------- | 1 | java | | 2 | c | ``` Table 2: People ``` | person_id | person_name | expert_lang | years_experience | ------------------------------------------------------------- | 1 | Neil | 1 | 15 | | 2 | John | 1 | 10 | | 3 | Lucy | 2 | 12 | ``` Now what I'm trying to do is find the total years of experience for each language, so it would produce a table like the following: ``` | language | total_years_experience | ------------------------------------- | java | 25 | | c | 12 | ``` I can't seem to get anything to work; could anyone help? It would be much appreciated!
You might want to try joining the tables ``` SELECT language, SUM(years_experience) as total_years_experience FROM languages INNER JOIN people ON languages.language_id=people.expert_lang GROUP BY expert_lang ```
Try this... ``` SELECT l.language ,SUM(p.years_experience) FROM Languages l INNER JOIN People p ON l.language_id=p.expert_lang GROUP BY l.language ```
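Both answers reduce to the same join-and-aggregate pattern. Here is a self-contained check of the expected result, using SQLite via Python as a stand-in for whatever RDBMS the question targets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE languages (language_id INTEGER, language TEXT);
CREATE TABLE people (person_id INTEGER, person_name TEXT,
                     expert_lang INTEGER, years_experience INTEGER);
INSERT INTO languages VALUES (1, 'java'), (2, 'c');
INSERT INTO people VALUES (1, 'Neil', 1, 15), (2, 'John', 1, 10), (3, 'Lucy', 2, 12);
""")
rows = conn.execute("""
    SELECT language, SUM(years_experience) AS total_years_experience
    FROM languages
    INNER JOIN people ON languages.language_id = people.expert_lang
    GROUP BY languages.language_id
    ORDER BY languages.language_id
""").fetchall()
print(rows)  # [('java', 25), ('c', 12)]
```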
SQL writing a JOIN
[ "", "sql", "" ]
I was given an interview question and I cannot resolve it. I'm not in the SQL field, and it was more to show problem solving ability than SQL ability. I'd still like to figure it out though! It was in MS SQL server, but a generic SQL answer is acceptable. I have 3 tables: Sales, Customers, Products and NO NULL values in the tables. **Customers**: Customer\_ID, Customer\_Name **Products**: Product\_ID, Product\_Price **Sales**: Customer\_ID, Product\_ID, Number\_Purchased They want me to display the clients ordered from the one who has paid the most to the one who has paid the least. So I need to link the Customer ID from Customers to the Customer ID in Sales and then the sales to the product price and work out Price \* Number Purchased and assign it to the correct person. I tried something like this at the time: (obviously wrong) ``` SELECT Customers.Customer_ID, Customers.Customer_Name, SUM(Sales.Number_Purchased *Products.Product_Price) as Total FROM (Customers INNER JOIN Products ON Customers.Customer_ID = Products.Customer_ID) INNER JOIN Sales ON Products.Product_ID = Sales.Product_ID GROUP BY Customers.Customer_ID, Customers.Customer_Name ``` Obviously I'm not good with SQL, but if someone can give me a shove in the right direction to solving this (second interview is in a few hours!) I would really appreciate it! I've gotten myself tied in knots.
If you want the list sorted from paid the most to paid the least, add: ``` ORDER BY SUM(Sales.Number_Purchased * Products.Product_Price) DESC ``` Additionally, you have to change some of the JOIN conditions, since you are not establishing the correct connection between the tables. And since you are doing this in MS-ACCESS, you have to wrap the JOINS with parenthesis It should be: ``` SELECT Customers.Customer_ID, Customers.Customer_Name, SUM(Sales.Number_Purchased * Products.Product_Price) AS Total FROM ((Customers INNER JOIN Sales ON Sales.Customer_ID = Customers.Customer_ID) INNER JOIN Products ON Sales.Product_ID = Products.Product_ID) GROUP BY Customers.Customer_ID, Customers.Customer_Name ORDER BY SUM(Sales.Number_Purchased * Products.Product_Price) DESC ```
I'd prefer to approach this question in the following steps. First of all, find out the total of each purchase done by all the customers. ``` SELECT Customer_ID, SUM (Sales.Number_Purchased * Products.Product_Price) FROM Sales INNER JOIN Products ON Products.Product_ID = Sales.Product_ID GROUP BY Customer_ID ``` Then we would like to know the customer name. So we will inner join the Customes table. ``` SELECT Customer_ID, Customer_Name, SUM (Sales.Number_Purchased * Products.Product_Price) FROM Sales INNER JOIN Products ON Products.Product_ID = Sales.Product_ID INNER JOIN Customers ON Customers.Customer_ID = Sales.Customer_ID GROUP BY Customer_ID ``` Finally, you can do a ORDER BY statement to find out the ones who paid the most and the least. Take note that in your code, Products.Customer\_ID is *not* allowed because Customer\_ID is not one of the columns in the Products table. **EDIT** Oops, there is an ambiguous column in my second SQL. It should be "SELECT Customers.Customer\_ID" because the column name Customer\_ID is used in two different tables.
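A runnable sanity check of the join/aggregate/sort shape both answers describe, sketched in Python with SQLite and invented sample data (two customers, two products):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (Customer_ID INTEGER, Customer_Name TEXT);
CREATE TABLE Products (Product_ID INTEGER, Product_Price REAL);
CREATE TABLE Sales (Customer_ID INTEGER, Product_ID INTEGER, Number_Purchased INTEGER);
INSERT INTO Customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO Products VALUES (10, 2.0), (20, 5.0);
INSERT INTO Sales VALUES (1, 10, 3), (1, 20, 1), (2, 20, 4);
""")
# Ann: 3*2.0 + 1*5.0 = 11.0; Bob: 4*5.0 = 20.0
rows = conn.execute("""
    SELECT c.Customer_Name, SUM(s.Number_Purchased * p.Product_Price) AS Total
    FROM Customers c
    JOIN Sales s ON s.Customer_ID = c.Customer_ID
    JOIN Products p ON p.Product_ID = s.Product_ID
    GROUP BY c.Customer_ID, c.Customer_Name
    ORDER BY Total DESC
""").fetchall()
print(rows)  # [('Bob', 20.0), ('Ann', 11.0)]
```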
SQL Query for three tables with calculation
[ "", "sql", "sql-server", "" ]
Is there a good way to get the totals of the different products at once? For example: ``` /* the table like */ id count product 1 5 a 2 2 b 3 6 c 4 2 a ...... ``` I want to get the sum per product using one SQL command, because the number of products is very large. The values would be like: ``` a b c 7 2 6 ``` Thank you very much!
``` Select product, sum(count) from table_name group by product ``` You might want to go through tutorial of group\_by clause <http://www.techonthenet.com/sql/group_by.php>
I don't know if you are looking for a pivoted result, as shown in your question; if yes, use this: ``` SELECT Sum(CASE WHEN ( product = 'a' ) THEN ` count ` ELSE 0 END)AS a, Sum(CASE WHEN ( product = 'b' ) THEN ` count ` ELSE 0 END)AS b, Sum(CASE WHEN ( product = 'c' ) THEN ` count ` ELSE 0 END)AS c FROM table1 ``` [DEMO HERE](http://sqlfiddle.com/#!2/dd9b7/1) result: ``` A B C 7 2 6 ``` Note: this query only works if product is limited to these 3 values.
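Both shapes, the one-row-per-product GROUP BY and the pivoted CASE version, can be verified with a small SQLite sketch in Python. The `count` column is renamed to `cnt` here because COUNT is a reserved word:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, cnt INTEGER, product TEXT);
INSERT INTO t VALUES (1, 5, 'a'), (2, 2, 'b'), (3, 6, 'c'), (4, 2, 'a');
""")
# One row per product
per_product = conn.execute(
    "SELECT product, SUM(cnt) FROM t GROUP BY product ORDER BY product").fetchall()
print(per_product)  # [('a', 7), ('b', 2), ('c', 6)]

# Pivoted into a single row, as in the conditional-SUM answer
row = conn.execute("""
    SELECT SUM(CASE WHEN product = 'a' THEN cnt ELSE 0 END) AS a,
           SUM(CASE WHEN product = 'b' THEN cnt ELSE 0 END) AS b,
           SUM(CASE WHEN product = 'c' THEN cnt ELSE 0 END) AS c
    FROM t
""").fetchone()
print(row)  # (7, 2, 6)
```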
Sql sum of count field with different product name once
[ "", "mysql", "sql", "sql-server", "" ]
I have two tables, a Payment table and a Person table. A person can have more than one payment per month, so I want to sum all "Amount" fields for a person per month and per year. If there is no payment, the result should be 0 and the ID of the person should still appear for that month. I am almost there, but in my current query the data displayed is every payment per person and not the sum. How can I get this? Current results are like this (see October); I need to sum the 3 payments below and only show one line for October 2013: My Table MonthNr---MonthAbr---Amount---PersonID---YearAmount 1---JAN---0---2---2013 2---FEB---0---2---2013 3---MAR---0---2---2013 4---APR---0---2---2013 5---MAY---0---2---2013 6---JUN---0---2---2013 7---JUL---0---2---2013 8---AUG---0---2---2013 9---SEP---0---2---2013 10---OCT---64,74---2---2013 10---OCT---73,66---2---2013 10---OCT---24,3---2---2013 11---NOV---24,3---2---2013 12----DEC----0---2----2013 My query: ``` SELECT months.monthno as MonthNr, CAST(CASE WHEN CAST(months.monthno AS int) =1 THEN 'JAN' WHEN CAST(months.monthno AS int) =2 THEN 'FEB' WHEN CAST(months.monthno AS int) =3 THEN 'MAR' WHEN CAST(months.monthno AS int) =4 THEN 'APR' WHEN CAST(months.monthno AS int) =5 THEN 'MAY' WHEN CAST(months.monthno AS int) =6 THEN 'JUN' WHEN CAST(months.monthno AS int) =7 THEN 'JUL' WHEN CAST(months.monthno AS int) =8 THEN 'AUG' WHEN CAST(months.monthno AS int) =9 THEN 'SEP' WHEN CAST(months.monthno AS int) =10 THEN 'OCT' WHEN CAST(months.monthno AS int) =11 THEN 'NOV' WHEN CAST(months.monthno AS int) =12 THEN 'DEC' ELSE '' END AS nvarchar) as MonthAbr, Amount = isnull(sum(o.Amount),0), c.IDPerson as PersonID, isnull(year(o.Date ),2013) as YearAmount FROM Person c cross join (select number monthNo from master..spt_values where type='p' and number between 1 and 12) months full join Payments o ON o.IDPerson = c.IDPerson AND month(o.Date ) = months.monthNo where c.IDPerson = 2 GROUP BY months.monthno, c.IDPerson ,o.Date ORDER BY months.monthno, c.IDPerson ``` Can anyone help me? Thanks in advance.
Since you are using the isnull function on o.date I assume this means there are nulls in this column. If so, you need to account for this within your group by clause, e.g. "group by months.monthno, c.idperson, isnull(year(o.date),2013)".
You shouldn't group by `o.Date`, but only by the month of the date, which you already have included as `months.monthno`.
SQL Query to sum by month
[ "", "sql", "" ]
EDIT: I'm using the PROC SQL functionality in SAS. I'm trying to overwrite data in a primary table with data in a secondary table if two IDs match. Basically, there is a process modifying certain values associated with various IDs, and after that process is done I want to update the values associated with those IDs in the primary table. For a very simplified example: Primary table: ``` PROD_ID PRICE IN_STOCK 1 5.25 17 2 10.24 200 [...additional fields...] 3 6.42 140 ... ``` Secondary table: ``` PROD_ID PRICE IN_STOCK 2 11.50 175 3 6.42 130 ``` And I'm trying to get the new Primary table to look like this: ``` PROD_ID PRICE IN_STOCK 1 5.25 17 2 11.50 175 [...additional fields...] 3 6.42 130 ... ``` So it overwrites certain columns in the primary table if the keys match. In non-working SQL code, what I'm trying to do is something like this: ``` INSERT INTO PRIMARY_TABLE (PRICE, IN_STOCK) SELECT PRICE, IN_STOCK FROM SECONDARY_TABLE WHERE SECONDARY_TABLE.PROD_ID = PRIMARY_TABLE.PROD_ID ``` Is this possible to do in one statement like this, or will I have to figure out some workaround using temporary tables (which is something I'm trying to avoid)? EDIT: None of the current answers seem to be working, although it's probably my fault - I'm using PROC SQL in SAS and didn't specify, so is it possible some of the functionality is missing? For example, the "FROM" keyword doesn't turn blue when using UPDATE, and throws errors when trying to run it, but the UPDATE and SET seem fine...
One answer in SAS PROC SQL is simply to do it as a left join and use COALESCE, which picks the first nonmissing value: ``` data class; set sashelp.class; run; data class_updates; input name $ height weight; datalines; Alfred 70 150 Alice 59 92 Henry 65 115 Judy 66 95 ;;;; run; proc sql; create table class as select C.name, coalesce(U.height,C.height) as height, coalesce(U.weight,C.weight) as weight from class C left join class_updates U on C.name=U.name; quit; ``` In this case though the SAS solution outside of SQL is superior in terms of simplicity of coding. ``` data class; update class class_updates(in=u); by name; run; ``` This does require both tables to be sorted. There are a host of different ways of doing this (hash table, format lookup, etc.) if you have performance needs.
Do you really want to insert new data? Or update existing rows? If updating, join the tables: ``` UPDATE PT SET PT.PRICE = ST.PRICE, PT.IN_STOCK = ST.IN_STOCK FROM PRIMARY_TABLE PT JOIN SECONDARY_TABLE ST ON PT.PROD_ID = ST.PROD_ID ```
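The `UPDATE ... FROM ... JOIN` form above is T-SQL. As a portable alternative, the same overwrite can be written with correlated subqueries; here is a sketch in Python with SQLite, using lowercase stand-in table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE primary_t (prod_id INTEGER, price REAL, in_stock INTEGER);
CREATE TABLE secondary_t (prod_id INTEGER, price REAL, in_stock INTEGER);
INSERT INTO primary_t VALUES (1, 5.25, 17), (2, 10.24, 200), (3, 6.42, 140);
INSERT INTO secondary_t VALUES (2, 11.50, 175), (3, 6.42, 130);
""")
# Overwrite only the rows whose key also appears in the secondary table
conn.execute("""
    UPDATE primary_t
    SET price    = (SELECT s.price    FROM secondary_t s WHERE s.prod_id = primary_t.prod_id),
        in_stock = (SELECT s.in_stock FROM secondary_t s WHERE s.prod_id = primary_t.prod_id)
    WHERE prod_id IN (SELECT prod_id FROM secondary_t)
""")
result = conn.execute("SELECT * FROM primary_t ORDER BY prod_id").fetchall()
print(result)  # [(1, 5.25, 17), (2, 11.5, 175), (3, 6.42, 130)]
```

The trailing `WHERE prod_id IN (...)` matters: without it, rows with no match in the secondary table would be overwritten with NULL.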
Overwrite data in one table with data from another if two keys match
[ "", "sql", "sas", "" ]
I've got three tables: `department`, `employee` and `telephone` **`employee`** has columns `EmpNo`, `Surname`, `Firstname` and `DeptNo`. **`telephone`** has columns `EmpNo` and `Extension`. **`department`** has columns `DeptNo` and `DeptName`. What I am trying to get is the full name, department name and telephone extension. I think I am almost there but its not working yet. ***My query*** ``` SELECT e.Firstname, e.Surname, d.DeptName, t.Extension FROM employee AS e INNER JOIN department AS d INNER JOIN telephone AS t ON e.DeptNo = d.DeptNo ON t.EmpNo = e.EmpNo; ```
To fix your problem, move one of the `ON` clauses before the next `JOIN` ``` SELECT e.Firstname, e.Surname, d.DeptName, t.Extension FROM employee AS e INNER JOIN department AS d ON e.DeptNo = d.DeptNo INNER JOIN telephone AS t ON t.EmpNo = e.EmpNo; ```
The ON clause must follow the JOIN that it is modifying. So: `INNER JOIN department d ON e.DeptNo = d.DeptNo` and `INNER JOIN telephone t ON t.EmpNo = e.EmpNo;`
Using multiple tables in SQL
[ "", "sql", "" ]
I believe I have the right syntax for this SQL*Plus command; I have tried different ways to do it but I keep getting the same error message. I don't know why I am getting this "missing right parenthesis" error. Any help will be appreciated, thank you in advance. Here is my code: ``` create table PUBLISHERS ( NAME varchar2(50) primary key, address varchar2(50), phone integer(10) ); ```
The `integer` data type does not use a length qualifier. `integer` is equivalent to `number(38,0)`. ``` SQL> ed Wrote file afiedt.buf 1 create table PUBLISHERS ( 2 NAME varchar2(50) primary key, 3 address varchar2(50), 4 phone integer 5* ) SQL> / Table created. ``` If you want to limit the size, use a `number` ``` SQL> ed Wrote file afiedt.buf 1 create table PUBLISHERS ( 2 NAME varchar2(50) primary key, 3 address varchar2(50), 4 phone number(10) 5* ) SQL> / Table created. ``` Since you are never going to do numeric operations on a phone number, however, while it is generally likely that you will perform string manipulation on it to format phone numbers for display, it would generally make sense to store a phone number as a character string rather than as a number. You can add a `CHECK` constraint that ensures the format is correct. ``` SQL> ed Wrote file afiedt.buf 1 create table PUBLISHERS ( 2 NAME varchar2(50) primary key, 3 address varchar2(50), 4 phone varchar2(10) 5* ) SQL> / Table created. ```
`INTEGER` is not a Oracle Built-In data type. It is just a ANSI format that is supported in oracle. The oracle representation of INTEGER is NUMBER (38). Use `NUMBER` datatype instead. ``` CREATE TABLE publishers( name VARCHAR2(50) PRIMARY KEY, address VARCHAR2(50), phone NUMBER(10) ); ```
Error at line 1: ORA-00907: missing right parenthesis
[ "", "sql", "oracle", "" ]
I am new to SQL so I am fumbling here a bit. I have the following table: ``` Entered Generalist Item 12/31/2012 07:26:50 Tom Smith RTW/Updates 12/31/2012 07:30:10 Terrie Bradshaw Posters 12/31/2012 07:38:16 Jen Lopez Client Assistance/Request 12/31/2012 07:48:00 Tom Smith RTW/Updates 12/31/2012 07:50:29 Mike Smith RTW/Updates 12/31/2012 07:55:32 Tom Smith Client Assistance/Request ``` I am trying to find out when was the last time a rep was assigned an item. So I am looking for the Min value on a column. My query would look at Item "RTW/Updates" when was the earlier time entered between a date range and return Tom Smith. For example the user queries, RTW/Update between 12/31/2012 and 1/1/2013 and the answer would be Tom Smith. This is what I have so far, but have not been able to figure out the between the dates part: ``` SELECT MIN(entered), generalist, item FROM dataTable ``` That is pretty much it.
I believe this should work (where the @ variables are the parameters passed to your procedure) ``` SELECT MIN(entered), generalist, item FROM dataTable WHERE item = @itemParm AND entered BETWEEN @enteredStart AND @enteredEnd GROUP BY generalist, item ```
I May not understand what you want, but if you want to get one person back based on the minimum date, you need to work out the minimum date, and use that to find that person: ``` select * from datatable where entered = ( select min(entered) as MinDate from DataTable where Item = 'RTW/Updates' ) and item = 'RTW/Updates' ``` [SQL Fiddle](http://sqlfiddle.com/#!3/50a20/2) You could also use a CTE: ``` ; with LowDate as (select min(entered) as MinDate from DataTable where Item = 'RTW/Updates' ) select * from datatable inner join LowDate ON entered = LowDate.MinDate and item = 'RTW/Updates' ``` [More SQL Fiddle!](http://sqlfiddle.com/#!3/50a20/8)
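A quick check of the parameterised MIN query, using SQLite via Python with a trimmed-down version of the sample rows. Note that returning the matching `generalist` alongside a bare MIN like this relies on SQLite's documented special case for a single MIN/MAX aggregate; on other engines, use the subquery or CTE forms shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dataTable (entered TEXT, generalist TEXT, item TEXT);
INSERT INTO dataTable VALUES
  ('2012-12-31 07:26:50', 'Tom Smith', 'RTW/Updates'),
  ('2012-12-31 07:48:00', 'Tom Smith', 'RTW/Updates'),
  ('2012-12-31 07:50:29', 'Mike Smith', 'RTW/Updates');
""")
# ISO-8601 strings compare correctly, so BETWEEN works on the text dates
row = conn.execute("""
    SELECT generalist, MIN(entered)
    FROM dataTable
    WHERE item = ? AND entered BETWEEN ? AND ?
""", ('RTW/Updates', '2012-12-31', '2013-01-01')).fetchone()
print(row)  # ('Tom Smith', '2012-12-31 07:26:50')
```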
SQL statement to find Min Date
[ "", "sql", "sql-server", "" ]
I would like to delete all duplicates in the SQL table that have more than one ID for a single value in ColumnName (or several rows with the same ColumnName value). Here is the SQL to delete for one ColumnName value: ``` DELETE FROM MyDataTable WHERE ColumnName = 'UniqueComnName1' AND ID <> (SELECT TOP 1 ID FROM MyDataTable WHERE ColumnName = 'UniqueComnName1') ``` I would like to do that for all unique ColumnName values in the table (or simply all ColumnName values in the table). How can I do this?
You can do this with two little queries: ``` SELECT MIN(ID) AS ID INTO #tmpTable FROM MyDataTable GROUP BY ColumnName DELETE FROM MyDataTable WHERE ID NOT IN (SELECT ID FROM #tmpTable) ```
Run this and let me know your result: ``` DELETE FROM MyDataTable WHERE ID != (SELECT MAX(ID) FROM MyDataTable A WHERE A.ColumnName = MyDataTable.ColumnName) ``` This query will keep the record that having the ID equals the MAX (id) of records that having the same ColumnName.
Delete rows with identical ColumnName value in a SQL Table?
[ "", "sql", "sql-server", "duplicates", "" ]
Is it possible to express the statment below by ANSI SQL? Since the example below belongs to PL/SQL. Thanks in advance. ``` SELECT department_id, MIN(salary) KEEP (DENSE_RANK FIRST ORDER BY commission_pct) "Worst", MAX(salary) KEEP (DENSE_RANK LAST ORDER BY commission_pct) "Best" FROM employees GROUP BY department_id; DEPARTMENT_ID Worst Best ------------- ---------- ---------- 10 4400 4400 20 6000 13000 30 2500 11000 40 6500 6500 50 2100 8200 60 4200 9000 70 10000 10000 80 6100 14000 90 17000 24000 100 6900 12000 110 8300 12000 7000 7000 ```
This returns the same result (as far as I can tell) but does not need a join and is ANSI SQL: ``` select department_id, min(case when min_comm = 1 then salary end) as worst, max(case when max_comm = 1 then salary end) as best from ( select department_id, salary, dense_rank() over (partition by department_id order by commission_pct desc) as max_comm, dense_rank() over (partition by department_id order by commission_pct) as min_comm from employees ) t group by department_id order by 1; ```
You can use self-joins instead of analytics in most cases. Here's an equivalent of your `MAX(salary) KEEP (DENSE_RANK LAST ORDER BY commission_pct)`: ``` SELECT department_id, MAX(salary) FROM employees e WHERE (department_id, commission_pct) IN (SELECT department_id, MAX(commission_pct) FROM employees GROUP BY department_id) GROUP BY department_id ``` Obviously getting both the `MAX` and `MIN` would be a bit trickier (and uglier) but is doable.
how to express DENSE_RANK with ANSI SQL?
[ "", "sql", "oracle", "plsql", "ansi-sql", "" ]
I've inherited a SQL Server based application and it has a stored procedure that contains the following, but it hits timeout. I believe I've isolated the issue to the SELECT MAX() part, but I can't figure out how to use alternatives, such as ROW\_NUMBER() OVER( PARTITION BY... Anyone got any ideas? ## Here's the "offending" code: ``` SELECT BData.*, B.* FROM BData INNER JOIN ( SELECT MAX( BData.StatusTime ) AS MaxDate, BData.BID FROM BData GROUP BY BData.BID ) qryMaxDates ON ( BData.BID = qryMaxDates.BID ) AND ( BData.StatusTime = qryMaxDates.MaxDate ) INNER JOIN BItems B ON B.InternalID = qryMaxDates.BID WHERE B.ICID = 2 ORDER BY BData.StatusTime DESC; ``` Thanks in advance.
[UNSOLVED] But I've moved on! Thanks to everyone who provided answers / suggestions. Unfortunately I couldn't get any further with this, so have given up trying for now. It looks like the best solution is to re-write the application to UPDATE the latest data into a different table; that way it's a really quick and simple SELECT to get the latest readings. Thanks again for the suggestions.
SQL performance problems are seldom addressed by rewriting the query. The compiler already know how to rewrite it anyway. The problem is always indexing. To get `MAX(StatusTime ) ... GROUP BY BID` efficiently, you need an index on `BData(BID, StatusTime)`. For efficient seek of `WHERE B.ICID = 2` you need an index on `BItems.ICID`. The query could also be, probably, expressed as a correlated [APPLY](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx), because it seems that what is what's really desired: ``` SELECT D.*, B.* FROM BItems B CROSS APPLY ( SELECT TOP(1) * FROM BData WHERE B.InternalID = BData.BID ORDER BY StatusTime DESC ) AS D WHERE B.ICID = 2 ORDER BY D.StatusTime DESC; ``` [SQL Fiddle](http://sqlfiddle.com/#!6/26c9d/1). This is not semantically the same query as OP, the OP would return multiple rows on StatusTime collision, I just have a guess though that this is what is desired ('the most recent BData for this BItem').
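`CROSS APPLY` is SQL Server specific, but the underlying "latest row per group" idea can be checked anywhere with a correlated subquery. A sketch in Python with SQLite, using an invented `reading` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE BData (BID INTEGER, StatusTime TEXT, reading REAL);
INSERT INTO BData VALUES
  (1, '2014-01-01', 10.0), (1, '2014-01-03', 12.0),
  (2, '2014-01-02', 20.0);
""")
# Keep only the row holding each BID's latest StatusTime
rows = conn.execute("""
    SELECT b.BID, b.StatusTime, b.reading
    FROM BData b
    WHERE b.StatusTime = (SELECT MAX(StatusTime) FROM BData WHERE BID = b.BID)
    ORDER BY b.BID
""").fetchall()
print(rows)  # [(1, '2014-01-03', 12.0), (2, '2014-01-02', 20.0)]
```

As the answer notes, the real performance win comes from an index on `(BID, StatusTime)` so the inner MAX is a seek rather than a scan.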
SELECT MAX() too slow - any alternatives?
[ "", "sql", "sql-server", "timeout", "max", "" ]
I have a table and some duplicates have been entered into it. I don't want to remove the duplicates, but I want to set an indicator on the records which are not the latest entry (the highest ReviewId is the latest). My table is as follows: ``` ReviewId |ClientID | CommunicationSent 17023| 1950943 | 0 17202| 1950943 | 0 17734| 1950943 | 0 17731| 1948031 | 0 16822| 1948031 | 0 15300| 1948031 | 0 14722| 1945039 | 0 16125| 1945039 | 0 17729| 1945039 | 0 17727| 1943172 | 0 14552| 1943172 | 0 17179| 1943172 | 0 15175| 1943172 | 0 ``` So, for example, I want to set CommunicationSent to 1 where clientid = 1948031 and it's not the latest, i.e. set CommunicationSent to 1 where reviewid = 16822 and 15300. I'm guessing it's going to be something like: where reviewid is not max(reviewid). Anyone know how this could be done? Thanks,
I think this update will do it for you: ``` UPDATE T SET communicationSent = 1 FROM TABLE T INNER JOIN ( SELECT MAX(reviewId) AS 'reviewId', clientID FROM TABLE GROUP BY clientID ) T2 ON T.clientId = T2.clientId AND T.reviewId < T2.reviewId ```
I think this should do it: ``` update table1 set CommunicationSent = 1 where reviewid not in ( select max(reviewid) from table1 group by clientid ) ``` Sample [SQL Fiddle](http://www.sqlfiddle.com/#!6/48858/8)
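Both answers hinge on `reviewid NOT IN (SELECT MAX(reviewid) ... GROUP BY clientid)`. A runnable check with SQLite via Python, using just the three rows for client 1948031:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reviews (reviewid INTEGER, clientid INTEGER, communicationsent INTEGER);
INSERT INTO reviews VALUES
  (17731, 1948031, 0), (16822, 1948031, 0), (15300, 1948031, 0);
""")
# Flag every row that is not its client's latest (highest) reviewid
conn.execute("""
    UPDATE reviews SET communicationsent = 1
    WHERE reviewid NOT IN (SELECT MAX(reviewid) FROM reviews GROUP BY clientid)
""")
rows = conn.execute(
    "SELECT reviewid, communicationsent FROM reviews ORDER BY reviewid").fetchall()
print(rows)  # [(15300, 1), (16822, 1), (17731, 0)]
```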
Update table field in database depending on whether it's the latest entry
[ "", "sql", "t-sql", "" ]
I would like to delete all rows in a table that have a unique value in the `setid` column. I can select them like this: ``` select * from imagesets as a where (select count(*) from imagesets as b where a.setid = b.setid) = 1 ``` What's the best way to delete them? Is there a better way to select them?
I think the following will work in MySQL: ``` delete i from imagesets i join (select setid, count(*) as cnt from imagesets group by setid having count(*) = 1 ) set1 on i.setid = set1.setid; ```
``` CREATE TEMPORARY TABLE to_delete AS SELECT setid FROM imagesets GROUP BY setid HAVING COUNT(*) = 1; DELETE imagesets FROM imagesets NATURAL JOIN to_delete; DROP TABLE to_delete; ``` The last line is optional. Temporary tables are removed automatically by the end of the session. Aparently this works too: ``` DELETE imagesets FROM imagesets NATURAL JOIN (SELECT setid FROM imagesets GROUP BY setid HAVING COUNT(*) = 1) singles; ``` Despite it's using a `SELECT` inside the `DELETE` statement, the way MySQL handles this query don't seem to create conflicts with table locking. According to the Processes List, MySQL is automatically generating a temporary table when doing this. I cannot tell howover, that it will work for every version of MySQL.
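The join-based forms above exist because MySQL historically refuses a plain `DELETE ... WHERE setid IN (SELECT ... FROM same_table)` (the "can't specify target table" error). Engines without that restriction accept the direct subquery; here is a sketch in Python with SQLite and invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE imagesets (id INTEGER, setid INTEGER);
INSERT INTO imagesets VALUES (1, 10), (2, 10), (3, 20), (4, 30), (5, 30);
""")
# Delete the rows whose setid occurs exactly once (here: setid 20)
conn.execute("""
    DELETE FROM imagesets
    WHERE setid IN (SELECT setid FROM imagesets GROUP BY setid HAVING COUNT(*) = 1)
""")
remaining = conn.execute("SELECT setid FROM imagesets ORDER BY id").fetchall()
print(remaining)  # [(10,), (10,), (30,), (30,)]
```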
How should you delete rows with a unique value?
[ "", "mysql", "sql", "" ]
I would like to restrict the selection in the nested query only to what the main query selects, but when I try to do that, it returns an error. Please see the query below: ``` SELECT DISTINCT A.VBELN, C.MBLNR FROM LIKP AS A INNER JOIN VBUK AS B ON B.MANDT = A.MANDT AND B.VBELN = A.VBELN LEFT JOIN ( SELECT DISTINCT MANDT, XBLNR, MBLNR FROM MKPF WHERE MANDT='200' AND MBLNR NOT IN ( SELECT DISTINCT SMBLN FROM MSEG WHERE MANDT='200' ) --AND XBLNR=A.VBELN ---> I would like to add this line, but it's error ) AS C ON C.MANDT = A.MANDT AND C.XBLNR = A.VBELN WHERE A.MANDT='200' AND a.lfdat BETWEEN '20131203' AND '20131205' AND b.wbstk <> 'C' ORDER BY A.VBELN FETCH FIRST 200 ROWS ONLY OPTIMIZE FOR 200 ROWS ``` The error is: `"A.VBELN" is an undefined name.. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.66.46 SQL Code: -204, SQL State: 42704` If I am not using this line, `AND XBLNR=A.VBELN`, it takes much longer to get the result; although I have used the `fetch first` and `optimize` clauses, it still takes a long time. How can I do that? Thanks.
Try the [`LATERAL` keyword](http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0059206.html), like so: ``` ... INNER JOIN VBUK AS B ON B.MANDT = A.MANDT AND B.VBELN = A.VBELN LEFT JOIN LATERAL ( -- <<< here SELECT DISTINCT MANDT, XBLNR, MBLNR ... ```
I've never used db2, so there's probably a better way to optimize this, but you could always just pre-filter MKPF by the delivery numbers from LIKP in a derived table: ``` SELECT DISTINCT A.VBELN, C.MBLNR FROM LIKP AS A INNER JOIN VBUK AS B ON B.MANDT = A.MANDT AND B.VBELN = A.VBELN LEFT JOIN ( SELECT DISTINCT MANDT, XBLNR, MBLNR FROM MKPF AS M INNER JOIN ( SELECT DISTINCT VBELN FROM LIKP ) AS L ON M.XBLNR = L.VBELN WHERE MANDT='200' AND MBLNR NOT IN ( SELECT DISTINCT SMBLN FROM MSEG WHERE MANDT='200' ) ) AS C ON C.MANDT = A.MANDT AND C.XBLNR = A.VBELN WHERE A.MANDT='200' AND a.lfdat BETWEEN '20131203' AND '20131205' AND b.wbstk <> 'C' ORDER BY A.VBELN FETCH FIRST 200 ROWS ONLY OPTIMIZE FOR 200 ROWS ```
Nested query selection based on the main query
[ "", "sql", "performance", "db2", "" ]
I was just after some input on database design. I have two tables, Orders and Items. The Items table is going to be a list of items that can be used on multiple orders, and each item has an id. The way I thought to do it at the moment was to store, in the order, an array of comma-separated ids, one for each item in the order. Does that sound like the best way? Also, I'm using LINQ to Entity Framework and I don't think I'd be able to create a relationship between the tables, but I don't think one is needed anyway, is there, since the items are not unique to an order? Thanks for any advice
As far as I have understood from your question (it is not very clear), every Order can have multiple Items and every Item can be used in multiple orders. If this is what you want, you have a many to many relationship, that must be resolved using an intersection entity. This intersection entity has 2 foreign keys, one for item and one for order. Using it, you can identify what items are in a certain order and what orders need a certain item. As my explanation is very short and very sloppy, I will recommend you the following references: <http://sd271.k12.id.us/lchs/faculty/bkeylon/Oracle/database_design/section5/dd_s05_l03.pdf> [Resolve many to many relationship](https://stackoverflow.com/questions/497395/resolve-many-to-many-relationship) Also, you proposed design is very bad, as it breaks the first normal form: no attribute can have multiple values. You shoud try to build databases at least in third normal form.
> The way I thought to do it at the moment, was in the order to put an array of comma separated ids for each item in the order. Does that sound like the best way? Absolutely not - It will be MUCH more difficult in SQL to determine which orders contain a particular item, enumerate the items (to get a total, for example), and to add/remove items from an order. A much better way would be to create an `OrderItem` table, which has a foreign key back to `Order` and `Item` and any other attributes relating to the item *in that order* - quantity, discount, comments, etc. As far as EF goes, it will probably create a third entity (`OrderItem`) that will "link" the two tables. If you don't add any extra properties (which you probably should) then EF will probably create it as a many-to-many relationship between the `Order` and `Item` entities.
database design, items and orders tables
[ "", "sql", "entity-framework", "" ]
I need to delete approximately 50 million of records (and not whole table) I searched and found some ways to do it This query does what i want, i can optimize it a little bit by storing results of sub queries in a `#TempTable` What else can i do? The fastest query so far : ``` CREATE TABLE #UserIDs ( UserId UNIQUEIDENTIFIER NOT NULL ); CREATE CLUSTERED INDEX myIndex ON #UserIDs (UserId) INSERT INTO #UserIDs SELECT UserId FROM TableX WHERE UserID IS NOT NULL; INSERT INTO #UserIDs SELECT UserID FROM TableY WHERE CreatorID IS NOT NULL; DELETE TOP (10000) FROM Users WHERE Email IS NULL AND ( (NOT EXISTS ( SELECT 1 FROM #UserIDs WHERE #UserIDs.UserId = Users.UserId ) ) ) DROP TABLE #UserIDs ``` ![Execution Plan](https://i.stack.imgur.com/GjyWL.jpg)
If the number of records you want to preserve is small comparing to number of records you need to delete, I would copy them to temporary table, then TRUNCATE the original table and insert preserved records back from temporary table. Table truncation works very fast, so you may save much time. However TRUNCATE TABLE not always work, but it can be a solution for you.
Run part 1 first: ``` SELECT u.UserId INTO #DeleteThis FROM Users u LEFT JOIN tableX x ON u.UserId = x.UserId LEFT JOIN tableY y ON u.UserId = y.UserId WHERE SomeCondition IS NULL --SomeCondition is indexed AND x.UserId IS NULL AND y.UserId IS NULL ``` Then run part 2: ``` DELETE TOP (1000) u FROM Users u JOIN #DeleteThis d ON u.UserId = d.UserId GO 50000 ```
Optimizing delete operation of 50 millions of records from table
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
When I don't use the SET command, I get results like: ``` SQL> select * from <tablename>; NAME PLACE AGE GENDER DESTINATION -------------- -------------- --------- ------------ --------------- Joy Scotland 22 Male California ``` When I use the SET variable : ``` SET LIN 32000 pagesize 0 SQL> select * from <tablename>; ``` I get the results in better format but not too good: ``` Joy Scotland 22 Male California ``` 1. How can I display the output in better way (e.g. columns are separated with a delimiter (space)) 2. When we use SET, why did the column headers disappear? How can I improve it?
> [**SET PAGES[IZE] {14 | n}**](http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12040.htm#i2699247) > > Sets the number of rows on each page of output in iSQL\*Plus, and the > number of lines on each page of output in command-line and Windows > GUI. **You can set PAGESIZE to zero to suppress all headings**, page > breaks, titles, the initial blank line, and other formatting > information. You set the pagesize as 0, that's why you are not getting the heading. Try 1 or the default 14. ``` SET LIN 32000 pagesize 1 - OR - SET LIN 32000 PAGES 1 ``` **Btw, this is not SQL.** It's a bunch of Oracle commands used to control the database behaviour.
To export data to csv-file via sqlplus I use a script like ``` set colsep ";" -- column separator set pagesize 0 -- no column headings set feedback off -- no "xx rows selected" set trimspool on -- no line-padding set linesize 9999 -- length of output line set verify off -- no output of parameter replacements set termout off -- no console output spool export.csv -- redirect output to file select * from mytable; spool off -- stop output to file ``` If you omit the `set pagesize 0` you'll get column headers, but shortened to field length and repeated every `(pagesize-3)` lines.
SELECT doesn't display column names
[ "", "sql", "oracle", "sqlplus", "" ]
This is my code creating a table with different columns ``` CREATE TABLE IF NOT EXISTS `pcategories` ( `categoryid` int(11) NOT NULL AUTO_INCREMENT, `catimage` varbinary(max) not null, PRIMARY KEY (`categoryid`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=14 ; ``` I want to insert an image into the "catimage" column. I have tried these two approaches; kindly help me in fixing this ``` INSERT INTO `pcategories` (`categoryid`, `catimage`) VALUES (9, 'p2.gif'); INSERT INTO `pcategories` (`categoryid`, `catimage`) VALUES SELECT 9, 'Laptop Computers', BulkColumn FROM Openrowset( Bulk 'C:\wamp\www\OnlineBiddingSystem\images\p2.gif', Single_Blob) as EmployeePicture ``` KINDLY HELP ME
Your `INSERT` tells MySQL it will insert values into two columns (`categoryid` and `catimage`): ``` INSERT INTO `pcategories` (`categoryid`, `catimage`) VALUES ``` However, you then proceed to add three values (`9`, `Laptop Computers`, and an image): ``` SELECT 9, 'Laptop Computers', BulkColumn FROM Openrowset( Bulk 'C:\wamp\www\OnlineBiddingSystem\images\p2.gif', Single_Blob) as EmployeePicture ``` If you remove the extra value ('Laptop Computers') from the SELECT (and drop the stray `VALUES` keyword, since `INSERT ... SELECT` doesn't use it), that ought to do the trick: ``` INSERT INTO `pcategories` (`categoryid`, `catimage`) SELECT 9, BulkColumn FROM Openrowset( Bulk 'C:\wamp\www\OnlineBiddingSystem\images\p2.gif', Single_Blob) as EmployeePicture ```
Try this for the insert: ``` $sql = "INSERT INTO pcategories (categoryid, catimage) VALUES (9, 'p2.gif')"; ```
Image not showing up in php though inserted in sql database
[ "", "sql", "phpmyadmin", "" ]
I have an SQL table with one column (dateRec) containing dates, format: yyyy-mm-dd. Is there a way in SQL that I can define date ranges and then group all the items by these ranges ? I would need the following groups here: * group one = 0 - 7 days old * group two = 8 - 14 days old * group three = 15 - 30 days old * group four = 31 - 60 days old * group five = rest **My standard query to fetch all items from that table:** ``` CREATE PROCEDURE [dbo].[FetchRequests] AS BEGIN SET NOCOUNT ON; SELECT subject, dateRec, category FROM LogRequests WHERE logStatus = 'active' ORDER BY dateRec desc, subject FOR XML PATH('items'), ELEMENTS, TYPE, ROOT('ranks') END ``` Thanks for any help with this, Tim.
You need to do something like this ``` select t.range as [score range], count(*) as [number of occurences] from ( select case when score between 0 and 9 then ' 0-9 ' when score between 10 and 19 then '10-19' when score between 20 and 29 then '20-29' ... else '90-99' end as range from scores) t group by t.range ``` Check this link [In SQL, how can you "group by" in ranges?](https://stackoverflow.com/questions/232387/in-sql-how-can-you-group-by-in-ranges)
Yes, you can do that by adding a new column which contains all the bands you require and then grouping by that column: ``` SELECT subject, dateRec, category ,case when datediff(day,dateRec,Getdate()) <= 7 then '0 - 7 days old' when datediff(day,dateRec,Getdate()) between 8 and 14 then '8 - 14 days old' when datediff(day,dateRec,Getdate()) > 60 then 'rest' end Classes into #temp1 FROM LogRequests WHERE logStatus = 'active' ORDER BY dateRec desc, subject ``` I have missed a couple of your ranges, but hopefully you get the logic. Then group by this column: ``` select Classes, count(*) from #temp1 group by Classes drop table #temp1 ```
SQL Server: group dates by ranges
[ "", "sql", "sql-server", "stored-procedures", "group-by", "" ]
Say I have a table with 3 columns: Id, Category, Name. I would like to query the table this way: get me the rows for which `{ Category = "Cat1" AND Name = "ABC" }` OR `{ Category = "Cat2" AND Name = "ABC" }` OR `{ Category = "Cat2" AND Name = "DEF" }` How? Without having to resort to a huge list of `WHERE OR` I was thinking of using `IN`...but is it possible to use that in conjunction with 2 columns? Thanks!
You can create a temp table ``` create table #temp (Category varchar(50), Name varchar(50)) insert into #temp values ('Cat1', 'abc'), ('Cat2', 'cde'), ('Cat3', 'eee') ``` And then join your main table ``` select * from table1 inner join #temp on table1.Category = #temp.Category and table1.Name = #temp.Name ``` --- If you want to use that approach from the code, you can do that using table parameters. Define a table type: ``` CREATE TYPE dbo.ParamTable AS TABLE ( Category varchar(50), Name varchar(50) ) ``` and a stored proc that will read the data: ``` create procedure GetData(@param dbo.ParamTable READONLY) AS select * from table1 inner join @param p on table1.Category = p.Category and table1.Name = p.Name ``` Then you can use those from the C# code, for example: ``` using (var conn = new SqlConnection("Data Source=localhost;Initial Catalog=Test2;Integrated Security=True")) { conn.Open(); DataTable param = new DataTable(); param.Columns.Add(new DataColumn("Category", Type.GetType("System.String"))); param.Columns.Add(new DataColumn("Name", Type.GetType("System.String"))); param.Rows.Add(new object[] { "Cat1", "abc" }); using (var command = conn.CreateCommand()) { command.CommandText = "GetData"; command.CommandType = CommandType.StoredProcedure; command.Parameters.AddWithValue("@param", param); using (var reader = command.ExecuteReader()) { // reading here } } } ```
@Szymon's answer is the best, but if you absolutely need to do it in one query you can come up with a scheme to concatenate the two columns into one string and join candidate values using the same method. Then you can use `IN` instead of a bunch of `ANDs` and `ORs`. ``` SELECT * FROM table WHERE Category + ':' + Name IN ('Cat1:abc', 'Cat2:cde', 'Cat3:eee') ``` This has the distinct disadvantage of never being able to take advantage of indexes. But it's good for a quick and dirty solution.
How to query the database with a list of key-value pairs
[ "", "sql", "sql-server", "database", "" ]
Currently I'm working on a database redesign project. A large bulk of this project is pulling data from the old database and importing it into the new one. One of the columns in a table from the old database is called 'name'. It contains a forename and a surname all in one field (*ugh*). The new table has two columns; forenames and surname. I need to come up with a clean, efficient way to split this single column into two. For now I'd like to do everything in the same table and then I can easily transfer it across. 3 columns: * Name (the forename and surname) * Forename (currently empty, first half of name should go here) * Surname (currently empty, second half of name should go here) What I need to do: **Split name in half and place into forename and surname** If anyone could shed some light on how to do this kind of thing I would really appreciate it as I haven't done anything like this in SQL before. Database engine: MySQL Storage engine: InnoDB
A quick solution is to use [SUBSTRING\_INDEX](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index) to get everything at the left of the first space, and everything past the first space: ``` UPDATE tablename SET Forename = SUBSTRING_INDEX(Name, ' ', 1), Surname = SUBSTRING_INDEX(Name, ' ', -1) ``` Please see fiddle [here](http://sqlfiddle.com/#!2/63ed6/1). It is not perfect, as a name could have multiple spaces, but it can be a good query to start with and it works for most names.
Try this: ``` insert into new_table (forename, lastName, ...) select substring_index(name, ' ', 1), substring(name from instr(name, ' ') + 1), ... from old_table ``` This assumes the first word is the forename, and the rest the is lastname, which correctly handles multi-word last names like "John De Lacey"
Splitting a single column (name) into two (forename, surname) in SQL
[ "", "mysql", "sql", "" ]
Assuming the following table: ``` ID Name Revision --- ----- -------- 1 blah 0 2 yada 1 3 blah 1 4 yada 0 5 blah 2 6 blah 3 ``` How do I get the two records, one for "blah" and one for "yada" with highest revision number (3 for blah and 1 for yada)? Something like: ``` ID Name Revision --- ----- -------- 6 blah 3 2 yada 1 ``` Also, once these records are retrieved, how do I get the rest, ordered by name and revision? I am trying to create a master-detail view where master records are latest revisions and details include the previous revisions.
Basically, with the **aggregate function `MAX()`**: ``` SELECT "Name", MAX("Revision") AS max_revision FROM tbl WHERE "Name" IN ('blah', 'yada') GROUP BY "Name" ORDER BY "Name"; -- ordering by revision would be pointless ``` If you need *more columns* from the row, there are several ways. One would be to join the above subquery back to the base table: ``` SELECT t.* FROM ( SELECT "Name", max("Revision") AS max_revision FROM tbl WHERE "Name" IN ('blah', 'yada') GROUP BY "Name" ) AS sub JOIN tbl AS t ON t."Revision" = sub.max_revision AND t."Name" = sub."Name" ORDER BY "Name"; ``` Generally, this has the potential to yield *more than one row* per `"Name"` - if "Revision" is not unique (per "Name"). You would have to *define* how to pick `one` from a group of peers sharing the same maximum "Revision" - a tiebreaker. Another way would be with **`NOT EXISTS`**, excluding rows that have greater peers, possibly faster: ``` SELECT t.* FROM tbl AS t WHERE "Name" IN ('blah', 'yada') AND NOT EXISTS ( SELECT 1 FROM tbl AS t1 WHERE t1."Name" = t."Name" AND t1."Revision" > t."Revision" ) ORDER BY "Name"; ``` Or you could use a **CTE** with an analytic function (window function): ``` WITH cte AS ( SELECT *, ROW_NUMBER() OVER(PARTITION BY "Name" ORDER BY "Revision" DESC) AS rn FROM tbl WHERE "Name" IN ('blah', 'yada') ) SELECT * FROM cte WHERE rn = 1; ``` The last one is slightly different: one row per `"Name"` is guaranteed. If you don't use more `ORDER BY` items, an arbitrary row will be picked in case of a tie. If you want all peers use [`RANK()`](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions129.htm) instead.
This approach will select the rows for each Name with the maximum revision number for that Name. The result will be the exact output you were looking for in your post. ``` SELECT * FROM tbl a WHERE a.revision = (select max(revision) from tbl where name = a.name) ORDER BY a.name ```
Oracle query - select top records
[ "", "sql", "oracle11g", "greatest-n-per-group", "" ]
Suppose I have two different types of user, Parent and Child. They have identical fields; however, a Child has a one to many relationship with exams, a relationship that does not exist for Parents. Would Parent and Child best be modelled as separate tables, or combined into one? What if I have two different types of user, Parent and Child, and they are the same apart from a child belonging to a school (a school has many children)? Again, would Parent and Child best be modelled as separate tables, or combined?
> They have identical fields, however a Child has a one to many relationship with exams Even when fields are the same, different *constraints*¹ mean you are dealing with logically separate entities. Absent other factors, separate entities should be put into separate physical tables. There may, however, be reasons to the contrary. For example, if there is a key that needs to be unique across parents *and* children combined, or there is another table that needs to reference all of them etc... If that's the case, then logically both "parent" and "child" are inheriting from the "person", containing the common constraints (and fields). Such "inheritance" can be represented by either storing the whole hierarchy into a single table (and setting the unused "portion" to NULL), or by separating all three "classes" into their own tables, and referencing the "base class" from "inherited classes", for example²: ![enter image description here](https://i.stack.imgur.com/fyJqo.png) PERSON\_ID is unique across all parents and children. In addition to that, OTHER\_TABLE can reference it directly, instead of having to separately reference PARENT\_ID or CHILD\_ID. --- *¹ A foreign key in this case.* *² A very simplified model that just illustrates the point above and does not try to model everything you mentioned in your question*
Parent and Child both are Persons, without a doubt. You should never put them in separate tables. Only time separates them: what if a Child becomes a parent? A parent can easily have children; for that you need a relationship table. A relationship table is also the right way to model school membership. So the tables here are: ``` person is_child_of (many to many, join table) --> relations between persons can be is_parent_of ``` plain and simple. Remember: being a child is a *relation* from person to person. How would you model a grandchild if needed? Yet another table? And a great grandchild? And suppose you are fine with that and you make a lot of tables for a lot of "kind of" relationships, and all of a sudden you have to add a field (day of birth) or alter a field format: you have to do it in all your different tables.
Grounds for having a one to one relationship between tables
[ "", "mysql", "sql", "database", "database-design", "database-schema", "" ]
I have a table like this: ``` CREATE TABLE [dbo].[Question] ( [QuesID] INT IDENTITY (1, 1) NOT NULL, [Body] NVARCHAR (MAX) NULL, [O1] NVARCHAR (MAX) NULL, [O2] NVARCHAR (MAX) NULL, [O3] NVARCHAR (MAX) NULL, [O4] NVARCHAR (MAX) NULL, [UserID] NVARCHAR (50) NULL, [QuesDate] DATETIME NULL, PRIMARY KEY CLUSTERED ([QuesID] ASC) ); ``` And I want to ``` select * from Question where UserID=N'admin' ``` and then insert what I selected back into this table, but change the UserID from admin to another value. Notice that I don't need to select QuesID because it's the PRIMARY KEY (an identity column).
``` insert into Question ([Body], [O1], [O2], [O3], [O4], [UserID]) select [Body], [O1], [O2], [O3], [O4], 'newUserID' from Question where UserID = N'admin' ```
In an `insert ... select` statement, you can easily replace a column by a literal value: ``` insert YourTable (UserID, col1, col2, ...) select N'anothervalue' , col1 , col2 , ... from YourTable where UserID = N'admin' ```
inserting some rows from one table to another by changing value of one column
[ "", "sql", "sql-server", "import", "" ]
The following query creates a list of all index names in the database a long with each column that is part of that index. Can someone tell me how to determine if the column is sorted ASC or DESC? ``` SELECT ind.name as index_name , t.[name] as table_name , col.name as column_name , ic.index_column_id as index_column_id FROM [GDI-193-DEV].sys.indexes ind INNER JOIN [GDI-193-DEV].sys.index_columns ic ON ind.object_id = ic.object_id and ind.index_id = ic.index_id INNER JOIN [GDI-193-DEV].sys.columns col ON ic.object_id = col.object_id and ic.column_id = col.column_id INNER JOIN [GDI-193-DEV].sys.tables t ON ind.object_id = t.object_id WHERE ind.is_primary_key = 0 AND ind.is_disabled = 0 ORDER BY t.name, ind.name, ind.index_id, ic.index_column_id ``` Thanks! Matt
The catalog view [`sys.index_columns`](http://technet.microsoft.com/en-us/library/ms175105%28v=sql.105%29.aspx) has a column `is_descending_key` > 1 = Index key column has a descending sort direction. > 0 = Index key column has an ascending sort direction. > Does not apply to columnstore indexes, which return 0.
If you have not specified any sorting order then by default it takes it as **ASCENDING**. Also you can add the **ORDER BY** statement to check that From [Creating Ascending and Descending Indexes](http://technet.microsoft.com/en-us/library/aa933132%28v=sql.80%29.aspx): > When defining indexes, you can specify whether the data for each > column is stored in ascending or descending order. **If neither > direction is specified, ascending is the default**, which maintains > compatibility with earlier versions of Microsoft® SQL Server™.
How do I Programmatically Determine if an Index Column is ASC or DESC
[ "", "sql", "sql-server", "sql-server-2008", "indexing", "" ]
There is a column of birthdates. To find the current age for display, the calculation is made as the following: ``` SELECT age(birth_date) FROM people ``` This returns records in the format `1 year 10 mons 3 days`. I have modified this slightly based on this [SO post](https://stackoverflow.com/questions/16990161/postgresql-truncating-a-date-within-age-function) to be `date_trunc('month', age(birth_date))`. That returns `1 year 10 mons`. It's better, but still not meeting the user's requirement. Colloquially, when people speak about ages, in U.S. English at least, particularly for young children, people say "14 months" instead of "1 year 2 months". However, around age four, people switch to saying "4 years". Is there a way to write a fast query to accomplish this? My initial thought is write a case/when statement. But it grew complex and I cannot get the case to work for the intervals I describe below. Here are the rules I came up with: > ``` > | Age: m (months) | Display as | > +-----------------+----------------+ > | < 0 | exp. 2 mons | > | 0 < m < 24 | 14 mons | > | 24 <= m < 48 | 2 years 6 mons | > | 48 <= m | 4 years | > ```
### `date_trunc()` Although the function [**`date_trunc()`** is listed in the table of date/time functions](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TABLE) as taking a `timestamp` as argument, the manual clarifies further down: > `source` is a value expression of type `timestamp` or **`interval`**. Bold emphasis mine. So, for starters, you can use this form to "round" to months: ``` SELECT date_trunc('mon', age(birth_date))::text ``` ### Specific format For your specific needs, you need a `CASE` statement like: ``` SELECT CASE WHEN i < interval '1 mon' THEN 'newborn'::text WHEN i < interval '12 mon' THEN date_trunc('mon', i)::text WHEN i < interval '24 mon' THEN 12 + EXTRACT(month FROM i) || ' mons' WHEN i < interval '48 mon' THEN date_trunc('mon', i)::text ELSE date_trunc('year', i)::text -- or EXTRACT(year FROM i) || ' years' END AS display FROM (SELECT age(birth_date) AS i FROM people) sub ``` [**->SQLfiddle**](http://sqlfiddle.com/#!15/d41d8/522) with a complete test case. You can wrap that in a SQL or plpgsql function for convenience. You'll find many examples here on SO.
You won't be able to meet that requirement with the built-in functions. At the very best, they'll allow you to transform 24 months into 2 years; not the other way around. You want to create a pgsql function that generates the desired output instead (possibly as text), or (better) manage this at the application level. Doing this at the app level would allow you to localize it and the associated criteria as a bonus.
Colloquial age calculation for display
[ "", "sql", "postgresql", "intervals", "" ]
I have the following query: ``` select title_id, shared_task_id from tasks_task order by title_id ``` The results are like so: ``` title_id shared_task_id 1 99217 1 NULL 4 18873 4 18874 4 18875 4 NULL 4 NULL 4 NULL ``` I want to find all `shared_task_id`s that have more than one `title_id`. Here would be an example. Using the below data: ``` title_id shared_task_id 1 100 2 100 3 105 3 NULL 4 110 5 NULL 6 120 6 120 6 120 ``` The query would return: ``` title_id shared_task_id 1 100 2 100 ``` Because this is the only entry with the same shared\_task\_id with a different title. What would be the correct query here?
Sounds like a self join is the way to go. ``` select distinct t1.title_id, t1.shared_task_id from mytable t1 join mytable t2 on t1.shared_task_id = t2.shared_task_id and t1.title_id <> t2.title_id ```
This should do it: ``` SELECT t1.* FROM tasks_task t1 JOIN ( SELECT shared_task_id FROM tasks_task WHERE shared_task_id IS NOT NULL GROUP BY shared_task_id HAVING count(DISTINCT title_id) > 1 ) t2 ON t1.shared_task_id = t2.shared_task_id ``` Output: ``` | TITLE_ID | SHARED_TASK_ID | |----------|----------------| | 1 | 100 | | 2 | 100 | ``` Fiddle [here](http://sqlfiddle.com/#!2/0e86e/1).
SQL group by subselect
[ "", "mysql", "sql", "" ]
How can I retrieve data in a time range where tickets were issued between the times of 6am - 9:15am for the past month? I tried this, but it was wrong. It's Oracle syntax, and to\_char is not a function name in SQL Server. How can I do that in SQL Server? ``` select * from [ICPS].[dbo].[tickets] where t_date_time_issued > sysdate - 30 and to_char(t_date_time_issued, 'hh24:mi:ss') >= '06:00:00' and to_char(t_date_time_issued, 'hh24:mi:ss') <= '09:15:00' ```
Assuming that you want it in SQL Server (you said the to\_char version is Oracle syntax), you can cast the datetime to time. Note the date filter uses days, not months, to get the past 30 days: ``` SELECT * FROM [dbo].[tickets] WHERE t_date_time_issued > DATEADD(dd, -30, GetDate()) AND Cast(t_date_time_issued as TIME) between '06:00' and '09:15' ```
You can use `to_timestamp()` instead of `to_char`. Details about `to_timestamp()`: <http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_2118.htm#OLADM696> Hope this helps!
get data on particular timings of the day in a month in SQL
[ "", "sql", "" ]
I wanted to know: suppose I have a table in one database with say 1000 records, and I have a similar table in another database with say 500 records. Now my question is: if I try to import data from say DB1.Tbl1 to DB2.Tbl1, what will happen? Is there any possibility of duplication of the data? I want the records of DB1.Tbl1 to be copied into the DB2.Tbl1 table. Please clear my confusion.
If you have the same data in both tables, you can first truncate the 2nd table; after that, you can import data from the 1st table with an INSERT command or the "Import Data" task.
Try this ``` INSERT INTO DB2.dbo.Tbl1 SELECT * FROM DB1.dbo.Tbl1 ``` This just copies the data. If you want to move the table definition, you have to do something else. Note that SQL Server Management Studio's "Import Data" task (right-click on the DB name, then tasks) will do most of this for you. Run it from the database you want to copy the data into. If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
Importing Data From One database to another
[ "", "sql", "" ]
Since Oracle does not support null = null comparisons, I wrote this statement: ``` SELECT p.nazwa, p.nazwa_miedz, p.dawka_l_p, p.dawka_j_p, p.dawka_l_n, p.dawka_j_n FROM produkt p WHERE p.nazwa_miedz LIKE (SELECT pp.nazwa_miedz FROM produkt pp WHERE pp.bloz12 = 232440132296) AND p.bloz12 != (SELECT pp.bloz12 FROM produkt pp WHERE pp.bloz12 = 232440132296) AND ( p.dawka_l_p = (SELECT pp.dawka_l_p FROM produkt pp WHERE pp.bloz12 = 232440132296) OR ( p.dawka_l_p IS NULL AND (SELECT pp.dawka_l_p FROM produkt pp WHERE pp.bloz12 = 232440132296) IS NULL ) ) AND ( p.dawka_j_p = (SELECT pp.dawka_j_p FROM produkt pp WHERE pp.bloz12 = 232440132296) OR ( p.dawka_j_p IS NULL AND (SELECT pp.dawka_j_p FROM produkt pp WHERE pp.bloz12 = 232440132296) IS NULL ) ) AND ( p.dawka_l_n = (SELECT pp.dawka_l_n FROM produkt pp WHERE pp.bloz12 = 232440132296) OR ( p.dawka_l_n IS NULL AND (SELECT pp.dawka_l_n FROM produkt pp WHERE pp.bloz12 = 232440132296) IS NULL ) ) AND ( p.dawka_j_n = (SELECT pp.dawka_j_n FROM produkt pp WHERE pp.bloz12 = 232440132296 IS NULL) OR p.dawka_j_n IS NULL AND (SELECT pp.dawka_j_n FROM produkt pp WHERE pp.bloz12 = 232440132296) ) ``` What does this statement do? It shows us replacements for a medicine. bloz12 = 232440132296 is the id of the product. We are looking for products which have the same doses: dawka\_l\_p, dawka\_j\_p, dawka\_l\_n, dawka\_j\_n. There is also one AND so that we do not show the original product itself as its own replacement. And in the last AND, nazwa\_miedz is just the substance of the medicine. I'm just wondering if those subqueries can be optimized. As you can see, there are many of them. I hope someone can improve it! Thanks!
``` SELECT p.nazwa, p.nazwa_miedz, p.dawka_l_p, p.dawka_j_p, p.dawka_l_n, p.dawka_j_n FROM produkt p join product p2 on p.nazwa_miedz = p2.nazwa_miedz and (p.dawka_l_p = p2.dawka_l_p or p.dawka_l_p is NULL and p2.dawka_l_p is NULL) and (p.dawka_j_p = p2.dawka_j_p or p.dawka_j_p is NULL and p2.dawka_j_p is NULL) and (p.dawka_l_n = p2.dawka_l_n or p.dawka_l_n is NULL and p2.dawka_l_n is NULL) and (p.dawka_j_n = p2.dawka_j_n or p.dawka_j_n is NULL and p2.dawka_j_n is NULL) WHERE p.bloz12 != 232440132296 and p2.bloz12 = 232440132296 ``` I'm guessing you have forgotten to add an `is NULL` at the end of ``` ... OR p.dawka_j_n IS NULL AND (SELECT pp.dawka_j_n FROM produkt pp WHERE pp.bloz12 = 232440132296) ``` ? When [comparing NULL values](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements005.htm#SQLRF51097) remember that e.g `p.dawka_j_n = p2.dawka_j_n` will not evaluate to true when both are `NULL` so explicitly checking with `is NULL` is necessary if that's what you want.
A self join will probably work. This will show you the general idea. ``` SELECT p.nazwa, p.nazwa_miedz, p.dawka_l_p, p.dawka_j_p, p.dawka_l_n, p.dawka_j_n FROM produkt p join product p2 on p.dawka_l_p = p2.dawka_l_p and p.dawka_j_p = p2.dawka_j_p and p.dawka_l_n = p2.dawka_l_n WHERE p.bloz12 <> 232440132296 and p2.bloz12 = 232440132296 ``` There might be some details that I missed.
Optimizing this query in oracle
[ "", "sql", "oracle", "query-optimization", "" ]
I am using VIsual Studio 2010 with Microsoft SQL Server and I am trying to write a query that will retrun the (3) most recent records in the database by the date field. Here are the fields in the database with columns; id | first\_name | last\_name | url | date Here is the current query I am using but it only returns the single most recent entry; ``` SELECT id, first_name, last_name, url, MAX(DISTINCT date) AS Expr1 FROM tbl_paystubs GROUP BY first_name ORDER BY first_name ``` How do I return the (3) most recent instead of just (1)?
try something like this: ``` SELECT TOP 3 id, first_name, last_name, url, date FROM tbl_paystubs ORDER BY date desc ```
You could find the max, then find the next date that is less than the max, then find the next date that is less than that max. Using CTEs: ``` WITH firstDate(id, first_name, last_name, url, date) as ( SELECT id, first_name, last_name, url, MAX(DISTINCT date) AS Expr1 FROM tbl_paystubs GROUP BY first_name, id, last_name, url ), secondDate(id, first_name, last_name, url, date) as ( SELECT t.id, t.first_name, t.last_name, t.url, MAX(t.date) FROM tbl_paystubs t inner join firstDate f on f.id = t.id and f.first_name = t.first_name and f.last_name = t.last_name and f.url = t.url WHERE f.date > t.date GROUP BY t.first_name, t.id, t.last_name, t.url ), thirdDate(id, first_name, last_name, url, date) as ( SELECT t.id, t.first_name, t.last_name, t.url, MAX(t.date) FROM tbl_paystubs t inner join secondDate s on s.id = t.id and s.first_name = t.first_name and s.last_name = t.last_name and s.url = t.url WHERE s.date > t.date GROUP BY t.first_name, t.id, t.last_name, t.url ) select f.id, f.first_name, f.last_name, f.url, f.date as "FirstMax", s.date as "SecondMax", t.date as "ThirdMax" from firstDate f left outer join secondDate s on f.id = s.id and f.first_name = s.first_name and f.last_name = s.last_name and f.url = s.url left outer join thirdDate t on f.id = t.id and f.first_name = t.first_name and f.last_name = t.last_name and f.url = t.url ```
How to select the 3 most recent rows using the date field
[ "", "sql", "sql-server", "visual-studio-2010", "" ]
I have a University project and I have to create a DB for a plant shop. I have a problem with the order table. At the moment it only allows a customer to buy one product at a time but in real life a customer can buy many products at a time. For example, We have a customer John Doe, and he buys two products that are in the product table. How do I pull those two (or more) products and add them to one order table? Below is the SQL code I wrote: ``` CREATE TABLE customer( customer_id INT(3), customer_fname VARCHAR(20), customer_lname VARCHAR(20), customer_gender CHAR(1), customer_tel VARCHAR(20), customer_email VARCHAR(30), customer_dateJoined DATE, address_id INT(3), PRIMARY KEY(customer_id), INDEX(customer_id), FOREIGN KEY(customer_id) REFERENCES address); CREATE TABLE address( adress_id INT(3), customer_street VARCHAR(30), customer_town VARCHAR(30), customer_postcode CHAR(7), PRIMARY KEY(address_id), INDEX(address_id), FOREIGN KEY(address_id) REFERENCES customer(address_id), FOREIGN KEY(address_id) REFERENCES employee(address_id)); CREATE TABLE product( product_id INT(5), product_name VARCHAR(20), product_season VARCHAR(15), product_price NUMERIC(4,2), product_origin VARCHAR(15), product_type VARCHAR(15), product_inStock BOOLEAN, PRIMARY KEY(product_id), INDEX(product_id)); CREATE TABLE orders( order_id INT(3), customer_id INT(3), employee_id INT(3), product_name VARCHAR(20), quantity INT(4), order_date TIMESTAMP, PRIMARY KEY(order_id), INDEX(order_id)); CREATE TABLE employee( employee_id INT(3), employee_fname VARCHAR(20), employee_lname VARCHAR(20), address_id INT (3), employee_pay NUMERIC(2,2), employee_daysOff INT(2), employee_hoursWorked INT(3), PRIMARY KEY(staff_id), INDEX(staff_id)); ```
You have to create a junction (linking) table `customer_orders`; in this table you store `customer_id` and `order_id` and connect them with foreign keys to the customer and orders tables. Like in the following query: ``` CREATE TABLE customer_orders( customer_id INT(3), order_id INT(3), PRIMARY KEY(customer_id, order_id), FOREIGN KEY(customer_id) REFERENCES customer(customer_id), FOREIGN KEY(order_id) REFERENCES orders(order_id) ); ```
``` CREATE TABLE sales.stores ( store_id INT IDENTITY (1, 1) PRIMARY KEY, store_name VARCHAR (255) NOT NULL, phone VARCHAR (25), email VARCHAR (255), street VARCHAR (255), city VARCHAR (255), state VARCHAR (10), zip_code VARCHAR (5) ); ```
Shop database, SQL for order table
[ "", "sql", "" ]
I have a table that looks like this: ``` id value has_other color 1 banana 0 yellow 2 apple 1 red 3 apple 1 green 4 orange 0 orange 5 grape 0 red 6 grape 0 green ``` I want to make a query that selects all entries that have 'has\_other' = 0, but where there are other entries with the same 'value' and 'has\_other' value (essentially to find duplicates). **Edit**: added some entries. The query should return these for the above example: ``` 5, grape, 0, red 6, grape, 0, green ``` Any ideas? Cheers
This will return the results you are looking for: ``` SELECT t.* FROM myTable t INNER JOIN ( SELECT value FROM myTable WHERE has_other = 0 GROUP BY value HAVING count(*) > 1 ) a ON a.value = t.value WHERE t.has_other = 0; ``` `sqlfiddle demo`
``` select * from myTable where value in ( select value from myTable where has_other = 0 group by value having count(*) > 1 ) and has_other = 0 ```
SQL two or more with same properties
[ "", "mysql", "sql", "" ]
I have a SQL Server Scripts 2012 Project with multiple SQL queries and stored procedures. We use Team Foundation Server 2012 to manage our source code for our Visual Studio Solutions. How can I check in a SQL Server Scripts 2012 Project into TFS? If it is not possible how can I manage source control on this and allow multiple developers access to it?
You have a few options; here are two that I have used. **1: Download the TFS 2012 MSSCCI Provider:** This plugin allows you to access TFS from Microsoft SQL Server Management Studio, so you can easily add and check in/out those ssmssln and ssmsproj files from TFS. **[64bit Download](http://visualstudiogallery.msdn.microsoft.com/3c7b6813-2617-4d5f-9a1d-5ad980cab5d2) - [32bit Download](http://visualstudiogallery.msdn.microsoft.com/b5b5053e-af34-4fa3-9098-aaa3f3f007cd)** Once installed, in SSMS go to **Tools-> Options -> Source Control** to select the plugin. If you don't see it then you probably need to install the other bit version. After you have selected the plugin in the options window of SSMS, you will have a new menu option under "File" that will allow you to Add/Open/Change items in TFS from SQL Management Studio. To add your Scripts solution using the MSSCCI plugin: Open the project in SSMS, go to **File -> Source Control -> Add Solution to Source Control** **2. Add through VS using the "Add files to Source Control"** See here: [To add a file that is not in a solution to version control](http://msdn.microsoft.com/en-us/library/ms181374%28v=vs.100%29.aspx#CreateAndAddToVC)
I'm not quite sure why it would be a challenge to add the SQL Server scripts to TFS just like any other file in your Visual Studio solution. I've done this in a lot of projects with great success. What is a challenge with databases, though, is to find a good strategy to handle branches and database versioning. I recommend that you have a look at Entity Framework Code First Migrations, which handles this very nicely. Another approach is to use Chuck Norris RoundHouse, which is a more script-based solution: RoundHouse <https://code.google.com/p/roundhouse/> Code First Migrations: <http://msdn.microsoft.com/en-us/data/jj591621.aspx> If you start from scratch I would recommend the Code First Migrations approach, but if you already have a lot of .sql files the second can work very well.
SQL Server Scripts 2012 Project into Team Foundation Server 2012
[ "", "sql", "sql-server", "visual-studio-2012", "tfs", "ssms", "" ]
I'm using an Oracle database and I want to know how I can find rows in a varchar column where the value of that column contains a certain character. I'm trying something like this (a simple example of what I want), but it doesn't work: ``` select p.name from person p where p.name contains the character 'A'; ``` I also want to know if I can use a function like `chr(1234)`, where 1234 is a character code, instead of the `'A'` character in my example query, because in my case I want to search my database for values where the name of a person contains the character with code 8211. With the query `select CHR(8211) from dual;` I get the special character that I want. *Example:* ``` select p.name from person p where p.name contains the character chr(8211); ```
By *lines* I assume you mean rows in the table `person`. What you're looking for is: ``` select p.name from person p where p.name LIKE '%A%'; --contains the character 'A' ``` The above is case sensitive. For a case insensitive search, you can do: ``` select p.name from person p where UPPER(p.name) LIKE '%A%'; --contains the character 'A' or 'a' ``` For the special character, you can do: ``` select p.name from person p where p.name LIKE '%'||chr(8211)||'%'; --contains the character chr(8211) ``` The `LIKE` operator matches a pattern. The syntax of this command is described in detail in the [Oracle documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/conditions007.htm). You will mostly use the `%` sign as it means *match zero or more characters*.
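As a runnable illustration (not part of the original answer): a small sketch of the same `LIKE` patterns using Python's built-in sqlite3, where SQLite's `char()` and `||` stand in for Oracle's `chr()` and `||`. The table and names here are hypothetical, and note that SQLite's `LIKE` is case-insensitive for ASCII letters by default, unlike Oracle's.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")
# one hypothetical name contains char(8211), the en dash
conn.executemany("INSERT INTO person VALUES (?)",
                 [("Anna",), ("Bob",), ("Jean" + chr(8211) + "Luc",)])

# contains 'A' (SQLite's LIKE also matches 'a' for ASCII letters)
rows = conn.execute("SELECT name FROM person WHERE name LIKE '%A%'").fetchall()

# contains the en dash, built with char(8211) like Oracle's chr(8211)
dash = conn.execute(
    "SELECT name FROM person WHERE name LIKE '%' || char(8211) || '%'"
).fetchall()
print(rows, dash)
```

The `'%' || char(8211) || '%'` pattern is the direct analogue of the answer's `'%'||chr(8211)||'%'`.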
The answer from ADTC works fine, but I've found another solution, so I'm posting it here in case someone wants something different. I think ADTC's solution is better, but mine also works. Here is the other solution I found: ``` select p.name from person p where instr(p.name,chr(8211)) > 0; --contains the character chr(8211) --at least 1 time ```
Use string contains function in oracle SQL query
[ "", "sql", "oracle", "" ]
I want to copy records from one table to another. While doing this I want to set a flag of those records I copy. This is how I would do it (simplified): ``` BEGIN TRANSACTION copyTran insert into destination_table (name) select top 100 name from source_table WITH (TABLOCKX) order by id update source_table set copy_flag = 1 where id in (select top 100 id from source_table order by id) COMMIT TRANSACTION copyTran ``` Is there an easier way?
By leveraging [`OUTPUT`](http://technet.microsoft.com/en-us/library/ms177564%28v=sql.105%29.aspx) clause you can boil it down to a single `UPDATE` statement ``` UPDATE source_table SET copy_flag = 1 OUTPUT inserted.name INTO destination_table(name) WHERE id IN ( SELECT TOP 100 id FROM source_table ORDER BY id ) ``` **Note:** Now tested. Should work just fine.
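For comparison, here is a sketch of the original two-statement pattern using Python's built-in sqlite3 (SQLite has neither the `OUTPUT` clause nor lock hints, so a `copy_flag = 0` filter stands in for `TABLOCKX`; the table names mirror the question and the batch size is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_table (id INTEGER PRIMARY KEY, name TEXT, copy_flag INTEGER DEFAULT 0);
CREATE TABLE destination_table (name TEXT);
INSERT INTO source_table (name) VALUES ('a'), ('b'), ('c'), ('d');
""")

batch = 2  # stand-in for TOP 100
with conn:  # one transaction, like the original BEGIN/COMMIT pair
    conn.execute("""
        INSERT INTO destination_table (name)
        SELECT name FROM source_table WHERE copy_flag = 0
        ORDER BY id LIMIT ?""", (batch,))
    conn.execute("""
        UPDATE source_table SET copy_flag = 1
        WHERE id IN (SELECT id FROM source_table WHERE copy_flag = 0
                     ORDER BY id LIMIT ?)""", (batch,))

copied = [r[0] for r in conn.execute(
    "SELECT name FROM destination_table ORDER BY name")]
flagged = conn.execute(
    "SELECT COUNT(*) FROM source_table WHERE copy_flag = 1").fetchone()[0]
```

The single `UPDATE ... OUTPUT` above is still preferable on SQL Server, since it removes the risk of the two `TOP 100` selections seeing different rows.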
The problem with your query is that you may get different records in your `UPDATE` if someone inserts some data while you are running your query. It is safer to use the `INSERTED` keyword. ``` Declare @temp TABLE (Id integer); INSERT INTO destination_table (name) OUTPUT INSERTED.Id into @temp SELECT TOP 100 name FROM source_table ORDER BY id UPDATE source_table SET copy_flag = 1 WHERE Id IN (SELECT Id FROM @temp) ```
Update while copy records
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a Laboratory-Test table with 120 columns, all with datatype `varchar` (which is supposed to be `FLOAT`), but these columns also contain characters like `^,*,A-Z,a-z`, commas, and sentences with a full stop "." at the end. I am using the following function to keep all the numeric values, including ".". The issue is this `.` (dot): if I use `@KeepValues as varchar(50) = '%[^0-9]%'` then it will remove all the dots (e.g. `1.05*L` becomes `105`), which is not what I want. Could you please help me resolve this, or suggest an alternative solution? Any help would be great. ``` Create Function [dbo].[RAC] (@Temp VarChar(1000)) Returns VarChar(1000) AS Begin Declare @KeepValues as varchar(50) = '%[^0-9.]%' While PatIndex(@KeepValues, @Temp) > 0 Set @Temp = Stuff(@Temp, PatIndex(@KeepValues, @Temp), 1, '') Return @Temp End ``` My T-SQL `CASE` statement is: ``` ,CASE WHEN LTRIM(RTRIM(DBO.RAC([INR]))) NOT IN ('','.') THEN round(AVG(NULLIF(CAST(DBO.RAC([INR]) as FLOAT), 0)), 2) END AS [INR] ```
Since you have SQL2012, you can take advantage of the [TRY\_CONVERT()](http://technet.microsoft.com/en-us/library/hh230993.aspx) function ``` CREATE FUNCTION [dbo].[RAC] (@input varchar(max)) RETURNS TABLE AS RETURN ( WITH number_list AS (SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 1)) i FROM sys.objects a) SELECT TOP 1 TRY_CONVERT(float,LEFT(@input,i)) float_conversion FROM number_list WHERE i <= LEN(@input) AND TRY_CONVERT(float,LEFT(@input,i)) IS NOT NULL ORDER BY i DESC ) GO ``` If you have an actual number\_list, which is very useful, use that instead. ``` DECLARE @table TABLE (data varchar(max)) INSERT @table VALUES ('123.124'), ('123.567 blah.'), ('123.567E10 blah.'), ('blah 45.2') SELECT * FROM @table OUTER APPLY [dbo].[RAC](data) t ```
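The longest-valid-prefix idea behind that function can be sketched outside SQL as well. This hypothetical Python equivalent tries every prefix from longest to shortest, the way the `TRY_CONVERT` query does; note that Python's `float()` is only an approximation of `TRY_CONVERT(float, ...)`, since it also tolerates surrounding whitespace.

```python
def leading_float(s):
    """Longest prefix of s that parses as a float, or None.

    Mirrors the TRY_CONVERT loop: try every prefix starting from the
    full string and keep the longest one that converts cleanly.
    """
    for i in range(len(s), 0, -1):
        try:
            return float(s[:i])
        except ValueError:
            continue
    return None

samples = ["123.124", "123.567 blah.", "123.567E10 blah.", "blah 45.2"]
results = [leading_float(s) for s in samples]
```

As with the SQL version, a string that does not start with a number ("blah 45.2") yields no value at all.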
You need a somewhat basic Regular Expression that will allow you to get digits with a single decimal between two sets of digits (or perhaps digits with no decimal at all). This requires using SQLCLR for the RegEx function. You can find numerous examples of those, or you can use the freely available SQLCLR library [SQL# (SQLsharp)](https://SQLsharp.com/?ref=so_20407881) (which I am the author of, but the function needed to answer this question is in the Free version). ``` DECLARE @Expression NVARCHAR(100) = N'\d+(\.\d+)?(e[-+]?\d+)?'; SELECT SQL#.RegEx_MatchSimple(N'This is a test. Number here 1.05*L.', @Expression, 1, 'IgnoreCase') AS [TheNumber], CONVERT(FLOAT, SQL#.RegEx_MatchSimple(N'This is a test. Number here 1.05*L.', @Expression, 1, 'IgnoreCase')) AS [Float], CONVERT(FLOAT, SQL#.RegEx_MatchSimple(N'Another test. New number 1.05e4*L.', @Expression, 1, 'IgnoreCase')) AS [Float2], CONVERT(FLOAT, SQL#.RegEx_MatchSimple(N'One more test. Yup 1.05e-4*L.', @Expression, 1, 'IgnoreCase')) AS [Float3] /* Returns: TheNumber Float Float2 Float3 1.05 1.05 10500 0.000105 */ ``` The only issue with the pattern would be if there is another number in the text (you did say there are full sentences) prior to the one that you want. If you are 100% certain that the value you want will always have a decimal, you could use a simpler expression as follows: ``` \d+\.\d+(e[-+]?\d+)? ``` The regular expression allows for optional ( e / e+ / e- ) notation.
Remove alphanumeric characters from the Varchar columns and then convert to Float
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I am hoping to get an answer to this problem. I am using SQL Developer to write queries, connected to an Oracle database. What I need is: if the query result is nothing (null or 0, I guess?), I still need something to show up. As of now, when the query result is nothing, nothing but column headers comes up. The code below is what I have/tried so far with no success. ``` SELECT to_char(rs.cr_date, 'MM/DD/YYYY') "Date", COUNT(os.ord_id) "RTS Returned Orders" FROM return_sku rs, order_sku os WHERE rs.s_method Like '%RTS%' AND trunc(created_date) = trunc(SYSDATE) AND os.ord_sku_id = rs.ord_sku_id GROUP BY to_char(rs.cr_date, 'MM/DD/YYYY') ``` This works fine when there is an "RTS" in the s_method column; as in, a number will appear in my query result. The problem is that when there are no query results where rs.s_method has "RTS" in it, my query just returns column headers and nothing else (see below). ``` Date | RTS Returned Orders ------------------------------ ``` I need it so that when there are no results with "RTS" in s_method, it will return a row with the date and the number 0 in the "RTS Returned Orders" column. Something like below: ``` Date | RTS Returned Orders ------------------------------ 12/4/2013 | 0 ``` I have tried using decode and NVL to no avail. Either I am not using them correctly, or there is something else that I can use that I am unaware of. Please help! Thanks in advance. Any help is greatly appreciated. Best Regards, -Anthony C.
I think the query that you want uses conditional aggregation, instead of filtering in the `where` clause: ``` SELECT to_char(rs.cr_date, 'MM/DD/YYYY') as "Date", sum(case when rs.s_method Like '%RTS%' then 1 else 0 end) as "RTS Returned Orders" FROM return_sku rs join order_sku os on os.ord_sku_id = rs.ord_sku_id WHERE trunc(created_date) = trunc(SYSDATE) GROUP BY to_char(rs.cr_date, 'MM/DD/YYYY'); ``` Note a few things. The `group by` is unnecessary, because you are only returning one row (but I'm leaving it in because it was part of your original question). I also fixed the `join` syntax to use standard join syntax (`join` . . . `on`) rather than implicit joins.
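To make the difference concrete, a small sketch with Python's sqlite3 (hypothetical table and data): moving the filter into `SUM(CASE ...)` still produces a row containing 0 when nothing matches, which is exactly what the asker needs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (s_method TEXT);
INSERT INTO orders VALUES ('POST'), ('RTS-STD'), ('COURIER');
""")

# filter moved into the CASE: matching rows count as 1, others as 0
n = conn.execute("""SELECT SUM(CASE WHEN s_method LIKE '%RTS%' THEN 1 ELSE 0 END)
                    FROM orders""").fetchone()[0]

# with no matches at all we still get a row containing 0, not an empty result set
zero = conn.execute("""SELECT SUM(CASE WHEN s_method LIKE '%XYZ%' THEN 1 ELSE 0 END)
                       FROM orders""").fetchone()[0]
```

Had the `LIKE` condition stayed in the `WHERE` clause, the second query would return no rows at all, which is the asker's original problem.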
You could do something like this: ``` select "Date", sum("RTS Returned Orders") "RTS Returned Orders" from (select to_char(sysdate, 'MM/DD/YYYY') "Date", 0 "RTS Returned Orders" from dual union SELECT to_char(rs.cr_date, 'MM/DD/YYYY') "Date", COUNT(os.ord_id) "RTS Returned Orders" FROM return_sku rs, order_sku os WHERE rs.s_method Like '%RTS%' AND trunc(created_date) = trunc(SYSDATE) AND os.ord_sku_id = rs.ord_sku_id GROUP BY to_char(rs.cr_date, 'MM/DD/YYYY') ) temp group by "Date" ``` Not sure about the syntax with all those quoted aliases, but the general idea should work: union in a default row of (today, 0) so there is always at least one row, then aggregate.
How to Get a Query Result even if Result is 0 or Null
[ "", "sql", "oracle", "null", "" ]
I have been working on a project for some time. The project has been released and is in production. We have now noticed that one of the tables that stores events for products is growing rapidly; it produces around 6.5 million rows each month, give or take a few hundred thousand. This of course makes the database take up a lot of HDD space, and it will not get smaller (around 12 GB in size); we are on our way to running out of space. The thought of running a script each night to keep only a small number of history rows has occurred to me, but this will of course not free up HDD space; for that I would need to shrink the database file. I have read around on the net about shrinking data files, and found pros and cons for why one should not do it. Could the handling of the history table be done in some other way, or should I run the delete script in a maintenance plan and once a week/month run a shrink job in a maintenance plan? At the moment I have a maintenance plan with: - Reorganize index (3 times a week) - Update statistics (3 times a week) - Clear old, unneeded rows in some tables (every night) The system itself has over 14,000 units doing selects/writes every minute or two; it can sometimes be very slow, and some SQL queries seem to hit a command timeout (30s).
Data archival can be tricky. You could copy the history to another database (perhaps on another server), which would help with your HDD space concerns. But even without talking about what you decide to do with the old data, once you delete it from your table, you shouldn't have to shrink your database. That space will be freed up inside of the database file, and you can reuse it for the incoming records in that table. The main reason you don't want to shrink is because you'll eventually grow out that file again when it needs the space, which will cause fragmentation and performance issues. Plus, you'll have a bunch of unnecessary IO for shrinking and growing the file, reorganizing/rebuilding those fragmented indexes, etc. So I would say to leave the database file as it is and try to maintain it at the same size, just deleting and adding records to that big table. If it continues to grow because you keep adding more than you delete, then you'll need to figure out another solution, such as adding additional HDDs for more data files.
You also have the option to do table partitioning. You could have several filegroups assigned to different disks and allocate these resources to the table. SQL Server will do horizontal table partitioning based on the values you specify. This is a good overview of SQL table partitioning: <http://databases.about.com/od/sqlserver/a/partitioning.htm> If your table also has data actively inserted and deleted, you will get better performance from truncate instead of delete. The main difference is that the truncate operation is minimally logged (it deallocates pages rather than logging each row), so it is much cheaper than a row-by-row delete. Hope this helps
History tables in SQL Server
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
``` SELECT Id FROM Container_A JOIN Container_B ON Container_A.Container_B_FK = Container_B.Id ORDER BY Container_A.Id ``` This query returns all Container_A items related to Container_B. The question is: how do I get only the first item (the one with the minimum Id) related to each Container_B item?
The problem with queries like this is that "first" has no meaning at all unless you specify what the order of the items are within the group. If you want the lowest item id, then you can group on the container b and use the `min` aggreagate to get the lowest item id from each group: ``` select b.Id, min(a.Id) as A_Id from Container_B b inner join Container_A a on a.Container_B_FK = b.Id group by b.Id order by b.Id ```
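A runnable sketch of this approach using Python's sqlite3, with made-up ids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE container_b (id INTEGER PRIMARY KEY);
CREATE TABLE container_a (id INTEGER PRIMARY KEY, container_b_fk INTEGER);
INSERT INTO container_b VALUES (1), (2);
INSERT INTO container_a VALUES (10, 1), (11, 1), (12, 2);
""")

# MIN(a.id) per group picks exactly one "first" item per container_b row
rows = conn.execute("""
    SELECT b.id, MIN(a.id) AS a_id
    FROM container_b b
    JOIN container_a a ON a.container_b_fk = b.id
    GROUP BY b.id
    ORDER BY b.id""").fetchall()
```

Container B row 1 has items 10 and 11, so only 10 survives; row 2 keeps its single item 12.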
``` SELECT top 1 *,min(Id) as minimum FROM Container_A JOIN Container_B ON Container_A.Container_B_FK = Container_B.Id group by Container_A.id,Container_B.id ORDER BY minimum ```
Return not all, but only first item of each joined table item
[ "", "sql", "sql-server", "join", "" ]
I have found some SQL queries in an application I am examining like this: ``` SELECT DISTINCT Company, Warehouse, Item, SUM(quantity) OVER (PARTITION BY Company, Warehouse, Item) AS stock ``` I'm quite sure this gives the same result as: ``` SELECT Company, Warehouse, Item, SUM(quantity) AS stock GROUP BY Company, Warehouse, Item ``` Is there any benefit (performance, readability, additional flexibility in writing the query, maintainability, etc.) of using the first approach over the later?
## Performance: **Winner: `GROUP BY`** Some very rudimentary testing on a large table with unindexed columns showed that at least in my case the two queries generated a completely different query plan. The one for `PARTITION BY` was significantly slower. The `GROUP BY` query plan included only a table scan and aggregation operation while the `PARTITION BY` plan had two nested loop self-joins. The `PARTITION BY` took about 2800ms on the second run, the `GROUP BY` took only 500ms. ## Readability / Maintainability: **Winner: `GROUP BY`** Based on the opinions of the commenters here the `PARTITION BY` is less readable for most developers so it will be probably also harder to maintain in the future. ## Flexibility **Winner: `PARTITION BY`** `PARTITION BY` gives you more flexibility in choosing the grouping columns. With `GROUP BY` you can have only one set of grouping columns for all aggregated columns. With `DISTINCT + PARTITION BY` you can have different column in each partition. Also on some DBMSs you can chose from more aggregation/analytic functions in the `OVER` clause.
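The equivalence of the two forms for this shape of query can be checked with a quick sketch in Python's sqlite3. The data here is made up, and the `OVER (PARTITION BY ...)` form requires SQLite 3.25+ (bundled with recent Python builds).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock_moves (warehouse TEXT, item TEXT, quantity INTEGER);
INSERT INTO stock_moves VALUES
  ('W1','nail',5), ('W1','nail',3), ('W1','screw',2), ('W2','nail',7);
""")

# classic aggregation
grouped = conn.execute("""
    SELECT warehouse, item, SUM(quantity) AS stock
    FROM stock_moves GROUP BY warehouse, item
    ORDER BY warehouse, item""").fetchall()

# DISTINCT + windowed SUM, as in the code being examined
windowed = conn.execute("""
    SELECT DISTINCT warehouse, item,
           SUM(quantity) OVER (PARTITION BY warehouse, item) AS stock
    FROM stock_moves ORDER BY warehouse, item""").fetchall()
```

Both queries collapse the two 'W1'/'nail' rows into a single row with stock 8, so the result sets are identical; only the work the engine does to get there differs.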
Using `sum()` as an analytic function with `over partition by` is not necessary here. I don't think there is a big difference between them in any sense. In Oracle there are a lot more analytic functions than aggregate functions; I think MS SQL is the same. And, for example, `lag()`, `lead()`, `rank()`, `dense_rank()`, etc. are much harder to implement with only `group by`. Of course this argument is not really for defending the first version... Maybe there were previously more computed fields in the result set which are not implementable with group by.
DISTINCT with PARTITION BY vs. GROUPBY
[ "", "sql", "sql-server", "group-by", "distinct", "query-performance", "" ]
I want to be able to list the 2nd lowest priced product sold for each day, running queries in Microsoft SQL Server Management Studio 2008. I know how to list the lowest (thanks to the guys who answered that question here), but what if I just want to list the 2nd, 3rd, or 4th lowest? Let's just focus on how to list the 2nd lowest. This is what the table looks like; its name is SALES ``` DATE PRODUCT_SOLD PRICE 2013-11-15, crab pot , 21.15 2013-11-15, bait , 3.50 2013-11-15, hooks , 11.99 2013-11-15 , sinkers , 1.99 2013-11-15 , fishing rod , 49.99 2013-11-16 , baitcaster , 29.99 2013-11-16 , squid bait , 3.50 2013-11-16 , knife , 9.95 2013-11-17 , fishing rod , 99.95 2013-11-17 , net , 25.99 ``` How do I display it so it just lists the 2nd lowest priced product for each day, like this below? ``` 2013-11-15, bait , 3.50 2013-11-16 , knife , 9.95 2013-11-17 , fishing rod , 99.95 ```
Give this a try: ``` SELECT date, product_sold, price FROM ( SELECT date, product_sold, price, DENSE_RANK() OVER (PARTITION BY date ORDER BY price) rank FROM sales ) t WHERE rank = 2 ``` Fiddle [here](http://sqlfiddle.com/#!6/6d46e/1). Note that if you have values `1`, `1` and `2` this will return `2`. If you would like to return the *second* `1` then use `ROW_NUMBER` instead of `DENSE_RANK`. If you're looking for a generic solution that would in most DBMS then you can go for: ``` SELECT s2.date, min(s2.price) price FROM ( SELECT date, min(price) price FROM sales GROUP BY date ) s1 JOIN sales s2 ON s1.date = s2.date AND s2.price > s1.price GROUP BY s2.date ``` Note this one would work as the `DENSE_RANK` solution.
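The same `DENSE_RANK` pattern can be exercised with Python's sqlite3 (window functions need SQLite 3.25+, bundled with recent Python builds); a subset of the question's rows is used here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (day TEXT, product TEXT, price REAL);
INSERT INTO sales VALUES
  ('2013-11-15','sinkers',1.99), ('2013-11-15','bait',3.50),
  ('2013-11-15','hooks',11.99),  ('2013-11-16','squid bait',3.50),
  ('2013-11-16','knife',9.95);
""")

# rank prices within each day, then keep only rank 2
rows = conn.execute("""
    SELECT day, product, price FROM (
        SELECT day, product, price,
               DENSE_RANK() OVER (PARTITION BY day ORDER BY price) AS rnk
        FROM sales)
    WHERE rnk = 2
    ORDER BY day""").fetchall()
```

On 2013-11-15 the prices 1.99, 3.50, 11.99 put bait at rank 2; on 2013-11-16 the knife is second behind the squid bait.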
You are unclear about what to do about ties. Here is a solution using `dense_rank()`: ``` select date, product, price from (select s.*, dense_rank() over (partition by date order by price ) as seqnum from sales ) s where seqnum = 2; ```
Now I want to be able to list the 2nd lowest priced product sold for the day
[ "", "sql", "sql-server", "database", "" ]
I am using DB2 9.5. I have created a column in a table, which was created successfully, but I am not able to update that column and I am getting the following error: > [Error] Script lines: 1-1 -------------------------- DB2 SQL error: SQLCODE: -668, SQLSTATE: 57016, SQLERRMC: 7;DB2ADMIN.XCATENTRYEXT > Message: Operation not allowed for reason code "7" on table "DB2ADMIN.XCATENTRYEXT". Following some blogs/sites found on Google, I came across the REORG command as a solution, as mentioned in the following link: <http://bytes.com/topic/db2/answers/508869-reorg-tablespace> I have tried running the following queries on the database to solve the problem: ``` Database["DB2"].ExecuteNonQuery("call SYSPROC.ADMIN_CMD ('REORG TABLE DB2ADMIN.XCATENTRYEXT index CATENTRY_ID INPLACE')") REORG TABLE DB2ADMIN.XCATENTRYEXT index CATENTRY_ID INPLACE REORG TABLE DB2ADMIN.XCATENTRYEXT REORG INDEXES I0000908 FOR TABLE DB2ADMIN.XCATENTRYEXT ``` but all queries result in the same error: ``` DB2 SQL error: SQLCODE: -104, SQLSTATE: 42601, SQLERRMC: Database;BEGIN-OF-STATEMENT;<variable_set> Message: An unexpected token "Database" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "<variable_set>". ``` I am stuck on this error; I am not even able to update any column of that particular table.
It follows from the error message that you are somehow submitting the entire string `Database["DB2"].ExecuteNonQuery("call SYSPROC.ADMIN_CMD ('REORG TABLE DB2ADMIN.XCATENTRYEXT index CATENTRY_ID INPLACE')")` as a SQL statement, which is obviously incorrect. Simply issue these on the shell command line: ``` db2 connect to <your database name here> db2 REORG TABLE DB2ADMIN.XCATENTRYEXT ```
It is possible to do REORG through an SQL statement: ``` CALL SYSPROC.ADMIN_CMD('REORG TABLE SCHEMA.TABLENAME'); ```
Failing update table in db2 with SQLCODE: -668, SQLSTATE: 57016, SQLERRMC: 7;
[ "", "sql", "database", "db2", "db2-luw", "" ]
I'm working on a small school project where I'm creating a movie database in SQL. I've created the tables and was wondering if I will encounter any problems with the model I created. Thanks in advance. Current diagram ![enter image description here](https://i.stack.imgur.com/r4moC.png) Edit: Here is the new diagram ![enter image description here](https://i.stack.imgur.com/r4moC.png)
MovieDetails is bad design. You need one row in MovieDetails per actor, while the director will be the same, which is data duplication. Instead, the Movie table should have a foreign key referencing director, then a MovieActor table should represent the many to many relationship between movies and actors. Technically, there's also no reason to have different tables for Directors and Actors since you have the same data in the tables. You could just as well have a Person table with both.
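A minimal sketch of that normalized shape (hypothetical names and data, run through Python's sqlite3 just to show the tables join up): `movie` carries a single director foreign key, and a `movie_actor` junction table handles the many-to-many relationship.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE movie  (id INTEGER PRIMARY KEY, title TEXT,
                     director_id INTEGER REFERENCES person(id));
CREATE TABLE movie_actor (movie_id INTEGER REFERENCES movie(id),
                          actor_id INTEGER REFERENCES person(id),
                          PRIMARY KEY (movie_id, actor_id));
INSERT INTO person VALUES (1,'Nolan'), (2,'Bale'), (3,'Caine');
INSERT INTO movie VALUES (1,'Batman Begins',1);
INSERT INTO movie_actor VALUES (1,2), (1,3);
""")

# one director row per movie, any number of actors via the junction table
cast = conn.execute("""
    SELECT p.name FROM movie_actor ma
    JOIN person p ON p.id = ma.actor_id
    WHERE ma.movie_id = 1
    ORDER BY p.name""").fetchall()
```

The director is stored once on the movie row, so adding a tenth actor never duplicates it.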
How about (I am only showing the relevant columns) ``` movie table ----------- id title person ------ id name cast table ---------- movie_id person_id movie_role_id (actor, director, ...) role_type table ---------------- id name (actor, director, ...) genres table ------------ id name movie_genres table ------------------ movie_id genre_id ```
SQL movie database diagram
[ "", "sql", "sql-server", "database", "diagram", "" ]
Consider the following Postgresql database table: ``` id | book_id | author_id --------------------------- 1 | 1 | 1 2 | 2 | 1 3 | 3 | 2 4 | 4 | 2 5 | 5 | 2 6 | 6 | 3 7 | 7 | 2 ``` In this example, Author 1 has written 2 books, Author 2 has written 4 books, and Author 3 has written 1 book. How would I determine the average number of books written by an author using SQL? In other words, I'm trying to get, "An author has written an average of 2.3 books". Thus far, attempts with AVG and COUNT have failed me. Any thoughts?
``` select avg(totalbooks) from (select count(1) totalbooks from books group by author_id) bookcount ``` I think your example data actually only has 3 books for author id 2, so this would not return 2.3 <http://sqlfiddle.com/#!15/3e36e/1> With the 4th book: <http://sqlfiddle.com/#!15/67eac/1>
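The same subquery pattern, runnable with Python's sqlite3 against the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (id INTEGER PRIMARY KEY, book_id INTEGER, author_id INTEGER);
INSERT INTO books (book_id, author_id) VALUES
  (1,1),(2,1),(3,2),(4,2),(5,2),(6,3),(7,2);
""")

# inner query: books per author; outer query: average of those counts
avg_books = conn.execute("""
    SELECT AVG(total) FROM
      (SELECT COUNT(*) AS total FROM books GROUP BY author_id)""").fetchone()[0]
```

With counts of 2, 4, and 1 books for the three authors, the average comes out to 7/3, roughly the 2.3 the asker expected.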
You'll need a subquery. The inner query will `count` the books with `GROUP BY author`; the outer query will scan the results of the inner query and `avg` them. You can use a subquery in the `FROM` clause for this, or you can use a CTE (`WITH` expression).
How do you determine the average total of a column in Postgresql?
[ "", "sql", "postgresql", "count", "average", "" ]
I am a bit lost here. I would like to join these two queries into one to avoid two connections and simplify the code. ``` "SELECT uname FROM Projects WHERE id IN (SELECT MAX(id) FROM Projects)" "SELECT name FROM Users WHERE username like"+"'"+ uname +"'" ``` Right now I am using two queries and taking the result of the first into the second one. I am sure the two queries can become one but I do not know how to do it.
You can simply use `=` rather than `LIKE`, since you are not using pattern symbols such as `%`. ``` SELECT a.name FROM Users a INNER JOIN Projects b ON a.username = b.uname WHERE b.ID = (SELECT MAX(id) FROM Projects) ```
You may try like this using the `INNER JOIN` considering that both of your tables are linked through `User_ID` ``` SELECT u.name FROM Users u INNER JOIN Projects p ON u.username = p.uname WHERE p.ID = (SELECT MAX(id) FROM Projects) ```
SQL join two queries
[ "", "sql", "sql-server", "" ]
So I have 2 tables **Places** ``` PlaceID varchar PK PlaceName varchar ``` **Trips** ``` TripID int PK Depart TIME DepartPlc varchar ArrivalPlc varchar ArrivalTime TIME ``` `DepartPlc` and `ArrivalPlc` both have `PlaceID`'s stored. I want to select the data from Trips, but show the `PlaceName` that corresponds to the ID stored in `DepartPlc` and `ArrivalPlc`. Can anyone help me on my way? I currently have this statement: ``` SELECT TripID, Depart, PlaceName, ArrivalPlc, ArrivalTime FROM Trips, Places WHERE TripID = 'VALUE' AND PlaceName = DepartPlc; ``` This works as I want it to, but when I add the same thing for ArrivalPlc, obviously that does not work...
Just JOIN Twice using two different table aliases ``` SELECT DPlaces.placeid D_PlaceID, Aplaces.placeid A_PlaceID FROM trips t INNER JOIN places DPlaces ON t.departplc = DPlaces.placename INNER JOIN places APlaces ON t.arrivalplc = aplaces.placename WHERE t.TripID = 'VALUE' ```
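A self-contained sketch of the double join using Python's sqlite3. Note that this version joins on the place id, per the question's statement that `DepartPlc`/`ArrivalPlc` store `PlaceID` values; the table contents are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE places (place_id TEXT PRIMARY KEY, place_name TEXT);
CREATE TABLE trips  (trip_id INTEGER PRIMARY KEY, depart_plc TEXT, arrival_plc TEXT);
INSERT INTO places VALUES ('AMS','Amsterdam'), ('BRU','Brussels');
INSERT INTO trips VALUES (1,'AMS','BRU');
""")

# the same lookup table under two aliases, one per foreign key column
row = conn.execute("""
    SELECT t.trip_id, dp.place_name, ap.place_name
    FROM trips t
    JOIN places dp ON t.depart_plc  = dp.place_id
    JOIN places ap ON t.arrival_plc = ap.place_id
    WHERE t.trip_id = 1""").fetchone()
```

Each alias resolves one of the two place columns independently, which is exactly what a single join cannot do.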
You need to use the `places` table twice in your `from` clause: ``` select tripId, depart, departPlc as departPlaceId, dp.placeName as departPlaceName, arrivalPlc as arrivalPlaceId, ap.placeName as arrivalPlaceName from trips as t inner join places as dp on t.departPlc = dp.placeId inner join places as ap on t.arrivalPlc = ap.placeId where t.tripId = 'VALUE' ``` Notice that I moved the relation conditions from the `where` clause to the `from` clause (using inner join) for clarity. That way you keep things cleanly separated: the relations and the filters.
SQL is this possible...it probably is
[ "", "sql", "sql-server", "" ]
I want to fill the calendar object which requires start and end date information. I have one column which contains a sequence of dates. Some of the dates are consecutive (have one day difference) and some are not. ``` InfoDate 2013-12-04 consecutive date [StartDate] 2013-12-05 consecutive date 2013-12-06 consecutive date [EndDate] 2013-12-09 [startDate] 2013-12-10 [EndDate] 2014-01-01 [startDate] 2014-01-02 2014-01-03 [EndDate] 2014-01-06 [startDate] 2014-01-07 [EndDate] 2014-01-29 [startDate] 2014-01-30 2014-01-31 [EndDate] 2014-02-03 [startDate] 2014-02-04 [EndDate] ``` I want to pick each consecutive dates range’s start and end date (the first one and the last one in the block). ``` StartDate EndDate 2013-12-04 2013-12-06 2013-12-09 2013-12-10 2014-01-01 2014-01-03 2014-01-06 2014-01-07 2014-01-29 2014-01-31 2014-02-03 2014-02-04 ``` I want to solve the problem using SQL only.
No joins or recursive CTEs needed. The standard gaps-and-island solution is to group by (value minus row\_number), since that is invariant within a consecutive sequence. The start and end dates are just the MIN() and MAX() of the group. ``` WITH t AS ( SELECT InfoDate d,ROW_NUMBER() OVER(ORDER BY InfoDate) i FROM @d GROUP BY InfoDate ) SELECT MIN(d),MAX(d) FROM t GROUP BY DATEDIFF(day,i,d) ```
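The same gaps-and-islands grouping can be sketched with Python's sqlite3 (window functions require SQLite 3.25+); here `julianday(d) - i` plays the role of `DATEDIFF(day, i, d)`, staying constant within each consecutive run of dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dates (infodate TEXT);
INSERT INTO dates VALUES
  ('2013-12-04'),('2013-12-05'),('2013-12-06'),
  ('2013-12-09'),('2013-12-10');
""")

# date-minus-row_number is identical for every member of a consecutive run
ranges = conn.execute("""
    WITH t AS (SELECT infodate AS d,
                      ROW_NUMBER() OVER (ORDER BY infodate) AS i
               FROM dates)
    SELECT MIN(d), MAX(d) FROM t
    GROUP BY julianday(d) - i
    ORDER BY MIN(d)""").fetchall()
```

The five dates collapse into the two islands from the question: Dec 4 to 6 and Dec 9 to 10.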
Here you go.. ``` ;WITH CTEDATES AS ( SELECT ROW_NUMBER() OVER (ORDER BY Infodate asc ) AS ROWNUMBER,infodate FROM YourTableName ), CTEDATES1 AS ( SELECT ROWNUMBER, infodate, 1 as groupid FROM CTEDATES WHERE ROWNUMBER=1 UNION ALL SELECT a.ROWNUMBER, a.infodate,case datediff(d, b.infodate,a.infodate) when 1 then b.groupid else b.groupid+1 end as gap FROM CTEDATES A INNER JOIN CTEDATES1 B ON A.ROWNUMBER-1 = B.ROWNUMBER ) select min(infodate) as startdate, max(infodate) as enddate from CTEDATES1 group by groupid ```
Detect consecutive dates ranges using SQL
[ "", "sql", "sql-server", "sql-server-2008", "gaps-and-islands", "" ]
I found something that I cannot resolve by myself (lack of problem-solving techniques... :( ). I need a query that selects each user by id and shows their last login info, for example from this table: ``` id name loginTime logoutTime 3 test 2013-11-25 22:50:00 null 4 test1 2013-11-20 07:23:18 null 6 test2 2013-11-19 11:17:22 null 3 test3 2013-11-27 14:20:54 null 16 test4 2013-11-09 13:52:21 null 3 test 2013-12-02 23:07:43 null 2 test5 2013-11-11 18:15:31 null 4 test1 2013-11-17 19:13:59 null 6 test2 2013-11-30 03:10:07 null ... ``` I need to get: ``` id name loginTime logoutTime 2 test5 2013-11-11 18:15:31 null 3 test 2013-12-02 23:07:43 null 4 test1 2013-11-20 07:23:18 null 6 test2 2013-11-30 03:10:07 null 16 test4 2013-11-09 13:52:21 null ... ``` So far I have used DISTINCT to get distinct users, but there is a problem with getting the last login date per distinct user... What is the best and proper way to achieve this?
``` select max(loginTime), id, name from your_table group by id, name ```
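Runnable against a few of the question's rows with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logins (id INTEGER, name TEXT, login_time TEXT);
INSERT INTO logins VALUES
  (3,'test','2013-11-25 22:50:00'),
  (3,'test','2013-12-02 23:07:43'),
  (4,'test1','2013-11-20 07:23:18');
""")

# GROUP BY collapses each user to one row; MAX picks the latest login
rows = conn.execute("""
    SELECT id, name, MAX(login_time)
    FROM logins GROUP BY id, name
    ORDER BY id""").fetchall()
```

User 3's two logins collapse to the later December one, which is the "last login" the asker wants.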
``` SELECT id, name, logintime, logouttime FROM yourtable a WHERE logintime = (SELECT MAX(logintime) FROM yourtable b WHERE a.id = b.id) ```
distinct users, get last login info
[ "", "sql", "t-sql", "" ]
I'm trying to create a search function that uses a TextBox and two DropDownLists to return entries from my MSSQL DB. The way I have it set up is by passing the search queries via QueryStrings to a GridView on a separate page. Here's what I have for the search function: ``` Partial Class Includes_LeftCol Inherits System.Web.UI.UserControl Public Sub btnSearch_Click(sender As Object, e As System.EventArgs) Handles btnSearch.Click Dim SelectURLRangePrice As String If tbxSearch.Text.Length = 0 And ddlRangeName.SelectedValue = "" And ddlPriceRange.SelectedValue = "" Then SelectURLRangePrice = "SearchResults.aspx" Response.Redirect(SelectURLRangePrice) ElseIf tbxSearch.Text.Length = 0 And ddlRangeName.SelectedValue = "" And ddlPriceRange.SelectedValue.Length > 0 Then SelectURLRangePrice = "SearchResults.aspx?price=" & ddlPriceRange.Text Response.Redirect(SelectURLRangePrice) ElseIf tbxSearch.Text.Length = 0 And ddlRangeName.SelectedValue.Length > 0 And ddlPriceRange.SelectedValue.Length > 0 Then SelectURLRangePrice = "SearchResults.aspx?range=" & ddlRangeName.Text & "&price=" & ddlPriceRange.Text Response.Redirect(SelectURLRangePrice) End If Dim SelectURLRange As String If tbxSearch.Text.Length = 0 And ddlRangeName.SelectedValue.Length = 0 Then SelectURLRange = "SearchResults.aspx" Response.Redirect(SelectURLRange) ElseIf tbxSearch.Text.Length = 0 And ddlRangeName.SelectedValue.Length > 0 Then SelectURLRange = "SearchResults.aspx?range=" & ddlRangeName.Text Response.Redirect(SelectURLRange) End If Dim SelectURL As String If tbxSearch.Text.Length = 0 Then SelectURL = "SearchResults.aspx" Response.Redirect(SelectURL) ElseIf tbxSearch.Text.Length > 0 Then SelectURL = "SearchResults.aspx?search=" & tbxSearch.Text Response.Redirect(SelectURL) End If End Sub End Class ``` And here's my SQL statement: ``` SELECT Product_Rental, Product_ID, Range_Name, Model_Name, Product_Name, Product_Year, Product_Code, Product_Active, Product_DateAdded FROM Products WHERE (Range_Name LIKE '%' + @Range_Name + '%') OR (Model_Name LIKE '%' + @Model_Name + '%') OR (Product_Name LIKE '%' + @Product_Name + '%') OR (Product_Code LIKE '%' + @Product_Code + '%') OR (Product_Year LIKE '%' + @Product_Year + '%') OR (Product_Rental BETWEEN @Product_Rental AND @Product_Rental + 50) ``` Once submitted, I get the following error: **Conversion failed when converting the varchar value '[e.g. 209.35]' to data type int.** I'm a beginner at this so apologies if it's something stupid, and I've tried to go as far as I can before posting, but I just can't get past this. Thanks in advance.
The problem is here: ``` Product_Rental BETWEEN @Product_Rental AND @Product_Rental + 50 ``` Because you are using `+ 50`, [data type precedence](http://technet.microsoft.com/en-us/library/ms190309.aspx) kicks in. The docs state: > When an operator combines two expressions of different data types, the rules for data type precedence specify that the data type with the lower precedence is converted to the data type with the higher precedence. If the conversion is not a supported implicit conversion, an error is returned. When both operand expressions have the same data type, the result of the operation has that data type. So internally SQL Server attempts to convert `@Product_Rental` to an int (because 50 is an int, and an int has a higher precedence than varchar). So whenever the parameter holds a value with a decimal point, that implicit conversion fails. This can be reproduced fairly simply: ``` DECLARE @Var VARCHAR(6) = '50'; SELECT * FROM (VALUES (1)) T (A) WHERE A < @Var + 50; ``` Then checking the execution plan XML we can see the conversion behind the scenes: ``` <ScalarOperator ScalarString="(1)&lt;(CONVERT_IMPLICIT(int,[@Var],0)+(50))"> ``` This basically shows that SQL Server has essentially turned ``` A < @Var + 50 ``` Into ``` A < CONVERT(INT, @Var) + 50; ``` This is fine if `@Var` converts to an int, but if it has a decimal point it will not, as can be shown by: ``` SELECT CONVERT(INT, '50.0'); ``` I would suggest that if your column `Product_Rental` contains decimal data, then it should be of the decimal type, as should any parameters used to filter the column.
The SQL Type INT is for whole numbers (1,2,3). FLOAT could support decimals. And text characters like [ and ] definitely won't work with an INT column. Edit from Comments: Since the Product\_Rental field is VarChar try changing to this (if you don't want to change the type on the table, which I recommend): ``` --Make @Product_Rental a numeric type first. (CAST(Product_Rental AS FLOAT) BETWEEN @Product_Rental AND @Product_Rental + 50) ```
Conversion failed when converting the varchar value '[]' to data type int
[ "", "asp.net", "sql", "sql-server", "vb.net", "search", "" ]
I have searched far and wide for an answer to this problem. I'm using a Microsoft SQL Server, suppose I have a table that looks like this: ``` +--------+---------+-------------+-------------+ | ID | NUMBER | COUNTRY | LANG | +--------+---------+-------------+-------------+ | 1 | 3968 | UK | English | | 2 | 3968 | Spain | Spanish | | 3 | 3968 | USA | English | | 4 | 1234 | Greece | Greek | | 5 | 1234 | Italy | Italian | ``` I want to perform one query which only selects the unique 'NUMBER' column (whether is be the first or last row doesn't bother me). So this would give me: ``` +--------+---------+-------------+-------------+ | ID | NUMBER | COUNTRY | LANG | +--------+---------+-------------+-------------+ | 1 | 3968 | UK | English | | 4 | 1234 | Greece | Greek | ``` How is this achievable?
Since you don't care, I chose the max ID for each number. ``` select tbl.* from tbl inner join ( select max(id) as maxID, number from tbl group by number) maxID on maxID.maxID = tbl.id ``` --- **Query Explanation** ``` select tbl.* -- give me all the data from the base table (tbl) from tbl inner join ( -- only return rows in tbl which match this subquery select max(id) as maxID -- MAX (ie distinct) ID per GROUP BY below from tbl group by NUMBER -- how to group rows for the MAX aggregation ) maxID on maxID.maxID = tbl.id -- join condition ie only return rows in tbl -- whose ID is also a MAX ID for a given NUMBER ```
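The max-id-per-group join above can be sketched end to end with Python's sqlite3, purely for illustration (same sample data as the question; sqlite is not the asker's SQL Server, but the join is standard SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER, number INTEGER, country TEXT, lang TEXT)")
con.executemany(
    "INSERT INTO tbl VALUES (?, ?, ?, ?)",
    [(1, 3968, "UK", "English"), (2, 3968, "Spain", "Spanish"),
     (3, 3968, "USA", "English"), (4, 1234, "Greece", "Greek"),
     (5, 1234, "Italy", "Italian")],
)

# One row per distinct number: the row whose id is the MAX for that number.
rows = con.execute("""
    SELECT tbl.*
    FROM tbl
    INNER JOIN (SELECT MAX(id) AS maxID, number
                FROM tbl
                GROUP BY number) m
            ON m.maxID = tbl.id
    ORDER BY tbl.id
""").fetchall()
print(rows)
```

Swap `MAX` for `MIN` if, as in the question's expected output, you want the first row per group instead of the last.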
A very typical approach to this type of problem is to use `row_number()`: ``` select t.* from (select t.*, row_number() over (partition by number order by id) as seqnum from t ) t where seqnum = 1; ``` This is more generalizable than using a comparison to the minimum id. For instance, you can get a random row by using `order by newid()`. You can select 2 rows by using `where seqnum <= 2`.
SQL - select distinct only on one column
[ "", "sql", "sql-server", "unique", "distinct", "" ]
``` SELECT Count(*) FROM stu_empl_pers WHERE site_id = 'DEXLER'; ``` returns 22 records, but ``` SELECT Count(*) FROM stu_empl_pers a WHERE a.status = 'A' OR a.faculty = 1 OR a.staff = 1 AND a.site_id = 'DEXLER'; ``` returns 691 records. It considers different site\_id values also. Can anybody elaborate on this?
> I need status=A and site\_id='dexler' as mandatory. Then could you try this? ``` Select count(*) from stu_empl_pers a WHERE (a.faculty=1 OR a.staff=1) AND a.status='A' AND a.site_Id = 'DEXLER'; ```
``` WHERE a.status='A' or a.faculty=1 OR a.staff=1 AND a.site_Id = 'DEXLER'; (true/false or true/false or (true/false and true/false)) ``` Maybe this makes it easier to see what your `WHERE` statement means: if any one of the `OR`ed conditions (except the one combined with `AND`) becomes true, the record will be added to the result. Think of `AND` like the multiplication operator in math: it has priority over `OR`, which can be considered like the sum operator. 2\*2-1 = 3, not 2.
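The precedence difference described in both answers can be demonstrated with a tiny, self-contained example (Python's sqlite3 for illustration only; the column names and sample rows are invented, but `AND` binding tighter than `OR` is standard SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (status TEXT, staff INTEGER, site_id TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [("A", 0, "DEXLER"), ("A", 0, "OTHER"),
                 ("B", 1, "OTHER"), ("B", 0, "OTHER")])

# AND binds tighter than OR, so this matches status='A' rows from ANY site.
loose = con.execute(
    "SELECT COUNT(*) FROM t WHERE status='A' OR staff=1 AND site_id='DEXLER'"
).fetchone()[0]

# Parentheses force the site_id condition to apply to every matched row.
strict = con.execute(
    "SELECT COUNT(*) FROM t WHERE (status='A' OR staff=1) AND site_id='DEXLER'"
).fetchone()[0]

print(loose, strict)
```

The un-parenthesized query counts rows from other sites, which is exactly the 22-versus-691 surprise in the question.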
Select statement gives different set of records
[ "", "mysql", "sql", "" ]
I need to filter some results from a query where the field must have more than a given length. I know that doesn't work, but it would be something like this: ``` SELECT * FROM MyTable WHERE COUNT(description) > 50 ``` Is that doable or will I have to filter that in PHP (in my case) later?
Given you're using MySQL, you're probably looking for `LENGTH()`. In SQL Server (T-SQL) the equivalent function is called `LEN()`; standard SQL defines `CHAR_LENGTH()`. If you are dealing with UTF-8, you will have to use `CHAR_LENGTH()` as `LENGTH()` measures the length in bytes while `CHAR_LENGTH()` will correctly measure the length in characters.
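A minimal runnable sketch of the length filter, using Python's sqlite3 for illustration (sqlite's `LENGTH()` counts characters for text values; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (description TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?)",
                [("short",), ("x" * 60,), ("y" * 51,)])

# COUNT() aggregates over rows; LENGTH() is the per-value string function.
rows = con.execute(
    "SELECT description FROM MyTable WHERE LENGTH(description) > 50"
).fetchall()
print(len(rows))
```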
``` SELECT * FROM MyTable WHERE LENGTH(description) > 50 ```
Filter results by the column length
[ "", "mysql", "sql", "" ]
this is my table 1: ``` NAME AGE SEX CITY ID Clara 22 f New York 1 Bob 33 m Washington 2 Sam 25 m Boston 3 ``` this is my table 2: ``` NUMBER ID 555-1111 1 555-2222 2 555-3333 3 ``` and now I want a table 3 which shows me all information: ``` NAME AGE SEX CITY ID NUMBER Clara 22 f New York 1 555-1111 Bob 33 m Washington 2 555-2222 Sam 25 m Boston 3 555-3333 ``` I tried first to insert into table 3 only the values from table 1 and then I inserted into table 3 the values from table 2 with an inner join where Id = Id is. ``` INSERT INTO table3 { name, age, sex, city, id} SELECT name, age, sex, city, id FROM table 1 INSERT INTO table3 { name, age, sex, city, id, number} SELECT name, age, sex, city, id, number FROM table 2 p INNER JOIN table 3 c ON c.Id = p.Id ``` But all I get is a duplication of my values. instead of having 3 entries, I have like 9 entries, which some have number null, some have only the number and the rest null, and some are correct. I hope someone can help me **EDIT** If I am having now a third Table like this one: ``` NATIONALITY ID Canadian 1 American 2 French 3 ``` How could I merge all 3 tables into one Table?
You only need one INSERT: ``` INSERT INTO destinationTable( name, age, sex, city, id, number, nationality) SELECT name, age, sex, city, s1.id, number, s3.nationality FROM sourceTable1 s1 INNER JOIN sourceTable2 s2 ON s2.Id = s1.Id INNER JOIN sourceTable3 s3 ON s3.Id = s1.Id ```
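The single INSERT-from-a-join approach can be sketched with the question's own sample data (Python's sqlite3 here purely for illustration; the third nationality table from the question's edit is omitted, but a second `INNER JOIN` extends the pattern the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (name TEXT, id INTEGER);
    CREATE TABLE t2 (number TEXT, id INTEGER);
    CREATE TABLE t3 (name TEXT, id INTEGER, number TEXT);
    INSERT INTO t1 VALUES ('Clara', 1), ('Bob', 2), ('Sam', 3);
    INSERT INTO t2 VALUES ('555-1111', 1), ('555-2222', 2), ('555-3333', 3);
""")

# One INSERT ... SELECT does the whole job; no second pass, no duplicates.
con.execute("""
    INSERT INTO t3 (name, id, number)
    SELECT t1.name, t1.id, t2.number
    FROM t1 INNER JOIN t2 ON t2.id = t1.id
""")

rows = con.execute("SELECT * FROM t3 ORDER BY id").fetchall()
print(rows)
```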
I would suggest instead of creating a new table, you just use a view that combines the two tables, this way if any of the data in table 1 or table 2 changes, you don't need to update the third table: ``` CREATE VIEW dbo.YourView AS SELECT t1.Name, t1.Age, t1.Sex, t1.City, t1.ID, t2.Number FROM Table1 t1 INNER JOIN Table2 t2 ON t1.ID = t2.ID; ``` If you could have records in one table, and not in the other, then you would need to use a full join: ``` CREATE VIEW dbo.YourView AS SELECT t1.Name, t1.Age, t1.Sex, t1.City, ID = ISNULL(t1.ID, t2.ID), t2.Number FROM Table1 t1 FULL JOIN Table2 t2 ON t1.ID = t2.ID; ``` If you know all records will be in table 1 and only some in table 2, then you should use a `LEFT JOIN`: ``` CREATE VIEW dbo.YourView AS SELECT t1.Name, t1.Age, t1.Sex, t1.City, t1.ID, t2.Number FROM Table1 t1 LEFT JOIN Table2 t2 ON t1.ID = t2.ID; ``` If you know all records will be in table 2 and only some in table 2 then you could use a `RIGHT JOIN` ``` CREATE VIEW dbo.YourView AS SELECT t1.Name, t1.Age, t1.Sex, t1.City, t2.ID, t2.Number FROM Table1 t1 RIGHT JOIN Table2 t2 ON t1.ID = t2.ID; ``` Or just reverse the order of the tables and use a LEFT JOIN (I find this more logical than a right join but it is personal preference): ``` CREATE VIEW dbo.YourView AS SELECT t1.Name, t1.Age, t1.Sex, t1.City, t2.ID, t2.Number FROM Table2 t2 LEFT JOIN Table1 t1 ON t1.ID = t2.ID; ```
SQL INSERT INTO from multiple tables
[ "", "sql", "sql-server", "" ]
I was reading in 2600, but this article is also here <https://viaforensics.com/mobile-security/static-code-analysis-watchtower.html> Anyways there is a code block: ``` $result = mysql_query("SELECT * FROM users WHERE username = '{$_GET['username']}' AND `password` = SHA1('{$_GET['password']}')") ``` The author says "Readers of 2600 will spot the obvious SQL injections, but it seems that many programmers – remarkably – will not." Can someone explain and point out what he means? My guess was that he meant that, since there appears to be no cleaning of the data for characters notorious for injection, it's vulnerable? I'm a relative novice with PHP5/MySQL and went over this code over and over looking for what's wrong, but couldn't come to any other conclusion.
If `username` was something like `'' OR ''=''#` and `password` was `'anything'` it would short circuit the query to become: ``` SELECT * FROM users WHERE username ='' OR ''=''#AND password ='anything' ``` You can short circuit the logic by injecting SQL into parameters.
Let us expand on Namphibian's answer a bit. The $\_GET['parameter'] is a parameter that is part of the URL, so it will look something like: ``` http://link/foo.php?username=thatguy&password=whoa ``` where $\_GET['username'] is "thatguy" and $\_GET['password'] is "whoa". So let's put that in the code. ``` $result = mysql_query("SELECT * FROM users WHERE username = 'thatguy' AND `password` = SHA1('whoa')") ``` What would happen if we passed in "`' OR ''=''#`" for username? Let's just encode it for the URL: ``` http://link/foo.php?username=%27%20OR%20%27%27%3D%27%27%23&password=whoa ``` This will return ``` $result = mysql_query("SELECT * FROM users WHERE username = '' OR ''=''# AND `password` = SHA1('whoa')") ``` The `' OR ''=''#` will force the query to return all results and the `#` will comment out the rest of the MySQL statement, so who cares about the password. So the MySQL query will only look at ``` SELECT * FROM users WHERE username = '' OR ''='' ``` Just need someone to confirm the URL encoding.
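Neither answer shows the standard remedy, parameterized queries, so here is a hedged sketch of both the attack and the fix using Python's sqlite3 (sqlite line comments use `--` rather than MySQL's `#`, and the table and values are invented, but the principle is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 'secret')")

evil = "' OR '1'='1' --"  # attacker-controlled "username"

# Vulnerable: the input is pasted into the SQL text, exactly as in the PHP.
unsafe = con.execute(
    "SELECT * FROM users WHERE username = '%s' AND password = 'wrong'" % evil
).fetchall()

# Safe: placeholders send the value out of band; it can never change the SQL.
safe = con.execute(
    "SELECT * FROM users WHERE username = ? AND password = 'wrong'", (evil,)
).fetchall()

print(len(unsafe), len(safe))
```

The vulnerable version returns the row despite the wrong password, because the injected `OR '1'='1'` short-circuits the WHERE clause and the comment swallows the rest.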
Can you spot the SQL injection?
[ "", "mysql", "sql", "code-injection", "" ]
So i'm writing a function for oracle db but I get errors saying that the sql statements were ignored and so expressions are missing. The function: ``` CREATE OR REPLACE FUNCTION getQueueNumber (student IN VARCHAR,course IN CHAR) RETURN INT IS queuePosition INT; date TIMESTAMP; BEGIN SELECT waitingDate INTO date FROM WaitingCourseStudent wcs WHERE wcs.student=student AND wcs.course=course; IF SQL%NOTFOUND THEN queuePosition := NULL; ELSE SELECT COUNT(*) as pos INTO queuePosition FROM WaitingCourseStudent WHERE waitingDate<=date; END IF; RETURN queuePosition; END; ``` Full error message: ``` LINE/COL ERROR 8/5 PL/SQL: SQL Statement ignored 8/29 PL/SQL: ORA-00936: missing expression 13/9 PL/SQL: SQL Statement ignored 16/20 PL/SQL: ORA-00936: missing expression ```
What you're probably looking for, is error handling instead of just an if. ``` CREATE OR REPLACE FUNCTION getQueueNumber (p_student IN VARCHAR, p_course IN CHAR) RETURN INT IS queuePosition INT; v_date TIMESTAMP; BEGIN BEGIN SELECT waitingDate INTO v_date FROM WaitingCourseStudent wcs WHERE wcs.student = p_student AND wcs.course = p_course; EXCEPTION WHEN NO_DATA_FOUND THEN RETURN NULL; END; SELECT COUNT(*) as pos INTO queuePosition FROM WaitingCourseStudent WHERE waitingDate <= v_date; RETURN queuePosition; END; ``` The inner BEGIN/END is for the exception handling. An exception can only be handled within a BEGIN/END block and since you needed to do something after your exception, you need an inner BEGIN/END. An alternative would be ``` CREATE OR REPLACE FUNCTION getQueueNumber (p_student IN VARCHAR, p_course IN CHAR) RETURN INT IS queuePosition INT; v_date TIMESTAMP; BEGIN SELECT waitingDate INTO v_date FROM WaitingCourseStudent wcs WHERE wcs.student = p_student AND wcs.course = p_course; SELECT COUNT(*) as pos INTO queuePosition FROM WaitingCourseStudent WHERE waitingDate <= v_date; RETURN queuePosition; EXCEPTION WHEN NO_DATA_FOUND THEN RETURN NULL; END; ```
``` SELECT waitingDate INTO date <--- (DATE is a reserved keyword denoting the Oracle DATE datatype) FROM WaitingCourseStudent wcs WHERE wcs.student=student AND wcs.course=course; IF SQL%NOTFOUND THEN queuePosition := NULL; ``` Alternate suggestion: prefix your variables with **v\_** or something like that to avoid these kinds of errors.
SQL statements ignored in PL/SQL
[ "", "sql", "oracle", "plsql", "" ]
I have a table: ``` Visit (FromId, ToId, VisitTime) ``` where FromId and ToId are FKs to table ``` UserProfile (uid, name, age ...) ``` As a user with my UID I want to select all profiles I have visited or who visited me in one result set ordered by VisitTime and with the indication of the "direction of the visit". Is it possible to do it using only one MySQL query?
``` SELECT CASE WHEN a.FromID = 'yourIDHere' THEN c.Name ELSE b.Name END Name, CASE WHEN a.FromID = 'yourIDHere' THEN c.Age ELSE b.Age END Age, a.VisitTime, CASE WHEN a.FromID = 'yourIDHere' THEN 'You' ELSE 'Friend' END DirectionOfVisit FROM Visit a INNER JOIN UserProfile b ON a.FromID = b.Uid INNER JOIN UserProfile c ON a.ToID = c.Uid WHERE 'yourIDHere' IN (a.FromID, a.ToID) ORDER BY a.VisitTime ``` Brief Explanation: The query will display the name of your friend you visited or who have visited you and will also display the direction of the visit. When it displays `You`, it means that you have visited your friend's profile, otherwise it will display `Friend` if the friend have visited you. * [SQLFiddle Demo](http://sqlfiddle.com/#!2/e2d018/3)
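The double-join-with-CASE approach above, sketched end to end with Python's sqlite3 purely for illustration (user id 1 plays "yourIDHere"; names, dates and the reduced column list are invented for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE UserProfile (uid INTEGER, name TEXT);
    CREATE TABLE Visit (FromId INTEGER, ToId INTEGER, VisitTime TEXT);
    INSERT INTO UserProfile VALUES (1, 'me'), (2, 'ann'), (3, 'bob');
    INSERT INTO Visit VALUES (1, 2, '2013-01-01'),  -- I visited ann
                             (3, 1, '2013-01-02');  -- bob visited me
""")

# Join the profile table twice (visitor f, visited t) and let CASE pick
# the "other" person plus the direction label.
rows = con.execute("""
    SELECT CASE WHEN v.FromId = 1 THEN t.name ELSE f.name END AS name,
           v.VisitTime,
           CASE WHEN v.FromId = 1 THEN 'You' ELSE 'Friend' END AS direction
    FROM Visit v
    INNER JOIN UserProfile f ON v.FromId = f.uid
    INNER JOIN UserProfile t ON v.ToId = t.uid
    WHERE 1 IN (v.FromId, v.ToId)
    ORDER BY v.VisitTime
""").fetchall()
print(rows)
```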
You need two different sets of information (From's user profile, and To's user profile) depending on the direction of the visit. So you can: **do a double-JOIN combined with a lot of IF's** ``` SELECT IF ( FromId = f.uid, f.name, t.name ) AS name, IF ( FromId = f.uid, f.age, t.age ) AS age, ... FROM Visit JOIN UserProfile AS f ON (uid = FromId) JOIN UserProfile AS t ON (uid = ToId) ORDER BY VisitTime ``` **do the same, renaming fields, selecting all and making the choice in the code about which set of fields you'll extract from the tuple (the tuple is now double in size)** ``` SELECT f.name as f_name, t.name AS t_name, ... ``` **use a UNION. That's two queries rolled into one, and then you need a sort** ``` SELECT * FROM ( SELECT UserProfile.*, 'From' AS direction FROM UserProfile JOIN Visit ON (FromId = uid) UNION SELECT UserProfile.*, 'To' AS direction FROM UserProfile JOIN Visit ON (ToId = uid) ) AS visits ORDER BY VisitTime ``` I believe this third option is probably the simplest. The first may yield better performance, or not, depending on indexing and actual table structure and size. For tables of a few hundred or thousand visits, the difference is probably negligible.
Are two selects needed?
[ "", "mysql", "sql", "" ]
I have inherited a SQL Server table in the (abbreviated) form of (includes sample data set): ``` | SID | name | Invite_Date | |-----|-------|-------------| | 101 | foo | 2013-01-06 | | 102 | bar | 2013-04-04 | | 101 | fubar | 2013-03-06 | ``` I need to select all `SID`'s and the `Invite_date`, but if there is a duplicate `SID`, then just get the latest entry (by date). So the results from the above would look like: ``` 101 | fubar | 2013-03-06 102 | bar | 2013-04-04 ``` Any ideas please. N.B the `Invite_date` column has been declared as a `nvarchar`, so to get it in a date format I am using `CONVERT(DATE, Invite_date)`
``` select t1.* from your_table t1 inner join ( select sid, max(CONVERT(DATE, Invite_date)) mdate from your_table group by sid ) t2 on t1.sid = t2.sid and CONVERT(DATE, t1.Invite_date) = t2.mdate ```
You can use a [ranking function](http://technet.microsoft.com/en-us/library/ms189798.aspx) like `ROW_NUMBER` or `DENSE_RANK` in a `CTE`: ``` WITH CTE AS ( SELECT SID, name, Invite_Date, rn = Row_Number() OVER (PARTITION By SID Order By Invite_Date DESC) FROM dbo.TableName ) SELECT SID, name, Invite_Date FROM CTE WHERE RN = 1 ``` Use `Row_Number` if you want exactly one row per group and `Dense_Rank` if you want all last `Invite_Date` rows for each group in case of repeating max-`Invite_Date`s.
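The `ROW_NUMBER()` approach translates directly to any engine with window functions; here is a runnable sketch with Python's sqlite3 (requires SQLite 3.25+ for window functions; the question's sample data is reused, and note the ISO date strings happen to sort correctly even as text, mirroring the nvarchar column in the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (SID INTEGER, name TEXT, Invite_Date TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(101, "foo", "2013-01-06"), (102, "bar", "2013-04-04"),
                 (101, "fubar", "2013-03-06")])

# Number the rows per SID, newest date first, then keep only the first.
rows = con.execute("""
    SELECT SID, name, Invite_Date
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY SID
                                    ORDER BY Invite_Date DESC) AS rn
          FROM t)
    WHERE rn = 1
    ORDER BY SID
""").fetchall()
print(rows)
```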
SQL Server : select from duplicate columns where date newest
[ "", "sql", "sql-server", "" ]
I would like to pass a set of values as a parameter to an Sql Statement (in vb.net). **In my case:** Users are allowed to upload a set of IDs, to check availability of an item. I would like to execute a statement that will return the items that match any of the IDs by doing something like the following: ``` SELECT * FROM MyTable WHERE id IN ('123','456','789') ``` But I cannot pass on the value ('123','456','789') as a parameter as it will be taken as an atomic value - a whole string, i.e., this will not work: ``` SELECT * FROM MyTable WHERE id IN :param where :param is ('123','456','789') ``` I cannot concatenate the strings (as shown above) either to avoid client-side sql injection. Any ideas?
you could pass the values in as XML and parse them using the XMLDOM. See: [here](http://psoug.org/reference/xml_functions.html) ``` DECLARE vXML VARCHAR2 (10000 CHAR) := '<ids><id>1</id><id>2</id><id>3</id></ids>'; BEGIN OPEN :refc FOR SELECT c."id" FROM XMLTABLE ('/ids/id' PASSING XMLTYPE (vXML) COLUMNS "id" VARCHAR2 (32)) c; END; ```
From VB.net you can pass an "Associative array" to a SQL call. In PL/SQL create types and procedures like this: ``` CREATE OR REPLACE TYPE NUMBER_TABLE_TYPE AS TABLE OF NUMBER; CREATE OR REPLACE PACKAGE My_Package AS TYPE NUMBER_ARRAY_TYPE IS TABLE OF NUMBER INDEX BY BINARY_INTEGER; PROCEDURE My_Procedure(arr IN NUMBER_ARRAY_TYPE); END My_Package; CREATE OR REPLACE PACKAGE BODY My_Package AS PROCEDURE My_Procedure(arr IN NUMBER_ARRAY_TYPE) IS nested_table NUMBER_TABLE_TYPE := NUMBER_TABLE_TYPE(); BEGIN -- First transform "Associative array" to a "Nested Table" FOR i IN arr.FIRST..att.LAST LOOP nested_table.EXTEND; nested_table(nested_table.LAST) := arr(i); END LOOP; SELECT * INTO ... FROM MyTable WHERE ID MEMBER OF nested_table; END My_Procedure; END My_Package; ``` In VB.NET it looks like this: ``` Sub My_Sub(ByVal idArr As Long()) Dim cmd As OracleCommand Dim par As OracleParameter cmd = New OracleCommand("BEGIN My_Package.My_Procedure(:arr); END;"), con) cmd.CommandType = CommandType.Text par = cmd.Parameters.Add("arr", OracleDbType.Int64, ParameterDirection.Input) par.CollectionType = OracleCollectionType.PLSQLAssociativeArray par.Value = idArr par.Size = idArr.Length cmd.ExecuteNonQuery() End Sub ``` Check Oracle doc for further information: [PL/SQL Associative Array Binding](http://docs.oracle.com/cd/B28359_01/win.111/b28375/featOraCommand.htm#BABBDHBB)
Is there a way to pass a set of values as a parameter in an Oracle SQL Statement
[ "", "sql", "oracle", "parameters", "set", "" ]
I want to know the difference between these two keys, where the unique key has a `not null` constraint: in terms of how they are stored in the database, and what differences there are when making `Select, Insert, Update, Delete` operations on these keys.
A primary key must be unique and non-null, so they're the same from that standpoint. However, a table can only have one primary key, while you can have multiple unique non-null keys. Most systems also use metadata to tag primary keys separately so that they can be identified by designers, etc. > What are the differences between a primary key and a Unique key with not null constrain in terms of how they are stored in database If both are either `CLUSTERED` or `NON CLUSTERED` then the only difference is metadata in most systems to tag an index as a PK. > what difference are there when we making `Select`,`Insert`,`Update`, `Delete` operation for these keys None.
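Both points (one PK per table, but several unique-not-null keys, all of which reject duplicates identically) can be seen in a small sketch; Python's sqlite3 here for illustration only (the question is about SQL Server 2008, and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,   -- only one PRIMARY KEY allowed
        email TEXT NOT NULL UNIQUE,  -- but any number of UNIQUE NOT NULL keys
        phone TEXT NOT NULL UNIQUE
    )
""")
con.execute("INSERT INTO users VALUES (1, 'a@x.com', '111')")

errors = 0
for row in [(1, "b@x.com", "222"),   # duplicate PK
            (2, "a@x.com", "333"),   # duplicate email
            (3, "c@x.com", "111")]:  # duplicate phone
    try:
        con.execute("INSERT INTO users VALUES (?, ?, ?)", row)
    except sqlite3.IntegrityError:
        errors += 1
print(errors)
```

From the DML side all three constraints behave the same; the PK/unique distinction is metadata (plus, in SQL Server, the default clustered/non-clustered choice discussed below).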
Yes! In general there is one huge difference in how unique keys and primary keys are stored in SQL Server 2008. A unique key **by default** (you can change that) will be created as a non-clustered index and a PK will be created as a clustered index **by default** (you can change that also). Non-clustered means it will be stored in a structure "attached" to the table and will consume disk space. Clustered means the records will actually be stored in that physical order, not consuming extra disk space, and that's why your table can have just one clustered index. (Just in case you are wondering... no, you cannot have two PKs in a table; the PK is unique even if it is non-clustered.)
What is difference between unique key with 'not null' constraint and primary key?
[ "", "sql", "sql-server-2008", "" ]
I am trying to create a query for a report. I have a table of `licenses` and a table of `users`, and I have `license_assignments` as a many to many table to assign license seats to users: ``` mysql> CREATE TABLE license_assignments ( `uid` int(10) unsigned DEFAULT NULL, `lid` int(1) unsigned NOT NULL, `delta` int(10) unsigned NOT NULL, PRIMARY KEY (`lid`, `delta`), KEY `uid` (`uid`)); Query OK, 0 rows affected (0.06 sec) mysql> INSERT INTO license_assignments VALUES (1, 2, 1), (1,2,2), (1,2,3), (NULL, 2, 4), (NULL, 2, 5), (NULL, 2, 6); Query OK, 6 rows affected (0.03 sec) Records: 6 Duplicates: 0 Warnings: 0 mysql> select * FROM license_assignments; +------+-----+-------+ | uid | lid | delta | +------+-----+-------+ | NULL | 2 | 4 | | NULL | 2 | 5 | | NULL | 2 | 6 | | 1 | 2 | 1 | | 1 | 2 | 2 | | 1 | 2 | 3 | +------+-----+-------+ 6 rows in set (0.00 sec) ``` The report I want to create must show me the total number of license seats belong to a particular license ... ``` mysql> select COUNT(lid) FROM license_assignments all_licenses WHERE lid = 2; +------------+ | COUNT(lid) | +------------+ | 6 | +------------+ 1 row in set (0.00 sec) ``` ... 
and how many of those seats remain unassigned (no related user record): ``` mysql> select COUNT(lid) FROM license_assignments unassigned_licenses WHERE lid = 2 AND uid IS NULL; +------------+ | COUNT(lid) | +------------+ | 3 | +------------+ 1 row in set (0.00 sec) ``` However when I put those two queries together with a natural join, I get the cartesian product (3 x 6 = 18): ``` mysql> select COUNT(all_licenses.lid) as all_licenses_count, COUNT(unassigned.lid) as unassigned_count FROM license_assignments unassigned, license_assignments all_licenses WHERE unassigned.lid = 2 AND unassigned.uid IS NULL AND all_licenses.lid = 2; +--------------------+------------------+ | all_licenses_count | unassigned_count | +--------------------+------------------+ | 18 | 18 | +--------------------+------------------+ 1 row in set (0.00 sec) ``` Thinking I just needed to add a `GROUP BY`, I did so, but it didn't help: ``` mysql> select COUNT(all_licenses.lid) as all_licenses_count, COUNT(unassigned.lid) as unassigned_count FROM license_assignments unassigned, license_assignments all_licenses WHERE unassigned.lid = 2 AND unassigned.uid IS NULL AND all_licenses.lid = 2 GROUP BY all_licenses.lid, unassigned.lid; +--------------------+------------------+ | all_licenses_count | unassigned_count | +--------------------+------------------+ | 18 | 18 | +--------------------+------------------+ 1 row in set (0.00 sec) ``` Then I supposed that natural joins were tripping me up, so I tried inner joins: ``` mysql> select COUNT(all_licenses.lid) as all_licenses_count, COUNT(unassigned.lid) as unassigned_count FROM license_assignments unassigned INNER JOIN license_assignments all_licenses ON all_licenses.lid = unassigned.lid WHERE unassigned.uid IS NULL; +--------------------+------------------+ | all_licenses_count | unassigned_count | +--------------------+------------------+ | 18 | 18 | +--------------------+------------------+ 1 row in set (0.00 sec) ``` What am I failing to understand? 
I expect to perform a query that gives me this result: ``` mysql> select COUNT( ... ; +--------------------+------------------+ | all_licenses_count | unassigned_count | +--------------------+------------------+ | 6 | 3 | +--------------------+------------------+ 1 row in set (0.00 sec) ``` But my set theory is apparently rusty. What do I need to do? BTW: ``` mysql> select version(); +-------------------+ | version() | +-------------------+ | 5.5.31-1~dotdeb.0 | +-------------------+ ```
You are getting a cartesian product because one set is six rows with lid=2, and the other set has three rows with lid=2. Each row in a set is being matched with EVERY row in the other set. The problem with the JOIN here is that you need to guarantee that a row from set one will match at most one row from set two... you need a join predicate on a UNIQUE key. If you absolutely needed to use a JOIN to get this resultset, then this would work: ``` SELECT COUNT(a.lid) AS all_licenses_count , COUNT(u.lid) AS unassigned_count FROM license_assignments a LEFT JOIN license_assignments u ON u.lid = a.lid AND u.delta = a.delta AND u.uid IS NULL WHERE a.lid = 2 ``` Note that the JOIN predicate is matching on both `lid` AND `delta` (which together form the unique key in the table definition). So we are guaranteed that a row from set one (a) will match at most one row from set two (u). As other answers have noted, using a JOIN like this is not the most efficient way to get that result. There are several ways to return an equivalent result, but the most efficient approach is usually along the lines of doing a single pass through the table to pick out "all" rows, and then use an expression that performs a conditional test to determine whether the row should be included in another COUNT or SUM aggregate. I'd write something like this: ``` SELECT SUM(1) AS all_licenses_count , SUM(a.uid IS NULL) AS unassigned_count FROM license_assignments a WHERE a.lid = 2 ```
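The single-pass conditional-aggregation version can be verified against the question's exact data; Python's sqlite3 for illustration (sqlite, like MySQL, evaluates `uid IS NULL` as 1/0, so it can be summed directly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE la (uid INTEGER, lid INTEGER, delta INTEGER)")
con.executemany("INSERT INTO la VALUES (?, ?, ?)",
                [(1, 2, 1), (1, 2, 2), (1, 2, 3),
                 (None, 2, 4), (None, 2, 5), (None, 2, 6)])

# One pass: count every row, and separately sum a 1/0 "is unassigned" flag.
row = con.execute("""
    SELECT SUM(1)         AS all_licenses_count,
           SUM(uid IS NULL) AS unassigned_count
    FROM la
    WHERE lid = 2
""").fetchone()
print(row)
```

This gives the (6, 3) result the question was after, with no join and therefore no cartesian product to worry about.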
The query is much simpler than you think :) ``` SELECT COUNT(*) all_licenses_count, COUNT(*) - COUNT(uid) unassigned_count FROM license_assignments WHERE lid = 2 ``` The `COUNT(*)` counts rows while `COUNT(uid)` counts rows that have `uid` not null. Output: ``` | ALL_LICENSES_COUNT | UNASSIGNED_COUNT | |--------------------|------------------| | 6 | 3 | ``` Fiddle [here](http://sqlfiddle.com/#!2/6b6dae/1).
Why am I getting a cartesian project with this join?
[ "", "mysql", "sql", "join", "" ]
I'm working on a query that doesn't seem to be doing what it's supposed to since being transferred from one SSRS server to another, and the following line in part of the where statement *appears* to be where the difference is, or at least from what I can find. ``` where COALESCE(field, -1) = CASE field WHEN 1 THEN 0 ELSE -1 END ``` I know it's probably a bit generic but can anyone shed any light on what this may be doing? I've read up on coalesce and gather that it finds the first non-null value from the specified list of fields, but I don't understand what the '-1' does there. I hope it's not too general a question and that someone can give me a clue as to what it may be doing
Without context it's difficult to give a truly helpful answer. At first glance it looks as though it could be rewritten much more simply from this: ``` WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END ``` to this: ``` WHERE COALESCE(@field, -1) = -1 ``` If that is true then basically you are saying that if the field is null or the field equals -1 then the condition is true otherwise it's false. Here are some tests to try to prove this: ``` -- Original DECLARE @field INT SELECT 1 WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END SET @field = -1 SELECT 1 WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END SET @field = 0 SELECT 1 WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END SET @field = 1 SELECT 1 WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END SET @field = 2 SELECT 1 WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END SET @field = 3 SELECT 1 WHERE COALESCE(@field, -1) = CASE @field WHEN 1 THEN 0 ELSE -1 END --Rewritten DECLARE @field INT SELECT 1 WHERE COALESCE(@field, -1) = -1 SET @field = -1 SELECT 1 WHERE COALESCE(@field, -1) = -1 SET @field = 0 SELECT 1 WHERE COALESCE(@field, -1) = -1 SET @field = 1 SELECT 1 WHERE COALESCE(@field, -1) = -1 SET @field = 2 SELECT 1 WHERE COALESCE(@field, -1) = -1 SET @field = 3 SELECT 1 WHERE COALESCE(@field, -1) = -1 ``` Both sets of queries in this test give the same results, but as I said without context and realistic test data it's difficult to know if there was a reason why the query was written in the way that it originally was. 
Here is another example from a different perspective, using a LEFT JOIN: ``` DECLARE @MainTable AS TABLE(ident INT) DECLARE @PossibleNullTable AS TABLE(mainIdent INT, field INT) INSERT INTO @MainTable(ident) VALUES(1) INSERT INTO @MainTable(ident) VALUES(2) INSERT INTO @MainTable(ident) VALUES(3) INSERT INTO @MainTable(ident) VALUES(4) INSERT INTO @MainTable(ident) VALUES(5) INSERT INTO @PossibleNullTable(mainIdent, field) VALUES(1,-1) INSERT INTO @PossibleNullTable(mainIdent, field) VALUES(1,1) INSERT INTO @PossibleNullTable(mainIdent, field) VALUES(1,0) INSERT INTO @PossibleNullTable(mainIdent, field) VALUES(2,0) INSERT INTO @PossibleNullTable(mainIdent, field) VALUES(3,1) INSERT INTO @PossibleNullTable(mainIdent, field) VALUES(5,-1) --Original SELECT * FROM @MainTable mt LEFT JOIN @PossibleNullTable pnt ON mt.ident = pnt.mainIdent WHERE COALESCE(field, -1) = CASE field WHEN 1 THEN 0 ELSE -1 END --Original Result ident mainIdent field 1 1 -1 4 NULL NULL 5 5 -1 --Rewritten SELECT * FROM @MainTable mt LEFT JOIN @PossibleNullTable pnt ON mt.ident = pnt.mainIdent WHERE COALESCE(field, -1) = -1 --Rewritten Result ident mainIdent field 1 1 -1 4 NULL NULL 5 5 -1 ``` Again both queries in this test give the same results.
> the first non-null value from the specified list of fields This means the list of fields between the parenthesis. For instance: ``` COALESCE(col1,col2,col3,-1) ``` means that if `col1` is not null then use this, else check `col2`. If `col2` is null then check `col3`. If that is null too then use -1 as the value. In your example, `COALESCE(field, -1)` is equivalent to `ISNULL(field, -1)` In my example `COALESCE(col1,col2,col3,-1)` is equivalent to `ISNULL(ISNULL(ISNULL(col1, col2), col3), -1)`
coalesce and a case statement - explanation?
[ "", "sql", "t-sql", "sql-server-2008-r2", "" ]
i've a table in a database i would like to use as a contacts table to import into outlook. what i would like the import to do is this: name | email andy | ds@ds.com <--name and email entered at the moment i have this code: ``` (SELECT 'Name', 'Email') union all (select E_Name, Email INTO OUTFILE '/xampp/tmp/Sample.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\' LINES TERMINATED BY '\n' FROM email WHERE 1) ``` this creates an excel file like this: A | B <-- excel column names Name | Email andy | ds@ds.com i know if i pull the information using odbc connection it pulls the information the way i require, however i want it so the csv file is created with this information already in it, thus removing the need to do the odbc method.
I found a solution to my problem. I would slap myself. ``` SELECT * INTO OUTFILE '/xampp/tmp/Sample.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' FROM ( SELECT 'Name' AS `E_Name`, 'Email' AS `Email` UNION ALL SELECT `E_Name`, `Email` FROM `email` ) `sub_query` ``` Before the new line I wasn't doing a carriage return. Once I added this, the CSV can be imported into Outlook no problem. Thanks for the help though, all. I'll leave this open in case anyone else has the same problem. \****face palm*\***
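If you control the export from application code instead of `INTO OUTFILE`, the same header-plus-CRLF result is easy to produce with a CSV writer; a sketch using Python's sqlite3 and csv modules purely for illustration (an in-memory buffer stands in for the real `Sample.csv` file):

```python
import csv
import io
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE email (E_Name TEXT, Email TEXT)")
con.execute("INSERT INTO email VALUES ('andy', 'ds@ds.com')")

cur = con.execute("SELECT E_Name, Email FROM email")

out = io.StringIO()  # swap in open('Sample.csv', 'w', newline='') for a real file
writer = csv.writer(out, lineterminator="\r\n")  # Outlook wants CRLF line ends
writer.writerow(["Name", "Email"])               # header row first
writer.writerows(cur.fetchall())
print(repr(out.getvalue()))
```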
I find this situation often. The easiest way I know is by using `sed`, which is a command-line utility for Unix. Since you are working on Excel, I am assuming that you don't have a *nix system (Linux, BSD, etc). So I recommend you install Cygwin (you can get it [here](http://www.cygwin.com/)). Then, open a Cygwin bash shell and write this in the command prompt: ``` mysql -h yourHost -u yourUser -pYourPassword yourDatabase -e"select e_name, email from email" | sed 's/\t/,/g' > yourOutput.csv ``` Let me explain each piece: 1. `mysql -h yourHost -u yourUser -pYourPassword yourDatabase -e"select..."` opens a session in MySQL, connects to `yourDatabase` and executes the query (the `-e` flag is used to execute a query). By default, the output is directed to the console as tab-separated values. 2. The pipe character `|` is used to redirect the output of the first command to the second command, instead of directing the output to the console. 3. `sed 's/\t/,/g'` substitutes every tab character (`\t`) with a comma 4. `> yourOutput.csv` redirects the result of the command to a file, called `yourOutput.csv`. You can specify a path of course. More information about `sed` and redirecting: <http://lowfatlinux.com> --- I know this can be cumbersome, but if you do this frequently, this will save you quite some time and headaches (the first time is the hardest, after a while it turns out to be very straightforward) Hope this helps --- About your comment: > will this take the headers from mysql and make them headers in excel the way the odbc connection would? In layman's terms: an ODBC connection makes a table or view visible to the client program, so the client can use the data contained in the table or view. That means, of course, that the column names are visible to the client. The solution I propose above is more like "exporting" the result of a `select` statement to a file.
If you don't want to install Cygwin, I have some news for you: The `>` redirection operator works on Windows' Command window... but I don't know of a Windows command line utility to replace characters. If you don't mind dealing with tabs instead of commas, this will work: ``` mysql -h yourHost -u yourUser -pyourPassword yourDatabase -e"select..." > yourOutputFile.txt ``` This will export the result of the query to the `yourOutputFile.txt` file, as tab-separated values (which can be read by Excel). As I mentioned in my comment, if you use aliases for your column names, these will be the column headers that will be exported.
Column names from MySQL to CSV file
[ "", "mysql", "sql", "excel", "csv", "" ]
How do I check whether an insertion will result in a UNIQUE INDEX collision error without actually inserting anything?
Run a SELECT query with the unique column condition; if the result set isn't empty, the insert would cause a unique index error: ``` SELECT 1 FROM table WHERE unique_column = 'new_insert_value' LIMIT 1 ```
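A minimal sketch of this pre-check, using Python with an in-memory SQLite table (table name and values are hypothetical). Keep in mind that another session could still insert the same value between this check and your actual INSERT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (unique_column TEXT UNIQUE)")
conn.execute("INSERT INTO t VALUES ('taken')")

def would_collide(value):
    # Non-empty result means the subsequent INSERT would hit the unique index.
    cur = conn.execute(
        "SELECT 1 FROM t WHERE unique_column = ? LIMIT 1", (value,))
    return cur.fetchone() is not None

print(would_collide("taken"), would_collide("free"))
```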
`INSERT` and catch the error. This is the cheapest way, because it performs the lookup and actually does the insert on success. The problem with a separate check before the actual INSERT is, of course, concurrency. Multiple processes can do the check, all conclude that there will be no collision, and all decide to `INSERT`, only to collide. Getting this scenario right (check then insert) is so problematic (locking a non-existing slot) that it is really not worth it. MySQL has syntax to gracefully handle collisions: [`INSERT ... ON DUPLICATE KEY UPDATE`](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
Dry insert to check for index collisions
[ "", "mysql", "sql", "" ]
Is there a way to select records between dates and a specific time range? For example, all the records from 2013-11-01 to 2013-11-30 between the hours 05:00 to 15:00. Here is what I have made until now. ``` select count(*) as total from tradingaccounts accounts inner join tradingaccounts_audit audit on audit.parent_id = accounts.id where date(audit.date_created) between date('2013-11-01 00:00:00') AND date('2013-11-01 23:59:59') ``` But how am I going to set the specific time range?
You can use the `HOUR` function to add an additional condition on the hours: ``` select count(*) as total from tradingaccounts accounts inner join tradingaccounts_audit audit on audit.parent_id = accounts.id where date(audit.date_created) between date('2013-11-01 00:00:00') AND date('2013-11-01 23:59:59') AND HOUR (audit.date_created) BETWEEN 5 AND 15 ```
As others answered, `HOUR()` could help you. But `HOUR()` or `DATE()` cannot use an `INDEX`. To make the query faster, I suggest adding a `time_created TIME` column that saves only the time part, and after that `ADD INDEX(date_created, time_created)`. Finally, with the query below, you can retrieve rows at high speed. ``` where audit.date_created between '2013-11-01 00:00:00' AND '2013-11-01 23:59:59' AND audit.time_created BETWEEN '05:00:00' AND '15:00:00' ```
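The combined date-plus-hour filter is easy to mirror outside SQL. A small Python sketch of the same condition (timestamps are made up), counting rows whose date falls in November 2013 and whose hour is between 5 and 15 inclusive:

```python
from datetime import datetime

# Hypothetical audit timestamps.
created = [
    datetime(2013, 11, 1, 4, 59),    # too early in the day
    datetime(2013, 11, 15, 10, 30),  # matches
    datetime(2013, 11, 30, 15, 45),  # matches (HOUR is 15)
    datetime(2013, 12, 1, 12, 0),    # outside the date range
]

start, end = datetime(2013, 11, 1), datetime(2013, 11, 30, 23, 59, 59)
# Mirrors: date_created BETWEEN ... AND HOUR(date_created) BETWEEN 5 AND 15.
total = sum(1 for d in created if start <= d <= end and 5 <= d.hour <= 15)
print(total)
```

Note that `HOUR(...) BETWEEN 5 AND 15` includes times up to 15:59, which the `d.hour <= 15` check reproduces.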
Select between date range and specific time range
[ "", "mysql", "sql", "select", "time", "" ]
``` Class| Value ------------- A | 1 A | 2 A | 3 A | 10 B | 1 ``` I am not sure whether it is practical to achieve this using SQL. If the difference of values is less than 5 (or x), then group the rows (with the same Class, of course). Expected result ``` Class| ValueMin | ValueMax --------------------------- A | 1 | 3 A | 10 | 10 B | 1 | 1 ``` For fixed intervals, we can easily use "GROUP BY". But now the grouping is based on nearby rows' values. So if the values are consecutive or very close, they will be "chained together". Thank you very much. (Assuming MSSQL.)
These give the correct result, using the fact that you must have the same number of group starts as ends and that they will both be in ascending order. ``` if object_id('tempdb..#temp') is not null drop table #temp create table #temp (class char(1),Value int); insert into #temp values ('A',1); insert into #temp values ('A',2); insert into #temp values ('A',3); insert into #temp values ('A',10); insert into #temp values ('A',13); insert into #temp values ('A',14); insert into #temp values ('b',7); insert into #temp values ('b',8); insert into #temp values ('b',9); insert into #temp values ('b',12); insert into #temp values ('b',22); insert into #temp values ('b',26); insert into #temp values ('b',67); ``` **Method 1 Using CTE and row offsets** ``` with cte as (select distinct class,value,ROW_NUMBER() over ( partition by class order by value ) as R from #temp), cte2 as ( select c1.class ,c1.value ,c2.R as PreviousRec ,c3.r as NextRec from cte c1 left join cte c2 on (c1.class = c2.class and c1.R= c2.R+1 and c1.Value < c2.value + 5) left join cte c3 on (c1.class = c3.class and c1.R= c3.R-1 and c1.Value > c3.value - 5) ) select Starts.Class ,Starts.Value as StartValue ,Ends.Value as EndValue from ( select class ,value ,row_number() over ( partition by class order by value ) as GroupNumber from cte2 where PreviousRec is null) as Starts join ( select class ,value ,row_number() over ( partition by class order by value ) as GroupNumber from cte2 where NextRec is null) as Ends on starts.class=ends.class and starts.GroupNumber = ends.GroupNumber ``` \*\* Method 2 Inline views using not exists \*\* ``` select Starts.Class ,Starts.Value as StartValue ,Ends.Value as EndValue from ( select class,Value ,row_number() over ( partition by class order by value ) as GroupNumber from (select distinct class,value from #temp) as T where not exists (select 1 from #temp where class=t.class and Value < t.Value and Value > t.Value -5 ) ) Starts join ( select class,Value ,row_number() over ( 
partition by class order by value ) as GroupNumber from (select distinct class,value from #temp) as T where not exists (select 1 from #temp where class=t.class and Value > t.Value and Value < t.Value +5 ) ) ends on starts.class=ends.class and starts.GroupNumber = ends.GroupNumber ``` In both methods I use a select distinct to begin because if you have a duplicate entry at a group start or end things go awry without it.
You are trying to group things by gaps between values. The easiest way to do this is to use the `lag()` function to find the gaps: ``` select class, min(value) as minvalue, max(value) as maxvalue from (select class, value, sum(IsNewGroup) over (partition by class order by value) as GroupId from (select class, value, (case when lag(value) over (partition by class order by value) > value - 5 then 0 else 1 end) as IsNewGroup from t ) t ) t group by class, groupid; ``` Note that this assumes SQL Server 2012 for the use of `lag()` and cumulative sum.
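The same gaps-and-islands logic can be sketched procedurally. A Python version using the question's sample data, starting a new group whenever the difference to the previous value reaches 5:

```python
from itertools import groupby

data = [("A", 1), ("A", 2), ("A", 3), ("A", 10), ("B", 1)]

def islands(rows, gap=5):
    out = []
    # Sort by (class, value), then walk each class's values in order.
    for cls, pairs in groupby(sorted(rows), key=lambda r: r[0]):
        values = [v for _, v in pairs]
        lo = hi = values[0]
        for v in values[1:]:
            if v - hi < gap:    # close enough: extend the current group
                hi = v
            else:               # difference reached the threshold: new group
                out.append((cls, lo, hi))
                lo = hi = v
        out.append((cls, lo, hi))
    return out

print(islands(data))
```

This reproduces the expected (Class, ValueMin, ValueMax) rows from the question.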
SQL group by if values are close
[ "", "sql", "sql-server", "group-by", "subquery", "gaps-and-islands", "" ]
I have a simple select query - ``` SELECT ID, NAME FROM PERSONS WHERE NAME IN ('BBB', 'AAA', 'ZZZ') -- ORDER BY ??? ``` I want this result to be ordered by the sequence in which NAMES are provided, that is, 1st row in result set should be the one with NAME = BBB, 2nd is AAA, 3rd it ZZZ. Is this possible in SQL server ? I would like to know how to do it if there is a simple and short way of doing it, like maybe 5-6 lines of code.
You could create an ordered split function: ``` CREATE FUNCTION [dbo].[SplitStrings_Ordered] ( @List NVARCHAR(MAX), @Delimiter NVARCHAR(255) ) RETURNS TABLE AS RETURN (SELECT [Index] = ROW_NUMBER() OVER (ORDER BY Number), Item FROM (SELECT Number, Item = SUBSTRING(@List, Number, CHARINDEX(@Delimiter, @List + @Delimiter, Number) - Number) FROM (SELECT ROW_NUMBER() OVER (ORDER BY s1.[object_id]) FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2) AS n(Number) WHERE Number <= CONVERT(INT, LEN(@List)) AND SUBSTRING(@Delimiter + @List, Number, LEN(@Delimiter)) = @Delimiter ) AS y); ``` Then alter your input slightly (a single comma-separated list instead of three individual strings): ``` SELECT p.ID, p.NAME FROM dbo.PERSONS AS p INNER JOIN dbo.SplitStrings_Ordered('BBB,AAA,ZZZ', ',') AS s ON p.NAME = s.Item ORDER BY s.[Index]; ```
You could store the names in a temp table with an order. Example: ``` DECLARE @Names TABLE ( Name VARCHAR(MAX), SortOrder INT ) INSERT INTO @Names (Name, SortOrder) VALUES ('BBB', 1) INSERT INTO @Names (Name, SortOrder) VALUES ('AAA', 2) INSERT INTO @Names (Name, SortOrder) VALUES ('ZZZ', 3) SELECT P.ID, P.NAME FROM PERSONS P JOIN @Names N ON P.Name = N.Name ORDER BY N.SortOrder ```
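Another common approach, not shown in either answer, is a `CASE` expression in the `ORDER BY` that maps each name to its position in the list. A Python/SQLite sketch (the CASE is built from known literals purely for illustration; real user input should always be parameterized):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO persons VALUES (?, ?)",
                 [(1, "AAA"), (2, "BBB"), (3, "ZZZ")])

wanted = ["BBB", "AAA", "ZZZ"]
# CASE expression mapping each name to its position in the wanted list.
case = "CASE name " + " ".join(
    f"WHEN '{n}' THEN {i}" for i, n in enumerate(wanted)) + " END"
sql = ("SELECT id, name FROM persons WHERE name IN ({}) ORDER BY {}"
       .format(",".join("?" * len(wanted)), case))
result = [name for _, name in conn.execute(sql, wanted)]
print(result)
```

The rows come back in list order (BBB, AAA, ZZZ) rather than table or alphabetical order.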
order by using terms in where clause
[ "", "sql", "sql-server", "" ]
I'm trying to create a plugin for WordPress and I encountered a table with a structure like this ![enter image description here](https://i.stack.imgur.com/EFEVB.png) Sample data ![enter image description here](https://i.stack.imgur.com/0I0FL.png) What should my MySQL `SELECT` query look like if I want to get ``` user_id where City = 'CityName' and Gender = 'Male' //sample query ``` Something like ``` SELECT user_id,value,country FROM `wp_bp_xprofile_data` WHERE value = 'CityName' AND value = 'Male' GROUP BY user_id ``` OUTPUT should look like ``` 1 | nameofperson_1 | country_name 2 | nameofperson_2 | country_name 3 | nameofperson_3 | country_name 4 | nameofperson_4 | country_name ```
In order to match the `field_id` with a particular value you'll have to match them in pairs `(field_id AND value)`. Then you will have to count the amount of matches. ``` SELECT user_id FROM wp_bp_xprofile_data t WHERE (field_id = 270 AND value = 'Gender') OR (field_id = 354 AND value = 'City') GROUP BY user_id HAVING COUNT(*) = 2 ``` Fiddle [here](http://sqlfiddle.com/#!2/415c8d/1). One thing that results from applying the Entity-Attribute-Value (EAV) model is that your tables are not normalized then you should make sure that you don't have more than one `field_id` for a given `user_id` with the same value. A workaround for this would be to query: ``` HAVING COUNT(distinct field_id) = 2 ``` But this won't solve the data issue, of course. **Edit:** Given your question update... in order to return the additional data probably putting the conditions in the `having` clause might be better: ``` SELECT user_id, max(case when field_id = 270 then value end) nameOfPerson, max(case when field_id = 354 then value end) cityName FROM wp_bp_xprofile_data t GROUP BY user_id HAVING SUM( (field_id = 270 AND value = 'Gender') + (field_id = 354 AND value = 'City') ) = 2 ``` Fiddle [here](http://sqlfiddle.com/#!2/2af87/1).
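A runnable sketch of the paired-conditions idea, using Python with SQLite and made-up profile rows (the field ids 270 and 354 are taken from the query above; 'Male' and 'London' are hypothetical values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xprofile (user_id INT, field_id INT, value TEXT)")
conn.executemany("INSERT INTO xprofile VALUES (?, ?, ?)", [
    (1, 270, "Male"),   (1, 354, "London"),
    (2, 270, "Female"), (2, 354, "London"),
    (3, 270, "Male"),   (3, 354, "Paris"),
])

# Each (field_id, value) pair must match; only users matching both survive
# the HAVING COUNT(*) = 2 filter.
rows = conn.execute("""
    SELECT user_id FROM xprofile
    WHERE (field_id = 270 AND value = 'Male')
       OR (field_id = 354 AND value = 'London')
    GROUP BY user_id
    HAVING COUNT(*) = 2
""").fetchall()
print(rows)
```

Only user 1 satisfies both pairs, so only that user id is returned.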
Assuming wp\_bp\_xprofile\_data is the table above, and `value` holds the real value rather than "Name", "Gender", etc., could you try this? ``` SELECT t1.user_id, t1.value, t2.value FROM wp_bp_xprofile_data t1 INNER JOIN wp_bp_xprofile_data t2 ON t1.user_id = t2.user_id WHERE t1.field_id = 270 AND t1.value = 'Male' AND t2.field_id = 354 AND t2.value = 'CityName' ```
Mysql Select on row values
[ "", "mysql", "sql", "" ]
I am using Oracle 10g and I have the following table: ``` create table DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID NUMBER(10) not null, CLIENT NUMBER(5) not null, USE_CASE NUMBER(38) not null, DE_TRANSFORM_NAME VARCHAR2(100) not null, IS_ACTIVE NUMBER(1) not null ) ``` That maps to an entry in the following table: ``` create table DE_TRANSFORM ( DE_TRANSFORM_ID NUMBER(10) not null, NAME VARCHAR2(100) not null, IS_ACTIVE NUMBER(1) not null ) ``` I would like to enforce the following rules: * Only one row in DE\_TRANSFORM\_MAP with the same CLIENT and USE\_CASE can have IS\_ACTIVE set to 1 at any time * Only one row in DE\_TRANSFORM with the same NAME and IS\_ACTIVE set to 1 at any time * A row in DE\_TRANSFORM cannot have IS\_ACTIVE changed from 1 to 0 if any rows in DE\_TRANSFORM\_MAP have DE\_TRANSFORM\_NAME equal to NAME and IS\_ACTIVE set to 1 Does this make sense? I have tried to write a stored proc that handles this: ``` create or replace trigger DETRANSFORMMAP_VALID_TRIG after insert or update on SERAPH.DE_TRANSFORM_MAP for each row declare active_rows_count NUMBER; begin select count(*) into active_rows_count from de_transform_map where client = :new.client and use_case = :new.use_case and is_active = 1; if :new.is_active = 1 and active_rows_count > 0 then RAISE_APPLICATION_ERROR(-20000, 'Only one row with the specified client, use_case, policy_id and policy_level may be active'); end if; end; ``` When I do the following it works as expected, I get an error: ``` insert into de_transform_map (de_transform_map_id, client, use_case, de_transform_name, is_active) values (detransformmap_id_seq.nextval, 6, 0, 'TEST', 1); insert into de_transform_map (de_transform_map_id, client, use_case, de_transform_name, is_active) values (detransformmap_id_seq.nextval, 6, 1, 'TEST', 1); ``` But if I then do this: ``` update de_transform_map set use_case = 0 where use_case = 1 ``` I get the following: ``` ORA-04091: table DE_TRANSFORM_MAP is mutating, trigger/function may not see it 
``` How can I accomplish my validation? EDIT: I marked Rene's answer as correct because I think the most correct and elegant way to do this is with a compound trigger but our production DB is still just 10g, we are updating to 11g early next year and I will rewrite the trigger then. Until then, I have a blanket trigger that will assert that no rows are duplicated, here it is: ``` create or replace trigger DETRANSFORMMAP_VALID_TRIG after insert or update on DE_TRANSFORM_MAP declare duplicate_rows_exist NUMBER; begin select 1 into duplicate_rows_exist from dual where exists ( select client, use_case, count(*) from de_transform_map where is_active = 1 group by client, use_case having count(*) > 1 ); if duplicate_rows_exist = 1 then RAISE_APPLICATION_ERROR(-20000, 'Only one row with the specified client, use_case may be active'); end if; end; ```
The error you get means that you cannot query the table the trigger is on from within a row level trigger itself. One way to work around this problem is to use a combination of 3 triggers. * a) A before statement level trigger * b) A row level trigger * c) An after statement level trigger Trigger A initializes a collection in a package Trigger B adds every changed row to the collection Trigger C performs the desired action for every entry in the collection. More details here: <http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936> One of the improvements in Oracle 11G is that you can do all these action in one compound trigger. More here: <http://www.oracle-base.com/articles/11g/trigger-enhancements-11gr1.php>
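As an aside, on engines that support partial or filtered unique indexes (SQLite, PostgreSQL, SQL Server), the "at most one active row per key" rule can be declared instead of trigger-enforced; Oracle can emulate this with a function-based unique index over expressions like `CASE WHEN is_active = 1 THEN client END`. A SQLite sketch via Python (column list simplified from the original table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE de_transform_map (
    id INTEGER PRIMARY KEY, client INT, use_case INT,
    name TEXT, is_active INT)""")
# Partial unique index: at most one is_active = 1 row per (client, use_case).
conn.execute("""CREATE UNIQUE INDEX one_active
    ON de_transform_map (client, use_case) WHERE is_active = 1""")

conn.execute("INSERT INTO de_transform_map VALUES (1, 6, 0, 'TEST', 1)")
conn.execute("INSERT INTO de_transform_map VALUES (2, 6, 1, 'TEST', 1)")
try:
    # A second active row for (6, 1) violates the partial index.
    conn.execute("INSERT INTO de_transform_map VALUES (3, 6, 1, 'TEST2', 1)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```

Declaring the rule this way avoids the mutating-table problem entirely, because no trigger has to re-read the table.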
You should perhaps consider doing a "before insert" sort of thing! I've only got an MSSQL engine to play with right now, but hopefully something below might help you on your way... I'm not sure what you mean with your example of an error that works, however, as it appears to be in contradiction to the first use case you've posted... Either way, triggers can be a real pain during concurrent writes so you'll want to be careful in doing this sort of business logic validation from only the back end. ``` IF NOT EXISTS ( SELECT 1 FROM sys.objects WHERE name = 'DE_TRANSFORM_MAP' AND type = 'U' ) BEGIN --DROP TABLE DE_TRANSFORM_MAP; CREATE TABLE DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID NUMERIC(10) NOT NULL, PRIMARY KEY ( DE_TRANSFORM_MAP_ID ), CLIENT NUMERIC( 5 ) NOT NULL, USE_CASE NUMERIC( 38 ) NOT NULL, DE_TRANSFORM_NAME NVARCHAR( 100 ) NOT NULL, IS_ACTIVE TINYINT NOT NULL ); END; IF NOT EXISTS ( SELECT 1 FROM sys.objects WHERE name = 'DE_TRANSFORM' AND type = 'U' ) BEGIN --DROP TABLE DE_TRANSFORM; CREATE TABLE DE_TRANSFORM ( DE_TRANSFORM_ID NUMERIC( 10 ) NOT NULL, PRIMARY KEY ( DE_TRANSFORM_ID ), NAME NVARCHAR( 100 ) NOT NULL, IS_ACTIVE TINYINT NOT NULL ); END; GO IF NOT EXISTS ( SELECT 1 FROM sys.objects WHERE name = 'DETRANSFORMMAP_VALID_TRIG' AND type = 'TR' ) BEGIN --DROP TRIGGER DETRANSFORMMAP_VALID_TRIG; EXEC( ' CREATE TRIGGER DETRANSFORMMAP_VALID_TRIG ON DE_TRANSFORM_MAP INSTEAD OF INSERT, UPDATE AS SET NOCOUNT OFF;' ); END; GO ALTER TRIGGER DETRANSFORMMAP_VALID_TRIG ON DE_TRANSFORM_MAP INSTEAD OF INSERT, UPDATE AS BEGIN SET NOCOUNT ON; IF ( ( SELECT MAX( IS_ACTIVE ) FROM ( SELECT IS_ACTIVE = SUM( IS_ACTIVE ) FROM ( SELECT CLIENT, USE_CASE, IS_ACTIVE FROM DE_TRANSFORM_MAP EXCEPT SELECT CLIENT, USE_CASE, IS_ACTIVE FROM DELETED UNION ALL SELECT CLIENT, USE_CASE, IS_ACTIVE FROM INSERTED ) f GROUP BY CLIENT, USE_CASE ) mf ) > 1 ) BEGIN RAISERROR( 'DE_TRANSFORM_MAP: CLIENT & USE_CASE cannot have multiple actives', 16, 1 ); END ELSE BEGIN DELETE DE_TRANSFORM_MAP WHERE 
DE_TRANSFORM_MAP_ID IN ( SELECT DE_TRANSFORM_MAP_ID FROM DELETED ); INSERT INTO DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID, CLIENT, USE_CASE, DE_TRANSFORM_NAME, IS_ACTIVE ) SELECT DE_TRANSFORM_MAP_ID, CLIENT, USE_CASE, DE_TRANSFORM_NAME, IS_ACTIVE FROM INSERTED; END; SET NOCOUNT OFF; END; GO INSERT INTO DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID, CLIENT, USE_CASE, DE_TRANSFORM_NAME, IS_ACTIVE ) VALUES ( 1, 6, 0, 'TEST', 1 ); INSERT INTO DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID, CLIENT, USE_CASE, DE_TRANSFORM_NAME, IS_ACTIVE ) VALUES ( 2, 6, 1, 'TEST', 1 ); GO SELECT * FROM dbo.DE_TRANSFORM_MAP; GO TRUNCATE TABLE DE_TRANSFORM_MAP; GO INSERT INTO DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID, CLIENT, USE_CASE, DE_TRANSFORM_NAME, IS_ACTIVE ) SELECT 1, 6, 0, 'TEST', 1 UNION ALL SELECT 2, 6, 1, 'TEST', 1 UNION ALL SELECT 3, 6, 1, 'TEST2', 1; GO SELECT * FROM dbo.DE_TRANSFORM_MAP; GO TRUNCATE TABLE DE_TRANSFORM_MAP; GO INSERT INTO DE_TRANSFORM_MAP ( DE_TRANSFORM_MAP_ID, CLIENT, USE_CASE, DE_TRANSFORM_NAME, IS_ACTIVE ) SELECT 1, 6, 0, 'TEST', 1 UNION ALL SELECT 2, 6, 1, 'TEST', 0 UNION ALL SELECT 3, 6, 1, 'TEST2', 1; GO SELECT * FROM dbo.DE_TRANSFORM_MAP; GO UPDATE dbo.DE_TRANSFORM_MAP SET IS_ACTIVE = 1 WHERE DE_TRANSFORM_MAP_ID = 2; GO IF NOT EXISTS ( SELECT 1 FROM sys.objects WHERE name = 'DETRANSFORM_VALID_TRIG' AND type = 'TR' ) BEGIN --DROP TRIGGER DETRANSFORM_VALID_TRIG; EXEC( ' CREATE TRIGGER DETRANSFORM_VALID_TRIG ON DE_TRANSFORM INSTEAD OF INSERT, UPDATE AS SET NOCOUNT OFF;' ); END; GO ALTER TRIGGER DETRANSFORM_VALID_TRIG ON DE_TRANSFORM INSTEAD OF INSERT, UPDATE AS BEGIN SET NOCOUNT ON; IF ( ( SELECT MAX( IS_ACTIVE ) FROM ( SELECT IS_ACTIVE = SUM( IS_ACTIVE ) FROM ( SELECT NAME, IS_ACTIVE FROM DE_TRANSFORM EXCEPT SELECT NAME, IS_ACTIVE FROM DELETED UNION ALL SELECT NAME, IS_ACTIVE FROM INSERTED ) f GROUP BY NAME ) mf ) > 1 ) BEGIN RAISERROR( 'DE_TRANSFORM: NAME cannot have multiple actives', 16, 1 ); END ELSE IF EXISTS (SELECT 1 FROM DE_TRANSFORM_MAP WHERE IS_ACTIVE = 1 AND 
DE_TRANSFORM_NAME IN ( SELECT NAME FROM DELETED UNION ALL SELECT NAME FROM INSERTED WHERE IS_ACTIVE = 0 ) ) BEGIN RAISERROR( 'DE_TRANSFORM: NAME is active in DE_TRANSFORM_MAP', 16, 1 ); END ELSE BEGIN DELETE DE_TRANSFORM WHERE DE_TRANSFORM_ID IN (SELECT DE_TRANSFORM_ID FROM DELETED ); INSERT INTO DE_TRANSFORM ( DE_TRANSFORM_ID, NAME, IS_ACTIVE ) SELECT DE_TRANSFORM_ID, NAME, IS_ACTIVE FROM INSERTED; END; SET NOCOUNT OFF; END; GO INSERT INTO DE_TRANSFORM ( DE_TRANSFORM_ID, NAME, IS_ACTIVE ) VALUES( 1, 'TEST2', 0 ); GO SELECT * FROM DE_TRANSFORM; GO TRUNCATE TABLE DE_TRANSFORM; GO TRUNCATE TABLE DE_TRANSFORM_MAP; GO ```
Oracle 10g and data validation in a trigger before row update
[ "", "sql", "oracle", "triggers", "oracle10g", "" ]
I have a table named myvals with the following fields: ``` ID number -- ------- 1 7 2 3 3 4 4 0 5 9 ``` Starting on 2nd row, I would like to add the number with the previous row number. So, my end result would look like this ``` ID number -- ------ 1 7 2 10 3 7 4 4 5 9 ```
You could use the [LAG](http://technet.microsoft.com/en-us/library/hh231256.aspx) analytic function ``` SELECT Id, number + LAG(number,1,0) OVER (ORDER BY Id) FROM table ```
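The `LAG(number, 1, 0)` semantics: each row sees the previous row's value, and the first row falls back to the default of 0. A quick Python sketch with the question's data:

```python
numbers = [7, 3, 4, 0, 9]

# Each row adds the previous row's value; the first row has no
# predecessor, so LAG's default of 0 leaves it unchanged.
result = [n + (numbers[i - 1] if i else 0) for i, n in enumerate(numbers)]
print(result)
```

This reproduces the expected output column from the question.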
First things first: you can't add to NULL, so ID 1 has no previous row to add and must keep its own value.
SQL - Add value with previous row only
[ "", "sql", "sql-server-2012", "" ]