I have the following table, named Example: ``` col1 ---------- 101 102 103 104 ``` I want the existing column plus one additional column with the result below: ``` col1 newcol --------------- 101 0/''/null 102 101 103 102 104 103 ``` This is what I have tried: ``` SELECT COL1, @a := @a AS col2 FROM MYTABLE JOIN (SELECT @a := 0) t ORDER BY COL1; ``` Can I get the previous row's value in the second column, corresponding to the first column?
You can use `@a` to save the last value. ``` SELECT @a as Prev, @a:=COL1 as Current FROM MYTABLE ORDER BY COL1; ``` You can find further information in [MySQL User-Defined Variables](https://dev.mysql.com/doc/refman/5.7/en/user-variables.html) I hope that you will find it useful. **EDITED**: Removed the initializing statement for @a: ``` SET @a=0; ``` The first value of @a will now be NULL
This is a bit complicated, but the following handles it: ``` SELECT COL1, (CASE WHEN (@oldprev := @prev) = NULL THEN NULL -- NEVER HAPPENS WHEN (@prev := col1) = NULL THEN NULL -- NEVER HAPPENS ELSE @oldprev END) as prevcol FROM MYTABLE CROSS JOIN (SELECT @prev := 0) params ORDER BY COL1; ``` As a note: `oldprev` doesn't need to be initialized because its value is used only in the `case`.
Getting previous column values in a new column
[ "mysql", "sql" ]
I am working on a long query that includes a SELECT DISTINCT and subquery. It is partially working, but not when put together. I am getting an Error: 1241 - Operand should contain 1 column(s), and I cannot see why. Any help would be greatly appreciated. Here is my code: ``` SELECT p.projId, pc.company, s.staffId, s.fName, s.lName FROM projects AS p INNER JOIN projCorp AS pc ON p.projId = pc.projId INNER JOIN projStaff AS ps ON p.projId = ps.projId INNER JOIN staff AS s ON ps.staffId = s.StaffId WHERE p.projId = '9' AND s.company = pc.company UNION SELECT DISTINCT p.projId, pc.company, "NA", "NA", "NA" FROM projects AS p INNER JOIN projCorp AS pc ON p.projId = pc.projId INNER JOIN projStaff AS ps ON p.projId = ps.projId WHERE p.projId = '9' AND pc.company NOT IN (SELECT p.projId, pc.company, s.staffId, s.fName, s.lName FROM projects AS p INNER JOIN projCorp AS pc ON p.projId = pc.projId INNER JOIN projStaff AS ps ON p.projId = ps.projId INNER JOIN staff AS s ON ps.staffId = s.StaffId WHERE p.projId = '9' AND s.company = pc.company); ``` Without the NOT IN subquery, I am getting this result: [![result without NOT IN subquery](https://i.stack.imgur.com/Vycxy.png)](https://i.stack.imgur.com/Vycxy.png) The reason for the NOT IN subquery is to get rid of the third duplicate entry that doesn't have a staff member associated. The result should just have the first two entries from the pictured result.
Snip: ``` ... WHERE p.projId = '9' AND pc.company NOT IN ( SELECT p.projId, pc.company, s.staffId, s.fName, s.lName FROM projects AS p .... ) ``` You're trying to select where `company not in`, but you are specifying more than one column inside your `not in` subquery. Just select `company`: ``` ... WHERE p.projId = '9' AND pc.company NOT IN ( SELECT pc.company FROM projects AS p .... ) ```
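The error can be reproduced in miniature. A hedged sketch using Python's built-in sqlite3 (a toy table, not the question's schema), which raises the same complaint when a subquery inside `IN` returns more than one column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (company TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("A",), ("B",)])

# A multi-column subquery inside NOT IN fails, like MySQL's error 1241.
try:
    conn.execute("SELECT * FROM t WHERE company NOT IN (SELECT company, 1 FROM t)")
    multi_column_failed = False
except sqlite3.OperationalError:
    multi_column_failed = True

# Selecting a single column in the subquery works.
ok = conn.execute(
    "SELECT company FROM t WHERE company NOT IN (SELECT company FROM t WHERE company = 'A')"
).fetchall()
```

Here `multi_column_failed` ends up `True`, and `ok` contains only the company not excluded by the subquery.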
If you want to keep all the rows for a given project, use a `left join`: ``` SELECT p.projId, pc.company, s.staffId, s.fName, s.lName FROM projects p LEFT JOIN projCorp pc ON p.projId = pc.projId LEFT JOIN projStaff ps ON p.projId = ps.projId LEFT JOIN staff s ON ps.staffId = s.StaffId AND s.company = pc.company WHERE p.projId = '9'; ``` This seems more sensible than a complicated `union` query. You can use `coalesce()` to convert the `NULL` values to `'NA'`, if that really is desirable. I am a little confused on which table should go first -- but I'm thinking it is all companies as opposed to all projects. If so, this is the `FROM` clause with no `WHERE`: ``` FROM projCorp pc LEFT JOIN projects p ON p.projId = pc.projId AND p.projId = '9' LEFT JOIN projStaff ps ON p.projId = ps.projId LEFT JOIN staff s ON ps.staffId = s.StaffId AND s.company = pc.company; ```
SQL SELECT DISTINCT Subquery Error: 1241
[ "mysql", "sql", "subquery", "union", "mysql-error-1241" ]
I am (fairly new to) using PL/SQL with Oracle 12c, and am having trouble with a procedure I am trying to implement. I've looked up a lot of tutorials and similar questions here, but so far I've had no luck finding anything that could help me. The situation is this: I have three tables: table1, table2, and table3. > Table1 has the attributes detailA and detailB. > > Table2 has the attributes detailB and detailC. > > Table3 has the attributes detailC and detailD. My goal is to update the value of detailA in table1 to be equal to detailD in table3. I'm aware that in order to do that I need to join table1 and table2 on detailB, then join table2 and table3 on detailC. Actually implementing that, though, is giving me trouble. So far, everything I've tried goes off the rails about halfway through; I'm having a very hard time implementing all the conditions I need to check while still keeping it within the structure of PL/SQL. Here's an (obviously very wrong) example of something I've tried: ``` UPDATE (SELECT table1.detailB, table2.detailB, table2.detailC, table3.detailC, table3.detail4 FROM table1 JOIN table2 on table1.detailB = table2.detailB JOIN table3 on table2.detailC = table3.detailD) SET table1.detailA = table3.detail4; ``` If anyone could help me understand this better, I'd be very grateful. EDIT: And for an example with some actual data, if I inserted the following, it should replace the 100 in table1 with the 1000 in table3 when I run my query: ``` INSERT INTO table1(detailA, detailB) VALUES (100, 200); INSERT INTO table2(detailB, detailC) VALUES (200, 400); INSERT INTO table3(detailC, detailD) VALUES(400, 1000); ```
If I understand your problem correctly (sample data would be much of help), then I would use [MERGE](https://oracle-base.com/articles/10g/merge-enhancements-10g) statement as follows (**Tested**): ``` MERGE INTO table1 -- The table you want to update USING ( SELECT t2.detailB detailb, t2.detailC, t3.detailD detailD FROM table2 t2 inner join table3 t3 ON (t2.detailC = t3.detailC) ) ta ON (ta.detailB = table1.detailB) WHEN MATCHED THEN UPDATE SET table1.detailA = ta.detailD; ``` Tested in [FIDDLE](http://sqlfiddle.com/#!4/1dcc6/1)
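Where `MERGE` is unavailable, the same effect can be had with a correlated-subquery `UPDATE`. A hedged sketch using the question's sample data, run here against SQLite (via Python's sqlite3) rather than Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (detailA INTEGER, detailB INTEGER);
CREATE TABLE table2 (detailB INTEGER, detailC INTEGER);
CREATE TABLE table3 (detailC INTEGER, detailD INTEGER);
INSERT INTO table1 VALUES (100, 200);
INSERT INTO table2 VALUES (200, 400);
INSERT INTO table3 VALUES (400, 1000);
""")

# Set detailA from the joined table2/table3 row; the EXISTS guard leaves
# non-matching rows untouched (mirroring MERGE's WHEN MATCHED clause).
conn.execute("""
UPDATE table1
SET detailA = (SELECT t3.detailD
               FROM table2 t2 JOIN table3 t3 ON t2.detailC = t3.detailC
               WHERE t2.detailB = table1.detailB)
WHERE EXISTS (SELECT 1 FROM table2 t2 JOIN table3 t3 ON t2.detailC = t3.detailC
              WHERE t2.detailB = table1.detailB)
""")
value = conn.execute("SELECT detailA FROM table1").fetchone()[0]
```

As in the question's example, the 100 is replaced by 1000.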
The column you want to update has to be listed in the select query, like this: ``` UPDATE (SELECT table1.detailB, table2.detailB, table2.detailC, table3.detailC, table3.detailD, table1.detailA FROM table1 JOIN table2 on table1.detailB = table2.detailB JOIN table3 on table2.detailC = table3.detailC) SET detailA = detailD; ```
Updating table while joining across multiple tables
[ "sql", "oracle", "join", "sql-update" ]
The following table is my database table `question`. [![enter image description here](https://i.stack.imgur.com/2ijR5.png)](https://i.stack.imgur.com/2ijR5.png) When I pass `$ID=2` to this function, ``` function getFirstID($ID) { $checkQ = "SELECT * FROM tabel WHERE quiz_id = '$ID'"; $query = $this->db->query($checkQ); if ($query->num_rows() > 0) { return $query->result(); } else { return NULL; } } ``` the result will be: [![enter image description here](https://i.stack.imgur.com/qZ2NC.png)](https://i.stack.imgur.com/qZ2NC.png) I want to return the `ID` of the first row in the result, so the output should be 4. What modification should I make to my function to get the expected output?
``` SELECT `ID` FROM tabel WHERE quiz_id = '$ID' ORDER BY `ID` LIMIT 1 ``` Please try this query.
You can use `ROWNUM`, `LIMIT`, or `TOP`/`PERCENT` for this, depending on the database: 1.) Deal with row numbers (Oracle): ``` SELECT columnNames FROM Table_name WHERE ROWNUM <= number; ``` 2.) Limit rows (MySQL): ``` SELECT columnNames FROM Table_name LIMIT number; ``` 3.) Select the number of rows you want with `TOP` (SQL Server): ``` SELECT TOP 5 * FROM Table_name; ``` 4.) Select a percentage of all result rows: ``` SELECT TOP 25 PERCENT * FROM Table_name; ``` The 4th one is not as good as the others, but we can use it. Applied to your case, the four options look like this: ``` SELECT * FROM tabel WHERE quiz_id = '$ID' AND ROWNUM <= 1; ``` or ``` SELECT * FROM tabel WHERE quiz_id = '$ID' LIMIT 1; ``` or ``` SELECT TOP 1 * FROM tabel WHERE quiz_id = '$ID'; ``` or ``` SELECT TOP 50 PERCENT * FROM tabel WHERE quiz_id = '$ID'; ```
return single value of first row when query returns more than one row
[ "mysql", "sql", "codeigniter" ]
I'm new to SQL and I've been racking my brain trying to figure out exactly what a query I received at work to modify is stating. I believe it's using an alias, but I'm not sure why, because it only refers to one table. I think it's a fairly simple one, I just don't get it. ``` select [CUSTOMERS].Prefix, [CUSTOMERS].NAME, [CUSTOMERS].Address, [CUSTOMERS].[START_DATE], [CUSTOMERS].[END_DATE] from [my_Company].[CUSTOMERS] where [CUSTOMERS].[START_DATE] = (select max(a.[START_DATE]) from [my_company].[CUSTOMERS] a where a.Prefix = [CUSTOMERS].Prefix and a.Address = [CUSTOMERS].ADDRESS and coalesce(a.Name, 'Go-Figure') = coalesce([CUSTOMERS].a.Name, 'Go-Figure')) ```
Here's a shot at it in English... It looks like the intent is to get a list of customer names, addresses, start dates. But the table is expected to contain more than one row with the same customer name and address, and the author wants only the row with the most recent start date. Fine Points: * If a customer has the same name and address and prefix as another customer, the one with the most recent start date appears. * If a customer's name is missing, 'Go-Figure' is used. And so two rows with missing names will match, and the one with the most recent start date will be returned. A row with a missing name will not match another row that has a name. Both rows will be returned. * Any row that has no start date will be excluded from results. This does not look like a query from a real business application. Maybe it's just a conceptual prototype. It is full of problems in most real world situations. Matching names and addresses with simple equality just doesn't work well in the real world, unless the names and addresses are already cleaned and de-duplicated by some other process. *Regarding the use of an alias: Yes. The sub-query uses `a` as an alias for the my\_Company.CUSTOMERS table.* I believe there is an **error** on the last line. ``` [CUSTOMERS].a.Name ``` is not a valid reference. It was probably meant to be ``` [CUSTOMERS].Name ```
I assume it selects customer records from table `[CUSTOMERS]` with the most recent `[CUSTOMERS].[START_DATE]`
Understanding an SQL Query
[ "sql", "sql-server", "t-sql", "greatest-n-per-group" ]
I have a problem retrieving the rows with the max value within each group in an Oracle DB. My table looks something like this: id, col1, col2, col3, col4, col5, date\_col The group consists of 4 columns (col1, col2, col3, col4), so members of a group should be equal on these fields, and from each group I need the rows (id is enough) with the max date\_col value (there can be several with the same date). Should this be solved with group by, or is there a better approach? Thanks for tips! Cheers
You can use the [`RANK`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions123.htm) (or `DENSE_RANK`) analytic functions to find the maximum value(s) within a group: [SQL Fiddle](http://sqlfiddle.com/#!4/8d174/2) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE table_name ( id, col1, col2, col3, col4, col5, date_col ) AS SELECT 1, 1, 1, 1, 1, 1, DATE '2015-11-13' FROM DUAL UNION ALL SELECT 2, 1, 1, 1, 1, 2, DATE '2015-11-12' FROM DUAL UNION ALL SELECT 3, 1, 1, 1, 1, 3, DATE '2015-11-11' FROM DUAL UNION ALL SELECT 4, 1, 1, 1, 1, 4, DATE '2015-11-13' FROM DUAL UNION ALL SELECT 5, 1, 1, 1, 1, 5, DATE '2015-11-12' FROM DUAL UNION ALL SELECT 5, 1, 1, 1, 1, 5, DATE '2015-11-12' FROM DUAL UNION ALL SELECT 6, 1, 1, 1, 2, 1, DATE '2015-11-12' FROM DUAL UNION ALL SELECT 7, 1, 1, 1, 2, 2, DATE '2015-11-13' FROM DUAL UNION ALL SELECT 8, 1, 1, 1, 2, 3, DATE '2015-11-11' FROM DUAL UNION ALL SELECT 9, 1, 1, 1, 2, 4, DATE '2015-11-12' FROM DUAL UNION ALL SELECT 10, 1, 1, 1, 2, 5, DATE '2015-11-13' FROM DUAL ``` **Query 1**: ``` SELECT * FROM ( SELECT t.*, RANK() OVER ( PARTITION BY col1, col2, col3, col4 ORDER BY date_col DESC ) AS rnk FROM table_name t ) WHERE rnk = 1 ``` **[Results](http://sqlfiddle.com/#!4/8d174/2/0)**: ``` | ID | COL1 | COL2 | COL3 | COL4 | COL5 | DATE_COL | RNK | |----|------|------|------|------|------|----------------------------|-----| | 1 | 1 | 1 | 1 | 1 | 1 | November, 13 2015 00:00:00 | 1 | | 4 | 1 | 1 | 1 | 1 | 4 | November, 13 2015 00:00:00 | 1 | | 7 | 1 | 1 | 1 | 2 | 2 | November, 13 2015 00:00:00 | 1 | | 10 | 1 | 1 | 1 | 2 | 5 | November, 13 2015 00:00:00 | 1 | ```
I think you need `row_number()`: ``` select t.* from (select t.*, row_number() over (partition by col1, col2, col3, col4 order by date_col desc) as seqnum from t ) t where seqnum = 1; ``` If you want all rows with the maximum value, then use `dense_rank()` or `rank()` instead. You can also use `keep` to get the value of `col5` using aggregation: ``` select col1, col2, col3, col4, max(col5) keep (dense_rank first order by date_col desc) as col5, max(date_col) as date_col from t group by col1, col2, col3, col4; ``` However, this only returns one value.
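A hedged sketch of the `rank()` variant (which keeps ties), run against SQLite through Python's sqlite3, with the column list trimmed to the ones that matter and made-up dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE t (id INTEGER, col1 INTEGER, col4 INTEGER, date_col TEXT);
INSERT INTO t VALUES
  (1, 1, 1, '2015-11-13'), (4, 1, 1, '2015-11-13'), (2, 1, 1, '2015-11-12'),
  (7, 1, 2, '2015-11-13'), (8, 1, 2, '2015-11-11');
""")
# rnk = 1 marks every row tied for the latest date within its group.
rows = conn.execute("""
SELECT id FROM (
  SELECT id,
         RANK() OVER (PARTITION BY col1, col4 ORDER BY date_col DESC) AS rnk
  FROM t
) WHERE rnk = 1 ORDER BY id
""").fetchall()
```

Group (1, 1) has two rows tied on '2015-11-13', so both ids come back, plus the single winner of group (1, 2).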
select rows with max value from group
[ "sql", "oracle" ]
I have a query, and when I execute it in SQL Server 2012, the `ORDER BY` clause is not working. Please help me in this. Regards. ``` DECLARE @Data table (Id int identity(1,1), SKU varchar(10), QtyRec int,Expiry date,Rec date) DECLARE @Qty int = 20 INSERT @Data VALUES ('001A', 5 ,'2017-01-15','2015-11-14'), ('001A', 8 ,'2017-01-10','2015-11-14'), ('001A', 6 ,'2015-12-15','2015-11-15'), ('001A', 25,'2016-01-01','2015-11-16'), ('001A', 9 ,'2015-12-20','2015-11-17'); SELECT * INTO #temp FROM @Data ORDER BY Id DESC SELECT * FROM #temp ```
SQL tables represent *unordered* sets. When you `SELECT` from a table, then the results are *unordered*. The one exception is when you use an `ORDER BY` in the outer query. So, include an `ORDER BY` and the results will be in order. EDIT: You can eliminate the *work* for the sort by introducing a clustered primary key. ``` create table #temp ( Id int identity(1,1) primary key clustered, SKU varchar(10), QtyRec int, Expiry date, Rec date ); ``` Then when you do: ``` insert into #temp(SKU, QtyRec, Expiry, Rec) select SKU, QtyRec, Expiry, Rec from @Data order by id; ``` The clustered primary key in `#temp` is guaranteed to be in the order specified by the `order by`. Then the query: ``` select * from #temp order by id; ``` will return the results in order, using the clustered index. No sort will be needed.
A `SELECT ... INTO` clause will help you reach your expected output. I usually use temp tables this way, along with a row-number column generated by the `ROW_NUMBER()` function. It automatically orders the selected rows into the temp table. Or, more simply, you can use the `ORDER BY` clause.
Order by not working when inserting into temp table
[ "sql", "sql-server", "sql-server-2012" ]
I have two tables: ``` Table: products ----------------------------------------------------- | id | name | other fields... ----------------------------------------------------- | 20 | Intel i5 4690K | ... | 21 | AMD A6-7400K | ... | 23 | AMD A8-3850 | ... | ... | ... | ... ``` and table product\_details: ``` Table: product_details ----------------------------------------------- | id | product_id | option_id | value | ----------------------------------------------- | 1 | 20 | 2478 | 55032 | | 2 | 20 | 2482 | 55051 | | 3 | 21 | 2478 | 54966 | | 4 | 21 | 2482 | 55050 | | 5 | 22 | 2478 | 55032 | | 6 | 22 | 2482 | 55050 | ----------------------------------------------- 2478 = Number of Cores 2482 = Manufacturer 55032 = 4 cores 55050 = 6 cores ``` I'm trying to get records where products are only 4 cores and from AMD: ``` Select product_details.product_id, products.name, product_details.option_id, product_details.value From product_details Inner Join products On product_details.product_id = products.id Where product_details.option_id In (2478, 2482) And product_details.value In (55032, 55050) ``` SQL above gives me double records ``` product_id name option_id value 20 Intel i5 4690K 2478 55032 21 AMD A6-7400K 2482 55050 23 AMD A8-3850K 2478 55032 23 AMD A8-3850K 2482 55050 ``` Obviously, it won't work with IN. Any ideas?
To get `AMD` `4 CORES`, this query works fine. You need to inner join twice on `product_details`, to get the `manufacturer` as well as the `no of cores`. Here is the [**SQLFiddle Demo**](http://sqlfiddle.com/#!9/aa4ad/1) ``` Select product_details.product_id, products.name, product_details.option_id, product_details.value From product_details Inner Join products On product_details.product_id = products.id AND product_details.option_id=2478 INNER JOIN product_details P2 ON P2.product_id = product_details.product_id AND P2.option_id=2482 Where product_details.value = 55032 And P2.value = 55050 ``` Hope this helps.
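The double self-join over an attribute/value table can be checked quickly. A hedged sketch with the question's data (run against SQLite via Python's sqlite3, columns trimmed to the essentials):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER, name TEXT);
INSERT INTO products VALUES (20, 'Intel i5 4690K'), (21, 'AMD A6-7400K'), (22, 'AMD A8-3850');
CREATE TABLE product_details (product_id INTEGER, option_id INTEGER, value INTEGER);
INSERT INTO product_details VALUES
  (20, 2478, 55032), (20, 2482, 55051),
  (21, 2478, 54966), (21, 2482, 55050),
  (22, 2478, 55032), (22, 2482, 55050);
""")
# One join per attribute: 'cores' carries option 2478, 'mfr' carries option 2482,
# so both conditions apply to the same product row.
rows = conn.execute("""
SELECT p.id, p.name
FROM products p
JOIN product_details cores ON cores.product_id = p.id AND cores.option_id = 2478
JOIN product_details mfr   ON mfr.product_id   = p.id AND mfr.option_id   = 2482
WHERE cores.value = 55032 AND mfr.value = 55050
""").fetchall()
```

Only product 22 satisfies both attribute values at once, so it comes back exactly once.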
First, if you need only the AMD manufacturer, then where is the value for that option? You said: ``` 2478 = Number of Cores 2482 = Manufacturer 55032 = 4 cores 55050 = 6 cores ``` OK, but what are the `value`s for `option_id = 2482`? You must know them and include them in your `Where` clause. Also, I would prefer a `Left Join` rather than an `INNER JOIN`
MySQL - return unique record
[ "mysql", "sql" ]
I'm trying to get rows that have the max value in each group. I have this table ``` +-------+-----------+--------------+-----------+ | nid | name | created | grp_id | +-------+-----------+--------------+-----------+ | 1 | RAND_NAME | 123 | 11 | | 2 | EHllo | 111 | 11 | | 3 | Stop by | 444 | 11 | | 4 | Radr c | 555 | 11 | | 5 | NAE | 666 | 22 | | 6 | ABC | 1234 | 22 | | 7 | RAND | 123 | 22 | | 8 | YELLO | 444 | 22 | | 9 | AAA | 555 | 33 | | 10 | WWW | 1235 | 33 | | 11 | ADF | 553 | 33 | +-------+-----------+--------------+-----------+ ``` So, I want this table ``` +-------+-------------+------------+-----------+ | nid | name | created | grp_id | +-------+-------------+------------+-----------+ | 4 | Radr c | 555 | 11 | | 6 | ABC | 1234 | 22 | | 10 | WWW | 1235 | 33 | +-------+-------------+------------+-----------+ ``` which means I want to grab the rows that have the highest created value in each group. The table will be grouped by grp\_id. I was thinking this way: ``` SELECT nid, name, created, grp_id FROM table t WHERE t.created = (SELECT MAX(t1.created) FROM table t1) GROUP BY grp_id ORDER BY grp_id ``` But, it didn't work out. What should I do to get three different rows that have the highest created value in each group? Thank you for bearing with my poor explanation.
You need a subselect: ``` SELECT yourtable.* FROM yourtable LEFT JOIN ( SELECT grp_id, MAX(created) AS max FROM yourtable GROUP BY grp_id ) AS maxgroup ON ( (yourtable.grp_id = maxgroup.grp_id) AND (yourtable.created = maxgroup.max) ) ``` The subselect gets the max created value for each group, and the parent/outer query joins against the subselect results to get the rest of the fields for the row(s) that the max value appears on.
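A quick check of this join-against-aggregate pattern with the question's data, sketched against SQLite via Python's sqlite3 (an inner join is used here, since every group has a max; note that the max row of group 11 is nid 4):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (nid INTEGER, name TEXT, created INTEGER, grp_id INTEGER);
INSERT INTO yourtable VALUES
  (1, 'RAND_NAME', 123, 11), (2, 'EHllo', 111, 11), (3, 'Stop by', 444, 11), (4, 'Radr c', 555, 11),
  (5, 'NAE', 666, 22), (6, 'ABC', 1234, 22), (7, 'RAND', 123, 22), (8, 'YELLO', 444, 22),
  (9, 'AAA', 555, 33), (10, 'WWW', 1235, 33), (11, 'ADF', 553, 33);
""")
# Join each row back against its group's max; only rows hitting the max survive.
rows = conn.execute("""
SELECT y.nid, y.name, y.created, y.grp_id
FROM yourtable y
JOIN (SELECT grp_id, MAX(created) AS mx FROM yourtable GROUP BY grp_id) m
  ON y.grp_id = m.grp_id AND y.created = m.mx
ORDER BY y.grp_id
""").fetchall()
```

One row per group comes back: nids 4, 6 and 10.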
Try: ``` SELECT nid, name, MAX(created), grp_id FROM t GROUP BY grp_id; ```
Rows with max value of each group
[ "mysql", "sql", "aggregate-functions", "greatest-n-per-group" ]
I have a table like this: ``` Date Product 1/1/2015 Apples 1/1/2015 Apples 1/1/2015 Oranges 1/2/2015 Apples 1/2/2015 Apples 1/2/2015 Oranges ``` How can I do a select so I get something like this: ``` Date Count of Apples Count of Oranges 1/1/2015 2 1 1/2/2015 2 1 ``` Thanks. I have tried a `CASE` like this, but the error below is thrown: ``` Select 'Date', CASE WHEN 'Product' = 'Apples' THEN COUNT(*) ELSE 0 END as 'Count' FROM #TEMP Group by 1,2 ``` Each GROUP BY expression must contain at least one column that is not an outer reference.
You can do conditional aggregation like this: ``` select [date], sum(case when Product = 'Apples' then 1 else 0 end) as [Count of Apples], sum(case when Product = 'Oranges' then 1 else 0 end) as [Count of Oranges] from #temp group by [date] ```
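A hedged sketch of this conditional aggregation against the sample data, run through Python's sqlite3 (the `#TEMP` table is renamed, since SQLite has no `#`-prefixed temp tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp_t (d TEXT, product TEXT);
INSERT INTO temp_t VALUES
  ('1/1/2015', 'Apples'), ('1/1/2015', 'Apples'), ('1/1/2015', 'Oranges'),
  ('1/2/2015', 'Apples'), ('1/2/2015', 'Apples'), ('1/2/2015', 'Oranges');
""")
# Each SUM(CASE ...) counts only the rows matching one product per date group.
rows = conn.execute("""
SELECT d,
       SUM(CASE WHEN product = 'Apples'  THEN 1 ELSE 0 END) AS apples,
       SUM(CASE WHEN product = 'Oranges' THEN 1 ELSE 0 END) AS oranges
FROM temp_t
GROUP BY d
ORDER BY d
""").fetchall()
```

The result matches the table asked for in the question: 2 apples and 1 orange per date.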
With conditional aggregation: ``` select date, sum(case when Product = 'Apples' then 1 else 0 end) as Apples, sum(case when Product = 'Oranges' then 1 else 0 end) as Oranges from table group by date ```
SQL get count from one table and split to two columns
[ "sql", "sql-server" ]
What I'm trying to do is when new data is entered into the db, a trigger is run that converts all text to TitleCase. How I previously did this was to create a function, then call that function with an `UPDATE` on the table, but this is very labour intensive. My previous SQL function: ``` CREATE FUNCTION [dbo].[InitCap] (@InputString varchar(4000)) RETURNS VARCHAR(4000) AS BEGIN DECLARE @Index INT DECLARE @Char CHAR(1) DECLARE @PrevChar CHAR(1) DECLARE @OutputString VARCHAR(255) SET @OutputString = LOWER(@InputString) SET @Index = 1 WHILE @Index <= LEN(@InputString) BEGIN SET @Char = SUBSTRING(@InputString, @Index, 1) SET @PrevChar = CASE WHEN @Index = 1 THEN ' ' ELSE SUBSTRING(@InputString, @Index - 1, 1) END IF @PrevChar IN (' ', ';', ':', '!', '?', ',', '.', '_', '-', '/', '&', '''', '(') BEGIN IF @PrevChar != '''' OR UPPER(@Char) != 'S' SET @OutputString = STUFF(@OutputString, @Index, 1, UPPER(@Char)) END SET @Index = @Index + 1 END RETURN @OutputString END ``` How the function was called: ``` update dbo.table set colName = [dbo].[InitCap](colName); ``` New trigger: ``` CREATE TRIGGER [dbo].[TableInsert] ON [Table] AFTER INSERT AS BEGIN DECLARE @InputString varchar(4000) DECLARE @Index INT DECLARE @Char CHAR(1) DECLARE @PrevChar CHAR(1) DECLARE @OutputString VARCHAR(255) SET @OutputString = LOWER(@InputString) SET @Index = 1 WHILE @Index <= LEN(@InputString) BEGIN SET @Char = SUBSTRING(@InputString, @Index, 1) SET @PrevChar = CASE WHEN @Index = 1 THEN ' ' ELSE SUBSTRING(@InputString, @Index - 1, 1) END IF @PrevChar IN (' ', ';', ':', '!', '?', ',', '.', '_', '-', '/', '&', '''', '(') BEGIN IF @PrevChar != '''' OR UPPER(@Char) != 'S' SET @OutputString = STUFF(@OutputString, @Index, 1, UPPER(@Char)) END SET @Index = @Index + 1 END RETURN @OutputString END ``` Is this the correct way? What could I use instead of `@InputString`? Thanks
I would actually advise keeping your `InitCap` function, since it's nice to have that logic tucked away for re-use; there's no point in having that logic in more than one place. With that in mind, your trigger really doesn't do much except update the value that got inserted: ``` CREATE TRIGGER [dbo].[TableInsert] ON [Table] AFTER INSERT AS BEGIN UPDATE t SET t.colName = dbo.InitCap(i.colName) FROM dbo.table t INNER JOIN inserted i ON i.primaryKeyColumn = t.primaryKeyColumn END ```
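The same update-after-insert idea, sketched for illustration in SQLite through Python's sqlite3. SQLite exposes the new row as `NEW` rather than an `inserted` pseudo-table, and the capitalization rule here is deliberately much simpler than `InitCap` (first letter only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, colName TEXT);
-- After each insert, rewrite the just-inserted row's colName:
-- uppercase the first character, lowercase the rest.
CREATE TRIGGER t_ai AFTER INSERT ON t
BEGIN
  UPDATE t
  SET colName = upper(substr(NEW.colName, 1, 1)) || lower(substr(NEW.colName, 2))
  WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO t (colName) VALUES ('hELLO world')")
value = conn.execute("SELECT colName FROM t").fetchone()[0]
```

The inserted value is normalized by the trigger before anyone reads it back.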
I'm not sure what the MS-specific syntax is for this, but a BEFORE INSERT FOR EACH ROW trigger gives you access to the new data in a record named NEW and allows you to manipulate the record's contents (eg. changing the case of values) before the INSERT is actually performed.
Using SQL Server trigger to change format of text
[ "sql", "sql-server" ]
I have a table full of strings (TEXT) and I'd like to get all the strings that are substrings of any other string in the same table. For example if I had these three strings in my table: ``` WORD WORD_ID cup 0 cake 1 cupcake 2 ``` As result of my query I would like to get something like this: ``` WORD WORD_ID SUBSTRING SUBSTRING_ID cupcake 2 cup 0 cupcake 2 cake 1 ``` I know that I could do this with two loops (using Python or JS) by looping over every word in my table and matching it against every word in the same table, but I'm not sure how this can be done using SQL (PostgreSQL for that matter).
Use self-join: ``` select w1.word, w1.word_id, w2.word, w2.word_id from words w1 join words w2 on w1.word <> w2.word and w1.word like format('%%%s%%', w2.word); word | word_id | word | word_id ---------+---------+------+--------- cupcake | 2 | cup | 0 cupcake | 2 | cake | 1 (2 rows) ```
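`format('%%%s%%', ...)` is Postgres-specific; with standard `||` concatenation the same self-join runs almost anywhere. A hedged sketch in SQLite via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE words (word TEXT, word_id INTEGER);
INSERT INTO words VALUES ('cup', 0), ('cake', 1), ('cupcake', 2);
""")
# w1 is the containing word, w2 the candidate substring;
# '%' || w2.word || '%' builds the LIKE pattern portably.
rows = conn.execute("""
SELECT w1.word, w1.word_id, w2.word, w2.word_id
FROM words w1
JOIN words w2
  ON w1.word <> w2.word
 AND w1.word LIKE '%' || w2.word || '%'
ORDER BY w2.word_id
""").fetchall()
```

This reproduces the two-row result from the answer: 'cupcake' contains both 'cup' and 'cake'.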
### Problem The task has the potential to stall your database server for tables of non-trivial size, since it's an **O(NΒ²)** problem as long as you cannot utilize an index for it. In a sequential scan you have to check every possible combination of two rows, that's `n * (n-1) / 2` combinations - Postgres will run `n * (n-1)` tests since it's not easy to rule out reverse duplicate combinations. If you are satisfied with the first match, it gets cheaper - how much depends on data distribution. For many matches, Postgres will find a match for a row early and can skip testing the rest. For few matches, most of the checks have to be performed anyway. Either way, performance deteriorates rapidly with the number of rows in the table. Test each query with `EXPLAIN ANALYZE` and 10, 100, 1000 etc. rows in the table to see for yourself. ### Solution Create a **trigram index** on `word` - preferably **GIN**. ``` CREATE INDEX tbl_word_trgm_gin_idx ON tbl USING gin (word gin_trgm_ops); ``` Details: * [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528) The queries in both answers so far wouldn't use the index even if you had it. Use a query that can actually work with this index: To list ***all* matches** (according to the question body): Use a `LATERAL CROSS JOIN`: ``` SELECT t2.word_id, t2.word, t1.word_id, t1.word FROM tbl t1 , LATERAL ( SELECT word_id, word FROM tbl WHERE word_id <> t1.word_id AND word like format('%%%s%%', t1.word) ) t2; ``` To just get rows that have ***any* match** (according to your title): Use an `EXISTS` semi-join: ``` SELECT t1.word_id, t1.word FROM tbl t1 WHERE EXISTS ( SELECT 1 FROM tbl WHERE word_id <> t1.word_id AND word like format('%%%s%%', t1.word) ); ```
In SQL, how to check if a string is the substring of any other string in the same table?
[ "sql", "postgresql", "loops", "pattern-matching", "string-function" ]
I am just wondering if anyone can see a better solution to this issue. I previously had a flat (wide) table to work with, that contained multiple columns. This table has now been changed to a dynamic table containing just 2 columns (statistic\_name and value). I have amended my code to use sub queries to return the same results as before, however I am worried the performance is going to be terrible when using real live data. This is based on the exacution plan which shows a considerable difference between the 2 versions. See below for a very simplified example of my issue - ``` CREATE TABLE dbo.TEST_FLAT ( ID INT, TEST1 INT, TEST2 INT, TEST3 INT, TEST4 INT, TEST5 INT, TEST6 INT, TEST7 INT, TEST8 INT, TEST9 INT, TEST10 INT, TEST11 INT, TEST12 INT ) CREATE TABLE dbo.TEST_DYNAMIC ( ID INT, STAT VARCHAR(6), VALUE INT ) CREATE TABLE dbo.TEST_URNS ( ID INT ) -- OLD QUERY SELECT D.[ID], D.TEST1, D.TEST2, D.TEST3, D.TEST4, D.TEST5, D.TEST6, D.TEST7, D.TEST8, D.TEST9, D.TEST10, D.TEST11, D.TEST12 FROM [dbo].[TEST_URNS] U INNER JOIN [dbo].[TEST_FLAT] D ON D.ID = U.ID -- NEW QUERY SELECT U.[ID], (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST1') AS TEST1, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST2') AS TEST2, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST3') AS TEST3, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST4') AS TEST4, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST5') AS TEST5, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST6') AS TEST6, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST7') AS TEST7, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST8') AS TEST8, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST9') AS TEST9, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST10') AS TEST10, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST11') AS 
TEST11, (SELECT VALUE FROM dbo.TEST_DYNAMIC WHERE ID = U.ID AND STAT = 'TEST12') AS TEST12 FROM [dbo].[TEST_URNS] U ``` Note this is in SQL2008 R2 and this will be part of a stored procedure, the flat version of the table contained hundreds of thousands of records (900k or so at last count). Thanks in advance.
Create an index on the STAT column of TEST\_DYNAMIC, for quick lookups. But first, consider redesigning TEST\_DYNAMIC to change STAT varchar(6) to STAT\_ID int (referencing a lookup table). Then, on TEST\_DYNAMIC, create an index on STAT\_ID, which will run quite a bit faster than an index on a text field.
Create your TEST\_DYNAMIC and TEST\_URNS tables like this: ``` CREATE TABLE [dbo].[TEST_DYNAMIC]( [ID] [int] IDENTITY(1,1) NOT NULL, [STAT] [varchar](50) NOT NULL, [VALUE] [int] NOT NULL, CONSTRAINT [PK_TEST_DYNAMIC] PRIMARY KEY CLUSTERED ( [ID] )) CREATE TABLE dbo.TEST_URNS ( ID [int] IDENTITY(1,1) NOT NULL, CONSTRAINT [PK_TEST_URNS] PRIMARY KEY CLUSTERED ( [ID] )) ``` If you notice after a period of time that performance becomes poor, then you can check the index fragmentation: ``` SELECT a.index_id, name, avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('dbo.TEST_DYNAMIC'), NULL, NULL, NULL) AS a JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id; GO ``` Then you can rebuild the index like so: ``` ALTER INDEX PK_TEST_DYNAMIC ON dbo.TEST_DYNAMIC REBUILD; GO ``` For details please see <https://msdn.microsoft.com/en-us/library/ms189858.aspx> Also, I like @Brett Lalonde's suggestion to change STAT to an int.
SQL Query Optimization (After table structure change)
[ "sql", "sql-server", "sql-server-2008" ]
I have data in which an incremental sequence is broken somewhere, maybe in multiple places. E.g. (2,3,4,5,6,8,10). I want to get: 1. the first "broken" place (6 is the last good id in the example) 2. the number of "broken" places (2 here: 7 and 9 are missing) using SQL (preferably portable SQL that works on Oracle, MySQL and other platforms; sequences and auto\_increment are platform-specific). I tried self-join constructions like ``` select curr.id+1 as first_fail from junk as prev join junk as curr on (prev.id+1 = curr.id) order by curr.id desc limit 1; ``` (<http://sqlfiddle.com/#!9/bae781/4/0>) but it seems ugly, and I can't get the number of "broken" places this way.
All beginning "breaks" sorted by ids: ``` select j1.id + 1 as id from junk j1 left join junk j2 on j2.id = j1.id + 1 where j2.id is null and j1.id <> (select max(id) from junk) order by j1.id; ``` Pick the first row to get the first "break". Count number of rows to get the number of "breaks". If you need the number of all missing ids: ``` -- get number of missing ids select -- num rows you should have (select max(id) from junk) - (select min(id) from junk) + 1 -- num rows you really have - count(*) as num_missings from junk; ``` Or shorter: ``` select max(id) - min(id) + 1 - count(*) as num_missings from junk; ```
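Both queries can be verified against the example data `(2,3,4,5,6,8,10)`. A hedged sketch in SQLite via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE junk (id INTEGER)")
conn.executemany("INSERT INTO junk VALUES (?)",
                 [(2,), (3,), (4,), (5,), (6,), (8,), (10,)])

# Number of missing ids: expected span minus rows actually present.
missing = conn.execute(
    "SELECT MAX(id) - MIN(id) + 1 - COUNT(*) FROM junk"
).fetchone()[0]

# First gap: smallest id whose successor is absent (excluding the max id).
first_gap = conn.execute("""
SELECT j1.id + 1 FROM junk j1
LEFT JOIN junk j2 ON j2.id = j1.id + 1
WHERE j2.id IS NULL AND j1.id <> (SELECT MAX(id) FROM junk)
ORDER BY j1.id LIMIT 1
""").fetchone()[0]
```

For this data, `missing` is 2 (ids 7 and 9) and `first_gap` is 7.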
Check this: ``` select prev_id+1 as first_fail,count(*)-1 as total_broken from ( select curr.id as curr_id,prev.id as prev_id from junk as prev left join junk as curr on prev.id+1 = curr.id ) sub where sub.curr_id is null ; ``` SQL Fiddle Example: <http://sqlfiddle.com/#!9/bae781/62> Query if the gap is wider than 1: ``` select p2_id-1 as first_fail,sum(broken) as total_broken from (select p1.row_num as p1_row,p2.row_num as p2_row,p1.id as p1_id,p2.id as p2_id,(p2.id-p1.id-1) as broken from ( select @row_num:=@row_num+1 as row_num,junk.id from junk,(select @row_num:=0) s1 ) p1 left join ( select @row_num2:=@row_num2+1 as row_num,junk.id from junk,(select @row_num2:=0) s2 ) p2 on p1.row_num+1=p2.row_num ) as sub where broken is not null and broken > 0; ``` SQL Fiddle Demo: <http://sqlfiddle.com/#!9/974ec/1>
SQL to detect "sequence break"
[ "", "mysql", "sql", "" ]
I am to make a SQL Query that will display tenants' sales in a yearly basis based on when it started until the present time, unless it is already not active. In the given table below as an illustration, Tenant 1 and 2 display their per year sales. Tenant 1 having 5 rows as it started in 2011, and with Tenant 2 in 2014. ``` +----------+------+-------------+ | TENANT | YEAR | TOTAL SALES | +----------+------+-------------+ | Tenant 1 | 2011 | 1,000 | | Tenant 1 | 2012 | 3,000 | | Tenant 1 | 2013 | 2,000 | | Tenant 1 | 2014 | 3,000 | | Tenant 1 | 2015 | 2,000 | | Tenant 2 | 2014 | 5,000 | | Tenant 2 | 2015 | 2,000 | +----------+------+-------------+ ``` I am totally lost as of now on what to do, I have an existing code that somehow do the same but it is static and not flexible on the years it will display, it is by default 5 years, and it is in vertical form. ``` SELECT tenant ,(SUM(CASE WHEN YEAR(DATE) = @Year1 THEN sales END)) AS 'Year1' ,(SUM(CASE WHEN YEAR(DATE) = @Year2 THEN sales END)) AS 'Year2' ,(SUM(CASE WHEN YEAR(DATE) = @Year3 THEN sales END)) AS 'Year3' ,(SUM(CASE WHEN YEAR(DATE) = @Year4 THEN sales END)) AS 'Year4' ,(SUM(CASE WHEN YEAR(DATE) = @Year5 THEN sales END)) AS 'Year5' FROM TenantSales ``` --- **TenantSales Table** * Tenant * Location * Date * Sales
You can try doing a `GROUP BY` on the `TENANT` and year to achieve your desired output: ``` SELECT TENANT, YEAR(DATE), SUM(Sales) AS `TOTAL SALES` FROM TenantSales GROUP BY TENANT, YEAR(DATE) ```
One approach is to use [dynamic crosstab](http://www.sqlservercentral.com/articles/Crosstab/65048/): [**SQL Fiddle**](http://sqlfiddle.com/#!3/16b4d3/1/0) ``` DECLARE @sql NVARCHAR(MAX) = N'' SELECT @sql = 'SELECT Tenant' + CHAR(10) SELECT @sql = @sql + ' , SUM(CASE WHEN [Year] = ' + CONVERT(VARCHAR(4), yr) + ' THEN Sales ELSE 0 END) AS ' + QUOTENAME('Year' + CONVERT(VARCHAR(4), N)) + CHAR(10) FROM( SELECT *, ROW_NUMBER() OVER(ORDER BY yr) N FROM( SELECT DISTINCT [Year] AS yr FROM TenantSales ) a ) t ORDER BY N SELECT @sql = @sql + 'FROM TenantSales GROUP BY Tenant ORDER BY Tenant' PRINT @sql EXECUTE sp_executesql @sql ``` Result: ``` | Tenant | Year1 | Year2 | Year3 | Year4 | Year5 | |----------|-------|-------|-------|-------|-------| | Tenant 1 | 1000 | 3000 | 2000 | 3000 | 2000 | | Tenant 2 | 0 | 0 | 0 | 5000 | 2000 | ```
SQL Query to Dynamically Display Sales In a Yearly Basis
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am new to this, so I apologize upfront for any confusion/frustration. I appreciate any help that I can get! I have a table (**MainTable**) that I have created two views with (**GoodTable** and **BadTable**). * Each table has 4 columns (*ID*, *UserID*, *key*, *value*). * ID is the Primary Key, but * UserID can repeat in several rows. What I need to do in the **Main** table is find the *ID*s that are in the **BAD** table, and update the values from the *value* column of the **GOOD** table, based on a match of *UserID* **AND** a LIKE match with the *key* column, into the **MAIN** table. I hope that makes sense. I've tried: ``` UPDATE MainTable SET value = (SELECT value FROM GoodTable WHERE MainTable.UserID = GoodTable.UserID AND MainTable.key LIKE "some%key%specifics"); ``` This gets me ALMOST there, but the problem is if it doesn't find the LIKE key specifics, it returns a NULL value and I want it to keep it's original value if it's not in BadTable (BadTable is essentially all of the keys that match the LIKE key specifics). Obviously the above doesn't use BadTable, but I thought that might help me solve this (not the case, so far!)... Here's a bit of an example: ``` MainTable: ID UserID key value 1 1 key1 good value 2 1 key2 bad value 3 1 key3 unrelated value 4 2 key1 good value 5 2 key2 bad value 6 2 key3 unrelated value GoodTable: ID UserID key value 1 1 key1 good value 4 2 key1 good value BadTable: ID UserID key value 2 1 key2 bad value 5 2 key2 bad value What I want MainTable to change to: ID UserID key value 1 1 key1 good value 2 1 key2 good value 3 1 key3 unrelated value 4 2 key1 good value 5 2 key2 good value 6 2 key3 unrelated value ``` I also thought if there was something like a VLOOKUP (like in Excel) where I could say what to do if false, but I haven't been able to work that out either. 
I've tried some other things from researching other questions but I've spun myself dizzy now and decided to reach out for help :) Lastly, I'm not sure if this matters or not, but this if for MySQL... I'm sure I'm making this more complicated for myself than I need to, so I really appreciate any help anyone can provide! **UPDATE**: per @Rabbit suggestion, this is the best I could come up with using the inner join (though I thought this would add to the MainTable, but I want to keep the number of rows in MainTable the same, just update that one field for the applicable rows..): ``` UPDATE MainTable JOIN GoodTable ON MainTable.ID = GoodTable.ID SET value = (SELECT value FROM GoodTable WHERE MainTable.UserID = GoodTable.UserID AND MainTable.key LIKE "some%key%specifics"); ``` I'm sure this is an awful attempt but I am certainly a novice here! I did manage to come up with a solution (though I am sure it is highly inefficient) -- please see answer below! (Thank you @DBug and @Rabbit for pointing me in the right direction!)
You've said "**Main table is find the IDs that are in the BAD table, and update the values from the value column of the GOOD table, based on a match of UserID AND a LIKE match with the key column, into the MAIN table.**", so your sample result is wrong... We will update IDs 2 and 5 from MainTable, right? Because they are in BadTable. **BUT** the key of IDs 2 and 5 is *key2*, and there is no key2 in GoodTable. If I base this on your answer, this might help you. Please check:

```
UPDATE MainTable m
INNER JOIN BadTable b ON m.id = b.id
LEFT JOIN GoodTable g ON b.UserID = g.UserID
SET m.value = g.value
```

A little fix on that, for the **key** you wanted.
You can use the Coalesce function, which returns the first non-null argument, giving it the value column from both tables, e.g. ``` UPDATE MainTable SET value = (SELECT COALESCE(GoodTable.value, MainTable.value) FROM GoodTable WHERE MainTable.UserID = GoodTable.UserID AND MainTable.key LIKE "some%key%specifics"); ``` It will return GoodTable.value if it is not NULL, or MainTable.value if it is.
SQL to update selected fields in table from view
[ "", "mysql", "sql", "select", "vlookup", "" ]
I have a table as follows ``` -------------------------------- ChildId | ChildName | ParentId | -------------------------------- 1 | A | 0 | -------------------------------- 2 | B | 1 | -------------------------------- 3 | C | 1 | -------------------------------- ``` I would like to select data as - ``` --------------------------------------- Id | Name | Childs | --------------------------------------- 1 | A | 2 | --------------------------------------- 2 | B | 0 | --------------------------------------- 3 | C | 0 | --------------------------------------- ``` The pseudo SQL statement should be like this- ``` SELECT ChildId AS Id, ChildName as Name, (Count (ParentId) Where ParentId=ChildId) ``` Any Help?
Something like this? ``` SELECT ChildId AS Id, ChildName as Name, (SELECT COUNT(*) FROM TestCountOver T WHERE T.ParentID = O.ChildID) FROM TestCountOver O ``` but that would give you all the nodes plus children who shouldn't be part of the hierarchy If you want only nodes with children then use a cte ``` ;WITH CTE AS ( SELECT ChildId AS Id, ChildName as Name, (SELECT COUNT(*) FROM TestCountOver T WHERE T.ParentID = O.ChildID) Cnt FROM TestCountOver O ) SELECT * FROM CTE WHERE Cnt > 0 ```
This will do the trick: ``` select t1.childid, t1.childname, count(*) as childs from table t1 join table t2 on t1.childid = t2.parentid group by t1.childid, t1.childname ```
SQL Select and Count Statement
[ "", "sql", "sql-server", "" ]
I want to pull data where a task was completed in the last six months. The problem is that I am only interested in data where the task was completed between 06:00 AM and 09:00 PM. I am not sure how I can incorporate the time condition within my SQL statement. Can someone help me out here? My simple SQL code is like:

```
Select TimeTaskCompleted, Task
From Task
Where TimeTaskCompleted between '07/01/2015' and '09/30/2015'
```

Thanks

\*\*TimeTaskCompleted is a DateTime column.
Just use the [DATEPART function](https://msdn.microsoft.com/en-us/library/ms174420.aspx):

```
Select TimeTaskCompleted, Task
From Task
Where TimeTaskCompleted between '07/01/2015' and '09/30/2015'
AND DATEPART(hh,TimeTaskCompleted) between 6 and 20 --24 hour time
```

[SQLFiddle](http://sqlfiddle.com/#!3/9eecb7/6121/0)
Some comments. You shouldn't use `between` with date/time columns, because it is confusing. Your expression does not find tasks that are completed on 2015-09-30. I assume the intention is to include the last day of September. A better way to write the query is with explicit inequalities and to use ISO standard date formats: ``` Select TimeTaskCompleted, Task From Task Where TimeTaskCompleted >= '2015-07-01' and TimeTaskCompleted < '2015-10-01'; ``` (If you want unambiguous SQL Server dates, then remove the hyphens; the above can be misinterpreted as YYYY-DD-MM with one internationanalization setting.) Then, you can add either of the conditions proposed in the other answers, although I would not use `between` here either: ``` where CAST(TimeTaskCompleted AS TIME) >= '06:00:00' and CAST(TimeTaskCompleted AS TIME) <= '21:00:00' ``` or: ``` where datepart(hour, TimeTaskCompleted) between 6 and 20 ``` Note: the latter can be a bit confusing because it says "20" in the logic (8:00 p.m.), but the intention is to go to "9:00 p.m.".
SQL Server: How to bound a data pull to a specific time Interval?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2012", "" ]
I have prepared an SQL query that I will have to run on several databases (Oracle and Sybase) where some data might be stored differently. I have noticed that one of the differences in data storage is the blank string. For example, in the column `PRODUCT_TYPE` below, please have a look at the second record: [![enter image description here](https://i.stack.imgur.com/Fi75K.png)](https://i.stack.imgur.com/Fi75K.png) This "empty string" (the data type is `CHAR(15)`) circled in red is equal to `''` in some of the databases, whereas it's equal to `' '` to some others. The length is never constant and there are several fields that behave as such. So, since I need to filter on these "empty strings", I should change the following statement in my `WHERE` clause: ``` WHERE PRODUCT_TYPE = '' ``` ...because the above will take the `' '` string as different than `''` even if "functionally" speaking is not. I would hence like to make the statement in a way that it "ignores white spaces", i.e. `' '` is equal to `''` that is equal to `' '` etc. How should I do this change in order to make it work? I have tried the simple replacing approach: ``` WHERE REPLACE(PRODUCT_TYPE,' ','') = '' ``` ...but it doesn't seem to work, probably because I should use a different character. For sake of testing, inside the `'` below there is a copied-pasted example of what I find in these "empty strings": ``` ' ' ``` Ideally, it should be a "non-specific SQL" solution since I will have to run the same query on both Oracle and Sybase RDBMS. Any idea?
You can use `trim` on the column. ``` where trim(product_type) is null ``` The above is not DBMS-independent, since Sybase does not provide the `trim` function. However, the below approach will work both in Sybase and Oracle: ``` where rtrim(ltrim(product_type)) is null ```
You can use the replace statement you've tried but you should test for "is null" instead of ='' ``` WHERE REPLACE(PRODUCT_TYPE,' ','') is null ``` See also: [null vs empty string in Oracle](https://stackoverflow.com/questions/13278773/null-vs-empty-string-in-oracle)
SQL string comparison - how to ignore blank spaces
[ "", "sql", "oracle", "" ]
Can I put a `like` expression into an `SQL` statement like this?

```
select @Count = SUM(cast(Value as int)) from tTag where Name like '[Car],[Truck],[Bike]'
```
If you are searching for whole words use: ``` where Name = 'Car' or Name = 'Truck' or Name = 'Bike' --or where Name = '[Car]' or Name = '[Truck]' or Name = '[Bike]' --or where Name in ('Car', 'Truck', 'Bike') --or where Name in ('[Car]', '[Truck]', '[Bike]') ``` If you are searching as parts of words then use: ``` where Name like '%Car%' or Name like '%Truck%' or Name like '%Bike%' ``` But if you are searching for strings like `some text [car] some text` then this won't work: ``` where Name like '%[Car]%' or Name like '%[Truck]%' or Name like '%[Bike]%' ``` because `%[Car]%` this will match for example `some text ca some text`. You should escape `[` and `]` symbols. But it depends on database engine. For example for `Sql Server`: ``` where Name like '%\[Car\]%' ESCAPE '\' or Name like '%\[Truck\]%' ESCAPE '\' or Name like '%\[Bike\]%' ESCAPE '\' ```
I hope you will add *wildcards* to the `like` operator; otherwise you can use the `IN` operator. Try this:

```
Select @Count = SUM(cast(Value as int)) from tTag where Name like '[Car]' or Name like '[Truck]' or Name like '[Bike]'
```
sql "like" expression
[ "", "sql", "sql-server", "t-sql", "" ]
Is it possible to select distinct company names from the customer table but also displaying the iD's related? at the minute I'm using ``` SELECT company,id, COUNT(*) as count FROM customers GROUP BY company HAVING COUNT(*) > 1; ``` which returns ``` MyDuplicateCompany1 64 2 MyDuplicateCompany2 20 3 MyDuplicateCompany6 175 2 ``` but what I'm after is all the duplicate ID's for each. so ``` CompanyName, TimesDuplicated, DuplicateId1, DuplicateId2, DuplicateId3 ``` or a row for each so ``` MyDuplicateCompany1, DuplicateId1, TimesDuplicated MyDuplicateCompany1, DuplicateId2, TimesDuplicated MyDuplicateCompany2, DuplicateId1, TimesDuplicated MyDuplicateCompany2, DuplicateId2, TimesDuplicated MyDuplicateCompany2, DuplicateId3, TimesDuplicated ``` is this possible?
You can use `GROUP_CONCAT(id)` to concat your id by comma, your query should be: ``` SELECT company, GROUP_CONCAT(id) as ids, COUNT(id) as cant FROM customers GROUP BY company HAVING cant > 1 ``` You can test the query with this ``` CREATE TABLE IF NOT EXISTS `customers` ( `id` int(11) NOT NULL, `company` varchar(50) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `customers` (`id`, `company`) VALUES (1, 'MyDuplicateCompany1'), (2, 'MyDuplicateCompany1'), (3, 'MyDuplicateCompany1'), (4, 'MyDuplicateCompany2'), (5, 'MyDuplicateCompany2'), (6, 'MyDuplicateCompany3'), (7, 'MyDuplicateCompany3'), (8, 'MyDuplicateCompany3'), (9, 'MyDuplicateCompany3'), (10, 'MyDuplicateCompany4'); ``` Output: [![enter image description here](https://i.stack.imgur.com/i87F3.png)](https://i.stack.imgur.com/i87F3.png) Read more at: <http://monksealsoftware.com/mysql-group_concat-and-postgres-array_agg/>
Not sure if this would be acceptable but there's a function in mySQL which allows you to combine multiple rows into one [Group\_Concat(Field)](http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat), but show the distinct values for each record for columns specified (like ID in this case) ``` SELECT company , COUNT(*) as count , group_concat(ID) as DupCompanyIDs FROM customers GROUP BY company HAVING COUNT(*) > 1; ``` [SQL Fiddle](http://sqlfiddle.com/#!9/7ed10b/1/0) showing similar results with duplicate companies listed in one field. If you need it in multiple columns or multiple rows, you could wrap the above as an inline view and inner join it back to customers on the name to list the duplicates and times duplicated.
How select data from distinct Companies in the same row?
[ "", "mysql", "sql", "database", "select", "" ]
I have a table of surnames in a reference list and I need to find the average number of times a surname appears in the list. I have used the command:

```
SELECT column5, COUNT(*)
FROM table1
GROUP BY column5
```

to get a list of occurrences, but there are over 800 in my database so I can't manually find an average. Some authors have published 9 books, some only 1; how do I find the average of this?
To find the "average number of books per author", you must select the total number of books divided by the total number of authors: ``` SELECT CAST(COUNT(*) AS DECIMAL) / COUNT(DISTINCT column5) FROM table1; ``` Note: I used the `CAST ... AS DECIMAL` syntax to make the result show as a decimal instead of being truncated to an integer.
It is allowed to compose (i.e. nest) aggregate functions, so why not simply this: ``` SELECT AVG(COUNT(*)) average_occurrences FROM table1 GROUP BY column5 ```
How to find average number of occurences (Oracle)
[ "", "sql", "oracle", "" ]
MS SQL Server 2008R2 Management Studio I am running a `SELECT` on two tables. I'll simplify it to the part where I'm having trouble. I need to modify the `SELECT` results to a certain format for a data import. My `CASE` statement works fine until I get to the point that I need to base the `WHEN ... THEN...` on a different table column ``` TABLE1 ----------------- name | tag | code ----------------------- name1 | N | 100 name2 | N | 100 name3 | N | 200 name4 | Y | 100 name5 | N | 400 name6 | N | 700 CODES ------------------------- code | desc ------------------------- 100 | string1 200 | string2 300 | string2 400 | string2 700 | string2 SELECT name, Case CODES.desc when 'string1' then 'String 1' when 'string2' then 'String 2' when 'string3' then 'String 3' when 'string4' then 'String 4' END as description FROM TABLE1 join CODES on TABLE1.code = CODES.code ``` This works fine. The problem is if `TABLE1.tag = Y`, then description needs to be `'Other string'` which is not in the `CODES` table I tried adding: ``` Case CODES.desc ..... when TABLE1.tag = Y then CODES.desc 'Other String' ``` but it didn't work.
You were close but I think this is what you are looking for. The fact that they are in different tables really doesn't matter to the CASE, just the JOIN: ``` SELECT name, Case WHEN Table1.tag = 'Y' then CODES.Desc ELSE 'Other String' END as description FROM TABLE1 join CODES on TABLE1.code = CODES.code ```
I can't comment on a post yet. But why have a code table with a code description if you're going to change the description anyways? Instead, you should just modify the current description in that table or add a column with the secondary description you need. Then the case statement is a lot less complex. ``` CASE WHEN TABLE1.tag = 'Y' THEN 'Other String' ELSE CODES.other_desc END AS description ```
select CASE statement based on two tables
[ "", "sql", "sql-server", "select", "case", "" ]
I have a table where I have to get top 10 in each category based on the amount spent by a person in that category. My sample data is ``` person category amount roger perfume 20 jim perfume 50 joe tv 5 jim tv 25 kathy car 40 alicia perfume 100 ``` I gotta get this in one query. I know limit does not work on **amount** like **limit amount 10**. Please help
Use variables: ``` SELECT person, category, amount FROM ( SELECT person, category, amount, @rn := IF(@cat = category, @rn+1, IF(@cat := category, 1, 1)) AS rn FROM mytable CROSS JOIN (SELECT @rn := 0, @cat := '') AS vars ORDER BY category, amount DESC) AS t WHERE t.rn <= 10 ``` `@rn` is used in order to enumerate rows within each `category` slice by descending `amount` order. Hence, the outer query returns the 10 highest ranking rows per `category`. If less than 10 records exist for a specific category, then all of these records are returned. [**Demo here**](http://sqlfiddle.com/#!9/f611f/5)
since category appears multiple times, you will need to group by category, summing up the amounts. Then sort on it and keep the number of rows you want. I think the following will work: ``` SELECT category, SUM(amount) as amount FROM mytable GROUP BY category ORDER BY SUM(amount) DESC LIMIT 10 ```
top 10 by category mysql
[ "", "mysql", "sql", "mysql-variables", "" ]
I tried to create a new table from two existing tables with specific column names in Oracle, using the code below:

```
CREATE TABLE D_T1 AS
SELECT a.col1,
       a.col2,
       a.col3,
       a.col4,
       a.col5,
       b.col6,
       b.col7,
       b.col8
FROM db1.table1 a
INNER JOIN db1.table2 b
ON (a.col1 = b.colNum
AND a.col2 = b.colnum1)
```

But I get this error:

> CREATE TABLE failed ORA 00957 Duplicate column name

Can anyone help?
Ignoring the other errors you seem to have introduced by retyping the code, you've shown that you do have a duplicate column, which is what the error is telling you: ``` a.VALIDFLAG, b.VALIDFLAG ``` You seem to be under the impression that the table (alias) prefix makes the column names in the projection unique. They do not. The table prefix tells Oracle which table to get the column value from (unless you're using the `using` join syntax, which you are not). If the column appears in two tables you have to prefix the column name with the table. If you want the value from both tables you have to prefix both of them. With a simple query then referring to both table columns without column aliases is OK, but something trying to consume the result set might struggle. [This is fine](http://sqlfiddle.com/#!4/9eecb7d/11270): ``` select a.dummy, b.dummy from dual a join dual b on b.dummy = a.dummy; DUMMY DUMMY ------- ------- X X ``` But notice that both columns have the same heading. If you tried to create a table using that query: ``` create table x as select a.dummy, b.dummy from dual a join dual b on b.dummy = a.dummy; ``` You'd get the error you see, ORA-00957: duplicate column name. If you alias the duplicated columns then [the problem goes away](http://sqlfiddle.com/#!4/b4550): ``` create table x as select a.dummy as dummy_a, b.dummy as dummy_b from dual a join dual b on b.dummy = a.dummy; ``` So in your case you can alias those columns, if you need both: ``` ..., a.VALIDFLAG AS validflag_a, b.VALIDFLAG AS validflag_b, ... ```
To be completely honest, that query is a mess. You've got several errors in your SQL statement: ``` CREATE TABLE AS SELECT ``` The table name is missing - this should be ``` CREATE TABLE my_new_table AS SELECT ``` to create a new table named my\_new\_table. ``` a.ALIDFLAG,b,VALIDFLAG, ``` I've got a suspicion that this should really be `a.VALIDFLAG` instead of `a.ALIDFLAG`. Also, you need to replace `b,VALIDFLAG` with `b.VALIDFLAG`. ``` SELECT a.BILLFREQ a.CDHRNUM, ``` You're missing a comma after `a.BILLFREQ` - this is a syntax error. ``` a.Aβ€Œβ€‹GNYCOY,a.AGNTCOY ``` There's the culprit - you're selecting the same column twice. Get rid of the second one. *EDIT* Actually, the names are different, so this isn't the cause of the error (unless you've mistyped your query in the comment instead of copy& paste). To debug this kind of errors, try to * format your SQL statement in a readable way * comment out everything but one column, run the statement and ensure it works * add one column * repeat until you find the error or you've added all columns *2ND UPDATE* With the updated query, the error is here: ``` a.VALIDFLAG, b, VALIDFLAG, ``` You have two columns named VALIDFLAG - use an alias for one of these, and it should work.
CREATE TABLE failed ORA 00957 Duplicate column name
[ "", "sql", "oracle", "oracle11g", "ddl", "" ]
I am trying to find a way to manipulate values that are returned as part of a query. Basically, if the value is less than 255 characters, use that value, but if the value is more than 255 characters, return the string "value more than 255 characters" instead of the actual value. I need to achieve this as part of a SQL query. I appreciate any feedback. Thanks, Jay
Assuming SQL Server, try this which uses the [`LEN`](https://msdn.microsoft.com/en-us/library/ms190329.aspx) function combined with a [`CASE`](https://msdn.microsoft.com/en-us/library/ms181765.aspx): ``` SELECT CASE WHEN LEN(StringColumn) > 255 THEN 'value more than 255 characters' ELSE StringColumn END As MyColumnName FROM MyTable ``` Example sqlfiddle: <http://www.sqlfiddle.com/#!6/e0508/1/0> Edit: In case you are using Oracle as the tags suggest, instead of `LEN`, use [`LENGTH`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions076.htm): ``` SELECT CASE WHEN LENGTH(StringColumn) > 255 THEN 'value more than 255 characters' ELSE StringColumn END As MyColumnName FROM MyTable ``` Example sqlfiddle: <http://www.sqlfiddle.com/#!4/9bc74/2/0>
The following `CASE` pattern will work with most systems: ``` SELECT CASE WHEN LENGTH(someCol) <= 255 THEN someCol ELSE "value more than 255 characters" END AS ColName FROM TableName ``` Take note that string functions differ depending on your database software. See: * **[`CHARACTER_LENGTH`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_character-length) for Mysql**, (*[fiddle example](http://sqlfiddle.com/#!9/212aa/2)*) * **[`LEN`](https://msdn.microsoft.com/en-us/library/ms190329.aspx) for SQL Server**, or * **[`LENGTH`](http://www.techonthenet.com/oracle/functions/length.php) for Oracle**
SQL for string more than 255 chars
[ "", "sql", "sql-server", "database", "oracle", "" ]
Hi, I want to change the default datetime format in SQL Server. I already have a table that has rows and I don't want to delete them. The datetime format of the existing rows is `2015-11-16 09:04:06.000` and I want to change it to `16.11.2015 09:04:06`, and I want every new row that I insert to use this datetime format.
SQL Server does not store `DATETIME` values in the way you're thinking it does. The value that you see is simply what the DBMS is choosing to render the data as. If you wish to change the display of the `DATETIME` type, you can use the `FORMAT()` built-in function in `SQL Server 2012` or later versions, but keep in mind this is converting it to a `VARCHAR` You can get the format you desire via the following: ``` SELECT FORMAT(YourDateField, N'dd.MM.yyyy HH:mm:ss') ```
There is no such thing as a format of the `DATETIME` data type; it has no format by nature. What is formatted is the text representation, which you can control when converting to `VARCHAR`, or through the visualization settings of the client / IDE.

If you, however, want to be able to insert dates using string representations that are formatted differently (i.e. control the way string input is parsed to the datetime type), you can check [SET DATEFORMAT](https://msdn.microsoft.com/en-us/library/ms189491.aspx?f=255&MSPPError=-2147217396) - as explained in the remarks section, this will not change the display representation of date fields / variables.
Change Datetime format in Microsoft Sql Server 2012
[ "", "sql", "sql-server", "datetime", "sql-server-2012", "format", "" ]
```
SELECT CAST('2015-01-01 14:48:34.69' AS DATETIME) FirstVal,
CAST('2015-01-01 14:48:34:69' AS DATETIME) SecondVal
```

When we look at the answer, there is a difference in the milliseconds part of the result set; you can notice that in the `SELECT` statement I have specified the milliseconds part differently. The question is: why is there a difference in the millisecond part even though I have specified different values?

[![enter image description here](https://i.stack.imgur.com/30IwP.png)](https://i.stack.imgur.com/30IwP.png)
`DATETIME` has an accuracy of 3.33ms - you will **never** see a value with a `.069` stored in a `DATETIME` - you only ever get `.xx0`, `.xx3` and `.xx7`. If you need millisecond precision, use `DATETIME2(3)` (introduced in SQL Server **2008**) as your datatype instead.
You have selected different values. The first value is `34.69` and the second value is `34:69`. You can see that both values differ, so the output differs.
millisecond in sql tsql
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I want to create a **script** for my **Oracle DB** which **drops tables**. If a table does not exist, the script shouldn't fail; it should just print a text: "*does not exist*". The script is the following:

```
BEGIN
   EXECUTE IMMEDIATE 'DROP TABLE mytable';
   DBMS_Output.Put_Line(' table dropped');
EXCEPTION
   WHEN OTHERS THEN
      IF SQLCODE = -942 THEN
         DBMS_Output.Put_Line(' table not exists');
      ELSE
         DBMS_Output.Put_Line(' Unknown exception while dropping table');
         RAISE;
      END IF;
END;
```

I want to drop a lot of tables in one script, and I don't want to write these lines more than once. Is there any way to write this as a **procedure or function** which takes a parameter (the name of the table), and call that procedure in the script? Maybe something like this:

```
drop_table_procedure('mytableA');
drop_table_procedure('mytableB');
```

Or maybe a procedure which takes a variable-length list (like in Java: **String ... tableNames**):

```
drop_tables_procedure('mytableA','mytableB');
```

Please give me some examples. Thanks!
Yes, you can declare a "temporary" procedure in an anonymous PL/SQL block: ``` DECLARE PROCEDURE drop_if_exists(p_tablename VARCHAR) IS BEGIN EXECUTE IMMEDIATE 'DROP TABLE '||p_tablename; DBMS_Output.Put_Line(' table dropped'); EXCEPTION WHEN OTHERS THEN IF SQLCODE = -942 THEN DBMS_Output.Put_Line(' table not exists'); ELSE DBMS_Output.Put_Line(' Unknown exception while dropping table'); RAISE; END IF; END; BEGIN drop_if_exists('TABLE_1'); drop_if_exists('TABLE_2'); END; / ```
In `execute immediate` you need to add the name of the database object. Here's the script:

```
create table t1 (col1 int);
create table t2 (col1 int);

create procedure drop_my_table(av_name varchar2) as
begin
  EXECUTE IMMEDIATE 'DROP TABLE '||av_name;
  DBMS_Output.Put_Line(' table dropped');
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -942 THEN
      DBMS_Output.Put_Line(' table not exists');
    ELSE
      DBMS_Output.Put_Line(' Unknown exception while dropping table');
      RAISE;
    END IF;
end drop_my_table;

declare
  type array_t is varray(2) of varchar2(30);
  atbls array_t := array_t('t1', 't2');
begin
  for i in 1..atbls.count loop
    drop_my_table(atbls(i));
  end loop;
end;
```
How to create a procedure in an oracle sql script and use it inside the script?
[ "", "sql", "oracle", "function", "procedure", "sql-scripts", "" ]
At least with MariaDB v10.x, why don't truth clauses with left joins work as I expect when there is a `NULL` return value? The below works:

```
SELECT u.id FROM user u
INNER JOIN role r on r.user = u.id
INNER JOIN customer c ON c.id = r.customer
LEFT JOIN customer_subclass cs ON cs.customer = c.id
WHERE u.status = 'NEW'
AND (cs.code != 4 OR cs.code IS NULL)
```

but when I first tried

```
WHERE u.status = 'NEW'
AND cs.code != 4
```

it didn't work when `cs.code` was `NULL`. Why do I have to specifically test against `NULL` itself? I would assume `NULL != 4`?
The thing is that the engine is built upon three-valued predicate logic. If a predicate compares two non-null values then it can be evaluated to `TRUE` or `FALSE`. If at least one of them is `NULL` then the predicate evaluates to the third logical value - `UNKNOWN`.

Now what happens in the `WHERE` clause? It is designed in such a way that it returns only rows where the predicate evaluates to `TRUE`! If the predicate evaluates to `FALSE` or `UNKNOWN` then the corresponding row is just filtered out of the resultset.

At first this is very confusing and leads newcomers to the world of `SQL` into several typical mistakes. They just don't think that data may contain `NULL`s. A classic mistake is, for example:

```
Employees(Name varchar, Country varchar)

'John', 'USA'
'Peter', NULL
'Mike', 'England'
```

And you want all rows where `Country` is not `USA`. And you just write:

```
select *
from Employees
where Country <> 'USA'
```

and get only:

```
'Mike', 'England'
```

as a result. This is very confusing at first glance, but as soon as you understand that the engine is doing three-valued logic, the result is logical.
Because comparing with `null` results in neither `true` nor `false`. It is **unknown**. And that is true for all DB engines I know. The `is` operator handles especially `null` values which you already use.
Left join weirdness
[ "", "sql", "left-join", "" ]
I am trying to create a query that displays the average attendance by conference when at least one team was in a game. [Relationships](https://i.stack.imgur.com/5FI8c.png) [![enter image description here](https://i.stack.imgur.com/5FI8c.png)](https://i.stack.imgur.com/5FI8c.png) this is very close to what im looking for ``` SELECT Conference.ConferenceName, AVG(Game.Attendance) AS AVG_ATT FROM ( Conference INNER JOIN School ON Conference.[ConferenceID] = School.[ConferenceID] ) INNER JOIN Game ON ( School.[SchoolID] = Game.[Team1] OR School.[SchoolID] = Game.[Team2] ) GROUP BY Conference.ConferenceName; ``` the problem is if a game has 2 teams from the same conference it adds the attendance twice, and should only do it once. consider 2 games ## game1 ``` Team1- Wisconsin Conference - BIG10 Team2 - Michigan Conference - BIG10 Attendance - 100,000 ``` ## game2 ``` Team1- Wisconsin Conference - BIG10 Team2 - USC Conference - PAC12 Attendance - 65,000 ``` ## Results ``` BIG10-correct 82,500 PAC12 65,000 BIG10-Actual 88,333 ```
Get a distinct list of the games by conference in a derived query, then do your average. ``` SELECT ConferenceName, AVG(Attendance) AS AVG_ATT FROM ( SELECT DISTINCT GameID, Conference.ConferenceName, Game.Attendance FROM ( Conference INNER JOIN School ON Conference.[ConferenceID] = School.[ConferenceID] ) INNER JOIN Game ON ( School.[SchoolID] = Game.[Team1] OR School.[SchoolID] = Game.[Team2] ) ) DerivedDistinctGamesAndConferences GROUP BY ConferenceName; ```
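As a quick sanity check, the derived-distinct query above can be run against the two example games from the question using an in-memory SQLite database in Python (the Access-style bracket quoting is dropped; schema names are assumed from the question's diagram):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Conference (ConferenceID INTEGER, ConferenceName TEXT);
CREATE TABLE School (SchoolID INTEGER, ConferenceID INTEGER);
CREATE TABLE Game (GameID INTEGER, Team1 INTEGER, Team2 INTEGER, Attendance INTEGER);
INSERT INTO Conference VALUES (1, 'BIG10'), (2, 'PAC12');
INSERT INTO School VALUES (1, 1), (2, 1), (3, 2);  -- Wisconsin, Michigan, USC
INSERT INTO Game VALUES (1, 1, 2, 100000), (2, 1, 3, 65000);
""")
rows = conn.execute("""
SELECT ConferenceName, AVG(Attendance) AS AVG_ATT
FROM (SELECT DISTINCT GameID, Conference.ConferenceName, Game.Attendance
      FROM Conference
      JOIN School ON Conference.ConferenceID = School.ConferenceID
      JOIN Game ON School.SchoolID = Game.Team1 OR School.SchoolID = Game.Team2
     ) AS DerivedDistinctGamesAndConferences
GROUP BY ConferenceName
""").fetchall()
result = {name: avg for name, avg in rows}
```

The DISTINCT inner query collapses game 1 (two BIG10 schools) to a single row, so BIG10 averages to 82,500 as the question expects.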
Use `UNION` to discard duplicates first before doing the average ``` SELECT Game.GameID, Game.Attendance, Conference.ConferenceName FROM ( Conference INNER JOIN School ON Conference.[ConferenceID] = School.[ConferenceID] ) INNER JOIN Game ON ( School.[SchoolID] = Game.[Team1] -- TEAM 1 ) UNION SELECT Game.GameID, Game.Attendance, Conference.ConferenceName FROM ( Conference INNER JOIN School ON Conference.[ConferenceID] = School.[ConferenceID] ) INNER JOIN Game ON ( School.[SchoolID] = Game.[Team2] -- TEAM 2 ) ``` So you will get `query1 union query2` ``` GameID Attendance Conference 1 100,000 BIG10 < one row will disappear 2 80,000 BIG10 1 100,000 BIG10 < after union 2 80,000 PAC12 ``` Then you calculate the average over this result
SQL MS Access join help average not correct
[ "", "sql", "ms-access", "join", "average", "" ]
I have a table in Oracle which contains : ``` id | month | payment | rev ---------------------------- A | 1 | 10 | 0 A | 2 | 20 | 0 A | 2 | 30 | 1 A | 3 | 40 | 0 A | 4 | 50 | 0 A | 4 | 60 | 1 A | 4 | 70 | 2 ``` I want to calculate the payment column `(SUM(payment))`. For `(id=A month=2)` and `(id=A month=4)`, I just want to take the greatest value from `REV` column. So that the sum is `(10+30+40+70)=150`. How to do it?
This presupposes you don't have more than one value per rev. If that's not the case, then you probably want a `row_number` analytic instead of `max`. ``` with latest as ( select id, month, payment, rev, max (rev) over (partition by id, month) as max_rev from table1 ) select sum (payment) from latest where rev = max_rev ```
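The same pattern can be checked against the question's sample rows with SQLite (3.25+ for window functions, bundled with recent Python) standing in for Oracle; the CTE syntax is identical here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id TEXT, month INTEGER, payment INTEGER, rev INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)", [
    ("A", 1, 10, 0), ("A", 2, 20, 0), ("A", 2, 30, 1), ("A", 3, 40, 0),
    ("A", 4, 50, 0), ("A", 4, 60, 1), ("A", 4, 70, 2),
])
# Keep, per (id, month), only the row whose rev equals the group maximum,
# then sum the surviving payments: 10 + 30 + 40 + 70 = 150.
total = conn.execute("""
WITH latest AS (
    SELECT id, month, payment, rev,
           MAX(rev) OVER (PARTITION BY id, month) AS max_rev
    FROM table1)
SELECT SUM(payment) FROM latest WHERE rev = max_rev
""").fetchone()[0]
```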
You can also use below. ``` select id,sum(payment) as value from ( select id,month,max(payment) from table1 group by id,month ) group by id ``` `Edit`: for checking greatest rev value ``` select id,sum(payment) as value from ( select id,month,rev,payment ,row_number() over (partition by id,month order by rev desc) as rno from table1 ) where rno=1 group by id ```
Use SUM function in oracle
[ "", "sql", "oracle", "sum", "analytics", "" ]
I have a data frame like below ``` col1 col2 col3 A Z 10 A Y 8 A Z 15 B X 11 B Z 7 C Y 10 D Z 11 D Y 14 D L 16 ``` I have to select, for each `distinct col1` which of the `col2` have `max(col3)` Output data frame should look like, ``` col1 col2 col3 A Z 15 B X 11 C Y 10 D L 16 ``` How to do this either in `R` or in `SQL` Thanks in advance
We can use `data.table`. We convert the 'data.frame' to 'data.table' (`setDT(df1)`), grouped by 'col1', we subset the data.table (`.SD`) based on the index of max value of 'col3' ``` library(data.table) setDT(df1)[, .SD[which.max(col3)], col1] # col1 col2 col3 #1: A Z 15 #2: B X 11 #3: C Y 10 #4: D L 16 ``` --- Or we can use `top_n` from `dplyr` after grouping by 'col1'. ``` library(dplyr) df1 %>% group_by(col1) %>% top_n(1) ```
Another way of doing in MySQL. Here is the [**SQLFiddle Demo**](http://sqlfiddle.com/#!9/d631f/1) **Output** : =>[![enter image description here](https://i.stack.imgur.com/Utli4.png)](https://i.stack.imgur.com/Utli4.png) ``` SELECT T1.* FROM table_name T1 INNER JOIN (SELECT col1,MAX(col3) AS Max_col3 FROM table_name GROUP BY col1) T2 ON T1.`col1` = T2.`col1` and T2.`Max_col3`=t1.`col3` ``` Hope this helps.
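For reference, the join-on-groupwise-max pattern above can be verified on the question's data with an in-memory SQLite database (the table name `t` is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("A", "Z", 10), ("A", "Y", 8), ("A", "Z", 15),
    ("B", "X", 11), ("B", "Z", 7), ("C", "Y", 10),
    ("D", "Z", 11), ("D", "Y", 14), ("D", "L", 16),
])
# Join each row back to its group's maximum col3, keeping only the max rows.
rows = conn.execute("""
SELECT t1.col1, t1.col2, t1.col3
FROM t t1
JOIN (SELECT col1, MAX(col3) AS max_col3 FROM t GROUP BY col1) t2
  ON t1.col1 = t2.col1 AND t1.col3 = t2.max_col3
ORDER BY t1.col1
""").fetchall()
```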
How to find the distinct of one column based on other columns
[ "", "mysql", "sql", "r", "groupwise-maximum", "" ]
I have a query like : ``` SELECT * FROM mytable WHERE (`field1` LIKE '%search%' OR `field2` LIKE '%search%' OR `field3` LIKE '%search%') ``` Keyword `search` can be any other words. How can i set priority to this query ? I want to search first in `field1`, after in `field2` and then in `field3`. So if I find my keyword `search` in field1, row with `search` in `field1` must be in first in my results set. How can i do that ?
You can do it with order by: ``` SELECT * FROM mytable WHERE (`field1` LIKE '%search%' OR `field2` LIKE '%search%' OR `field3` LIKE '%search%') ORDER BY CASE WHEN `field1` LIKE '%search%' THEN 1 WHEN `field2` LIKE '%search%' THEN 2 WHEN `field3` LIKE '%search%' THEN 3 ELSE 4 END ```
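A minimal check of the ORDER BY CASE technique, using SQLite from Python with made-up rows (one match per field so the priority is visible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, field1 TEXT, field2 TEXT, field3 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?)", [
    (1, "x", "x", "search here"),   # matches only in field3
    (2, "x", "search here", "x"),   # matches only in field2
    (3, "search here", "x", "x"),   # matches only in field1
])
ids = [r[0] for r in conn.execute("""
SELECT id FROM mytable
WHERE field1 LIKE '%search%' OR field2 LIKE '%search%' OR field3 LIKE '%search%'
ORDER BY CASE
    WHEN field1 LIKE '%search%' THEN 1
    WHEN field2 LIKE '%search%' THEN 2
    WHEN field3 LIKE '%search%' THEN 3
    ELSE 4 END
""")]
```

Rows matching in `field1` come first, then `field2`, then `field3`.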
You'd better split query and union them. ``` select * from ( SELECT 1 as s, * FROM mytable WHERE ('field1' LIKE '%search%') union SELECT 2 as s, * FROM mytable WHERE ('field2' LIKE '%search%') union SELECT 3 as s, * FROM mytable WHERE ('field3' LIKE '%search%') ) a order by s ```
SQL priority condition
[ "", "sql", "" ]
I have a table like so. ``` ID Name Year 404 Matt 2015 406 Matt 1 2015 304 Matt 2 2015 334 Matt3 2015 655 Matt4 2015 923 Matt 5 2015 856 Banana 2015 274 Banana 2 2015 716 Banana7 2015 472 Orangutan 2015 844 OrangutanA 2015 159 Orangutan 2 2015 739 Big Foot 2015 866 Big Foot B 2015 273 Big Foot 5 2015 896 Big FootB 2015 ``` And I would like to add an extra column to this table which groups Names (ignoring numbers and letters after them) If it was just numbers after the name i would just remove numbers however some have letters a, b, c etc and some have letters and numbers! My expected output would be. ``` ID Name Year SName 404 Matt 2015 Matt 406 Matt 1 2015 Matt 304 Matt 2 2015 Matt 334 Matt3 2015 Matt 655 Matt4 2015 Matt 923 Matt 5 2015 Matt 856 Banana 2015 Banana 274 Banana 2 2015 Banana 716 Banana7 2015 Banana 472 Orangutan 2015 Orangutan 844 OrangutanA 2015 Orangutan 159 Orangutan 2 2015 Orangutan 739 Big Foot 2015 Big Foot 866 Big Foot B 2015 Big Foot 273 Big Foot 5 2015 Big Foot 896 Big FootB 2015 Big Foot ``` SQL Fiddle: <http://sqlfiddle.com/#!4/3e7c7> Input Script: ``` CREATE TABLE mytable ("ID" int, "Name" varchar2(11), "Year" int) ; INSERT ALL INTO mytable ("ID", "Name", "Year") VALUES (404, 'Matt', 2015) INTO mytable ("ID", "Name", "Year") VALUES (406, 'Matt 1', 2015) INTO mytable ("ID", "Name", "Year") VALUES (304, 'Matt 2', 2015) INTO mytable ("ID", "Name", "Year") VALUES (334, 'Matt3', 2015) INTO mytable ("ID", "Name", "Year") VALUES (655, 'Matt4', 2015) INTO mytable ("ID", "Name", "Year") VALUES (923, 'Matt 5', 2015) INTO mytable ("ID", "Name", "Year") VALUES (856, 'Banana', 2015) INTO mytable ("ID", "Name", "Year") VALUES (274, 'Banana 2', 2015) INTO mytable ("ID", "Name", "Year") VALUES (716, 'Banana7', 2015) INTO mytable ("ID", "Name", "Year") VALUES (472, 'Orangutan', 2015) INTO mytable ("ID", "Name", "Year") VALUES (844, 'OrangutanA', 2015) INTO mytable ("ID", "Name", "Year") VALUES (159, 'Orangutan 2', 2015) INTO mytable ("ID", "Name", "Year") 
VALUES (739, 'Big Foot', 2015) INTO mytable ("ID", "Name", "Year") VALUES (866, 'Big Foot B', 2015) INTO mytable ("ID", "Name", "Year") VALUES (273, 'Big Foot 5', 2015) INTO mytable ("ID", "Name", "Year") VALUES (896, 'Big FootB', 2015) SELECT * FROM dual ; ```
**If** the rule you're trying to apply is simply that you want to remove a single digit or uppercase character at the end of the value, with or without a single preceding space; and your real data is less ambiguous than speculated in comments and the complications and edge cases really don't apply; then you can use a fairly simple regular expression to remove a pattern: ``` select "ID", "Name", "Year", regexp_replace("Name", '[ ]?[[:upper:][:digit:]]$', null) as "SName" from mytable; ID Name Year SName ---------- ----------- ---------- -------------------------------------------------- 404 Matt 2015 Matt 406 Matt 1 2015 Matt 304 Matt 2 2015 Matt 334 Matt3 2015 Matt 655 Matt4 2015 Matt 923 Matt 5 2015 Matt 856 Banana 2015 Banana 274 Banana 2 2015 Banana 716 Banana7 2015 Banana 472 Orangutan 2015 Orangutan 844 OrangutanA 2015 Orangutan 159 Orangutan 2 2015 Orangutan 739 Big Foot 2015 Big Foot 866 Big Foot B 2015 Big Foot 273 Big Foot 5 2015 Big Foot 896 Big FootB 2015 Big Foot ``` [SQL Fiddle](http://sqlfiddle.com/#!4/3e7c7/3). Or as @LalitKumarB suggested, use a virtual column: ``` alter table mytable add ("SName" varchar2(11) as (cast(regexp_replace("Name", '[ ]?[[:upper:][:digit:]]$', null) as varchar2(11)))); ``` [SQL Fiddle](http://sqlfiddle.com/#!4/9ef06/1). The `cast()` is needed because the string returned by `regexp_replace()` isn't size-constrained - it doesn't necessarily know about the limit on the size of the input value, and you could be making it longer - and could be up to 4000 characters (or 32k in 12c); so the `alter` would get ORA-12899 without it. These are getting the required result for the sample data you provided, but if your rule is really more complicated and you have more complicated data, then it would need to be amended - or a completely different approach might be needed (recursion, model clause, ... depending on the full requirement).
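The same pattern can be sanity-checked outside the database with Python's `re` module, with `[A-Z0-9]` standing in for Oracle's `[[:upper:][:digit:]]` class:

```python
import re

# Strip an optional space plus a single trailing digit or uppercase letter,
# mirroring the regexp_replace pattern from the answer above.
names = ["Matt", "Matt 1", "Matt3", "Big Foot", "Big Foot B", "Big FootB", "OrangutanA"]
sname = [re.sub(r" ?[A-Z0-9]$", "", n) for n in names]
```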
The following query builds triples from the ids of records containing the base and the derived names and the short name (base and derived in terms of prefixes of "Name" values as defined below). The Assumptions (and these are restrictive !) are that: * any pair of derived Names have a common prefix * that common prefix does not have a common prefix with any other db entry * there is a db record with the common prefix itself. * for each base record there is at least 1 derived record. 'prefix' means true prefix. ``` select derived.ID did , base.ID bid , base."Name" SName from mytable base join mytable derived on ( INSTR(derived."Name", base."Name" ) = 1 and derived."Name" <> base."Name" ) ; ``` **Update** A full-fledged query providing the result set columns the OP asked for and including the base cases follows (the same caveats apply): ``` select derived.ID OrigID , derived."Name" OrigName , base."Name" SName , base."Year" OrigYear from mytable base join mytable derived on ( INSTR(derived."Name", base."Name" ) = 1 and derived."Name" <> base."Name" ) union all select base.ID OrigID , base."Name" OrigName , base."Name" SName , base."Year" OrigYear from mytable base where base.ID IN ( select distinct b.ID from mytable b join mytable d on ( INSTR(d."Name", b."Name" ) = 1 and d."Name" <> b."Name" ) ) ; ```
SQL Give fields a single name when there are multiple variances with numbers
[ "", "sql", "oracle", "" ]
I have the below Oracle command to insert a row, but before that I am supposed to check whether that row exists in the table, and if it exists, do nothing. ``` IF NOT EXISTS(SELECT * FROM table1 WHERE Value1 ='Value 1' AND Value2='Value 2' AND Value3='Value 3' AND Value4='Value 4') BEGIN INSERT INTO table1 ( Value1, Value2, Value3, Value4, Value5 ) VALUES ( 'Value 1', 'Value 2', 'Value 3', 'Value 4', 'Value 5' ) END; ``` I feel that the above statement should hold good; however, it throws > **Error starting at line : 7 in command - IF NOT EXISTS(SELECT \* FROM > table1 WHERE Value1='Value 1' AND > Value2='Value 2' AND Value3='Value 3' AND > Value4='Value 4') Error report - Unknown Command** The error thrown is not much help in digging into this problem, and I am a newbie at this. I hope to find some help on this.
The simplest possible approach (not the most efficient) would be to do something like ``` DECLARE l_cnt pls_integer; BEGIN SELECT COUNT(*) INTO l_cnt FROM table_name WHERE col1 = 'value1' AND col2 = 'value2' ... IF( l_cnt = 0 ) THEN <<your insert>> END IF; END; ``` You can do the `insert`, catch the exception, and ignore it (I assume there is a unique constraint that would prevent duplicate rows otherwise you'll have a serious race condition no matter how you implement the check) ``` BEGIN <<your insert>> EXCEPTION WHEN dup_val_on_index THEN NULL; -- Ignore the exception, we don't have to insert the row END; ``` Or you could do a `merge`. Assuming that you're on 10g, the `WHEN MATCHED` clause becomes optional ``` MERGE INTO table_name dest USING( SELECT 'value 1' col1, 'value 2' col2, 'value 3' col3, 'value 4' col4 FROM dual ) src ON( src.col1 = dest.col1 and src.col2 = dest.col2 and src.col3 = dest.col3 and src.col4 = dest.col4 ) WHEN NOT MATCHED THEN INSERT( col1, col2, col3, col4 ) VALUES( src.col1, src.col2, src.col3, src.col4 ); ``` Or you could do an `insert` ``` INSERT INTO table_name( col1, col2, col3, col4 ) SELECT 'value1', 'value2', 'value3', 'value4' FROM dual WHERE NOT EXISTS( SELECT 1 FROM table_name WHERE col1 = 'value1' AND col2 = 'value2' AND col3 = 'value3' AND col4 = 'value4' ); ```
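The last variant (`INSERT ... SELECT ... WHERE NOT EXISTS`) can be exercised quickly with SQLite from Python; note SQLite allows `SELECT` without a `FROM`, where Oracle needs `FROM dual`, and the column/value names are the placeholder ones from the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 TEXT, col2 TEXT)")

def insert_if_absent(c1, c2):
    # Insert the row only when no identical row exists yet.
    conn.execute("""
        INSERT INTO table1 (col1, col2)
        SELECT ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM table1 WHERE col1 = ? AND col2 = ?)
    """, (c1, c2, c1, c2))

insert_if_absent("value1", "value2")
insert_if_absent("value1", "value2")   # second call is a no-op
count = conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0]
```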
I usually use [MERGE](https://oracle-base.com/articles/9i/merge-statement) for this kind of problem. It is pure SQL, simple and straightforward. In your example, it would be like this: ``` MERGE INTO table1 T1 USING (SELECT 'Value 1' Value1, 'Value 2' Value2, 'Value 3' Value3, 'Value 4' Value4 FROM DUAL) T2 ON (T1.Value1 = T2.Value1 AND T1.Value2 = T2.Value2 AND T1.Value3 = T2.Value3 AND T1.Value4 = T2.Value4) WHEN NOT MATCHED THEN INSERT (Value1, Value2, Value3, Value4) VALUES (T2.Value1, T2.Value2, T2.Value3, T2.Value4); ```
If Not Exists Insert fails with error Unknown command for If Not Exists statement
[ "", "sql", "oracle", "" ]
I am new to SQL in general and to Oracle in particular. I created a collection of dates => **dates**, consisting of elements **date(i)**. I have a table **table** with columns **date** and **revenue**. When I run the query ``` select sum(revenue) from table where date between date(i) and date(i+1) ``` I can't get the correct number => the value is less than the correct one. When I run ``` select sum(revenue) from table where date between to_date(date(i),'dd-mm-yyyy') and to_date(date(i+1),'dd-mm-yyyy') ``` it's a different story => I receive a null value. Only ``` select sum(revenue) from table where date>=date(i) and date<=date(i+1) ``` works well. But why?
The variables in your collection are already dates and you're converting them again into dates (that's something you should never do). When you convert a DATE into a DATE, the value is first converted into a string by Oracle using the default date format mask. You can see yours with: ``` SELECT value FROM nls_session_parameters WHERE parameter = 'NLS_DATE_FORMAT' ``` In my case, my date format is `DD-MM-RR`. So, if I convert the date 01-01-2015 into a date again, it would get converted like this: ``` 1. It will be implicitly converted into a string using the DD-MM-RR mask. I would get 01-01-15. 2. Then, it will convert the string 01-01-15 into a date using the mask you provide (dd-mm-yyyy). In this case, to_date('01-01-15','dd-mm-yyyy') will return 01-01-0015. ``` That might be why you're not getting any results in your query: ``` select sum(revenue) from table where date between to_date(date(i),'dd-mm-yyyy') and to_date(date(i+1),'dd-mm-yyyy') ``` If you used the 'dd-mm-rrrr' mask instead, it would convert the string 01-01-15 into 01-01-2015 and your query might return something. I don't know, it will depend on the dates in your collection.
Why do you convert a `DATE` value into a `DATE` value? When you do `to_date(date(i),'dd-mm-yyyy')` then the following happens: 1. Date value `date(i)` is converted implicitly to a string using the current `NLS_DATE_FORMAT` format. 2. Then this string is converted back to a `DATE` value using the format `dd-mm-yyyy` So, if **by chance** your current `NLS_DATE_FORMAT` format is equal to `dd-mm-yyyy` then your query will work. Or, to make it work, perform `ALTER SESSION SET NLS_DATE_FORMAT = 'dd-mm-yyyy';` before you run your query. However, it is a useless idea to convert a DATE value into a DATE value.
Oracle SQL - how dates are compared
[ "", "sql", "oracle", "date", "comparison", "" ]
Currently my table structure for table SHIFT is as follows: ``` ID Name Start End 1 Shift1 06:00 14:00 2 Shift2 14:00 22:00 3 Shift3 22:00 06:00 ``` Now I pass a parameter to this query as an hour, like 11 or 15 or 22 or 03. For that parameter, I would like to get the shift in which the passed hour resides. So if I pass 11, it should give me Shift1. If I pass 23, it should give me Shift3. The following query that I wrote works fine for any value from 07 to 21, but for the rest it gives me a blank value, for obvious reasons. ``` select * from MII_SHIFT where '[Param.1]' >= left(START,2) and '[Param.1]' < left(END,2) ``` Can anyone help me change the query so that I can get a proper response for 22,23,00,01,02,03,04,05? Thanks
``` SELECT * FROM shift WHERE ( left(START,2) > left(END,2) AND ('[Param.1]' >= left(START,2) OR '[Param.1]' < left(END,2)) ) OR ( left(START,2) < left(END,2) AND '[Param.1]' >= left(START,2) AND '[Param.1]' < left(END,2) ) ``` [![enter image description here](https://i.stack.imgur.com/VuaPT.png)](https://i.stack.imgur.com/VuaPT.png) I answer a similar answer a litle time ago. * Shorts `start < end` (5-9): the value need be between start and end * Jacket `start > end` (10-4): the value is `< start` or `> end`
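The wrap-around logic in the query above can be sketched in plain Python: a shift that crosses midnight (`start > end`) matches hours that are `>= start` OR `< end`, while a normal shift matches `start <= hour < end`:

```python
# Shift table from the question, with hours as plain integers.
shifts = [("Shift1", 6, 14), ("Shift2", 14, 22), ("Shift3", 22, 6)]

def shift_for(hour):
    for name, start, end in shifts:
        if start > end:                      # shift crosses midnight
            if hour >= start or hour < end:
                return name
        elif start <= hour < end:            # normal daytime shift
            return name
    return None
```

So 11 lands in Shift1, 23 and 03 land in Shift3, matching the question's expectation.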
Simplest way is most likely to convert the times into dates, and if the end date is earlier than start, then add one day. You could use time datatype as input too, instead of just hour, but this is now an example with int: ``` declare @hour int, @date datetime set @hour = 3 set @date = convert(datetime, convert(varchar(2), @hour) + ':00', 108) select Name from ( select Name, [Start] as Start1, case when [End] < [Start] then dateadd(day, 1, [End]) else [End] End as End1, case when [End] < [Start] then dateadd(day, -1, [Start]) else [Start] End as Start2, [End] as End2 from ( select Name, convert(datetime, [Start], 108) as [Start], convert(datetime, [End], 108) as [End] from Table1 ) X ) Y where ((Start1 <= @date and @date < End1) or (Start2 <= @date and @date < End2)) ``` Edit: added 2nd start / end columns to the derived table to handle second part of the shift. Example in [SQL Fiddle](http://sqlfiddle.com/#!3/2b965/6)
Difficulty in getting Shift Value in SQL Query
[ "", "sql", "sql-server", "" ]
I have a table listing student grade, ``` AG T1 T2 T3 L0011001 A B A L0011002 A B B L0011003 A A C L0011004 A A C ``` I want output for AG like this: ``` L0011001 2A 1B L0011002 1A 2B L0011003 2A 1C L0011004 2A 1C ``` How to get this?
Your data is in a really, really bad format. You should have one row per "T" and per student. However, sometimes we are stuck with bad data formats. You can do what you want with `iif()`: ``` select ag, (iif(t1 = 'A', 1, 0) + iif(t2 = 'A', 1, 0) + iif(t3 = 'A', 1, 0)) as A_s, (iif(t1 = 'B', 1, 0) + iif(t2 = 'B', 1, 0) + iif(t3 = 'B', 1, 0)) as B_s, (iif(t1 = 'C', 1, 0) + iif(t2 = 'C', 1, 0) + iif(t3 = 'C', 1, 0)) as C_s from t; ``` This doesn't do exactly what you want. It puts the values into separate columns -- a format that makes more sense to me. For your specific format: ``` select ag, (iif(A_s > 0, A_s & "A ", "") & iif(B_s > 0, B_s & "B ", "") & iif(C_s > 0, C_s & "C ", "") ) from (select ag, (iif(t1 = 'A', 1, 0) + iif(t2 = 'A', 1, 0) + iif(t3 = 'A', 1, 0)) as A_s, (iif(t1 = 'B', 1, 0) + iif(t2 = 'B', 1, 0) + iif(t3 = 'B', 1, 0)) as B_s, (iif(t1 = 'C', 1, 0) + iif(t2 = 'C', 1, 0) + iif(t3 = 'C', 1, 0)) as C_s from t ) as x ```
You might like to consider a union query and a crosstab, the output is not exactly as you wish, but it does make sense. ``` TRANSFORM Count(qry.ag) AS countofag SELECT qry.ag FROM (SELECT tbl.ag, tbl.t1 FROM tbl UNION ALL SELECT tbl.ag, tbl.t2 FROM tbl UNION ALL SELECT tbl.ag, tbl.t3 FROM tbl) AS qry GROUP BY qry.ag PIVOT qry.t1; ```
Group by and multiple colum count and sum
[ "", "sql", "ms-access", "count", "sum", "" ]
I am trying to figure out this question on a practice page online with the following tables: **Question:** For all cases in which the same customer rated the same product more than once, and at some point in time gave it a lower rating than before, return the customer name, the name of the product, and the lowest star rating that was given. I can't seem to figure out why this isn't correct - would anyone be able to help?
Here is what I have so far (without sample data): ``` SELECT Customer.customer_name, Product.product_name, MIN(Rating.rating_stars) FROM Rating JOIN Product ON Rating.prod_id = Product.prod_id JOIN Customer ON Rating.cust_id = Customer.cust_id GROUP BY Customer.customer_name, Product.product_name HAVING COUNT(Product.prod_id) > 1 ```
This query will return the minimum rating stars of a product that has been reviewed more than once by the same customer, with any of the newer ratings lower than an older rating: ``` SELECT r1.prod_id, r1.cust_id, MIN(r1.rating_star) AS min_rating FROM rating r1 INNER JOIN rating r2 ON r1.prod_id=r2.prod_id AND r1.cust_id=r2.cust_id AND r1.rating_date>r2.rating_date AND r1.rating_star<r2.rating_star GROUP BY r1.prod_id, r1.cust_id ``` you can then join this query with the products and customers tables: ``` SELECT customer.customer_name, product.product_name, m.min_rating FROM ( SELECT r1.prod_id, r1.cust_id, MIN(r1.rating_star) AS min_rating FROM rating r1 INNER JOIN rating r2 ON r1.prod_id=r2.prod_id AND r1.cust_id=r2.cust_id AND r1.rating_date>r2.rating_date AND r1.rating_star<r2.rating_star GROUP BY r1.prod_id, r1.cust_id) m INNER JOIN customer ON m.cust_id = customer.cust_id INNER JOIN product ON m.prod_id = product.prod_id ```
MySQL nested selection
[ "", "mysql", "sql", "database", "psql", "" ]
I am selecting the column names from a table with the following query. ``` SELECT column_name FROM all_tab_cols WHERE table_name = 'TBL1' AND column_name like 'INFORMATION%' order by column_name ``` The resultset is like ``` INFORMATION1 INFORMATION10 INFORMATION11 INFORMATION12 . . . INFORMATION2 INFORMATION20 ``` Now, I want it to be sorted like ``` INFORMATION1 INFORMATION2 INFORMATION3 INFORMATION4 . . . INFORMATION19 INFORMATION20 ``` How to achieve this without being much cpu-intense?? Thanks in advance!
Order first by string length, then by string: ``` SELECT column_name FROM all_tab_cols WHERE table_name = 'TBL1' AND column_name LIKE 'INFORMATION%' ORDER BY LENGTH(column_name), column_name; ``` `SqlFiddleDemo`
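The length-then-name ordering is easy to confirm with SQLite from Python (SQLite's `LENGTH()` behaves like Oracle's here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cols (column_name TEXT)")
conn.executemany("INSERT INTO cols VALUES (?)",
                 [("INFORMATION10",), ("INFORMATION2",),
                  ("INFORMATION1",), ("INFORMATION11",)])
# Shorter names (one-digit suffix) sort before longer ones (two-digit suffix);
# ties are broken alphabetically.
names = [r[0] for r in conn.execute(
    "SELECT column_name FROM cols ORDER BY LENGTH(column_name), column_name")]
```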
> How to achieve this without being much cpu-intense? **REGEX** is CPU intensive and slower as compared to old **SUBSTR**. Use **SUBSTR** to get the digit part and use it in the **ORDER BY**. Also, since you are filtering rows only with `INFORMATION` string, you need only one ORDER BY on the digit part. ``` SELECT column_name FROM all_tab_cols WHERE table_name = 'TBL1' AND column_name like 'INFORMATION%' ORDER BY TO_NUMBER(SUBSTR(column_name, LENGTH('INFORMATION') +1)); ``` You can hard-code the **LENGTH** as `12`. ``` ORDER BY TO_NUMBER(SUBSTR(column_name, 12)) ``` Here is the **[SQL Fiddle demo](http://sqlfiddle.com/#!4/94a448/8)**.
Alphanumeric sorting in Oracle
[ "", "sql", "database", "oracle", "sorting", "" ]
Let's say I have a table t1 with only one column: `id`, and I have a table t2 with two columns: `id` and `Memo`. I need to select those `id` from t1, for which there is NO row in t2 that satisfies both of the following two conditions `t1.id = t2.id` and `t2.Memo = 'myText'`. How can I do that? I have tried using `join`, but that selects row that do satisfy some conditions, whereas I need the opposite.
``` SELECT * FROM t1 WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.id = t1.id AND t2.Memo = 'myText') ```
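A runnable sketch of this NOT EXISTS anti-join, using SQLite from Python with small made-up data (id 1 has the matching memo, id 2 has a different memo, id 3 has no row at all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER);
CREATE TABLE t2 (id INTEGER, Memo TEXT);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (1, 'myText'), (2, 'other');
""")
# Keep t1 rows for which no t2 row satisfies BOTH conditions.
ids = [r[0] for r in conn.execute("""
SELECT id FROM t1
WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.id = t1.id AND t2.Memo = 'myText')
ORDER BY id
""")]
```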
One way to do it is using `LEFT JOIN`: ``` select id from t1 left join t2 on t1.id = t2.id and t2.Memo = 'myText' where t2.id is null ```
SQL: select from t1 all rows for which there's no corresponding in t2
[ "", "sql", "" ]
I have dates and I want to retrieve periods of dates. I have table LeaveMaster: ``` ID LeaveDate 1 23/09/2015 1 24/09/2015 1 25/09/2015 1 27/09/2015 1 29/09/2015 1 30/09/2015 1 01/10/2015 1 02/10/2015 1 04/10/2015 ``` The result should be: ``` ID StartDate EndDate 1 23/09/2015 25/09/2015 1 27/09/2015 27/09/2015 1 29/09/2015 02/10/2015 1 04/10/2015 04/10/2015 ``` How to solve this ?
The trick to find consecutive values is to build a group key by subtracting the records position from the value. Then group by that key and show its min and max value. ``` select id, min(leavedate) as startdate, max(leavedate) as enddate from ( select id, leavedate, dateadd(day, - row_number() over (partition by id order by leavedate), leavedate) as groupkey from leavemaster ) dates group by id, groupkey, datepart(year, leavedate), datepart(month, leavedate); ``` In your example you show that you also want a new range for a new month, so I added `datepart(year, leavedate), datepart(month, leavedate)` to the GROUP BY clause. Remove this, if it was just a mistake. Execute the inner query alone to see how it works.
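The row_number trick can be tried out on the question's dates with SQLite (3.25+) from Python; `julianday()` minus the row number plays the role of `DATEADD`, and no month split is applied, so 29 Sep - 02 Oct stays one range as in the question's expected output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LeaveMaster (ID INTEGER, LeaveDate TEXT)")
conn.executemany("INSERT INTO LeaveMaster VALUES (1, ?)", [
    ("2015-09-23",), ("2015-09-24",), ("2015-09-25",), ("2015-09-27",),
    ("2015-09-29",), ("2015-09-30",), ("2015-10-01",), ("2015-10-02",),
    ("2015-10-04",),
])
# Consecutive dates share the same (date - position) group key.
rows = conn.execute("""
SELECT ID, MIN(LeaveDate) AS StartDate, MAX(LeaveDate) AS EndDate
FROM (SELECT ID, LeaveDate,
             julianday(LeaveDate)
               - ROW_NUMBER() OVER (PARTITION BY ID ORDER BY LeaveDate) AS grp
      FROM LeaveMaster) AS d
GROUP BY ID, grp
ORDER BY MIN(LeaveDate)
""").fetchall()
```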
You can use the following query: ``` SELECT ID, MIN(LeaveDate) AS StartDate, MAX(LeaveDate) AS EndDate FROM ( SELECT ID, LeaveDate, DATEDIFF(d, '19000101', LeaveDate) - ROW_NUMBER() OVER (ORDER BY LeaveDate) AS grp FROM mytable ) AS t GROUP BY ID, grp ``` `grp` is used to identify slices of rows with *consecutive* `LeaveDate` values. Using this field in a `GROUP BY` clause we can perform aggregate functions on each slice and get start / end dates. **Note:** This works fine for intervals related to just *consecutive* `LeaveDate` values. It doesn't identify interval splits due to month changes. [**Demo here**](http://sqlfiddle.com/#!6/766d0a/3)
Get Periods of dates from SQL Query
[ "", "sql", "sql-server", "database", "datetime", "" ]
Consider the simple table representing the history of a race: ``` laphistory(race_id integer,lap_number integer,pos_number integer, driver_label text,id integer NOT NULL,CONSTRAINT id PRIMARY KEY (id) ...) ``` Now, consider the following example: ``` race_id;lap_number;pos_number;driver_label;id 1 ;1 ;1 ;"Matheus" ;1 1 ;1 ;2 ;"Nicolas" ;2 1 ;1 ;3 ;"Diego" ;3 1 ;2 ;1 ;"Nicolas" ;4 1 ;2 ;2 ;"Diego" ;5 1 ;2 ;3 ;"Matheus" ;6 1 ;3 ;1 ;"Nicolas" ;7 1 ;3 ;2 ;"Diego" ;8 1 ;4 ;1 ;"Diego" ;9 ``` According to the instance above, the result of race 1 (ordering the position of the last lap for each driver) should be (we can ignore here the "pos\_number"): ``` race_id;lap_number;pos_number;driver_label;id 1 ;4 ;1 ;"Diego" ;9 1 ;3 ;1 ;"Nicolas" ;7 1 ;2 ;3 ;"Matheus" ;6 ``` Querying the BD using the simple query: ``` select * from laphistory order by lap_number DESC, pos_number ASC ``` Will bring the following result: ``` race_id;lap_number;pos_number;driver_label;id 1 ;4 ;1 ;"Diego" ;9 <<< 1 ;3 ;1 ;"Nicolas" ;7 <<< 1 ;3 ;2 ;"Diego" ;8 1 ;2 ;1 ;"Nicolas" ;4 1 ;2 ;2 ;"Diego" ;5 1 ;2 ;3 ;"Matheus" ;6 <<< 1 ;1 ;1 ;"Matheus" ;1 1 ;1 ;2 ;"Nicolas" ;2 1 ;1 ;3 ;"Diego" ;3 ``` It is correct if we consider the first occurrence of each driver.However, I don't know how to select it. Any ideas?
PostgreSQL's [`window functions`](http://www.postgresql.org/docs/9.4/static/tutorial-window.html) can be applied to get the desired output: ``` SELECT * FROM ( SELECT * ,row_number() OVER ( PARTITION BY driver_label ORDER BY lap_number DESC ) AS rank FROM race ) t WHERE rank = 1 ``` Or [`SQL-DISTINCT`](http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-DISTINCT) ``` SELECT DISTINCT ON (driver_label) * FROM race ORDER BY driver_label ASC ,lap_number DESC ```
Simple GROUP BY should do what you need: ``` select race_id, lap_number, min(id) from foo group by race_id, lap_number order by race_id asc, lap_number desc ```
SQL - Remove tuples from a select based on occurences on a specific column
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
I am using the Mondial database schema and am trying to find: For each country, find city that has the highest population and the city's population. Right now I have: ``` SELECT Country.Name, city.name, MAX(city.population) Population FROM city Join Country On Country.Code=City.Country WHERE city.population IS NOT NULL GROUP BY Country.Name, city.name ORDER BY Country.Name; ``` This gives me **ALL** of the cities in each country and their populations and not just the largest city.
Use analytical functions. Something like this should work (untested): ``` select country.name, city.name, city.population from country join ( select country, name, population, row_number() over ( partition by country order by population desc ) as rn from city ) city on city.country = country.code and city.rn = 1 order by country.name ```
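A sketch of the same row_number-per-country pattern, run with SQLite (3.25+) from Python on a couple of made-up cities (the Mondial schema columns are assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE country (code TEXT, name TEXT);
CREATE TABLE city (name TEXT, country TEXT, population INTEGER);
INSERT INTO country VALUES ('D', 'Germany'), ('F', 'France');
INSERT INTO city VALUES ('Berlin', 'D', 3400000), ('Hamburg', 'D', 1700000),
                        ('Paris', 'F', 2200000), ('Lyon', 'F', 480000);
""")
# Rank cities within each country by population; keep rank 1 only.
rows = conn.execute("""
SELECT country.name, city.name, city.population
FROM country
JOIN (SELECT country, name, population,
             ROW_NUMBER() OVER (PARTITION BY country ORDER BY population DESC) AS rn
      FROM city) city
  ON city.country = country.code AND city.rn = 1
ORDER BY country.name
""").fetchall()
```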
Don't know in oracle but if done in SQL Server it can be done like this: ``` Select * from (select Country.Name, city.name, city.population, ROW_NUMBER() over(partition by Country.Name order by Country.Name,city.population desc) RowNum from Country inner join city city on Country.Code=City.Country) tbl where RowNum = 1 ``` function similar to row\_number in oracle will help. Hope This help.
For each country, find city that has the highest population and the city's population
[ "", "sql", "database", "oracle", "" ]
I want to update a table column with row number. Each row in `empid` column should update with related row number. I tried following query. ``` UPDATE employee SET empid = row_number(); ``` But this is not working. Any idea?
First, this is not the correct syntax for the `row_number()` function, since you're missing the `over` clause (resulting in an ORA-30484 error). Even if it was, this would not work, as you cannot directly use window functions in a `set` clause (resulting in an ORA-30483 error). For this usecase, however, you could just use the `rownum` pseudo-column: ``` UPDATE employee SET empid = ROWNUM; ``` **[SQLFiddle](http://sqlfiddle.com/#!4/ac08f/1/0)**
You could do something like the following. You can change the ORDER BY order the rows if needed. ``` UPDATE emp SET empid = emp.RowNum FROM (SELECT empid, ROW_NUMBER() OVER (ORDER BY empid) AS rowNum FROM employee) emp ```
Update Oracle table column with row number
[ "", "sql", "oracle", "sql-update", "oracle10g", "" ]
Cannot truncate a table referenced in a foreign key constraint. What should I do first to enable truncate? Details: ``` ( `guacamole`.`guacamole_connection_history`, CONSTRAINT `guacamole_connection_history_ibfk_2` FOREIGN KEY (`connection_id`) REFERENCES `guacamole`.`guacamole_connection` (`connection_id`) ) ``` I want to clear the guacamole_connection table for development testing. [![DB structure](https://i.stack.imgur.com/Rx5zg.jpg)](https://i.stack.imgur.com/Rx5zg.jpg)
Why not add a constraint adding `ON DELETE CASCADE` and `ON UPDATE CASCADE`? Then all you need to do is `TRUNCATE guacamole_connection CASCADE` Example: ``` ALTER TABLE guacamole_connection_history ADD CONSTRAINT guacamole_connection_history_cascade_delete FOREIGN KEY (connection_id) REFERENCES guacamole_connection (connection_id) ON UPDATE CASCADE ON DELETE CASCADE; ``` Then just run `TRUNCATE guacamole_connection CASCADE`
You can do truncate by skipping foreign key checks. ``` SET FOREIGN_KEY_CHECKS = 0; TRUNCATE table1; SET FOREIGN_KEY_CHECKS = 1; ```
Cannot truncate a table referenced in a foreign key constraint
[ "", "mysql", "sql", "" ]
When I run my query I receive the following message: ``` Invalid column name 'column1' ``` Query: ``` SELECT TOP 1 * From ( (select Top 2 convert(datetime,column1,101) AS TEST from table1 ORDER BY column1 DESC) X ORDER BY column1) ```
``` SELECT TOP 1 TEST From ( SELECT Top 2 convert(datetime,column1,101) AS TEST, column1 -- <- include column1 FROM table1 ORDER BY column1 DESC ) X ORDER BY column1 ASC ```
Perhaps the simplest way is to move the conversion out of the subquery: ``` SELECT TOP 1 convert(datetime, column1, 101) as test From (select Top 2 t.* from table1 order by column1 desc ) X ORDER BY column1; ``` Or, if your database supports it, use the ANSI standard `FETCH`/`OFFSET`: ``` select convert(datetime, column1, 101) as test from table1 order by column1 desc offset 1 row fetch first 1 row only; ```
"Invalid column name" ORDER BY
[ "", "sql", "" ]
There is a table with a column `message` whose values are ``` Command triggerEvent started Command stopService stopped Command startService started Command executeCommand running ...... ``` Now I want to select the command name, i.e. triggerEvent, stopService, startService. I tried to use Oracle substr and instr to get them but failed... ``` select substr(message, instr(message, ' ')) from event ``` Is there any good way to do that?
You could do it with shorter code using **REGEXP\_SUBSTR**. For example, ``` SQL> WITH sample_data AS( 2 SELECT 'Command triggerEvent started' str FROM dual UNION ALL 3 SELECT 'Command stopService stopped' str FROM dual UNION ALL 4 SELECT 'Command startService started' str FROM dual UNION ALL 5 SELECT 'Command executeCommand running' str FROM dual 6 ) 7 -- end of sample_data mocking as real table 8 SELECT trim(regexp_substr(str, '[^ ]+', 1, 2)) command 9 FROM sample_data; COMMAND ------------------------------ triggerEvent stopService startService executeCommand ``` Of course, better to use **SUBSTR** and **INSTR** as they are less **CPU intensive** as still **faster** than REGEX. ``` SQL> WITH sample_data AS( 2 SELECT 'Command triggerEvent started' str FROM dual UNION ALL 3 SELECT 'Command stopService stopped' str FROM dual UNION ALL 4 SELECT 'Command startService started' str FROM dual UNION ALL 5 SELECT 'Command executeCommand running' str FROM dual 6 ) 7 -- end of sample_data mocking as real table 8 SELECT trim(SUBSTR(str, instr(str, ' ', 1, 1), 9 instr(str, ' ', 1, 2) - instr(str, ' ', 1, 1)) 10 ) command 11 FROM sample_data; COMMAND ------------------------------ triggerEvent stopService startService executeCommand ```
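The same "second whitespace-delimited token" idea is easy to check outside the database; a quick Python sketch mirroring `REGEXP_SUBSTR(str, '[^ ]+', 1, 2)`:

```python
import re

messages = [
    "Command triggerEvent started",
    "Command stopService stopped",
    "Command startService started",
    "Command executeCommand running",
]

# [^ ]+ matches runs of non-space characters; index 1 is the second token.
commands = [re.findall(r"[^ ]+", m)[1] for m in messages]
print(commands)  # ['triggerEvent', 'stopService', 'startService', 'executeCommand']
```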
This will give you the second word from your message ``` with event(message) as ( select 'command triggerEvent started' from dual union select 'Command stopService stopped' from dual) --recreate table --query start select trim(regexp_substr(message,'[[:alpha:]]+ ',1,2)) command FROM event ```
How to use substr in Oracle ?
[ "sql", "database", "oracle" ]
I am trying to create a view in SQL Server 2012 that excludes columns where the entry is null (represent by 0 here so it's easier to read). My base data is this ``` ╔════╦══════╦══════╦══════╦══════╦══════╗ β•‘ ID β•‘ Col1 β•‘ Col2 β•‘ Col3 β•‘ Col4 β•‘ Col5 β•‘ ╠════╬══════╬══════╬══════╬══════╬══════╣ β•‘ 1 β•‘ 1 β•‘ 0 β•‘ 0 β•‘ 0 β•‘ 5 β•‘ β•‘ 2 β•‘ 1 β•‘ 2 β•‘ 3 β•‘ 0 β•‘ 5 β•‘ β•‘ 3 β•‘ 0 β•‘ 0 β•‘ 0 β•‘ 0 β•‘ 0 β•‘ β•‘ 4 β•‘ 0 β•‘ 2 β•‘ 3 β•‘ 0 β•‘ 0 β•‘ β•šβ•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β• ``` What I would like to return would be ``` ╔════╦══════╦══════╦══════╦══════╗ β•‘ ID β•‘ Res1 β•‘ Res2 β•‘ Res3 β•‘ Res4 β•‘ ╠════╬══════╬══════╬══════╬══════╣ β•‘ 1 β•‘ 1 β•‘ 5 β•‘ 0 β•‘ 0 β•‘ β•‘ 2 β•‘ 1 β•‘ 2 β•‘ 3 β•‘ 5 β•‘ β•‘ 3 β•‘ 0 β•‘ 0 β•‘ 0 β•‘ 0 β•‘ β•‘ 4 β•‘ 2 β•‘ 3 β•‘ 0 β•‘ 0 β•‘ β•šβ•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β• ``` In this case, since column 4 had value 0 in all entries it was not included as a result. ID1 returned 1 and 5 skipping the 0s, and had 0s to fill since ID2 had 4 columns. If all values were 0, this would either just return Res1 with 0 or no columns and just the IDs. Hope this is clear. I'm having trouble explaining it.
You cannot really do what you want to do. Queries, views, and user-defined functions returns a specific set of columns. The set of columns is defined in advance. So, you cannot remove them. You *could* create a dynamic query that only included columns that are currently not `NULL`. Or, you could create an XML data structure with the columns you want. But, the columns in a view are fixed when the view is created, and cannot be added and removed when the view is run.
If you drop the requirement where the result excludes columns where it is all null, you can do this with an [UNPIVOT and then a PIVOT](https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx).
Exclude columns with Null
[ "sql", "sql-server" ]
I am trying to update a table with two columns: the first is **marketplace\_fee** and the other is **settlment\_value**. To update **marketplace\_fee** I need to calculate from some columns, and to calculate **settlment\_value** I need marketplace\_fee. The problem is: when I run the update query the first time it only updates **marketplace\_fee**, not **settlment\_value**, but when I execute it again it updates both columns. **Query** ``` update fk_pay_cal set marketplace_fee = (((((totalprice * commission )/100) + shippingcharge + fixed_fee))+ ((((totalprice * commission )/100) + shippingcharge + fixed_fee)) * (service_tax) /100) , settlment_value = totalprice - marketplace_fee ```
Your bracket-heavy expression can be written in a much simpler form. Use it in both assignments: ``` update fk_pay_cal set marketplace_fee = (totalprice * commission/100 + shippingcharge + fixed_fee) * (service_tax /100+ 1), settlment_value = totalprice - (totalprice * commission/100 + shippingcharge + fixed_fee) * (service_tax /100+ 1); ```
One method is to do the calculation in a CTE. This is most helpful if you have a unique id or primary key on the data: ``` with newvals as ( select pc.*, (((((totalprice * commission )/100) + shippingcharge + fixed_fee))+ ((((totalprice * commission )/100) + shippingcharge + fixed_fee)) * (service_tax) /100) as new_marketplace_fee from fk_pay_cal ) update fk_pay_cal set marketplace_fee = new_marketplace_fee, settlment_value = totalprice - new_marketplace_fee from newvals where newvals.id = fk_pay_cal.id; ```
update multiple column in table using same table data in postgres?
[ "sql", "postgresql", "sql-update" ]
I'm trying to find an object by checking for several of its relations. ``` Loan.joins(:credit_memo_attributes) .where(credit_memo_attributes: {name: 'pr2_gtx1_y', value: '2014'}) .where(credit_memo_attributes: {name: 'pr1_gtx1_y', value: '2013'}) .where(credit_memo_attributes: {name: 'tx1_y', value: '2014'}) ``` Calling `to_sql` on that gives: ``` "SELECT `loans`.* FROM `loans` INNER JOIN `credit_memo_attributes` ON `credit_memo_attributes`.`loan_id` = `loans`.`id` WHERE `credit_memo_attributes`.`name` = 'pr2_gtx1_y' AND `credit_memo_attributes`.`value` = '2014' AND `credit_memo_attributes`.`name` = 'pr1_gtx1_y' AND `credit_memo_attributes`.`value` = '2013' AND `credit_memo_attributes`.`name` = 'tx1_y' AND `credit_memo_attributes`.`value` = '2014'" ``` So, I'm checking for Loans that have credit\_memo\_attributes with all of those attributes. I know at least 1 of our 20,000 loans meets this criteria, but this query returns an empty set. If I only use 1 of the `where` clauses, it returns several, as I'd expect, but once I add even 1 more, it's empty. Any idea where I'm going wrong?
# Update: Based on comments I believe you want multiple joins in your criteria. You can do that like this: ``` attr_1 = {name: 'pr2_gtx1_y', value: '2014'} attr_2 = {name: 'pr1_gtx1_y', value: '2013'} attr_3 = {name: 'tx1_y', value: '2014'} Loan.something_cool(attr_1, attr_2, attr_3) class Loan < ActiveRecord::Base ... def self.something_cool(attr_1, attr_2, attr_3) joins(sanitize_sql(["INNER JOIN credit_memo_attributes AS cma1 ON cma1.loan_id = loans.id AND cma1.name = :name AND cma1.value = :value", attr_1])) .joins(sanitize_sql(["INNER JOIN credit_memo_attributes AS cma2 ON cma2.loan_id = loans.id AND cma2.name = :name AND cma2.value = :value", attr_2])) .joins(sanitize_sql(["INNER JOIN credit_memo_attributes AS cma3 ON cma3.loan_id = loans.id AND cma3.name = :name AND cma3.value = :value", attr_3])) end ``` If you look at the SQL generated (that you included in your question, thank you) you'll see that all those conditions are being ANDed together. There are NO rows for which name = 'pr2\_gtx1\_y' AND name = 'pr1\_gtx1\_y' (and so forth). So you are getting the result I would expect (no rows).
You can put all names and values into arrays like ids and years and pass those into the where clause like this. Active Record will query all the values in the array. ``` Loan.joins(:credit_memo_attributes) .where(credit_memo_attributes: {name: ids, value: years}) ``` Personally I'm still learning Active Record; in this case I don't think Active Record supports multiple where clauses like this.
ActiveRecord has_many with multiple conditions
[ "sql", "ruby-on-rails", "activerecord" ]
Is it possible to create a lock for a specific value in an INSERT-statement in PostgreSQL? Lets say, I have this table: ``` CREATE TABLE IF NOT EXISTS bookings ( bookingID SERIAL PRIMARY KEY, tableID integer REFERENCES tables ON DELETE CASCADE, startTime timestamp NOT NULL, endTime timestamp NOT NULL, ); ``` Now if a transaction-block with an INSERT is started, can I create a lock for all other INSERTS with the same tableID and the same date during the transaction-time? So a second booking for the same tableID has to wait for the first one to finish. And after that can check again, if the INSERT is still possible. So its basically a ROW-lock for a specific value in INSERTS. The program is written in Java, but I don't want to use a synchronized-block, because of bottleneck. Thanks for the help
The question is, why do you want to block other inserts for this table? For me it looks like you want to be sure there are no intersecting intervals for the same tableID. Probably you check this in Java code and you don't want other inserts to interfere with the check. If so, you need no locks at all: use [EXCLUDE](http://www.postgresql.org/docs/9.4/static/ddl-constraints.html#DDL-CONSTRAINTS-EXCLUSION) constraint. For this, you need: 1. Change two `timestamp` fields into one `tsrange` field. 2. Install [btree\_gist](http://www.postgresql.org/docs/9.4/static/btree-gist.html) extension (it is included in contribs). Your table will looks like this: ``` CREATE TABLE IF NOT EXISTS bookings ( bookingID SERIAL PRIMARY KEY, tableID integer REFERENCES tables ON DELETE CASCADE, during tsrange NOT NULL, EXCLUDE using gist(during with &&, tableID with =) ); ``` Special GIST index will be created automatically to ensure that there will be no intersecting intervals (`&&` operator) for identical tableID (`=` operator). Some examples: ``` -- interval for tableID=10 test=# insert into bookings values (1, 10, '[2015-11-17 10:00, 2015-11-17 12:00)'); INSERT 0 1 -- interval for tableID=11 test=# insert into bookings values (2, 11, '[2015-11-17 10:00, 2015-11-17 12:00)'); INSERT 0 1 -- can't create intersecting interval for tableID=10 test=# insert into bookings values (3, 10, '[2015-11-17 11:00, 2015-11-17 13:00)'); ERROR: conflicting key value violates exclusion constraint "bookings_during_tableid_excl" DETAIL: Key (during, tableid)=(["2015-11-17 11:00:00","2015-11-17 13:00:00"), 10) conflicts with existing key (during, tableid)=(["2015-11-17 10:00:00","2015-11-17 12:00:00"), 10). -- ok to create non-intersecting interval test=# insert into bookings values (4, 10, '[2015-11-17 12:00, 2015-11-17 13:00)'); INSERT 0 1 ```
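The rule the EXCLUDE constraint enforces — no intersecting intervals for the same tableID — reduces to a simple pairwise overlap predicate. A minimal Python sketch of that predicate, using half-open `[start, end)` intervals like the ranges above:

```python
from datetime import datetime

def overlaps(a_start, a_end, b_start, b_end):
    # Half-open intervals [start, end): touching endpoints do not overlap.
    return a_start < b_end and b_start < a_end

booked      = (datetime(2015, 11, 17, 10), datetime(2015, 11, 17, 12))
conflicting = (datetime(2015, 11, 17, 11), datetime(2015, 11, 17, 13))
adjacent    = (datetime(2015, 11, 17, 12), datetime(2015, 11, 17, 13))

print(overlaps(*booked, *conflicting))  # True  -> the constraint would reject it
print(overlaps(*booked, *adjacent))     # False -> allowed, intervals only touch
```

The advantage of doing this in the database is that the GIST index makes the check atomic under concurrency, which an application-side check cannot guarantee on its own.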
What you want is called predicate locking. It is not directly supported by PostgreSQL, but advisory locking as described by @stas.yaranov is a good way around that. As @EgorRogov points out, if possible you should remove the need for locking entirely by using appropriate constraints. Another option nobody's mentioned is using `SERIALIZABLE` transaction isolation in PostgreSQL 9.1 or newer. This uses a concurrency control method quite like optimistic locking, where each transaction proceeds without locking, but one of them might get aborted if it conflicts with another. This means your app has to be prepared to trap serialization failure errors and retry transactions, but it's generally a very efficient and quite simple way to handle things like this - especially in cases where exclusion constraints won't help you. I advise using exclusion constraints in this particular case if you can do so, since that's exactly what they were designed for. But you can use `serializable` isolation without schema changes or adding new indexes, and it's a more general solution. (You can't use `SERIALIZABLE` for this in 9.0 or older, by the way, as it's not smart enough in those versions)
PostgreSQL lock INSERTS with specific values possible?
[ "sql", "postgresql", "sql-insert" ]
I have table something like this: ``` date|status|value ``` date is date, status is 1 for pending, 2 to confirmed and value is value of order I want to get 3 columns: ``` date|#status pending|#status pending+confirmed ``` example of data: ``` +------------+-----------------+-----------------+ | date | status | value | +------------+-----------------+-----------------+ | 2015-11-17 | 1 | 89| | 2015-11-16 | 1 | 6 | | 2015-11-16 | 2 | 16 | | 2015-11-16 | 2 | 26 | | 2015-11-15 | 2 | 26 | | 2015-11-14 | 2 | 24 | +------------+-----------------+-----------------+ ``` example of what I want: ``` +------------+-----------------+-----------------+ | date | confirmed |confirmed+pending| +------------+-----------------+-----------------+ | 2015-11-17 | 0 | 1 | | 2015-11-16 | 2 | 3 | | 2015-11-15 | 1 | 1 | | 2015-11-14 | 1 | 1 | +------------+-----------------+-----------------+ ``` I am trying to do: ``` SELECT array1.DATE ,array1.confirmed ,array2.total FROM ( SELECT DATE (DATE) AS DATE ,count(value) AS confirmed FROM Orders WHERE STATUS = '2' GROUP BY DATE (DATE) DESC limit 5 ) AS array1 INNER JOIN ( SELECT DATE (DATE) AS DATE ,count(value) AS total FROM Orders GROUP BY DATE (DATE) DESC limit 5 ) AS array2 ``` But I get 4 results per date with repeated confirmed value and different total transactions. If I try separated, I can get both correct informations: will list only sum of confirmed orders of last 5 days: ``` SELECT array1.DATE ,array1.confirmed ,array2.total FROM ( SELECT DATE (DATE) AS DATE ,count(valor) AS confirmed FROM Orders WHERE STATUS = '2' GROUP BY DATE (DATE) DESC limit 5; ) ``` will list sum of all orders of last 5 days: ``` SELECT DATE (DATE) AS DATE ,count(valor) AS total FROM Orders GROUP BY DATE (DATE) DESC limit 5 ``` I observed at least one big problem: Sometimes we will have one day with a lot of not confirmed orders and zero confirmed, so probably inner join will fail.
You can use `CASE WHEN` to get the expected output you have given (per the question, status 2 is confirmed). ``` SELECT `date`, (SUM(CASE WHEN `status`=2 THEN 1 ELSE 0 END)) AS Confirmed, (SUM(CASE WHEN `status`=1 OR `status`=2 THEN 1 ELSE 0 END)) AS Confirmed_Pending FROM table_name GROUP BY DATE(`date`) DESC ``` Hope this helps.
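The conditional-aggregation approach can be verified end-to-end with Python's built-in `sqlite3`. This sketch follows the question's coding where status 2 means confirmed; the column name `dt` stands in for the question's `date` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (dt TEXT, status INTEGER, value INTEGER)")
rows = [
    ("2015-11-17", 1, 89),
    ("2015-11-16", 1, 6),
    ("2015-11-16", 2, 16),
    ("2015-11-16", 2, 26),
    ("2015-11-15", 2, 26),
    ("2015-11-14", 2, 24),
]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

result = conn.execute("""
    SELECT dt,
           SUM(CASE WHEN status = 2 THEN 1 ELSE 0 END)       AS confirmed,
           SUM(CASE WHEN status IN (1, 2) THEN 1 ELSE 0 END) AS confirmed_pending
    FROM orders
    GROUP BY dt
    ORDER BY dt DESC
""").fetchall()
for row in result:
    print(row)
# ('2015-11-17', 0, 1)
# ('2015-11-16', 2, 3)
# ('2015-11-15', 1, 1)
# ('2015-11-14', 1, 1)
```

The output matches the expected table in the question, including the awkward case of a day with pending orders but zero confirmed ones, which is where the INNER JOIN approach falls over.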
You are missing an `ON` clause in your `INNER JOIN`. Or, since in your case the column you join on is the same on both sides, you can use `USING`: ``` SELECT array1.DATE ,array1.confirmed ,array2.total FROM ( SELECT DATE (DATE) AS DATE ,count(value) AS confirmed FROM Orders WHERE STATUS = '2' GROUP BY DATE (DATE) DESC limit 5 ) AS array1 INNER JOIN ( SELECT DATE (DATE) AS DATE ,count(value) AS total FROM Orders GROUP BY DATE (DATE) DESC limit 5 ) AS array2 USING (DATE) ```
How merge two select with different WHERE and special conditions
[ "mysql", "sql", "inner-join" ]
How would you go about transferring data into an object table? Say you have a table: ``` create table thisTable( column1 varchar2(20), column2 varchar2(20), column3 varchar2(20) ) / ``` And you have a new object table: ``` create table oo_thisTable( object1 object1_t ) / create type object1_t as object ( column1 varchar2(20), column2 varchar2(20), column3 varchar2(20) ) / ``` How would you transfer the data from thisTable to oo\_thisTable? ``` declare cursor c1 is select * from thisTable; begin open c1; loop fetch c1 into column1, column2, column3; exit when c1%notfound; ... ```
No need of using **PL/SQL**, you could do it in pure **SQL**. ``` INSERT INTO oo_thistable SELECT object1_t(column1, column2, column3) FROM thistable; ``` **Demo** **Create** the required **type** and **tables**: ``` SQL> create table thisTable( 2 column1 varchar2(20), 3 column2 varchar2(20), 4 column3 varchar2(20) 5 ) 6 / Table created. SQL> create type object1_t as object ( 2 column1 varchar2(20), 3 column2 varchar2(20), 4 column3 varchar2(20) 5 ) 6 / Type created. SQL> create table oo_thisTable( 2 object1 object1_t 3 ) 4 / Table created. ``` **Insert** few rows in `thistable`: ``` SQL> INSERT INTO thistable VALUES('a','b','c'); 1 row created. SQL> INSERT INTO thistable VALUES('d','e','f'); 1 row created. SQL> INSERT INTO thistable VALUES('g','h','i'); 1 row created. ``` Now we want to **insert** all rows from `thistable` into `oo_thistable`: ``` SQL> INSERT INTO oo_thistable SELECT object1_t(column1, column2, column3) FROM thistable; 3 rows created. ``` **Validate**: ``` SQL> SELECT * FROM oo_thistable; OBJECT1(COLUMN1, COLUMN2, COLUMN3) -------------------------------------------------------------------------------- OBJECT1_T('a', 'b', 'c') OBJECT1_T('d', 'e', 'f') OBJECT1_T('g', 'h', 'i') ``` You have all the rows inserted.
I have written some sequential steps for the issue described. Please try this and let me know if this helps. ``` SQL> set sqlbl on; SQL> set define off; SQL> set timing on; SQL> DROP TYPE TEST_TAB; Type dropped. Elapsed: 00:00:00.90 SQL> SQL> DROP TYPE Test_oo; Type dropped. Elapsed: 00:00:00.58 SQL> CREATE OR REPLACE type Test_oo 2 IS 3 OBJECT 4 ( 5 col1 NUMBER, 6 COL2 VARCHAR2(100 CHAR), 7 COL3 TIMESTAMP 8 ); 9 / Type created. Elapsed: 00:00:00.21 SQL> CREATE OR REPLACE TYPE TEST_TAB IS TABLE OF TEST_OO; 2 / Type created. Elapsed: 00:00:00.20 SQL> DROP TABLE TEST_TABLE; Table dropped. Elapsed: 00:00:00.39 SQL> CREATE TABLE TEST_TABLE 2 ( 3 O_col1 NUMBER, 4 O_COL2 VARCHAR2(100 CHAR), 5 O_COL3 TIMESTAMP 6 ); Table created. Elapsed: 00:00:00.28 SQL> INSERT INTO TEST_TABLE 2 SELECT LEVEL,LEVEL||'AV',SYSDATE+LEVEL 3 FROM DUAL 4 CONNECT BY LEVEL < 10; 9 rows created. Elapsed: 00:00:00.11 SQL> SQL> COMMIT; Commit complete. Elapsed: 00:00:00.10 SQL> DECLARE 2 lv_obj TEST_TAB; 3 BEGIN 4 dbms_output.put_line('test'); 5 SELECT test_oo(T1.O_COL1,T1.O_COL2,T1.O_COL3) bulk collect 6 INTO lv_obj 7 FROM test_table t1; 8 FOR I IN LV_OBJ.FIRST..LV_OBJ.LAST 9 LOOP 10 dbms_output.put_line(LV_OBJ(I).COL1||' '||LV_OBJ(I).COL2||' '||LV_OBJ(I).COL3); 11 END LOOP; 12 END; 13 / test 1 1AV 18-NOV-15 02.04.29.000000 AM 2 2AV 19-NOV-15 02.04.29.000000 AM 3 3AV 20-NOV-15 02.04.29.000000 AM 4 4AV 21-NOV-15 02.04.29.000000 AM 5 5AV 22-NOV-15 02.04.29.000000 AM 6 6AV 23-NOV-15 02.04.29.000000 AM 7 7AV 24-NOV-15 02.04.29.000000 AM 8 8AV 25-NOV-15 02.04.29.000000 AM 9 9AV 26-NOV-15 02.04.29.000000 AM PL/SQL procedure successfully completed. Elapsed: 00:00:00.12 SQL> spool off; ```
Store data from table into object table
[ "sql", "database", "oracle", "plsql", "object-oriented-database" ]
I have some big tables (30+ columns) with `NOT NULL` constraints. I would like to change all those constraints to `NULL`. To do it for a single column I can use ``` ALTER TABLE <your table> MODIFY <column name> NULL; ``` Is there a way to do it for all columns in one request ? Or should I copy/paste this line for all columns (><) ?
> Is there a way to do it for all columns in one request ? Yes. By (ab)using **EXECUTE IMMEDIATE** in **PL/SQL**. Loop through all the columns by querying the **USER\_TAB\_COLUMNS** view. For example, ``` BEGIN FOR i IN ( SELECT * FROM user_tab_columns WHERE table_name = '<TABLE_NAME>' AND NULLABLE='N' ) LOOP EXECUTE IMMEDIATE 'ALTER TABLE <TABLE_NAME> MODIFY ' || i.COLUMN_NAME || ' NULL'; END LOOP; END; ``` Note that the column name must be concatenated into the dynamic SQL string; referencing `i.COLUMN_NAME` inside the literal would not be substituted. In my opinion, by the time you would write the PL/SQL block, you could do it much more quickly by using a good **text editor**. In pure **SQL** you just need 30 queries for 30 columns.
For a single table you can issue a single `alter table` command to set the listed columns to allow `null`, which is a little more efficient than running one at a time, but you still have to list every column. ``` alter table ... modify ( col1 null, col1 null, col3 null); ``` If you were applying `not null` constraints then this would be more worthwhile, as they require a scan of the table to ensure that no nulls are present, and (I think) an exclusive table lock.
How to change Null constraint for all columns ?
[ "sql", "oracle" ]
I have data that looks like this: **Nodes** ``` Name Attribute Date 14 A1 11-OCT-2015 14 A2 7-Nov-2015 12 B1 11-Nov-2015 ``` **Vectors** ``` Node V_NAME color Date 14 V1 blue 11-OCT-2015 14 V1 red 10-Nov-2015 14 V2 blue 7-Nov-2015 12 V3 black 11-Nov-2015 12 V4 black 11-Nov-2015 ``` I want to get results like below: ``` Node Attribute V_NAME color 14 A2 V1 red 14 A2 V2 blue 12 B1 V3 black 12 B1 V4 black ``` The Date columns in the two tables are not the same.
I use two cte to calculate the most recent row in each category. Then join both. **[SqlFiddleDemo](http://sqlfiddle.com/#!4/8b18c/28)** ``` WITH n_node as ( SELECT "Name", "Attribute", row_number() over (partition by "Name" order by "Date" DESC) rn FROM Nodes ), n_vector as ( SELECT "Node", "V_NAME", "color", row_number() over (partition by "Node", "V_NAME" order by "Date" DESC) rn FROM Vectors ) SELECT "Name", "Attribute", "V_NAME", "color" FROM n_node JOIN n_vector ON n_node.rn = n_vector.rn AND n_node.rn = 1 AND n_node."Name" = n_vector."Node" ORDER BY "Name" DESC ``` **OUTPUT** ``` | Name | Attribute | V_NAME | color | |------|-----------|--------|-------| | 14 | A2 | V1 | red | | 14 | A2 | V2 | blue | | 12 | B1 | V3 | black | | 12 | B1 | V4 | black | ```
``` SELECT v.Node, n.Attribute, v.V_NAME, v.color from Nodes n inner join Vector v on n.Name = v.Node and n.Date = v.Date ``` <http://sqlfiddle.com/#!4/d3464/1>
How do I join the most recent row in one table to most recent row in another table (oracle)
[ "sql", "oracle" ]
I have a select on a table which selects a number ``` SELECT * FROM table ``` and it generates something that looks like ``` | column | | 2 | | 5 | ``` I am wondering if there is a way to turn that 2 and 5 into multiple rows of 2 and 5. So basically ``` | column | | 2 | | 2 | | 5 | | 5 | | 5 | | 5 | | 5 | ``` Edit: As a few have asked what my ultimate goal is: I am really trying to accomplish a sort of ballot select. Basically I store the number of ballots a person gets as a number, and I want to turn that number into multiple rows so I can transfer it over to another program for the winner selection.
You can do it using a numbers (aka tally) table: ``` SELECT col FROM mytable AS t1 INNER JOIN ( SELECT @rn := @rn + 1 AS num FROM ( SELECT 0 AS n UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 ) AS x CROSS JOIN ( SELECT 0 AS n UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 ) AS y CROSS JOIN (SELECT @rn := 0) AS var ) AS t2 ON t1.col >= t2.num ORDER BY col ``` The above query uses a numbers table with a range of values [1-100]. If your column contains bigger values then you have to grow the numbers table using additional `CROSS JOIN` operations. [**Demo here**](http://sqlfiddle.com/#!9/50a51/1)
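Since the stated goal is to feed the expanded rows to another program for winner selection, it may be simpler to expand the counts after fetching them, rather than in SQL. A sketch of that expansion in Python:

```python
counts = [2, 5]  # ballot counts read from the `column` in the question

# Repeat each value as many times as the value itself, mirroring
# the tally-table join above.
expanded = [value for value in counts for _ in range(value)]
print(expanded)  # [2, 2, 5, 5, 5, 5, 5]
```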
You can grow it exponentially with repeated joins to itself. But what are you really trying to accomplish? ``` select a.* from table a left join table b on 1=1 left join table c on 1=1 ```
Select Number Into Multiple Rows
[ "mysql", "sql" ]
I have two columns: GRP\_ACCT\_NO, TRANS\_DATE. The data is about transactions made by different accounts. So GRP\_ACCT\_NO has recurring values, and TRANS\_DATE gives the date that particular account made a transaction on. I have about 1.5 million records in the data set but only 97k unique accounts. I want to find the second latest transaction date for each account in SAS. Here is the code I tried: ``` proc sql; create table second_latest_trans as select GRP_ACCT_NO,Max(TRANS_DATE) from project.spend as ps where TRANS_DATE < (select max(TRANS_DATE) from project.spend as ps2 where ps.GRP_ACCT_NO = ps2.GRP_ACCT_NO group by GRP_ACCT_NO) group by GRP_ACCT_NO; quit; ``` The code doesn't seem to return any results. It's taking a long time to run. Please help!
SQL has no concept of order of observations. Use a DATA step. If your data is not already sorted, then sort it (or create an index). If you just want the second record, even if there are ties, then you can just count the records for each account. ``` data second_latest_trans; set project.spend; by GRP_ACCT_NO TRANS_DATE; if first.grp_acct_no then recno=0; recno+1; if recno=2 then output; run; ``` If there are multiple records for the same value of TRANS\_DATE and you want the second distinct value of TRANS\_DATE, then this more complicated step would work. ``` data second_latest_trans; set project.spend; by GRP_ACCT_NO TRANS_DATE; if first.grp_acct_no then found=0; else if not found and first.trans_date then do; output; found=1; end; retain found; run; ```
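As a cross-check of the "second distinct date per account" logic outside SAS, here is a small Python sketch over illustrative data (the account ids and dates are made up):

```python
from collections import defaultdict

# (account, transaction date) pairs standing in for GRP_ACCT_NO / TRANS_DATE.
spend = [
    ("A1", "2015-01-05"),
    ("A1", "2015-03-10"),
    ("A1", "2015-11-17"),
    ("A2", "2015-02-01"),
    ("A2", "2015-06-30"),
]

dates_by_account = defaultdict(set)
for account, trans_date in spend:
    dates_by_account[account].add(trans_date)

# Second-highest distinct date per account (None if only one date exists).
second_latest = {
    account: (sorted(dates)[-2] if len(dates) >= 2 else None)
    for account, dates in dates_by_account.items()
}
print(second_latest)  # {'A1': '2015-03-10', 'A2': '2015-02-01'}
```

ISO-formatted date strings sort chronologically, which is what makes the plain `sorted` call work here.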
Your query (apart from the formatting) looks reasonable. I wonder if the `group by` in the subquery is throwing things off. Try this version: ``` proc sql; create table second_latest_trans as Select GRP_ACCT_NO, Max(TRANS_DATE) from project.spend ps where TRANS_DATE < (SELECT max(ps2.TRANS_DATE) from project.spend ps2 where ps.GRP_ACCT_NO = ps2.GRP_ACCT_NO ) group by GRP_ACCT_NO; quit; ``` If you want this to run faster, you want an index on `project.spend(GRP_ACCT_NO, TRANS_DATE)`. The data step solution (proposed in another answer) would probably be much faster.
Finding the second latest date in SAS
[ "sql", "sas" ]
So let's say that I have this table ``` language | offer chinese | 1 chinese | 1 english | 1 spanish | 2 spanish | 2 italian | 2 french | 3 ``` and I want the language that appears the most times for each different offer, like this ``` language | offer chinese | 1 spanish | 2 french | 3 ``` How do I do this in Oracle SQL?
This is one way to do it using `common table expression`s. [SQL Fiddle](http://www.sqlfiddle.com/#!4/ac11a/4) In the first cte, you calculate the counts grouped by offer and language. In the next cte, use `rank` or `row_number` to assign `1` to the offer with the highest language count. Finally, select from the 1st ranked rows. ``` with counts as( select offer, language, count(*) cnt from tablename group by offer, language) ,ranking as (select rank() over(partition by offer order by cnt desc) rnk , c.* from counts c) select language, offer from ranking where rnk = 1 ``` Alternate approach without window functions: ``` with counts as ( select offer, language, count(*) cnt from tablename group by offer, language) ,maxcount as (select offer, max(cnt) mxcnt from counts group by offer) select c.language, m.offer from counts c join maxcount m on m.offer = c.offer and m.mxcnt = c.cnt ```
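The rank-equals-1 idea can be sanity-checked with plain Python over the question's sample data; `Counter.most_common` plays the role of ordering by count descending:

```python
from collections import Counter, defaultdict

rows = [
    ("chinese", 1), ("chinese", 1), ("english", 1),
    ("spanish", 2), ("spanish", 2), ("italian", 2),
    ("french", 3),
]

counts = defaultdict(Counter)
for language, offer in rows:
    counts[offer][language] += 1

# Most common language per offer, mirroring rank() = 1 / row_number() = 1.
top = {offer: counter.most_common(1)[0][0] for offer, counter in counts.items()}
print(top)  # {1: 'chinese', 2: 'spanish', 3: 'french'}
```

Note that on a tie this picks a single arbitrary winner, as `row_number` would; `rank` can return several rows per offer.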
You can use row\_number in oracle for your output. ``` SELECT LANGUAGE ,offer FROM ( SELECT LANGUAGE ,offer ,row_number() OVER ( PARTITION BY offer ORDER BY count(offer) DESC ) AS rno FROM table1 GROUP BY LANGUAGE ,offer ) WHERE rno = 1 ```
Oracle Sql getting the value that appears most times for each different value of another column
[ "sql", "database", "oracle" ]
How to add `minutes(INT)` to the time part of `datetime` ? For example : If I have datetime variable like this : ``` @shift_start_time = 2015-11-01 08:00:00.000 @increase = 30 ``` How to get this result? ``` 2015-11-01 08:30:00.000 ```
Use **[`DATEADD`](https://learn.microsoft.com/en-us/sql/t-sql/functions/dateadd-transact-sql?view=sql-server-2017)**: ``` SELECT DATEADD(mi, @increase, @shift_start_time); ``` **[db<>fiddle demo](https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=a2787bd91f6984645276b795aa7f09fe)**
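For comparison outside T-SQL, the same minute arithmetic in Python looks like this (variable names mirror the question):

```python
from datetime import datetime, timedelta

shift_start_time = datetime(2015, 11, 1, 8, 0, 0)
increase = 30

# Same effect as DATEADD(mi, @increase, @shift_start_time).
result = shift_start_time + timedelta(minutes=increase)
print(result)  # 2015-11-01 08:30:00
```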
Using dateadd: ``` DATEADD(minute,@increase,@shift_start_time) ``` The first argument can be chosen among: year, quarter, month, dayofyear, day, week, weekday, hour, minute, second, millisecond, microsecond, nanosecond. Please check <https://learn.microsoft.com/en-us/sql/t-sql/functions/dateadd-transact-sql?redirectedfrom=MSDN&view=sql-server-ver16>
How to add minutes to the time part of datetime
[ "sql", "sql-server", "datetime", "time", "dateadd" ]
We have a non-`Identity` column in our database which has a specific value. Our requirement is: whenever a record is inserted, that column's value should be incremented by one. How do we handle this in `sql server`? Thanks for the help.
Well, you can use the [`SEQUENCE`](https://msdn.microsoft.com/en-IN/library/ff878370.aspx) statement introduced in SQL Server 2012, which brings a method of generating IDs. To use it in an insert statement, you need to first create a sequence like this - ``` CREATE SEQUENCE dbo.Id_Sequence AS INT START WITH 1 INCREMENT BY 1 MINVALUE 0 NO MAXVALUE ``` Now use it in your insert statement like this - ``` INSERT INTO dbo.Test1 ( orderid , custid , empid ) SELECT NEXT VALUE FOR dbo.Id_Sequence, @custid , @empid ``` That's it.
Try creating a TRIGGER ``` CREATE TRIGGER incrementValue ON Test FOR Insert AS Update Test set columnvalue = columnvalue +1 where id in (select id from inserted) GO ```
Auto Increment a non-identity Column in sql-server
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
My data looks like: ``` MEDIA_ID | CHANNEL_NAME EH/123A CH-1 EH/123A CH-4 EH/132A CH-5 ES/133B CH-1 ES/133B CH-2 ES/133B CH-5 ``` What i want is: ``` EH/123A | CH-1,CH-4,CH-5 ES/123B | CH-1,CH-2,CH-5 ``` I am using this SQL in Oracle: ``` SELECT DISTINCT PR.MEDIA_ID , LISTAGG(PR.CHANNEL_NAME, ', ') WITHIN GROUP (ORDER BY CHANNEL_NAME) AS PREM_CHAN FROM PREM_REPORT PR GROUP BY PR.MEDIA_ITEM, PR.CHANNEL_NAME; ``` What i am getting is: ``` MEDIA_ID | CHANNEL_NAME EH/123A CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1 EH/123A CH-4,CH-4,CH-4,CH-4,CH-4,CH-4,CH-4,CH-4,CH-4,CH-4,CH-4,CH-4 EH/132A CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5 ES/133B CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1,CH-1 ES/133B CH-2,CH-2,CH-2,CH-2,CH-2,CH-2,CH-2,CH-2,CH-2,CH-2,CH-2,CH-2 ES/133B CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5,CH-5 ``` Ideas? Thanks. Ben
I think the query you want is: ``` SELECT PR.MEDIA_ID, LISTAGG(PR.CHANNEL_NAME, ', ') WITHIN GROUP (ORDER BY CHANNEL_NAME) AS PREM_CHAN FROM PREM_REPORT PR GROUP BY PR.MEDIA_ITEM; ``` That is, remove `PR.CHANNEL_NAME` from your query. I am not sure why you would get your results with the query you gave. Perhaps there is some weird interaction between the `select distinct` and `group by`. You almost never use `select distinct` with `group by`. EDIT: To return distinct values in `LIST_AGG()`, you need to use a subquery. An easy way that works in this case is: ``` SELECT PR.MEDIA_ID, LISTAGG(PR.CHANNEL_NAME, ', ') WITHIN GROUP (ORDER BY CHANNEL_NAME) AS PREM_CHAN FROM (SELECT DISTINCT MEDIA_ID, CHANNEL_NAME FROM PREM_REPORT PR ) PR GROUP BY PR.MEDIA_ITEM; ```
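The fix — group only by the media id — is easy to sanity-check without an Oracle instance using Python's `sqlite3`, where `GROUP_CONCAT` plays the role of `LISTAGG` (this is SQLite syntax, not Oracle's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prem_report (media_id TEXT, channel_name TEXT)")
conn.executemany(
    "INSERT INTO prem_report VALUES (?, ?)",
    [("EH/123A", "CH-1"), ("EH/123A", "CH-4"), ("EH/123A", "CH-5"),
     ("ES/133B", "CH-1"), ("ES/133B", "CH-2"),
     ("ES/133B", "CH-5"), ("ES/133B", "CH-5")],  # duplicate to show DISTINCT
)

result = conn.execute("""
    SELECT media_id, GROUP_CONCAT(DISTINCT channel_name) AS prem_chan
    FROM prem_report
    GROUP BY media_id
    ORDER BY media_id
""").fetchall()
for media_id, prem_chan in result:
    print(media_id, prem_chan)  # one comma-separated channel list per media_id
```

Unlike Oracle's `LISTAGG ... WITHIN GROUP (ORDER BY ...)`, SQLite does not guarantee the order of items inside each concatenated list.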
You can remove `GROUP BY` and just add `PARTITION BY`: ``` SELECT DISTINCT PR.MEDIA_ID ,LISTAGG(PR.CHANNEL_NAME, ', ') WITHIN GROUP (ORDER BY CHANNEL_NAME) OVER (PARTITION BY PR.MEDIA_ID) AS PREM_CHAN FROM PREM_REPORT PR; ``` `SqlFiddleDemo` Output: ``` ╔═══════════╦══════════════════╗ β•‘ MEDIA_ID β•‘ PREM_CHAN β•‘ ╠═══════════╬══════════════════╣ β•‘ ES/133B β•‘ CH-1, CH-2, CH-5 β•‘ β•‘ EH/123A β•‘ CH-1, CH-4, CH-5 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ```
I can't get LISTAGG in Oracle to do as it should?
[ "sql", "oracle", "listagg" ]
Given an example table definition: ``` school: school_id | school_name -------------+------------ 1 | school1 2 | school2 3 | school3 classroom: classroom_id | has_projector | school_id -------------+---------------+------------- 1 | f | 1 2 | f | 1 1 | t | 2 2 | t | 2 1 | f | 3 2 | t | 3 3 | t | 3 ``` How do I select the schools in which all classrooms have a projector? (In this case only school2 should be selected.) My idea is: ``` select school_name from school where school_id in (select school_id from classroom where has_projector='t') and school_id not in (select school_id from classroom where has_projector!='t'); ``` It works, but is it an optimal solution? Are there any better ways to select the requested data?
I think a simple aggregation with a `having` clause is a bit simpler: ``` select s.school_id from school s join classroom c on s.school_id = c.school_id group by s.school_id having sum(case when has_projector = 't' then 1 else 0 end) = count(*); ```
Left join school to classroom on the school id and has projector equal to false. Then only return the ones where the school id on the classroom table is null. ``` SELECT school_name FROM school LEFT JOIN classroom ON school.school_id = classroom.school_id AND classroom.has_projector = 'f' WHERE classroom.school_id IS NULL ```
SQL query : how to select records which all records in related table have specific value of attribute
[ "", "sql", "" ]
I have the following exercise: > concatenate first, middle, last name and name suffix to form the > customer’s name in the following format: FirstName [MiddleName.] > LastName[, Suffix]. Note that NULL values should be omitted. I interpret this as the following scenario (I created the table from the image and inserted some values): Please find my sample data below, the table name is #TB\_Customer [![enter image description here](https://i.stack.imgur.com/ya3i0.png)](https://i.stack.imgur.com/ya3i0.png) The column **CustomerName is the expected result** and should be of the form * `FirstName MiddleName.LastName, Suffix` if I have entries for all the fields. * MiddleName and Suffix can be optional, so the cases are: * If there is a `Suffix` but not a `MiddleName` then `CustomerName` should be of the form `FirstName LastName, Suffix` * If there is a `MiddleName` but not a `Suffix` then `CustomerName` should be of the form `FirstName MiddleName.LastName` * If both `MiddleName` and `Suffix` are null then `CustomerName` should be of the form `FirstName LastName` This is what I'm getting: [![enter image description here](https://i.stack.imgur.com/OhvRE.png)](https://i.stack.imgur.com/OhvRE.png) But as you can see, the CustomerName case query I wrote doesn't work as expected (please see the `cases` above with bullets). The query I wrote to get the `CustomerName` column is: ``` SELECT (case when (MiddleName is not null and Suffix is not null) then CONCAT(c.FIRSTNAME,' ', c.MiddleName,'.', c.LASTNAME, ', ',Suffix) when (MiddleName is null and suffix is null) then CONCAT(c.FIRSTNAME,' ' ,c.LASTNAME) when (MiddleName is null and Suffix is not null )then concat (c.FirstName, ' ', c.LastName, ', ',Suffix ) when (Suffix is null and MiddleName is not null) then concat (c.FirstName, ' ',MiddleName,'.',LastName) end )AS CustomerName ,c.* FROM #TB_Customer c; ``` I have 2 questions: * Did I understand the exercise and do I have good logic? * Where have I made a mistake and what's the correct query? 
Using SQL-Server 2012 **edit** to recreate my scenario please see the code below (sorry for not linking a fiddle but the website is not responding at my current location) ``` CREATE TABLE #TB_Customer ( CustomerID int , --PK Title varchar(50), FirstName varchar(50), MiddleName varchar(50), LastName varchar(50), Suffix varchar(50), EmailAddress varchar(50), Phone varchar(50), Gender varchar(50), Birthdate varchar(50), --no fk PRIMARY KEY (CustomerID) ) insert into #TB_Customer values ('1','Sir','John','Jacob','Adams','St','johnJacobAdams@gmail.com','0677731235','M','1989-04-06'), ('2','Mr.','Russel','Thyrone','Peterson','pr','thyronePeterson@yahoo.com','555-010405','M','1963-02-01'), ('3','Ms.','Anne','Candice','Acola','aca','CandiceA@gmail.com','07408989989','F','1988-05-19'), ('4','Mrs.','Sophia','Veronika','Turner','tvs','SophiaVT@facebook.de','0423423887','F','1983-06-20'), ('5','Ms','Margaret','','Delafleur','','delaMarg@yahoo.co','233223355','Female','1982-02-25'), ('6','Mrs','Jessica','Luana','Cruz','','Jess@yahoo.com','787876631','Female','1922-05-05'), ('7','Mr','Dyrius','','Cruz','dc','dyr33@yahoo.com','0673332211','Male','1987-03-01') update #TB_Customer set Gender = 'Male' where Gender = 'M' update #TB_Customer set Gender = 'Female' where Gender = 'F' ```
Something like this should work as well. Note that in your sample data the "missing" values are empty strings rather than NULL, which is why `ISNULL(col,'') <> ''` is used: it covers both cases. ``` SELECT concat(firstname ,CASE WHEN ISNULL(middlename,'') <> '' THEN ' '+middlename+'.' ELSE ' ' END ,lastname ,CASE WHEN ISNULL(suffix,'') <> '' THEN ', '+suffix END) FROM #TB_Customer ``` OUTPUT: ``` John Jacob.Adams, St Russel Thyrone.Peterson, pr Anne Candice.Acola, aca Sophia Veronika.Turner, tvs Margaret Delafleur Jessica Luana.Cruz Dyrius Cruz, dc John Adams, St ```
I can't see the error in your query. Keep in mind that when you concatenate with `+`, a single NULL value makes the whole result NULL, so try it this way: ``` SELECT COALESCE(c.FIRSTNAME,'') + ' ' + CASE WHEN COALESCE(c.MiddleName,'') = '' THEN '' ELSE c.MiddleName + '.' END + COALESCE(c.LASTNAME,'') + CASE WHEN COALESCE(Suffix,'') = '' THEN '' ELSE ', ' + Suffix END AS CustomerName, c.* FROM #TB_Customer c; ``` @Henrik is right, '' and NULL are different things
SQL-Server CONCAT case
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have a field in SSRS that I need to filter on. For 3 tables I can use an `IN` filter, but here I need to use a `NOT IN` operator. The field contains numeric values, and I need to be able to say not in `(30,31,32,33,34,35,36,37,38,39)`. I can't do it within the dataset either; it needs to be a filter. How should I achieve this?
You can use an expression to determine which values are going to be filtered. Go to `Tablix properties`/ `Filters` In `expression` use: ``` =IIF(Array.IndexOf(split("30,31,32,33,34,35,36,37,38,39",","), CStr(Fields!YourField.Value))>-1,"Exclude","Include") ``` For Operator use: ``` = ``` For `Value` use: ``` ="Include" ``` Let me know if this can help you
For a variation of the selected answer that I find easier to use, please see below. This is in a Visibility expression, but should be easily ported to a Filter expression by setting Expression type to Boolean and comparing if `Expression = False`: `=IIf(InStr("unwanted values here", Fields!fieldToCheck.Value), True, False)`
How to implement 'Not In' Filter in SSRS?
[ "", "sql", "sql-server", "reporting-services", "ssrs-2008-r2", "" ]
I need to divide the results of two queries, `nr1` and `nr2`, like shown below ``` SELECT COUNT(candidate.id) as nr1 FROM candidate WHERE candidate.id=2 select count (candidate.id) as nr2 from candidate where candidate.id=2 or candidate.id = 3; select nr1/nr2 from nr1, nr2; ``` The problem is that `nr1` and `nr2` don't exist outside their `select` queries.
You can do it in a single query: `aggregate` the data *conditionally* in the `SELECT`. ``` SELECT COUNT(CASE WHEN id = 2 THEN 1 END) / COUNT(CASE WHEN id IN (2, 3) THEN 1 END) FROM candidate ``` Note that in Oracle `||` is string concatenation, not a logical OR, so the second condition has to be written with `OR` or `IN`.
In oracle you can create a cte. ``` With table1 as ( SELECT COUNT(candidate.id) as nr1 FROM candidate WHERE candidate.id=2 ), table2 as ( select count (candidate.id) as nr2 from candidate where candidate.id=2 or candidate.id = 3 ) SELECT table1.nr1 / table2.nr2 FROM table1 CROSS JOIN table2 ``` But a simple way should be using conditional `SUM` also you have to validate for div 0 cases. ``` WITH cte AS ( SELECT SUM(CASE WHEN candidate.id = 2 THEN 1 ELSE 0 END) as nr1, SUM(CASE WHEN candidate.id IN (2,3) THEN 1 ELSE 0 END) as nr2 FROM candidate ) SELECT (CASE WHEN nr2 > 0 THEN nr1 / nr2 END) as result FROM cte ```
How To Divide Two Values From Different Tables in Select
[ "", "sql", "oracle", "oracle12c", "" ]
I have two tables: rooms (all the rooms) ``` id | title | ... ----------------- 1 |Room 1 | 2 |Room 2 | 3 |Room 3 | ``` user\_rooms (which room each user is in; the user column is the user's id and the primary key) ``` user | room | ... ------------------ 20 | 3 | 14 | 1 | 35 | 3 | ``` So I want to select all the rooms from the 'rooms' table, but ordered so that the rooms with the most users in them come first, followed by the rooms with fewer and fewer users. For example, I want to show room 3 first (because 2 users are in it), then room 1 (one user in it), and finally room 2 (since no users are in it). How can I achieve that?
``` select r.id, r.title, coalesce(t.cnt, 0) as cnt from rooms r left join (select room, count(*) as cnt from user_rooms group by room) t on t.room = r.id order by coalesce(t.cnt, 0) desc ``` Rooms with no users get a NULL count from the left join, so order by the coalesced value to make them sort last consistently.
This would often be done without a subquery: ``` select r.id, r.title, count(ur.room) as numusers from rooms r left join user_rooms ur on r.id = ur.room group by r.id, r.title order by numusers desc; ``` This would often be more efficient than a version using a subquery in the `from` clause because it can take advantage of an index on the join key. Interestingly, the same index would be used for a correlated subquery in the `select`, which is an alternative approach: ``` select r.id, r.title, (select count(*) from user_rooms ur where r.id = ur.room ) as numusers from rooms r order by numusers desc; ``` This might be the most efficient approach, because it removes the aggregation in the outer query.
Order data by data from another table
[ "", "mysql", "sql", "sql-order-by", "" ]
This is a different question then [sorting by time or distance](https://stackoverflow.com/questions/962998/sorting-a-string-that-could-contain-either-a-time-or-distance). Imagine a table storing track & field results. A result for a running event would be in seconds and a field event would be in meters. What is the proper way to architect this in a database? We currently have `athlete_id INT event_id INT result DECIMAL(10,3)`. We convert times to seconds and store meters as meters. Is there a better way to do this?
The proper way to architect this is to stop thinking about entering data and start thinking about how you intend to use the data. If all you are going to do is display the results, then storing them in an nvarchar field including the measurement is fine. Optimize for reporting whenever you think something may need calculations. If it takes an extra quarter of a millisecond to add additional information to the database to make reporting easier, that beats taking additional minutes/hours to query the multi-million record tables and do calculations on all the records every time. If you want to do calculations, or compare results for different competitions or events, then you need to think about the easiest way to store that data so that you can do the calculations. What you don't want to do in this case is have to perform any conversions in reporting. Any conversions you need to perform should happen when you insert the record, rather than when you look at a large table of records and do the reporting or analysis querying. It is generally worth your while in this case to store start time, end time, calculated time and distance in separate fields. Depending on the event, you might need to store multiple records per person. I would separate timed events and distance events (and any events based on judges' scores, like dressage or ice skating, though I am not sure if track has any of these) into separate child tables of the original event table, because how you want to deal with the information for each type is different. If you want to do both, it might be worth your while to simply query for results with an nvarchar or varchar field and then use a separate field to store the results in a calculated form.
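To make the last paragraph concrete, here is a rough sketch of that split. All table and column names here are invented for illustration, and the exact types depend on your DBMS:

```sql
-- parent table: one row per result, whatever the event type
CREATE TABLE event_result (
    result_id   INT PRIMARY KEY,
    athlete_id  INT NOT NULL,
    event_id    INT NOT NULL
);

-- child table for timed events (running)
CREATE TABLE timed_result (
    result_id       INT PRIMARY KEY REFERENCES event_result (result_id),
    start_time      TIMESTAMP,
    end_time        TIMESTAMP,
    elapsed_seconds DECIMAL(10,3)  -- calculated once at insert, so reports never recompute it
);

-- child table for distance events (throwing, jumping)
CREATE TABLE distance_result (
    result_id       INT PRIMARY KEY REFERENCES event_result (result_id),
    distance_meters DECIMAL(10,3)
);
```

Reporting queries can then aggregate each child table on its own terms (e.g. MIN of seconds, MAX of meters) without any unit conversion at query time.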
That seems fine to me, partly because any comparison or aggregation -- max, min, avg -- of values is only going to be valid in the context of a particular event type anyway. I'd assume that there would be another value to indicate the units in which the value is stored for that event, or event type. You'd probably want to store scores as well as distances and times also, I guess. I'm not convinced about displaying a pretty value also, because you might want to internationalise that or present it in different formats or units. It might be calculated on demand.
Database Architecture: Storing time or distance in one column
[ "", "sql", "database-design", "decimal", "" ]
Assume I have a subscriptions table: ``` uid | subscription_type ------------------------ Alex | type1 Alex | type2 Alex | type3 Alex | type4 Ben | type2 Ben | type3 Ben | type4 ``` I want to select only the users that have more than 2 subscriptions but never subscribed with type1. The expected result is selecting "Ben" only. I can easily find the users that have more than 2 subscriptions using: ``` SELECT uid FROM subscribes GROUP BY uid HAVING COUNT(*) > 2 ``` But how do I check that a certain value never exists in a group? Thanks for the help!
Try this query: ``` SELECT uid FROM subscribes GROUP BY uid HAVING COUNT(*) > 2 AND max( CASE "subscription_type" WHEN 'type1' THEN 1 ELSE 0 END ) = 0 ```
Create Sample Table: ``` CREATE TABLE subscribes ( uid NVARCHAR(MAX), subscription_type NVARCHAR(MAX) ) ``` Insert Values: ``` INSERT INTO subscribes VALUES ('Alex', 'type1'), ('Alex', 'type2'), ('Alex', 'type3'), ('Alex', 'type4'), ('Ben', 'type2'), ('Ben', 'type3'), ('Ben', 'type4') ``` SQL Query: ``` SELECT uid FROM subscribes GROUP BY uid HAVING COUNT(*) > 2 AND MAX(CASE subscription_type WHEN 'type1' THEN 1 ELSE 0 END) = 0 ``` Output: ``` ====== |uid | ------ |Ben | ====== ```
How to check if value exists in each group (after group by)
[ "", "sql", "postgresql", "" ]
I have a table like this: ``` EMPNO ENAME JOB MGR 7369 SMITH CLERK 7902 7499 ALLEN SALESMAN 7698 7521 WARD SALESMAN 7698 7566 JONES MANAGER 7839 7654 MARTIN SALESMAN 7698 7698 BLAKE MANAGER 7839 7782 CLARK MANAGER 7839 7788 SCOTT ANALYST 7566 7839 KING PRESIDENT null 7844 TURNER SALESMAN 7698 7876 ADAMS CLERK 7788 7900 JAMES CLERK 7698 7902 FORD ANALYST 7566 7934 MILLER CLERK 7782 ``` I need to select all managers with 2 employees only (empl. MGR field has to be equal to MANAGER's EMPNO) Can I do it with PARTITION or other OLAP function?
> Can I do it with PARTITION Sure. Analytics would certainly do it. Use **COUNT() OVER()**. For example, using standard EMP table in SCOTT schema: ``` SQL> WITH DATA AS( 2 SELECT e.*, count(empno) OVER(PARTITION BY mgr ORDER BY NULL) cnt 3 FROM emp e 4 ) 5 SELECT empno, ename, mgr 6 FROM DATA 7 WHERE cnt = 2; EMPNO ENAME MGR ---------- ---------- ---------- 7902 FORD 7566 7788 SCOTT 7566 ```
For example: ``` SELECT manager_id FROM employees GROUP BY manager_id HAVING count(employee_id)=2 ```
select managers with 2 employees
[ "", "sql", "oracle", "" ]
I have MS SQL 2012 database with ~80 tables. Each table has a column `UserID` that identifies the user who created or last edited the record. I would like to write the sql statement that would give me the number of created/edited records for some userID in all database tables. For example, user with UserID = 1 is the author of 3 records in Table1 and author of 2 records in Table2. Sql statement would need to give me a result like this: * UserID NumberOfRecords * 1 5 How to do it? Thanks.
I suspect the `EXEC sp_MSForEachTable` solution will not work if the database has a few more tables that don't have that UserID column, unless you explicitly handle such failures using a TRY/CATCH block; in that case it will surely fail. Here is a solution that considers only those tables which have the required column. ``` -- Get the list of tables having the required column and store them into a temp table. Select ID = IDENTITY(Int,1,1), Object_Name(object_id) As TableName Into #ReqTables From sys.columns Where name = 'UserID' -- Create a table to store the row count results. Create Table #RowCounts ( Row_Count Int , UserID Int ) -- Declare variables Declare @min Int, @max Int, @TableName Nvarchar(255) -- Get min and max IDs from the list of required tables, for looping Select @min = Min(Id), @max = Max(ID) From #ReqTables -- Loop from min to max While(@min <= @max) Begin -- Get the table for the given loop counter Select @TableName = TableName From #ReqTables Where Id = @min -- Execute the dynamic SQL Exec ('Insert Into #RowCounts (Row_Count, UserID) Select Count(*), UserID From ' + @TableName + ' Group by UserID') -- Increment the counter Set @min = @min + 1 End -- Finally, aggregate across all tables Select UserID, Sum(Row_Count) As NumberOfRecords From #RowCounts Group By UserID ```
Try this, it builds a dynamic query for all tables in your database and then executes it: ``` DECLARE @sql NVARCHAR(MAX) SELECT @sql = ISNULL(@sql, '') + '(SELECT COUNT(*) FROM ' + QUOTENAME(TABLE_NAME) + ' WHERE UserID = 1) +' FROM <DATABASE_NAME>.INFORMATION_SCHEMA.TABLES SELECT @sql = 'SELECT ' + LEFT(@sql, LEN(@sql) - 1) -- remove the trailing + --PRINT(@sql) -- we may want to use PRINT to debug the SQL EXEC(@sql) ``` Type your database name instead of `<DATABASE_NAME>` before you run it.
How to count records in multiple tables containing the same column?
[ "", "sql", "sql-server", "" ]
I've got a table with a lot of entries per day. Each entry has a "Level". What I'm trying to do is to select, for each day, how many records exist for each level. The levels can differ, so first of all I need to select all distinct levels and then show, for each day, how many records there are for each level. If there are no records for a level on a given day, the value should be 0. So to illustrate my needs, my table has the following content: ``` |Date | Level | RecId | ------------------------------ |2014-06-18 | 1 | 1 |2014-06-18 | 1 | 2 |2014-06-18 | 2 | 3 |2014-11-06 | 1 | 4 |2014-11-11 | 2 | 5 |2014-11-13 | 3 | 6 ``` What I'd like to have is the following: ``` |Date | Level | CountOfRecords | -------------------------------------- |2014-06-18 | 1 | 2 |2014-06-18 | 2 | 1 |2014-08-18 | 3 | 0 |2014-11-06 | 1 | 1 |2014-11-06 | 2 | 0 |2014-11-06 | 3 | 0 |2014-11-11 | 1 | 0 |2014-11-11 | 2 | 1 |2014-11-11 | 3 | 0 |2014-11-13 | 1 | 0 |2014-11-13 | 2 | 0 |2014-11-13 | 3 | 1 ``` The reason I need this is to display the values in a line chart, where each level will be a line. Thanks for your help; I'm still searching on Google in the meantime... Cheers
try this: ``` SELECT T.[Date], T.[Level], COALESCE(COUNT(D.[Level]),0) AS CountOfRecords FROM ( SELECT DISTINCT B.Date, A.[Level] FROM tablename AS A INNER JOIN (SELECT DISTINCT [Date] FROM tablename) AS B ON 1=1 ) AS T LEFT JOIN tablename AS D ON T.[Level] = D.[Level] AND T.[Date] = D.[Date] GROUP BY T.[Date], T.[Level] ORDER BY T.[Date], T.[Level] ``` [test is here](https://data.stackexchange.com/stackoverflow/query/395213)
Produce `cross join` of distinct `dates` and `levels` and then `left join` on actual table: ``` ;with cte1 as(select distinct level from tablename), cte2 as(select distinct date from tablename) select c1.level, c2.date, count(distinct t.RecId) as CountOfRecords from cte1 c1 cross join cte2 c2 left join tablename t on t.date = c2.date and t.level = c1.level group by c1.level, c2.date ``` **EDIT:** ``` DECLARE @t TABLE ( Date DATE , Level INT , RecId INT ) INSERT INTO @t VALUES ( '2014-06-18', 1, 1 ), ( '2014-06-18', 1, 2 ), ( '2014-06-18', 2, 3 ), ( '2014-11-06', 1, 4 ), ( '2014-11-11', 2, 5 ), ( '2014-11-13', 3, 6 ) ;with cte1 as(select distinct level from @t), cte2 as(select distinct date from @t) select c1.level, c2.date, count(distinct t.RecId) as CountOfRecords from cte1 c1 cross join cte2 c2 left join @t t on t.date = c2.date and t.level = c1.level group by c1.level, c2.date ``` Output: ``` level date CountOfRecords 1 2014-06-18 2 2 2014-06-18 1 3 2014-06-18 0 1 2014-11-06 1 2 2014-11-06 0 3 2014-11-06 0 1 2014-11-11 0 2 2014-11-11 1 3 2014-11-11 0 1 2014-11-13 0 2 2014-11-13 0 3 2014-11-13 1 ```
SQL Server: Display 0 in Group by with Distinct values
[ "", "sql", "sql-server", "date", "group-by", "" ]
I need to select a set of data from a table (`TableA`), but only if it's not in another table (`TableB`). ``` SELECT thisData FROM dbo.TableA WHERE thisData is not existing in dbo.TableB ``` I'm not really well versed in SQL.
You can use a `not exists` condition: ``` SELECT thisData FROM dbo.TableA a WHERE NOT EXISTS ( SELECT * FROM TableB b WHERE a.thisData = b.thisData ) ```
You can do `EXCEPT`: ``` SELECT thisData FROM dbo.TableA except SELECT thisData FROM dbo.TableB ``` Or, a more general solution, `NOT EXISTS`: ``` select * from dbo.TableA ta where not exists (select 1 from dbo.TableB tb where tb.thiscolumn1 = ta.thiscolumn1 [ and tb.thiscolumn2 = ta.thiscolumn2 etc] ) ```
How can I select a data if that data is not existing in another table?
[ "", "sql", "sql-server", "" ]
I am having a difficult time wrapping my head around the path for solving a problem in DB2. I have three tables that look like this... **PARENT** ``` id | label -------------- 1 | One 2 | Two 3 | Three ``` **TABLE1** ``` id | parentid | eventdate ------------------------- 1 | 1 | 2015-11-01 2 | 1 | 2015-12-01 3 | 2 | 2015-10-01 4 | 2 | 2015-09-01 5 | 3 | 2015-08-01 6 | 3 | 2015-07-01 ``` **TABLE2** ``` id | parentid | eventdate ------------------------- 1 | 1 | 2015-11-15 2 | 1 | 2015-12-15 3 | 2 | 2015-07-15 4 | 2 | 2015-09-15 5 | 3 | 2015-08-15 6 | 3 | 2015-05-15 ``` Ultimately, I need to find the max date from either table for each parent id. My thought is to UNION two SELECTs, each being JOINed to PARENT, but I am at a complete loss as to how to only pull back a single row for each parent that consists of the max date from either TABLE1 or TABLE2 like this: ``` One: 2015-12-15 Two: 2015-10-01 Three: 2015-08-15 ``` If anyone could offer some guidance I would be extremely grateful.
You are on the right track. Use a union in a subquery and then join PARENT and GROUP BY label to get the MAX date. ``` SELECT label, MAX(eventdate) AS maxeventdate FROM ( SELECT parentid, eventdate FROM TABLE1 UNION ALL SELECT parentid, eventdate FROM TABLE2) JOIN PARENT ON (id = parentid) GROUP BY label ```
``` SELECT label, CASE WHEN max(t1.eventdate) > max(t2.eventdate) THEN max(t1.eventdate) ELSE max(t2.eventdate) END as eventdate FROM PARENT p JOIN TABLE1 t1 ON p.id = t1.parentid JOIN TABLE2 t2 ON p.id = t2.parentid GROUP BY p.label ```
DB2 JOIN, UNION, and pull max value from each group
[ "", "sql", "db2", "union", "" ]
I have a number of months. I wish to represent them as a number of years and months in the format YYMM e.g. ``` 5 -> 0005 13 -> 0101 24 -> 0200 ``` Does anyone know of a non-convoluted way to do this in Oracle SQL?
A little bit of maths and some formatting are the way to go: ``` with sample_data as (select 5 num from dual union all select 13 num from dual union all select 24 num from dual union all select 1400 num from dual) select to_char(floor(num/12), 'fm9999909')||to_char(mod(num, 12), 'fm09') yrmn from sample_data; YRMN ----------- 0005 0101 0200 11608 ``` I included one that had more than 100 years just to show you how it might look; I don't know if that's a possibility in your case, or if you'd want the other values to be zero-padded since you didn't say.
``` select lpad(trunc(yourValue/12),2,'0')||lpad(mod(yourValue,12),2,'0') result from dual ```
Converting a number of months to date string
[ "", "sql", "oracle", "" ]
Take the following statement: ``` SELECT SUM( IF( talktime > 4 AND status = 'answer' , 1, 0 ) ) as count FROM table ``` The output should be count the rows that fulfill both conditions. Is the statement valid? EDIT: I'm interested in the multiple condition part in IF.
This statement: ``` SELECT SUM( IF( talktime > 4 AND status = 'answer' , 1, 0 ) ) as count FROM table ``` is kind of correct, with caveats: * `table` is not allowed as a table name, because it is a reserved word. It needs to be escaped. * `count` is a poor choice for a column name, because it is a built-in function name You can also simplify the query because the `if()` is not needed in MySQL: ``` SELECT SUM(talktime > 4 AND status = 'answer') as cnt FROM `table`; ``` Of course, you can move the comparison logic to a `WHERE` clause, but this is keeping the same structure as the original query.
> **IF(expr1,expr2,expr3)** > > If expr1 is TRUE then IF() returns expr2; otherwise it returns expr3 So the above code will return `1` when the conditions are satisfied and `0` otherwise, which means the sum will be the number of rows satisfying both conditions. --- For example, if you want to display the total number of calls, short calls and long calls (assuming `talktime` is the length of the call duration): ``` SELECT SUM( IF( talktime > 4 , 1, 0 ) ) as No_of_Long_Calls, SUM( IF( talktime < 4 , 1, 0 ) ) as No_of_Short_Calls, COUNT(*) AS Total_Calls FROM table_name WHERE status = 'answer' ``` --- Yes, it is a valid SELECT statement and it will work. Hope this helps.
Multiple condition in IF in SELECT
[ "", "mysql", "sql", "" ]
(Caution: Newbie alert) I have to put a count of the subsections of a generated view along with the view itself. Is it possible? For example, this is the generated view from a SQL select: ``` Client Type Year 8963 Rural 2012 9044 City 2013 8963 Rural 2014 5145 Rural 2014 5145 City 2012 ``` What I want displayed is: ``` Client Type Year CountofRural2012 CountofCity2012 CountofRural2013 8963 Rural 2012 1 1 0 9044 City 2013 8963 Rural 2014 5145 Rural 2014 5145 City 2012 ``` ...and so on for all the count permutations.
First of all, I'd advise you to look into functions such as pivot. <https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/pivoting_tables56?lang=en> Secondly, if the fields you want to create are known in advance (meaning, you want to create fields for specific years), you can try to sum/count a case field. ``` Select client, type, year, sum(case when type='Rural' and year=2012 then 1 else 0 end) countOfRural2012 from YOUR_TABLE Group by client, type, year ```
Look into [Subqueries](http://www.tutorialspoint.com/sql/sql-sub-queries.htm) and the [Count](http://www.w3schools.com/sql/sql_functions.asp) function. You can wrap a query in another query and use aggregates to count, sum or even do standard deviations on the results.
show a SQL select along with a count of itself?
[ "", "sql", "count", "db2", "" ]
In the PostgreSQL 9.3.4 release notes it says: > Ensure that the planner sees equivalent VARIADIC and non-VARIADIC function calls as equivalent (Tom Lane) I searched the PostgreSQL manual and couldn't find a definition of what it is. I found that it's related to the mode of a function's arguments (IN, OUT, VARIADIC) but I didn't understand what it means. When would I want to use it? What does it mean in terms of performance if a function has the VARIADIC property?
Variadic functions are those with an undefined number of arguments, they are present in many programming and query languages. In the case of PostgreSQL, you can find an example at <http://www.postgresql.org/docs/9.1/static/xfunc-sql.html> (35.4.5. SQL Functions with Variable Numbers of Arguments): > Effectively, all the actual arguments at or beyond the VARIADIC > position are gathered up into a one-dimensional array, as if you had > written
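For a concrete picture, the linked page gives an example along these lines: a user-defined SQL function that accepts any number of numeric arguments, which PostgreSQL gathers into a single array parameter:

```sql
CREATE FUNCTION mleast(VARIADIC arr numeric[]) RETURNS numeric AS $$
    SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i);
$$ LANGUAGE SQL;

SELECT mleast(10, -1, 5, 4.4);  -- the four arguments arrive as one array; returns -1
```

You can call it with one argument or twenty; every argument at or beyond the VARIADIC position ends up in `arr`.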
When you are not sure about the number of parameters, you use VARIADIC. You can refer to [VARIADIC FUNCTIONS IN POSTGRESQL](http://www.postgresonline.com/journal/archives/211-Variadic-Functions-in-PostgreSQL.html) for details. See the wiki about [Variadic functions](https://en.wikipedia.org/wiki/Variadic_function): > In computer programming, a variadic function is a function of > indefinite arity, i.e., one which accepts a variable number of > arguments. Support for variadic functions differs widely among > programming languages.
What does VARIADIC declarations mean in PostgreSQL?
[ "", "sql", "database", "postgresql", "variadic-functions", "variadic", "" ]
I'm working with a database containing customer orders. These orders contain the customer id, order month, order year, order half month( either first half 'FH' or last half 'LH' of the month), and quantity ordered. I want to query monthly totals for each customer for given month. Here's what I have so far. ``` SELECT id, half_month, month, year, SUM(nbr_ord) FROM Orders WHERE month = 7 AND year = 2015 GROUP BY id, half_month, year, month ``` The problem with this is that if a customer did not order anything during one half\_month there will not be a row returned for that period. I want there to be a row for each customer for every half month. If they didn't order anything during a half month then a row should be returned with their id, the month, year, half month, and 0 for number ordered.
First, generate all the rows, which you can do with a `cross join` of the customers and the time periods. Then, bring in the information for the aggregation: ``` select i.id, t.half_month, t.month, t.year, coalesce(sum(nbr_ord), 0) from (select distinct id from orders) i cross join (select distinct half_month, month, year from orders where month = 7 and year = 2015 ) t left join orders o on o.id = i.id and o.half_month = t.half_month and o.month = t.month and o.year = t.year group by i.id, t.half_month, t.month, t.year; ``` Note: you might have other sources for the `id` and date parts. This pulls them from `orders`.
If you know the entire dataset has an occurrence of each half_month, month, year combination, you could use the listing of those 3 things as the left side of a left join. That would look like this: ``` Select t1.half_month, t1.month, t1.year, t2.ID, t2.nbr_ord from (Select distinct half_month, month, year from Orders) t1 Left Join (SELECT id, half_month, month, year, SUM(nbr_ord) nbr_ord FROM Orders WHERE month = 7 AND year = 2015 GROUP BY id, half_month, year, month) t2 on t1.half_month = t2.half_month and t1.month = t2.month and t1.year = t2.year ```
How can I return a row for each group even if there were no results?
[ "", "sql", "oracle", "" ]
I have a table in SQL that I would like to have a unique constraint on, so that a value cannot be inserted if it already exists in either of two columns. For example, if I have 2 columns, I would like an insert to be rejected if either of the new row's values already exists in column A or column B of any row. Is this possible and if so how is it done? example: ``` Column A | Column B -------------------- 4 | 6 ``` I would want any row that tries to insert 4 or 6 not to be allowed into the table
Trigger with ROLLBACK TRANSACTION is the way to go. ``` create trigger dbo.something after insert as begin if exists ( select * from inserted where ...check here if your data already exists... ) begin rollback transaction raiserror ('some message', 16, 1) end end ```
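To illustrate the existence check, here is one possible shape. The table and column names (`dbo.MyTable`, `ColA`, `ColB`) are placeholders for your actual schema, and keep in mind the trigger fires after the insert, so the new rows are already in the table; a value that now occurs more than once across both columns therefore means a conflict:

```sql
create trigger dbo.MyTable_BlockDuplicates on dbo.MyTable
after insert
as
begin
    if exists (
        -- count how often each freshly inserted value occurs anywhere in the table
        select v.val
        from (select ColA as val from dbo.MyTable
              union all
              select ColB from dbo.MyTable) v
        where v.val in (select ColA from inserted
                        union
                        select ColB from inserted)
        group by v.val
        having count(*) > 1
    )
    begin
        rollback transaction
        raiserror ('value already exists in Column A or Column B', 16, 1)
    end
end
```

This also rejects a batch that inserts the same value twice, or a single row with the same value in both columns; adjust if those cases should be allowed.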
You can create a function which takes in these values & create a check constraint on it (referencing your functions return values) to your table. ``` create table t11 (Code int, code2 int) create function fnCheckValues (@Val1 int, @Val2 int) Returns int /*YOu can write a better implementation*/ as Begin DECLARE @CntRow int IF(@Val1 IS NULL OR @Val2 IS NULL) RETURN 0 select @CntRow = count(*) from t11 where Code in (@Val1,@Val2 ) or Code2 in (@Val1,@Val2 ) RETURN @CntRow End GO alter table t11 Add constraint CK_123 check ([dbo].[fnCheckValues]([Code],[code2])<=(1)) ```
SQL unique constraint on either of 2 columns
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have seen some questions related to my issue, but never exactly the same structure, and as I am quite new to SAS/SQL I can't find a proper way out of my problem. I am trying for a couple of days to join multiple tables (about thirty) Basically, my problem boils down to this: Let say I have 3 tables: TAB, with 2 variables V1 and V2 t1, with 2 variables V2 and V3 t2, with the same 2 variables V2 and V3 I want to join TAB with t1 or t2 depending on the value of V2 to obtain the Full\_TABLE ``` TAB V1 V2 A 1 B 1 C 2 D 2 t1 V2 V3 1 x 1 y 1 z t2 V2 V3 2 h 2 i 2 j ``` intended result: ``` Full_TABLE V1 V2 V3 A 1 x A 1 y A 1 z B 1 x B 1 y B 1 z C 2 h C 2 i C 2 j D 2 h D 2 i D 2 j ``` Logically, it is a conditional join based on the value of V2: * if V2=1 then merge TAB with t1 * if V2=2 then merge TAB with t2 I don't know if it is possible to do it automatically, To give a rough idea, V1 has 30.000 different values, V2 has 27 (so 27 tables t1-t27), V3 has 10 values per value of V2, thus I expect a Full\_TABLE of 30.000\*10=300.000 rows So I can manage a semi-automated solution based on V2 and/or V3, but not V1 Any leads in SAS or SQL (or proc sql...) highly appreciated ! S
You can do this with `proc sql` and some SQL cleverness: ``` proc sql; select tab.v1, tab.v2, coalesce(t1.v3, t2.v3) as v3 from tab left join t1 on tab.v2 = 1 left join t2 on tab.v2 = 2; ```
The data step solution is interesting: ``` data want; set tab; if v2=1 then do; do _n_ = 1 to nobs_t1; set t1 point=_n_ nobs=nobs_t1; output; end; end; else if v2=2 then do; do _n_=1 to nobs_t2; set t2 point=_n_ nobs=nobs_t2; output; end; end; run; ``` That's the standard "cartesian join in the data step", just done twice, with an if telling SAS which to do. Basically, you use `point` to iterate through either t1 or t2 depending on the value of v2. This could be extended through the macro language if needed to allow for many tables/variable values (similarly to how you would extend the sql solution).
Conditional combining of multiple datasets/tables based on a value of one variable
[ "", "sql", "join", "merge", "sas", "" ]
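A quick way to sanity-check the accepted coalesce-of-two-left-joins idea is to reproduce it on a throwaway database. The sketch below uses SQLite through Python's standard library rather than SAS, so treat it purely as an illustration of the join logic; table and column names follow the question.

```python
import sqlite3

# Rebuild the question's three tables in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab (v1 TEXT, v2 INTEGER);
CREATE TABLE t1 (v2 INTEGER, v3 TEXT);
CREATE TABLE t2 (v2 INTEGER, v3 TEXT);
INSERT INTO tab VALUES ('A',1),('B',1),('C',2),('D',2);
INSERT INTO t1 VALUES (1,'x'),(1,'y'),(1,'z');
INSERT INTO t2 VALUES (2,'h'),(2,'i'),(2,'j');
""")

# The accepted answer's pattern: each LEFT JOIN only "fires" for the matching
# v2 value, and COALESCE picks whichever lookup table actually matched.
rows = con.execute("""
SELECT tab.v1, tab.v2, COALESCE(t1.v3, t2.v3) AS v3
FROM tab
LEFT JOIN t1 ON tab.v2 = 1
LEFT JOIN t2 ON tab.v2 = 2
ORDER BY tab.v1, v3
""").fetchall()
print(rows)
```

With 27 lookup tables this pattern means 27 LEFT JOINs and a 27-argument COALESCE, which is why generating the query with the macro language (or stacking t1..t27 into one lookup table keyed by V2 and doing a single inner join) tends to scale better.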
I have the following data ``` Activity | Indicator | Impact ------------------------------------------------ Payroll risk | Indicator A | Low Payroll risk | Indicator B | Low Payroll risk | Indicator C | Low Wrong selections | Indicator D | High Wrong selections | Indicator E | High Fraudulant Cred | Indicator F | Medium Fraudulant Cred | Indicator G | Medium ``` Then this data should be received in the following format ``` Activity | Indicator | Indicator | Indicator | Impact -------------------------------------------------------------------- Payroll risk | Indicator A | Indicator B | Indicator C | Low Wrong selections | Indicator D | Indicator E | | High Fraudulant cred | Indicator F | Indicator G | | Medium ``` What can be the process for this operation in SQL? The number of columns should be the same as the maximum number of rows for any common record. Thanks.
You can do that by using CASE or PIVOT. Here is an example of PIVOT for your sample data: ``` SELECT Activity, Impact, [1], [2], [3] FROM ( SELECT Activity, Indicator, Impact ,ROW_NUMBER()OVER(PARTITION BY Activity ORDER BY Indicator) AS R FROM @T ) AS M PIVOT ( MAX(Indicator) FOR R IN ([1], [2], [3]) ) P ``` In case you want to do it dynamically you can refer to: [Dynamic PIVOT in SQL Server](http://mangalpardeshi.blogspot.com/2015/11/dynamic-pivot-in-sql.html)
One way of doing it - ``` select Activity , [Indicator] = max(case when RowID = 1 then Indicator end) , [Indicator] = max(case when RowID = 2 then Indicator end) , [Indicator] = max(case when RowID = 3 then Indicator end) ,Impact from ( select Activity , Indicator , Impact , RowID = row_number() over (partition by Impact order by Activity) from #PIVOT ) SourceTable group by Activity,Impact ```
How to select dynamic number of columns according to rows in sql
[ "", "sql", "sql-server", "" ]
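Outside of SQL, the reshaping both answers perform (number the indicators within each activity, then spread them across columns) can be sketched in a few lines of Python. The data is copied from the question, and the output column count is derived from the widest group, matching the "same as the maximum number of rows" requirement:

```python
# Rows as (activity, indicator, impact), exactly as in the question's first table.
rows = [
    ("Payroll risk", "Indicator A", "Low"),
    ("Payroll risk", "Indicator B", "Low"),
    ("Payroll risk", "Indicator C", "Low"),
    ("Wrong selections", "Indicator D", "High"),
    ("Wrong selections", "Indicator E", "High"),
    ("Fraudulant Cred", "Indicator F", "Medium"),
    ("Fraudulant Cred", "Indicator G", "Medium"),
]

# Group indicators per activity; this is what ROW_NUMBER() + PIVOT/MAX(CASE) do.
pivot = {}
for activity, indicator, impact in rows:
    entry = pivot.setdefault(activity, {"indicators": [], "impact": impact})
    entry["indicators"].append(indicator)

# Number of Indicator columns needed = size of the largest group.
width = max(len(e["indicators"]) for e in pivot.values())
print(width)  # 3
```

This is why the dynamic-PIVOT variant matters: the column list `[1], [2], [3]` has to grow whenever some activity gains a fourth indicator.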
I am trying to figure out how to calculate the number of days the customer did not eat any candy. Assume that the customer eats 1 candy/day. If the customer purchases more candy, it gets added to the previous stock. E.g. ``` Day Candy Puchased 0 30 40 30 65 30 110 30 125 40 170 30 ``` The answer here is 20. Meaning on the 0th day, the customer bought 30 candies and his next purchase was on the 40th day, so he did not get to eat any candy between the 30th and 39th day; in the same way he did not eat any candy between the 100th and 109th day. Can anyone help me write the query? I think I have the wrong logic in my query. ``` select sum(curr.candy_purchased-(nxt.day-curr.day)) as diff from candies as curr left join candies as nxt on nxt.day=(select min(day) from candies where day > curr.day) ```
You need a recursive CTE. First I need to create a `row_id`, so I use `row_number`. Now I need the base case for the recursion. * `Day`: how many days have passed (0 from db) * `PrevD`: the previous day value, so you can calculate `Day` (start at 0) * `Candy Puchased`: how many candies were bought (30 from db) * `Remaining`: how many candies are left after eating (start at 0) * `NotEat`: how many days he couldn't eat candy (start at 0) * `Level`: recursion level (start at 0) Recursion case: * `Day`, `PrevD`, `Candy Puchased` are easy * `Remaining`: if I eat more than I have, then 0 * `NotEat`: keep adding the difference when he doesn't have candy. [**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/0a7cd/1) ``` WITH Candy as ( SELECT ROW_NUMBER() over (order by [Day]) as rn, * FROM Table1 ), EatCandy ([Day], [PrevD], [Candy Puchased], [Remaining], [NotEat], [Level]) as ( SELECT [Day], 0 as [PrevD], [Candy Puchased], [Candy Puchased] as [Remaining], 0 as [NotEat], 1 as [Level] FROM Candy WHERE rn = 1 UNION ALL SELECT c.[Day] - ec.[PrevD], c.[Day], c.[Candy Puchased], c.[Candy Puchased] + IIF((c.[Day] - ec.[PrevD]) > ec.[Remaining], 0, ec.[Remaining] - (c.[Day] - ec.[PrevD])), ec.[NotEat] + IIF((c.[Day] - ec.[PrevD]) > ec.[Remaining], (c.[Day] - ec.[PrevD]) - ec.[Remaining], 0), ec.[Level] + 1 FROM Candy c JOIN EatCandy ec ON c.rn = ec.[level] + 1 ) select * from EatCandy ``` **OUTPUT** ``` | Day | PrevD | Candy Puchased | Remaining | NotEat | Level | |-----|-------|----------------|-----------|--------|-------| | 0 | 0 | 30 | 30 | 0 | 1 | | 40 | 40 | 30 | 30 | 10 | 2 | | 25 | 65 | 30 | 35 | 10 | 3 | | 45 | 110 | 30 | 30 | 20 | 4 | | 15 | 125 | 40 | 55 | 20 | 5 | | 45 | 170 | 30 | 40 | 20 | 6 | ``` Just add `SELECT MAX(NotEat)` over the last query.
Nice question. Check my answer and also try with different sample data. and please,if with different sample data it is not working then let me know. ``` declare @t table([Day] int, CandyPuchased int) insert into @t values (0, 30),(40,30),(65, 30) ,(110, 30),(125,40),(170,30) select * from @t ;With CTE as ( select *,ROW_NUMBER()over(order by [day])rn from @t ) ,CTE1 as ( select [day],[CandyPuchased],rn from CTE c where rn=1 union all select a.[Day],case when a.Day-b.Day<b.CandyPuchased then a.CandyPuchased+(b.CandyPuchased-(a.Day-b.Day)) else a.CandyPuchased end CandyPuchased ,a.rn from cte A inner join CTE B on a.rn=b.rn+1 ) --select * from CTE1 select sum(case when a.Day-b.Day>b.CandyPuchased then (a.Day-b.Day)-b.CandyPuchased else 0 end)[CandylessDays] from CTE1 A inner join CTE1 b on a.rn=b.rn+1 ```
calculate sum based on value of other row in another column
[ "", "sql", "sql-server", "sql-server-2008", "" ]
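Before reaching for a recursive CTE, it can help to confirm the expected answer with a plain procedural pass over the purchases. This Python sketch assumes, as the question does, one candy eaten per day and purchases adding to the remaining stock:

```python
def candyless_days(purchases):
    """Return the number of days with no candy to eat.

    purchases: list of (day, amount) pairs; one candy is eaten per day,
    and a purchase tops up whatever stock is left.
    """
    purchases = sorted(purchases)
    stock, starved, prev_day = 0, 0, None
    for day, amount in purchases:
        if prev_day is not None:
            elapsed = day - prev_day
            if elapsed > stock:
                # Ran out of candy before the next purchase day.
                starved += elapsed - stock
                stock = 0
            else:
                stock -= elapsed
        stock += amount
        prev_day = day
    return starved

result = candyless_days([(0, 30), (40, 30), (65, 30), (110, 30), (125, 40), (170, 30)])
print(result)  # 20 for the sample data
```

Tracing this by hand reproduces the CTE's `NotEat` column: 10 starved days before day 40 and another 10 before day 110, for 20 in total.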
When I try to restore a SQL Server database, I get: "An exception occurred while executing a Transact-SQL statement or batch. Incorrect syntax near 's' Unclosed quotation mark near character string ". (Microsoft SQL Server, Error:102)" I am not using any script, just clicking "Restore Database", selecting the file (it has no extension, by the way), and I get this error. I tried to search for a solution, but can't find anyone that had a similar problem. I would really appreciate any input! Thanks!
This is due to a permission problem. First you need to move the *.bak file to the data directory of the SQL Server, then try again.
Make sure that you didn't name the directory using special characters. For example, I named my directory `SQLDB's\` and got that error. To resolve it, I removed the ' and it worked.
Can't restore a SQL Server database from a backup file - Error: 102
[ "", "sql", "sql-server", "" ]
I don't know enough about databases to find the right words to ask this question, so let me give an example to explain what I'm trying to do: Suppose I want the primary key for a table to be an ID I grab from an API, but the majority of those API requests result in 404 errors. As a result, my table would look like this: I also don't know how to format a table-like structure on Stack Overflow, so this is going to be a rough visual: ``` API_ID_PK | name ------------------ 1 | Billy 5 | Timmy 23 | Richard 54 | Jobert 104 | Broccoli ``` Is it okay for the ID's not to be sequentially separated by 1 digit? Or should I do this: ``` ID PK | API_ID | NAME ---------------------------------------- 1 | 1 | Billy 2 | 5 | Timmy 3 | 23 | Richard 4 | 54 | Jobert 5 | 104 | Broccoli ``` Would the second table be more efficient for indexing reasons? Or is the first table perfectly fine? Thanks!
No, there won't be any effect on efficiency if you have non-consecutive IDs. In fact, MySQL (and other databases) allow for you to set a variable [`auto_increment_increment`](https://dev.mysql.com/doc/refman/5.6/en/replication-options-master.html#sysvar_auto_increment_increment) to have the ID increment by more than 1. This is commonly used in multi-master setups.
It's fine to have IDs not sequential. I regularly use GUIDs for IDs when dealing with enterprise software where multiple business could share the same object and they're never sequential. The one thing to watch out for is if the numbers are the same. What's determining the ID value you're storing?
Is it okay to have non sequential ids as primary keys for a table in your database?
[ "", "mysql", "sql", "database", "indexing", "primary-key", "" ]
My SQL currently looks like this. ``` SELECT t1.field1, t1.field2, t1.field3, t1.field4, t1.field5, t1.field6, t1.field7, t1.field8, t2.field1, t2.field2, t2.field3, t2.field4, t2.field5, t2.field6, t2.field7, t2.field8, t2.field9, t3.field1, t4.field1, t5.field1, SUM(t6.field1), MIN(t6.THEDATE) FROM table1 t1 LEFT JOIN table2 t2 ON t1.field2 = t2.sameFieldName LEFT JOIN table3 t3 ON t2.field9 = t3.sameFieldName LEFT JOIN table4 t4 ON t1.field2 = t4.sameFieldName AND t2.field1 = t4.sameFieldName LEFT JOIN table5 t5 ON t4.field1 = t5.sameFieldName LEFT JOIN table6 t6 ON t4.field1 = t6.sameFieldName AND t4.colName1 = t6.sameFieldName WHERE t6.THEDATE BETWEEN SYSDATE - 70 AND SYSDATE - 50 AND t1.field2 = 'SUBMIT' AND t1.field3 LIKE 'H%' GROUP BY t1.field1, t1.field2, t1.field3, t1.field4, t1.field5, t1.field6, t1.field7, t1.field8, t2.field1, t2.field2, t2.field3, t2.field4, t2.field5, t2.field6, t2.field7, t2.field8, t2.field9, t3.field1, t4.field1, t5.field1; ``` The problem I have is I need to select with the condition based on the min date. But doing it this way will "show" the min date, but it will filter records based on whatever the last "theDate" value is. I know you cannot use agg functions in the where clause because the where only operates on a single record. So how can I get something that would work like this? ``` SELECT *, sum(somthing), min(theDate) FROM Table WHERE min(theDate) BETWEEN SYSDATE - 70 AND SYSDATE - 50 GROUP BY <<<ALL GROUP COLUMNS>>> ```
The main performance drain may be unnecessary columns in `GROUP BY`. This can happen if your `Table` refers to a denormalized table: ``` EMP (EMP_ID*, DEPT_ID, DEPT_NAME, SAL, THEDATE) ``` or if `Table` refers to a join, e.g. ``` EMP(EMP_ID*, DEPT_ID, SAL, THEDATE) DEPT(DEPT_ID*, DEPT_NAME) "Table" == EMP JOIN DEPT USING (DEPT_ID) ``` In either case, the query: ``` SELECT DEPT_ID, DEPT_NAME, SUM(SAL), MIN(THEDATE) FROM EMP GROUP BY DEPT_ID, DEPT_NAME HAVING MIN(THEDATE) >= SYSDATE-70 AND MIN(THEDATE) < SYSDATE-50; ``` will experience all the overhead of grouping the `DEPT_NAME`, even though `DEPT_NAME` will always have the same value for a given `DEPT_ID`. In other words, `DEPT_ID` is a candidate key of the selected columns. If the "*" in your `SELECT *, SUM(whatever)` has one or more candidate keys (commonly all of the "_ID" columns) that determine unique values for all other columns, then it will be far more efficient to do something like this: ``` SELECT DEPT_ID, MAX(DEPT_NAME) DEPT_NAME, SUM(SAL), MIN(THEDATE) FROM EMP GROUP BY DEPT_ID HAVING MIN(THEDATE) >= SYSDATE-70 AND MIN(THEDATE) < SYSDATE-50; ``` The performance difference can be especially dramatic if your dependent columns are long strings. Gordon Linoff's answer makes a good point (though I think his implementation can be further optimized, see below) - in some cases it can make sense to "pre-filter". 
Few rule-of-thumb indications that this might be faster: 1) majority (80% +) of rows have THEDATE older than 70 days 2) a single index on `Table` which includes most/all `GROUP BY` columns and, preferably, `THEDATE` 3) either a separate index on `THEDATE` or, the `THEDATE` is first column of the index noted in #2 4) alternatively for #3 - `Table` is partitioned by `THEDATE` (and index in #2 being a local index would be better still) Basic "pre-filter logic": sum over all `Table` rows which 1) don't belong to a grouping with any rows that are "too old" 2) do belong to a grouping with least 1 row that's "old enough but not too old" 3) the row itself is not "too old" ``` SELECT DEPT_ID, DEPT_NAME, SUM(SAL), MIN(THEDATE) FROM EMP E1 WHERE NOT EXISTS (SELECT 1 FROM EMP E2 WHERE E2.DEPT_ID = E1.DEPT_ID AND E2.DEPT_NAME=E1.DEPT_NAME AND E2.THEDATE < SYSDATE - 70) AND EXISTS (SELECT 1 FROM EMP E2 WHERE E2.DEPT_ID = E1.DEPT_ID AND E2.DEPT_NAME=E1.DEPT_NAME AND E2.THEDATE BETWEEN SYSDATE-70 AND SYSDATE - 50) AND E1.THEDATE >= SYSDATE -70 GROUP BY DEPT_ID, DEPT_NAME; ``` Final note: If both the candidate-key-grouping and pre-filter optimizations seem applicable, they can be applied in tandem: ``` SELECT DEPT_ID, MAX(DEPT_NAME) DEPT_NAME, SUM(SAL), MIN(THEDATE) FROM EMP E1 WHERE NOT EXISTS (SELECT 1 FROM EMP E2 WHERE E2.DEPT_ID = E1.DEPT_ID AND E2.THEDATE < SYSDATE - 70) AND EXISTS (SELECT 1 FROM EMP E2 WHERE E2.DEPT_ID = E1.DEPT_ID AND E2.THEDATE BETWEEN SYSDATE-70 AND SYSDATE - 50) AND E1.THEDATE >= SYSDATE -70 GROUP BY DEPT_ID; ``` Beyond that, there's probably not much more you can do to boost performance in the query (though PARALLELISM may be an option). To get the correct results any faster, you'd have to look at structural changes (mat views, indexes, partitioning options, etc) to support the query.
The syntax you posted isn't valid-- you can't `group by *` and you'd need to have an alias on the `select *` since you're selecting other columns. Assuming both of those are just artifacts of putting together a simplified example, you just need to use the `having` clause ``` SELECT a.*, sum(something), min(theDate) FROM table_name a GROUP BY <<list of columns in a>> HAVING min(theDate) BETWEEN sysdate - 70 AND sysdate - 50 ```
Oracle select records based on a min date where min date between
[ "", "sql", "oracle", "aggregate-functions", "" ]
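The HAVING-on-an-aggregate idea in the accepted answer is portable across databases. Here is a minimal sketch on SQLite via Python's stdlib; ISO date strings stand in for Oracle's SYSDATE arithmetic, and the single hypothetical table is a reduction of the original multi-table join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (dept_id INTEGER, sal INTEGER, thedate TEXT);
INSERT INTO emp VALUES
  (1, 100, '2015-09-10'), (1, 200, '2015-11-01'),  -- earliest date inside the window
  (2, 300, '2015-08-01'), (2, 400, '2015-09-15'),  -- earliest date before the window
  (3, 500, '2015-10-20');                          -- earliest date after the window
""")

# WHERE filters individual rows before grouping; HAVING filters whole groups,
# which is why the MIN()-based condition must live there.
rows = con.execute("""
SELECT dept_id, SUM(sal), MIN(thedate)
FROM emp
GROUP BY dept_id
HAVING MIN(thedate) BETWEEN '2015-09-01' AND '2015-10-01'
""").fetchall()
print(rows)  # only dept 1 qualifies
```

Note the contrast with the original query: putting the date condition in WHERE drops old rows before MIN() is computed, which silently changes which groups qualify.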
I'm curious if there is a way to insert into MySQL and avoid duplicates. I know there are three primary ways of accomplishing it: REPLACE, INSERT IGNORE, or INSERT ... ON DUPLICATE KEY. I believe each of these relies on a unique primary key, which for me is an auto-increment id field. This id field does not have any value for my particular inserting needs; instead I want to check whether a few fields already contain the same value and, if so, not insert, e.g.: ``` INSERT INTO table (name, team, email) VALUES (:n, :t, :e) ``` If I had four data sets like so ``` 1) array (":n" => "john doe", ":t" => 1, ":e" => john@doe.com); 2) array (":n" => "Jane doe", ":t" => 1, ":e" => john@doe.com); 3) array (":n" => "john doe", ":t" => 1, ":e" => john1@doe.com); 4) array (":n" => "john doe", ":t" => 2, ":e" => john1@doe.com); ``` Assuming there are no values in the table, 1, 3 and 4 would be inserted but 2 would not, as the team and email are not unique. I am aware I could create a fourth column, add the email and team to it to create a unique value, and use this; however I would prefer not to do so as it is redundant data. Thanks
If the `id` field has no value for you, don't bother with it. The desired behavior you're describing is that of a [composite key](https://stackoverflow.com/questions/5835978/how-to-properly-create-composite-primary-keys-mysql), so make a composite key. First, scrap the one that exists and is of no value... `ALTER TABLE YourTableName DROP PRIMARY KEY;` Then create the new composite one... `ALTER TABLE YourTableName ADD PRIMARY KEY (name, team, email);`
You can create an `UNIQUE` constrain ``` ALTER TABLE `tableName` ADD UNIQUE `unique_idx`(`name`, `team`, `email`); ```
Insert ignore multiple columns data unique
[ "", "mysql", "sql", "" ]
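The question's stated rule is that the (team, email) pair must be unique, so a composite constraint on just those two columns also works (the accepted answer puts all three columns in the key). A hedged sketch on SQLite, which spells MySQL's `INSERT IGNORE` as `INSERT OR IGNORE`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE members (
    name  TEXT,
    team  INTEGER,
    email TEXT,
    UNIQUE (team, email)   -- composite uniqueness, no surrogate id needed
)""")

data = [
    ("john doe", 1, "john@doe.com"),
    ("Jane doe", 1, "john@doe.com"),   # duplicate (team, email) -> silently skipped
    ("john doe", 1, "john1@doe.com"),
    ("john doe", 2, "john1@doe.com"),
]
con.executemany("INSERT OR IGNORE INTO members VALUES (?, ?, ?)", data)

count = con.execute("SELECT COUNT(*) FROM members").fetchone()[0]
print(count)  # 3: rows 1, 3 and 4 made it in, row 2 was ignored
```

In MySQL the equivalent would be `ALTER TABLE members ADD UNIQUE (team, email)` followed by `INSERT IGNORE`, keeping the auto-increment id column if other tables reference it.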
I have an example database where I store relationships between males and females. The schema and some data are shown in the image. [![enter image description here](https://i.stack.imgur.com/nmycH.png)](https://i.stack.imgur.com/nmycH.png) Is there a way to **select all men who are not in a BFF relationship**? In the example the answer will be **Bob, Charlie and Nick**. I think that I'm actually selecting data from table A without data from table B (with some condition). Is there a way to achieve this?
A question of this type suggests `not exists`. The tricky part is that the `not exists` subquery has a join: ``` select m.* from man m where not exists (select 1 from man_women_relationship mwr join type t on mwr.fk_type = t.id where mwr.fk_man = m.id and t.name = 'BFF' ); ```
I'm new to sql, so I may be wrong. But this is what I would try: select Name from Man\_Women\_Relationship join Type on Man\_Women\_Relationship.FK\_Type = Type.ID AND Type.ID = 4
Select from data from A table, with no match in B table
[ "", "mysql", "sql", "database", "join", "" ]
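The schema in the question is only shown as an image, so the table and column names below are assumptions reconstructed from the answers. With that caveat, the accepted NOT EXISTS pattern can be exercised on SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE man  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE type (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE man_women_relationship (fk_man INTEGER, fk_type INTEGER);
-- Hypothetical data: only Adam has a BFF-typed relationship.
INSERT INTO man VALUES (1,'Adam'),(2,'Bob'),(3,'Charlie'),(4,'Nick');
INSERT INTO type VALUES (4,'BFF'),(5,'Friend');
INSERT INTO man_women_relationship VALUES (1,4),(2,5),(3,5);
""")

names = {r[0] for r in con.execute("""
SELECT m.name
FROM man m
WHERE NOT EXISTS (SELECT 1
                  FROM man_women_relationship mwr
                  JOIN type t ON mwr.fk_type = t.id
                  WHERE mwr.fk_man = m.id
                    AND t.name = 'BFF')
""")}
print(names)
```

Note that Bob and Charlie are returned even though they have relationship rows; the subquery only disqualifies men whose relationship is of the BFF type, which is exactly the asked-for behavior.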
I am trying to extract a data from an Excel through different Excel using `ADODB.Connection` but when I am trying to retrieve data between from date to system date I am getting an automation error in Excel. I have checked various articles but I'm unable to validate those condition because I don't have SQL Server so I am directly putting into Excel coding but again same error I am getting. Please help...... ``` Sub get_data() Dim strSQL As String Dim cnn As New ADODB.Connection Dim rs As New ADODB.Recordset Dim DBPath As String Dim sconnect As String DBPath = "\\abc\Quality Report.xlsx" sconnect = "Provider=MSDASQL.1;DSN=Excel Files;DBQ=" & DBPath & ";HDR=Yes';" cnn.Open sconnect strSQL = "SELECT * FROM [Error_Log$] WHERE " If cboprocess.Text <> "" Then strSQL = strSQL & " [Process]='" & cboprocess.Text & "'" End If If cboaudittype.Text <> "" Then If cboprocess.Text <> "" Then strSQL = strSQL & " AND [Audit_Type]='" & cboaudittype.Text & "'" Else strSQL = strSQL & " [Audit_Type]='" & cboaudittype.Text & "'" End If End If If cbouser1.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Then strSQL = strSQL & " AND [User_Name]='" & cbouser1.Text & "'" Else strSQL = strSQL & " [User_Name]='" & cbouser1.Text & "'" End If End If If cborptmgr.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Then strSQL = strSQL & " AND [Reporting_Manager]='" & cborptmgr.Text & "'" Else strSQL = strSQL & " [Reporting_Manager]='" & cborptmgr.Text & "'" End If End If If cbotranstyp.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Or cborptmgr.Text <> "" Then strSQL = strSQL & " AND [Transaction_Type]='" & cbotranstyp.Text & "'" Else strSQL = strSQL & " [Transaction_Type]='" & cbotranstyp.Text & "'" End If End If If cboperiod.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Or cborptmgr.Text <> "" _ Or cbotranstyp.Text <> "" Then strSQL = strSQL & " AND 
[Period]='" & cboperiod.Text & "'" Else strSQL = strSQL & " [Period]='" & cboperiod.Text & "'" End If End If If cbolocation.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Or cborptmgr.Text <> "" _ Or cbotranstyp.Text <> "" Or cboperiod.Text <> "" Then strSQL = strSQL & " AND [Location]='" & cbolocation.Text & "'" Else strSQL = strSQL & " [Location]='" & cbolocation.Text & "'" End If End If If cbofatnfat.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Or cborptmgr.Text <> "" _ Or cbotranstyp.Text <> "" Or cboperiod.Text <> "" Or cbolocation.Text <> "" Then strSQL = strSQL & " AND [Fatal_NonFatal]='" & cbofatnfat.Text & "'" Else strSQL = strSQL & " [Fatal_NonFatal]='" & cbofatnfat.Text & "'" End If End If If cbostatus.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Or cborptmgr.Text <> "" _ Or cbotranstyp.Text <> "" Or cboperiod.Text <> "" Or cbolocation.Text <> "" Or cbofatnfat.Text <> "" Then strSQL = strSQL & " AND [Remarks]='" & cbostatus.Text & "'" Else strSQL = strSQL & " [Remarks]='" & cbostatus.Text & "'" End If End If If txtfromauditdt.Text <> "" Then If cboprocess.Text <> "" Or cboaudittype.Text <> "" Or cbouser1.Text <> "" Or cborptmgr.Text <> "" _ Or cbotranstyp.Text <> "" Or cboperiod.Text <> "" Or cbolocation.Text <> "" Or cbofatnfat.Text <> "" _ Or cbostatus.Text <> "" Then strSQL = strSQL & " AND [Audit_Date] BETWEEN '" & txtfromauditdt.Text & "' AND GETDATE()" Else strSQL = strSQL & " [Audit_Date] BETWEEN '" & txtfromauditdt.Text & "' AND GETDATE()" End If End If Debug.Print strSQL Set rs.ActiveConnection = cnn rs.Open strSQL, cnn Sheet1.Range("A42").CopyFromRecordset rs rs.Close cnn.Close End Sub ``` Below is debug print from strsql ``` SELECT * FROM [Error_Log$] WHERE [Audit_Date] BETWEEN '16-Nov-2015' AND getdate() ``` Error screenshot [![enter image description 
here](https://i.stack.imgur.com/YTXdq.jpg)](https://i.stack.imgur.com/YTXdq.jpg)
Finally found the cause of the error... It was the quotes in `'13-nov-15'` that I was using in the code; after replacing them with `#13-nov-15#` it worked. As per Tim Williams's answer, only one correction was needed for the other date, but the quotes were still appearing around the parameter date; now it's working fine. Thanks a lot, Tim Williams.
I think your problem is GetDate() Your approach to building the WHERE clause can also be much simpler: ``` Sub get_data() Dim strSQL As String, strWhere As String Dim cnn As New ADODB.Connection Dim rs As New ADODB.Recordset Dim DBPath As String Dim sconnect As String DBPath = "\\abc\Quality Report.xlsx" sconnect = "Provider=MSDASQL.1;DSN=Excel Files;DBQ=" & DBPath & ";HDR=Yes';" cnn.Open sconnect strSQL = "SELECT * FROM [Error_Log$] WHERE " strWhere = "" BuildWhere strWhere, cboprocess.Text, "Process" BuildWhere strWhere, cboaudittype.Text, "Audit_Type" BuildWhere strWhere, cbouser1.Text, "User_Name" BuildWhere strWhere, cborptmgr.Text, "Reporting_Manager" BuildWhere strWhere, cbotranstyp.Text, "Transaction_Type" BuildWhere strWhere, cboperiod.Text, "Period" BuildWhere strWhere, cbolocation.Text, "Location" BuildWhere strWhere, cbofatnfat.Text, "Fatal_NonFatal" BuildWhere strWhere, cbostatus.Text, "Remarks" If txtfromauditdt.Text <> "" Then strWhere = strWhere & IIf(strWhere <> "", " AND ", "") & "[Audit_Date] BETWEEN '" & _ txtfromauditdt.Text & "' AND #" & Format(Date, "mm/dd/yyyy") & "# " End If strSQL = strSQL & strWhere Debug.Print strSQL Set rs.ActiveConnection = cnn rs.Open strSQL, cnn Sheet1.Range("A42").CopyFromRecordset rs rs.Close cnn.Close End Sub Sub BuildWhere(ByRef strWhere As String, v As String, fld As String) If v <> "" Then strWhere = strWhere & IIf(strWhere <> "", " AND ", "") & _ "[" & fld & "] = '" & v & "'" End If End Sub ```
How to retrieve data between from date to system date using excel VBA (Automation Error)
[ "", "sql", "excel", "vba", "" ]
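A side note not raised in the thread: the quoting bug only existed because the WHERE clause is assembled by string concatenation. Drivers that support parameter placeholders avoid the problem entirely (ADODB can do the same via Command parameters). A minimal illustration with Python's sqlite3, where `?` placeholders carry the values and no quoting is ever written by hand:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE error_log (audit_date TEXT, process TEXT);
INSERT INTO error_log VALUES ('2015-11-16', 'Payroll'),
                             ('2015-10-01', 'Payroll');
""")

# Values travel separately from the SQL text; date('now') is SQLite's
# stand-in for GETDATE()/SYSDATE.
rows = con.execute(
    "SELECT * FROM error_log "
    "WHERE audit_date BETWEEN ? AND date('now') AND process = ?",
    ("2015-11-01", "Payroll"),
).fetchall()
print(rows)  # only the 2015-11-16 row falls inside the window
```

Beyond dodging quote bugs, parameterization also closes the SQL-injection hole that concatenating user-typed combo-box text into the statement leaves open.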
Let's say a have a table called people with the following columns: 1. person\_id 2. name 3. parent\_person\_id I'm trying to write a query to return the following: 1. person.person\_id as 'id' 2. person.name as 'name' 3. person.parent\_person\_id as 'parent id' 4. person.label (person.person\_id = person.person\_parent\_id from parent query) as 'parent name'. I'm having difficult with the item number 4 subquery. My question is: How do I make a reference to the person.person\_parent\_id from the parent query from within the subquery? I feel like on the code bellow if I could get people.parent\_person\_id value from the external query and use it on the inner one, I would achieve my goal. ``` SELECT people.person_id as 'Person ID', people.name as 'Person Name', people.parent_person_id as 'Parent ID', ( SELECT people.name FROM people WHERE people.parent_person_id = people.person_id; ) as 'Parent Name' FROM people; ``` I could be wrong, and I'm definitely open minded. Please share your thoughts and help this good soul move on with his quest.
You are quite close. You just need proper table aliases: ``` SELECT p.person_id as PersonID, p.name as PersonName, p.parent_person_id as ParentID, (SELECT parent.name FROM people parent WHERE p.parent_person_id = parent.person_id ) as ParentName FROM people p; ``` Notes: * This would more normally be expressed as a `LEFT JOIN`, but the correlated subquery is also a very reasonable approach. * Notice that the query is easier to follow with table aliases -- and you need them in this case. * Don't get in the habit of using single quotes for column names. This often causes confusion and can result in hard to find bugs. * Your version doesn't work because in the condition `people.parent_person_id = people.person_id`, `people` refers to the inner `from`. You need table aliases to distinguish between the inner and outer references to `people`.
Just create an alias for each table: ``` SELECT p1.person_id as 'Person ID', p1.name as 'Person Name', p1.parent_person_id as 'Parent ID', ( SELECT p2.name FROM people as p2 WHERE p2.parent_person_id = p1.person_id ) as 'Parent Name' FROM people as p1; ```
How do I make a reference to a table column from the parent query within the subquery in SQL?
[ "", "mysql", "sql", "database", "" ]
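The correlated-subquery approach runs unchanged on SQLite, so it is easy to verify with made-up people rows (the three names below are hypothetical, not from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE people (person_id INTEGER, name TEXT, parent_person_id INTEGER);
INSERT INTO people VALUES (1, 'Alice', NULL),
                          (2, 'Bob',   1),
                          (3, 'Carol', 1);
""")

# For each outer row p, the subquery re-scans people under a different alias;
# the aliases are what let the correlation condition tell the two apart.
rows = con.execute("""
SELECT p.person_id,
       p.name,
       p.parent_person_id,
       (SELECT parent.name
        FROM people parent
        WHERE parent.person_id = p.parent_person_id) AS parent_name
FROM people p
ORDER BY p.person_id
""").fetchall()
print(rows)
```

A root row (parent_person_id NULL) simply gets a NULL parent_name, since the scalar subquery finds no match, which is the same behavior a LEFT JOIN version would give.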
Table structure : table name : `DETAIL_BOX` ``` ----------------------------------------------------------------------------- | sn (Number) | name (varchar2) | date1 (date) | date2 (date) | ----------------------------------------------------------------------------- | 2 | Ram | 26.11.2015 00:00:00 | NULL | ----------------------------------------------------------------------------- | 3 | Sita | NULL | 26.11.2015 00:00:00 | ----------------------------------------------------------------------------- ``` I need single SQL update statement with dynamic column name depends upon case. **Case:** If **date2** is null update date in **date1** otherwise update date value in **date2** SQL statements like below are possible or not ? * **To update row having sn=2** * There is an any query like below query ? `UPDATE DETAIL_BOX SET name='Gita', NVL(date2,date1)=to_date('2015-10-16', 'YYYY-MM-DD') WHERE sn=2` * **Expectation :** update value of date in field **date1** because **date2** field is NULL in row having sn=2. * **To update row having sn=3** * There is an any query like below query ? `UPDATE DETAIL_BOX SET name='Krish', NVL(date2,date1)=to_date('2015-10-16', 'YYYY-MM-DD') WHERE sn=3` * **Expectation :** update value of date in field **date2** because **date2** field is not NULL in row having sn=3. **Note:** Above UPDATE sql queries are not correct. I just put these query to ask any Update statements are possible to set dynamic column name to set value.
I would put two answers (by BeanFrog and Prabhat Sharma) together. In my opinion it's good solution to update column to a new value or to itself in depend on criteria. ``` update detail_box set name = decode(sn, 2, 'Gita', 3, 'Krish', name), date1 = (case when sn = 2 and date2 is null then to_date('2015-10-16', 'YYYY-MM-DD') else date1 end), date2 = (case when sn = 3 and date2 is null then to_date('2015-10-16', 'YYYY-MM-DD') else date2 end) ``` Please note, `decode` function is specific to ORACLE database. It could be changed to `case` structure if you want to have common code regardless to RDBMS vendor.
I'm not sure where sn comes in, but hopefully the general method will help. Update both fields in your update statement, but either set them to themselves or to the new date based on a case statement: ``` UPDATE DETAIL_BOX SET date2= case when date2 is null then date2 else to_date('2015-10-16', 'YYYY-MM-DD') end , date1 = case when date 2 is null then to_date('2015-10-16', 'YYYY-MM-DD') else date1 end ```
How to set dynamic column name to set value in UPDATE statement of SQL Query on the basis of condition?
[ "", "sql", "oracle", "" ]
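The accepted CASE-based pattern relies on a standard SQL guarantee: every right-hand side of an UPDATE's SET list is evaluated against the row's pre-update values, so both date columns can be assigned in one statement without interfering. A portable sketch on SQLite (dates as ISO strings; Oracle would wrap them in TO_DATE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE detail_box (sn INTEGER, name TEXT, date1 TEXT, date2 TEXT);
INSERT INTO detail_box VALUES (2, 'Ram',  '2015-11-26', NULL),
                              (3, 'Sita', NULL, '2015-11-26');
""")

# If date2 is NULL, write the new date into date1; otherwise into date2.
con.execute("""
UPDATE detail_box
SET date1 = CASE WHEN date2 IS NULL THEN '2015-10-16' ELSE date1 END,
    date2 = CASE WHEN date2 IS NULL THEN date2 ELSE '2015-10-16' END
""")

rows = con.execute("SELECT sn, date1, date2 FROM detail_box ORDER BY sn").fetchall()
print(rows)
```

Row sn=2 (date2 NULL) gets the date in date1, while sn=3 gets it in date2, matching the question's two expectations in a single statement.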
I have a query ``` select c.CommentId ,c.CommentText , c.CommenterId , c.CommentDate , u.first_name , u.last_name , i.ImageName , i.Format from comment c join users u on c.CommenterId = u.user_id join user_profile_image i on u.user_id = i.UserId where PostId = 76 order by CommentDate desc limit 10 ``` This query returns empty results when i.ImageName field is empty in the table. I want to return the row if the ImageName field is emty. How should I do this?
`JOIN` defaults to `INNER JOIN` for MySQL - try changing `join user_profile_image i` to `LEFT join user_profile_image i` The accepted answer here has a good visual explanation: [Difference in MySQL JOIN vs LEFT JOIN](https://stackoverflow.com/questions/9770366/difference-in-mysql-join-vs-left-join)
To include the rows when the ImageName field is empty, use `LEFT JOIN`, like this: ``` SELECT c.CommentId,c.CommentText, c.CommenterId, c.CommentDate, u.first_name, u.last_name,i.ImageName,i.Format FROM comment c INNER JOIN users u ON c.CommenterId=u.user_id LEFT JOIN user_profile_image i ON u.user_id=i.UserId WHERE PostId = 76 ORDER BY CommentDate DESC LIMIT 10; ```
sql query is behaving strange
[ "", "mysql", "sql", "" ]
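The difference the accepted answer describes is easy to see side by side. A small SQLite sketch with hypothetical rows, modeling only the two tables involved in the failing join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (user_id INTEGER, first_name TEXT);
CREATE TABLE user_profile_image (UserId INTEGER, ImageName TEXT);
INSERT INTO users VALUES (1, 'Ann'), (2, 'Ben');
INSERT INTO user_profile_image VALUES (1, 'ann.png');  -- Ben has no image row
""")

# INNER JOIN silently drops users without an image row...
inner = con.execute("""
SELECT u.first_name, i.ImageName
FROM users u JOIN user_profile_image i ON u.user_id = i.UserId
""").fetchall()

# ...while LEFT JOIN keeps them, filling the image columns with NULL.
left = con.execute("""
SELECT u.first_name, i.ImageName
FROM users u LEFT JOIN user_profile_image i ON u.user_id = i.UserId
""").fetchall()
print(inner, left)
```

One caution for the original query: once the join is LEFT, any condition on the image table's columns belongs in the ON clause, not the WHERE clause, or the WHERE turns the join back into an inner one.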
I have already given ranks in rank\_id column of top\_books and i want to update the same table according to count\_issue attribute while keeping the rank\_id as they are. Can this be done somehow? ``` DECLARE v_row NUMBER; CURSOR cur IS SELECT COUNT(transaction_id) counter, book_id FROM transaction_master GROUP BY book_id; BEGIN FOR rec IN cur LOOP v_row:=cur%ROWCOUNT; UPDATE top_books SET book_id = rec.book_id, count_issue = rec.counter WHERE rank_id = v_row; END LOOP; END; ``` How can i sort the above updated data in table top\_books within this code block according to 'COUNT\_ISSUE' attribute
Try this; you can do it with UPDATE as well, but that approach is a bit confusing, so just change your cursor: ``` DECLARE v_row NUMBER; CURSOR cur IS select * from (SELECT COUNT(transaction_id) counter, book_id FROM transaction_master GROUP BY book_id) order by counter desc; BEGIN -- truncate table execute immediate 'truncate table top_books'; --insert fresh data FOR rec IN cur LOOP v_row := cur%rowcount; insert into top_books (rank_id,book_id,count_issue) values(v_row,rec.book_id,rec.counter); END LOOP; commit; END; ```
I wouldn't do this using a cursor for loop; instead, I'd do it in a single MERGE statement: ``` merge into top_books tgt using (select row_number() over (order by cnt desc, book_id) rn, book_id, cnt from (select book_id, count(transaction_id) cnt from transaction_master group by book_id)) src on (tgt.rank_id = src.rn) when matched then update set tgt.book_id = src.book_id, tgt.count_issue = src.cnt; ``` It will be much faster than your row-by-row (aka slow-by-slow) approach. N.B. I've used the `row_number` analytic function rather than `rank` or `dense_rank` to determine the order of books sold since if two books happened to have the same count, then the update would fail. You should amend the extra ordering (here, I used book\_id to differentiate which book would come first) to achieve the results you're after.
How to sort data in a table using pl sql block
[ "", "sql", "database", "oracle", "plsql", "" ]
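If MERGE or analytic functions are unavailable, the same "rank by count, then update by rank" effect can be had by computing the ordering client-side and issuing one batched UPDATE. A sketch with Python and SQLite; the toy data stands in for the question's transaction_master and top_books tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transaction_master (transaction_id INTEGER, book_id INTEGER);
INSERT INTO transaction_master VALUES (1,10),(2,10),(3,10),(4,20),(5,20),(6,30);
CREATE TABLE top_books (rank_id INTEGER, book_id INTEGER, count_issue INTEGER);
INSERT INTO top_books VALUES (1,NULL,NULL),(2,NULL,NULL),(3,NULL,NULL);
""")

# Aggregate once, sorted by issue count descending (book_id breaks ties,
# mirroring the deterministic ordering the MERGE answer recommends).
counts = con.execute("""
SELECT book_id, COUNT(transaction_id) AS cnt
FROM transaction_master
GROUP BY book_id
ORDER BY cnt DESC, book_id
""").fetchall()

# enumerate() supplies the rank; one executemany replaces the row-by-row loop.
con.executemany(
    "UPDATE top_books SET book_id = ?, count_issue = ? WHERE rank_id = ?",
    [(book, cnt, rank) for rank, (book, cnt) in enumerate(counts, start=1)],
)

rows = con.execute(
    "SELECT rank_id, book_id, count_issue FROM top_books ORDER BY rank_id"
).fetchall()
print(rows)
```

In Oracle itself the MERGE with ROW_NUMBER shown above remains the better choice, since it keeps the whole operation in one server-side statement.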
I have the following table named `population`: ``` ╔════════════╦════════════╦════════════════╗ β•‘ india β•‘ hyderabad β•‘ 50100 β•‘ β•‘ india β•‘ delhi β•‘ 75000 β•‘ β•‘ USA β•‘ NewYork β•‘ 25000 β•‘ β•‘ USA β•‘ california β•‘ 30000 β•‘ β•‘ india β•‘ delhi β•‘ 5000 β•‘ β•‘ USA β•‘ NewYork β•‘ 75000 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` I need to write a SQL query to get data in the following format: ``` ╔════════╦═════════╦══════════╗ β•‘ india β•‘ delhi β•‘ 80000 β•‘ β•‘ USA β•‘ NewYork β•‘ 100000 β•‘ β•šβ•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β• ``` country name and the city with the highest population where multiple entries of the cities are summed up.
You can use: ``` SELECT * FROM ( SELECT country,city, SUM(pop) AS total FROM population GROUP BY country,city) AS sub WHERE (country, total) IN ( SELECT country, MAX(total) FROM (SELECT country,city, SUM(pop) AS total FROM population GROUP BY country,city ) as s GROUP BY country ); ``` If two cities in the same country have the same highest total population you will get two cities for that country. `SqlFiddleDemo` Output: ``` ╔══════════╦═════════╦════════╗ β•‘ country β•‘ city β•‘ total β•‘ ╠══════════╬═════════╬════════╣ β•‘ india β•‘ delhi β•‘ 80000 β•‘ β•‘ USA β•‘ NewYork β•‘ 100000 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β• ```
You could use a combination of GROUP\_CONCAT and FIND\_IN\_SET. This query will return a comma-separated list of cities for every country, ordered by population DESC: ``` SELECT country, GROUP_CONCAT(city ORDER BY pop DESC) AS cities FROM population GROUP BY country ``` and it will return something like this: ``` | country | cities | |---------|----------------------------| | india | delhi,hyderabad,delhi | | USA | NewYork,california,NewYork | ``` then we can join this subquery back to the population table using FIND\_IN\_SET, which returns the position of a city in the list of cities: ``` SELECT p.country, p.city, SUM(p.pop) FROM population p INNER JOIN ( SELECT country, GROUP_CONCAT(city ORDER BY pop DESC) AS cities FROM population GROUP BY country ) m ON p.country=m.country AND FIND_IN_SET(p.city, m.cities)=1 GROUP BY p.country, p.city ``` the join will succeed only on the city with the maximum population for every country: `FIND_IN_SET(p.city, m.cities)=1`. This will work only if there's one city with the maximum population; if there are more, only one will be returned. This also is not standard SQL and will only work on MySQL or similar; other DBMS have window functions that will make this same query easier to write.
How to write the following SQL query involving subqueries
[ "", "mysql", "sql", "" ]
I'm trying to create my database for a website and there seems to be an error I cannot solve. For some reason it's giving me an invalid syntax error, but I have no idea why. I looked it up everywhere.

```
mysql> SELECT WebsiteDB
    -> CREATE Table `MasaTable`(
    -> `Username` varchar(255),
    -> `Password` varchar(255),
    -> `Email` varchar(255)
    -> );
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use near
'CREATE Table `MasaTable`( `Username` varchar(255), `Password` varchar(255), `'
at line 2
```

I have no idea why it's pointing at line two. Any ideas? Thanks in advance!

**EDIT:** Well that's odd, I just tried out my code in SQL Fiddle and it seemed to work. What could be the problem? I'm doing this through the MySQL Server 5.7 command-line client...
Oh boy, was that a stupid mistake. I was using `SELECT <database>` instead of `USE <database>`, and I wasn't closing it with a semicolon. Thank you Mihai! This works just fine:

```
USE WebsiteDB;

CREATE Table MasaTable(
    Username varchar(255),
    Password varchar(255),
    Email varchar(255));
```
Remove last comma and use backticks to quote identifiers:

```
CREATE TABLE `MasaTable`(
    `Username` varchar(255),
    `Password` varchar(255),
    `Email` varchar(255)
);
```

`SqlFiddleDemo`
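The `CREATE TABLE` statement itself is valid once the client-side `USE`/semicolon issue is out of the way. A quick check using Python's `sqlite3`, which happens to accept MySQL-style backtick quoting too:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The corrected DDL runs as-is; SQLite tolerates backtick-quoted identifiers.
conn.execute("""
    CREATE TABLE `MasaTable`(
        `Username` varchar(255),
        `Password` varchar(255),
        `Email` varchar(255)
    )
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)
```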
MySQL server table creation - "invalid syntax"
[ "", "mysql", "sql", "create-table", "" ]
I want the SIDS of suppliers who supply every part. I have trouble understanding the query answer the book has given. ``` Suppliers(sid: integer, sname: string, address: string) Parts(pid: integer, pname: string, color: string) Catalog(sid: integer, pid: integer, cost: real) ``` Suppose you're given a database as the following: Table: Parts ``` PID P1 P2 P3 ``` Table : Catalog ``` SID PID S1 P1 S1 P2 S1 P3 S2 P1 ``` It should output S1, but I believe with the given query, it will output both S1 and S2. Wouldn't the last nested query satisfy the SID = 2 ? Because if C1.sid = S2 and C.sid = S2 and C1.pid = P1 and P.pid = P1 then it would satisfy the query. The book answer is given as follows: SQL Translation : "C.Sid for which Does not exist the parts that are not supplied by C.Sid" ``` SELECT C.sid FROM Catalog C WHERE NOT EXISTS (SELECT P.pid FROM Parts P WHERE NOT EXISTS (SELECT C1.sid FROM Catalog C1 WHERE C1.sid = C.sid AND C1.pid = P.pid) ) ```
A simple 'proof by contradiction' will demonstrate that the query could never return S2. For the query to return S2, the first `NOT EXISTS` would require the subquery over `Parts` to yield zero rows, which is most certainly not the case, since that particular subquery returns all parts not supplied by S2 (i.e. P2 and P3).
You should read the query as: give me the suppliers for which there does not exist a part that has no corresponding Catalog row for the above-mentioned supplier.

You should read the query from top to bottom here. Imagine a loop over the `Catalog` table. Fetch the first row (`S1,P1`). Do an inner loop over `Parts` and check whether each part exists for `S1`. The first part exists. OK. Fetch the second part. It exists. Fetch the third part. It exists. So for the first row, there does not exist a part that is missing from the catalog for the supplier of that row. Fetch the second row (`S1,P2`), etc...
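The double `NOT EXISTS` division can be checked directly against the sample data. A sketch using Python's `sqlite3` (only the `Parts` and `Catalog` tables are needed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Parts (pid TEXT);
    CREATE TABLE Catalog (sid TEXT, pid TEXT);
    INSERT INTO Parts VALUES ('P1'), ('P2'), ('P3');
    INSERT INTO Catalog VALUES ('S1','P1'), ('S1','P2'), ('S1','P3'), ('S2','P1');
""")

rows = conn.execute("""
    SELECT C.sid
    FROM Catalog C
    WHERE NOT EXISTS (SELECT P.pid
                      FROM Parts P
                      WHERE NOT EXISTS (SELECT C1.sid
                                        FROM Catalog C1
                                        WHERE C1.sid = C.sid
                                          AND C1.pid = P.pid))
""").fetchall()

# S1 comes back once per Catalog row it owns (use DISTINCT to deduplicate);
# S2 never appears, because parts P2 and P3 have no (S2, pid) row.
print(set(rows))
```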
SQL nested not exist understanding
[ "", "sql", "relational-division", "" ]
I'm trying to make an SQL table with a custom date format:

```
CREATE TABLE Basketbal.Bestuurslid (
  `lidnummer` INT NOT NULL,
  `begin_datum` DATE NOT NULL,
  `eind_datum` DATE NULL,
  `functie` VARCHAR(45) NULL,
  PRIMARY KEY (`lidnummer`, `begin_datum`));
```

(I know it's in Dutch.)

I'm trying to make `begin_datum` & `eind_datum` use the following format: `1-jan-07` (`'%e-%b-%y'`), but I can't get it to work. Any suggestions?
`date` is an internal format in the database. You can convert it to (or from) a string using `date_format()` (and `str_to_date()`). That controls the *character* representation of the value, not the value itself.

So, when you select from the table:

```
select date_format(begin_datum, '%e-%b-%y')
```

If you really feel strongly about not explicitly calling this function, you can put the date logic into a view and access the table through the view.
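The same principle, storing a real date and formatting only at output time, applies in any language. A small Python sketch of what the `'%e-%b-%y'` pattern produces (day without leading zero, abbreviated month, two-digit year; MySQL capitalizes the month abbreviation, unlike the lowercase `1-jan-07` in the question):

```python
import datetime

def fmt(d):
    # Rough equivalent of MySQL's DATE_FORMAT(d, '%e-%b-%y'):
    # %e = day of month without leading zero, %b = abbreviated month, %y = 2-digit year.
    return f"{d.day}-{d.strftime('%b')}-{d.strftime('%y')}"

print(fmt(datetime.date(2007, 1, 1)))
```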
Once you have your data stored in your table, you can pull it out in the format you want. Example:

```
CONVERT(VARCHAR(11), begin_datum, 6)
```
SQL dateformat while creating table
[ "", "mysql", "sql", "create-table", "" ]
I have two tables that I’m trying to build a query with for our fundraising department but that I’m struggling with. Using SQL Server 2008.

The Appeals table holds data about which members we are going to send an appeal for a donation to. The Yearly\_Gift table holds data about how much money a member donated on a yearly basis. Sometimes this table can have a zero dollar amount for a specific year.

What I’m trying to do is use these two tables to come up with the last year that someone donated and how much they donated, provided that it was greater than zero, for the appeal we are sending out.

Here are simplified versions of the tables and some data. I’m also including my desired output. Can anyone help me with this?

```
--Build Tables
CREATE TABLE [dbo].[Appeals](
    [Appeal_ID] [int] NOT NULL,
    [Member_ID] [int] NOT NULL,
    CONSTRAINT [PK_Appeals] PRIMARY KEY CLUSTERED
    (
        [Appeal_ID] ASC,
        [Member_ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[Yearly_Gift](
    [Member_ID] [int] NOT NULL,
    [FiscalYear] [char](4) NOT NULL,
    [Amount] [money] NULL,
    CONSTRAINT [PK_Yearly_Gift] PRIMARY KEY CLUSTERED
    (
        [Member_ID] ASC,
        [FiscalYear] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

--Fill tables
INSERT INTO Appeals VALUES (1,101)
INSERT INTO Appeals VALUES (1,102)
INSERT INTO Appeals VALUES (2,101)
INSERT INTO Appeals VALUES (2,102)
INSERT INTO Appeals VALUES (2,103)
INSERT INTO Appeals VALUES (2,104)
INSERT INTO Appeals VALUES (2,105)

INSERT INTO Yearly_Gift VALUES(101,'2015',100)
INSERT INTO Yearly_Gift VALUES(102,'2014',0)
INSERT INTO Yearly_Gift VALUES(102,'2012',150)
INSERT INTO Yearly_Gift VALUES(102,'2011',200)
INSERT INTO Yearly_Gift VALUES(103,'2013',500)
INSERT INTO Yearly_Gift VALUES(103,'2014',500)
INSERT INTO Yearly_Gift VALUES(104,'2012',200)
INSERT INTO Yearly_Gift VALUES(104,'2015',100)
```

**Desired Output**

```
Appeal_ID  Member_ID  FiscalYear  Amount
2          101        2015        100
2          102        2012        150
2          103        2014        500
2          104        2015        100
2          105        NULL        NULL
```

Thank you for any help that you can provide.
```
WITH X AS
(
    SELECT A.[Appeal_ID], A.[Member_ID], Y.[FiscalYear], Y.[Amount]
    FROM [dbo].[Appeals] A
    LEFT JOIN [dbo].[Yearly_Gift] Y ON Y.[Member_ID] = A.[Member_ID]
)
SELECT X.[Appeal_ID], X.[Member_ID], Y.[FiscalYear], X.[Amount]
FROM X
LEFT JOIN
(
    SELECT [Member_ID],
           MAX([FiscalYear]) AS [FiscalYear],
           MAX([Appeal_ID]) AS [Appeal_ID]
    FROM X
    WHERE Amount > 0
    GROUP BY X.[Member_ID]
) Y ON X.Appeal_ID = Y.Appeal_ID
   AND X.Member_ID = Y.Member_ID
   AND X.FiscalYear = Y.FiscalYear
WHERE Y.[FiscalYear] IS NULL AND X.[Amount] IS NULL
   OR Y.[FiscalYear] IS NOT NULL AND X.[Amount] IS NOT NULL
ORDER BY X.Appeal_ID, X.Member_ID
```
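The desired output can be sanity-checked on a portable engine. A sketch in Python's `sqlite3` that reproduces it with a correlated subquery for the latest positive-amount year; this is an alternative formulation of the same groupwise-latest lookup, not the CTE query above, and the simplified column types are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Appeals (Appeal_ID INT, Member_ID INT);
    CREATE TABLE Yearly_Gift (Member_ID INT, FiscalYear TEXT, Amount REAL);
    INSERT INTO Appeals VALUES (1,101),(1,102),(2,101),(2,102),(2,103),(2,104),(2,105);
    INSERT INTO Yearly_Gift VALUES
        (101,'2015',100),(102,'2014',0),(102,'2012',150),(102,'2011',200),
        (103,'2013',500),(103,'2014',500),(104,'2012',200),(104,'2015',100);
""")

# For each member on appeal 2, join only the gift row from the latest
# fiscal year whose amount was greater than zero; members with no such
# year (105) keep NULLs via the LEFT JOIN.
rows = conn.execute("""
    SELECT a.Appeal_ID, a.Member_ID, g.FiscalYear, g.Amount
    FROM Appeals a
    LEFT JOIN Yearly_Gift g
           ON g.Member_ID = a.Member_ID
          AND g.FiscalYear = (SELECT MAX(g2.FiscalYear)
                              FROM Yearly_Gift g2
                              WHERE g2.Member_ID = a.Member_ID
                                AND g2.Amount > 0)
    WHERE a.Appeal_ID = 2
    ORDER BY a.Member_ID
""").fetchall()

for row in rows:
    print(row)
```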
Well, this query is a bit long/ugly, but it works. If I had more time, I'd look into optimizations and code reduction.

```
SELECT target_members.Appeal_ID, target_members.Member_ID, FiscalYear, Amount
FROM
(
    SELECT MAX(Appeal_ID) AS Appeal_ID, Member_ID
    FROM Appeals
    GROUP BY Member_ID
) AS target_members
LEFT OUTER JOIN
(
    SELECT MAX(Appeals.Appeal_ID) AS Appeal_ID,
           Yearly_Gift.Member_ID,
           Yearly_Gift.FiscalYear,
           Yearly_Gift.Amount,
           ROW_NUMBER() OVER (PARTITION BY Yearly_Gift.Member_ID
                              ORDER BY FiscalYear DESC) AS r
    FROM Appeals, Yearly_Gift
    WHERE Yearly_Gift.Member_ID = Appeals.Member_ID
      AND Amount > 0
    GROUP BY Yearly_Gift.Member_ID, Yearly_Gift.FiscalYear, Yearly_Gift.Amount
) AS last_donation
    ON last_donation.Appeal_ID = target_members.Appeal_ID
   AND last_donation.Member_ID = target_members.Member_ID
   AND r = 1
```
Join tables using max(column) with a with another column >0
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
**My table:**

```
CREATE TABLE [dbo].[Balance] (
    [Id]             INT IDENTITY (1, 1) NOT NULL,
    [Balance]        DECIMAL (18, 2) NOT NULL,
    [Today_Date]     AS (CONVERT([char](10), getdate(), (126))),
    [Date_end]       DATE NOT NULL,
    [Remaining_Days] AS (datediff(day, CONVERT([char](10), getdate(), (126)), [Date_end])),
    [In_Months]      AS (datediff(day, CONVERT([char](10), getdate(), (126)), [Date_end])) / (30),
    [Amount_Monthly] AS CAST((case when ((datediff(day, CONVERT([char](10), getdate(), (126)), [Date_end])) / 30) = 0
                                   then NULL
                                   else [Balance] / ((datediff(day, CONVERT([char](10), getdate(), (126)), [Date_end])) / 30)
                              end) as DECIMAL(18, 2)),
    PRIMARY KEY CLUSTERED ([Id] ASC)
);
```

**How it looks:**

[![My data](https://i.stack.imgur.com/gUYEW.png)](https://i.stack.imgur.com/gUYEW.png)

I want it to **automatically insert** the Amount\_Monthly into a new table so it would look like this:

[![My Months table](https://i.stack.imgur.com/VQWzb.png)](https://i.stack.imgur.com/VQWzb.png)

E.g. if it says In\_Months = 2, it should fill out January's and February's Balance\_monthly to 7058.82. I want it to calculate this automatically, just like I made it automatically calculate Remaining\_Days depending on the input.

Thank you!
@Mahesh So, @Usedbyalready's answer seemed pretty overkill. I tried making it myself with a CASE inside an UPDATE, and it works perfectly:

```
UPDATE Months
SET Months.Balance_monthly = CASE WHEN Balance.In_Months > 1 THEN Amount_Monthly END
FROM Balance
JOIN Months ON Months.Id <= Balance.In_Months;
```

I also made a trigger that automatically inserts the values into my Months table:

```
CREATE TRIGGER [Balance_monthly]
ON [dbo].[Balance]
FOR INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON

    UPDATE Months
    SET Months.Balance_monthly = ((Balance.In_Months + 12 - Months.Id) / 12) * Amount_Monthly
    FROM Balance
    CROSS JOIN Months;
END
```
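The core filling rule, every Months row with `Id <= In_Months` receives `Amount_Monthly`, can be sketched portably with a correlated subquery instead of the T-SQL `UPDATE ... FROM` form. A Python `sqlite3` sketch with assumed, simplified table shapes (single Balance row, 12 month rows):

```python
import calendar
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Balance (Id INT, In_Months INT, Amount_Monthly REAL);
    CREATE TABLE Months (Id INT, Month TEXT, Balance_monthly REAL);
    INSERT INTO Balance VALUES (1, 2, 7058.82);
""")
for i in range(1, 13):
    conn.execute("INSERT INTO Months VALUES (?, ?, NULL)",
                 (i, calendar.month_name[i]))

# Correlated subquery: a month gets Amount_Monthly only while its Id is
# within In_Months; otherwise the subquery is empty and yields NULL.
conn.execute("""
    UPDATE Months
    SET Balance_monthly = (SELECT b.Amount_Monthly
                           FROM Balance b
                           WHERE Months.Id <= b.In_Months)
""")

filled = conn.execute(
    "SELECT Id, Balance_monthly FROM Months ORDER BY Id").fetchall()
print(filled[:3])
```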
You need 12 rows, each representing a month number 1 to 12. I have used a simple UNION ALL query in a CTE for this, but you may already have a table of numbers to use instead. Then join this where the month number is less than or equal to the [In\_Months] column. That join will automatically multiply the rows of your table by the wanted number of months.

```
;with m12 as (
          select 1 as mn
union all select 2
union all select 3
union all select 4
union all select 5
union all select 6
union all select 7
union all select 8
union all select 9
union all select 10
union all select 11
union all select 12
)
select row_number() over(order by b.id, m12.mn) as [ID]
     , datename(month, dateadd(month, m12.mn - 1, 0)) as [Month]
     , b.Amount_Monthly as Balance_Monthly
from Balance b
inner join m12 on m12.mn <= b.in_months
```

see: <http://sqlfiddle.com/#!6/4fc6f/3>

Note you will probably want to include db.balance.id as [balanceid] or similar in the new table so you can trace back to the source row id.

If CTEs are a problem, just use a "derived table" instead, e.g.:

```
select row_number() over(order by b.id, m12.mn) as [ID]
     , datename(month, dateadd(month, m12.mn - 1, 0)) as [Month]
     , b.Amount_Monthly as Balance_Monthly
from Balance b
inner join (
          select 1 as mn
union all select 2
union all select 3
union all select 4
union all select 5
union all select 6
union all select 7
union all select 8
union all select 9
union all select 10
union all select 11
union all select 12
) as m12 on m12.mn <= b.in_months
```
Indexed views: How to insert value into other table with index views?
[ "", "sql", "sql-server", "indexing", "triggers", "indexed-views", "" ]