I have two identical tables that are located in two identical databases (with different names). I want to merge these two tables, but their primary keys are used in other tables. The tables look like this: **Table A** ``` id column1 column2 column3 ___ ________ _______ ________ 1 text text text 2 text text text 3 text text text ``` **Table B** ``` id column1 column2 column3 ___ ________ _______ ________ 2 text text text 3 text text text 4 text text text ``` **tables that are linked to Table A** **Link A** ``` id column1 tableA_ID ___ ________ _______ 1 text 2 2 text 3 3 text 4 ``` **Link B** ``` id column1 tableA_ID ___ ________ _______ 1 text 3 2 text 3 3 text 2 ``` Please note, the tables have identical `id`s, which means that when I do the merge, I have to change the `id`s of the second table. Remember, the *second table's primary keys are used in other tables.* I wrote this query to merge the two tables: ``` INSERT INTO db_A.`Table_A`(`column2`,`column3`) SELECT `column2`,`column3` FROM db_B.`Table_B`; ``` This query will correctly copy the records of the second table to the first table. Now I want to also move the data of the tables that are linked with `Table B`. I can use the same query, but now the foreign key will not match, because the `ID` they were linked with has been changed. How do I update them so that the `ID` will match again? **NB: I do not have the ON UPDATE CASCADE constraint on those tables** I hope this makes sense; I will try to improve this question so that everyone understands it. **Database Info** ``` Type : MySQL Engine: MyISAM ```
A simple way would be to update `TableB`'s `ID`s to a unique range and then do the merge. If your foreign keys are properly set to cascade the change, your database will stay consistent through this operation. You don't need to make any changes to the database schema this way, so there is no point in time when the data is not valid. You can also be sure that the IDs won't clash. The easiest way to find unique values is to take the maximum of the `ID` in `TableA` and add that to the `ID`s in `TableB`.
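A minimal sketch of this offset-then-merge approach, using Python's `sqlite3` as a stand-in engine (the table and column names below are invented for the demo, not taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two copies of the same schema, standing in for the two databases
cur.executescript("""
CREATE TABLE table_a (id INTEGER PRIMARY KEY, column1 TEXT);
CREATE TABLE link_a  (id INTEGER PRIMARY KEY, table_a_id INTEGER);
CREATE TABLE table_b (id INTEGER PRIMARY KEY, column1 TEXT);
CREATE TABLE link_b  (id INTEGER PRIMARY KEY, table_b_id INTEGER);

INSERT INTO table_a VALUES (1,'a1'),(2,'a2'),(3,'a3');
INSERT INTO table_b VALUES (2,'b2'),(3,'b3'),(4,'b4');
INSERT INTO link_b  VALUES (1,2),(2,3);
""")

# Step 1: shift table_b ids past table_a's maximum, keeping the links in sync
offset = cur.execute("SELECT MAX(id) FROM table_a").fetchone()[0]
cur.execute("UPDATE table_b SET id = id + ?", (offset,))
cur.execute("UPDATE link_b SET table_b_id = table_b_id + ?", (offset,))

# Step 2: merge -- the ids no longer clash
cur.execute("INSERT INTO table_a SELECT * FROM table_b")
cur.execute("INSERT INTO link_a (table_a_id) SELECT table_b_id FROM link_b")

merged_ids = [r[0] for r in cur.execute("SELECT id FROM table_a ORDER BY id")]
print(merged_ids)  # -> [1, 2, 3, 5, 6, 7]
```

Because the offset is applied to the parent table and the linking table in the same step, the foreign-key values keep pointing at the same logical rows throughout.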
You can temporarily apply `ON UPDATE CASCADE` to each table with foreign keys related to `TableB.id` in the second database: ``` ALTER TABLE db2.other_tables_with_fk DROP FOREIGN KEY fk_to_TableB; ALTER TABLE db2.other_tables_with_fk ADD CONSTRAINT fk_to_TableB FOREIGN KEY (TableB_id) REFERENCES TableB(id) ON UPDATE CASCADE; ``` and afterwards use the trick in [Sami's Answer](https://stackoverflow.com/a/35554229/2721611) and then remove the temporary changes like this: ``` ALTER TABLE db2.other_tables_with_fk DROP FOREIGN KEY fk_to_TableB; ALTER TABLE db2.other_tables_with_fk ADD CONSTRAINT fk_to_TableB FOREIGN KEY (TableB_id) REFERENCES TableB(id); ``` Then your second database will be ready to merge with the first one. --- For **MyISAM**, or situations where `CASCADE` is not supported by the engine, you can simulate it manually by defining [**Triggers**](http://dev.mysql.com/doc/refman/5.7/en/trigger-syntax.html): ``` CREATE TRIGGER trigger1 AFTER UPDATE ON TableB FOR EACH ROW BEGIN UPDATE other_tables_with_fk1 SET TableB_id = NEW.id WHERE TableB_id = OLD.id; UPDATE other_tables_with_fk2 SET TableB_id = NEW.id WHERE TableB_id = OLD.id; ... END ``` --- Even if triggers are not available, you can simply increase the id numbers of rows in the second database by some custom amount (any amount greater than the maximum row id used in the first database) in all tables, including the foreign key parent table, at the same time: ``` UPDATE TableB t SET t.id = (t.id + 10000); UPDATE related_table_1 t SET t.TableB_id = (t.TableB_id + 10000); UPDATE related_table_2 t SET t.TableB_id = (t.TableB_id + 10000); ... ``` And then you can merge those databases.
MYSQL Update all foreign key values
[ "mysql", "sql", "foreign-keys" ]
I got the following entries in my database: ``` E01234-1-1 E01234444-1-800000000 ``` I want to trim the entries so I get: ``` E01234-1 E01234444-1 ``` So basically, I want everything before the second '-', regardless of the length. How can I solve it? I'm using MS SQL SERVER 2012. I am using this, but it only brings data from before the first hyphen, not the second hyphen: ``` DECLARE @TABLE TABLE (STRING VARCHAR(100)) INSERT INTO @TABLE (STRING) SELECT 'E01234-1-1' UNION ALL SELECT 'E01234-1-200' UNION ALL SELECT 'E01234-1-3000' UNION ALL SELECT 'E01234-1-40000' UNION ALL SELECT 'E01234-1-500000' UNION ALL SELECT 'E01234-1-6000000' UNION ALL SELECT 'E01234-1-70000000' UNION ALL SELECT 'E01234444-1-800000000' SELECT LEFT(STRING, CHARINDEX('-',STRING)-1) STRIPPED_STRING from @TABLE ``` RETURNS `E01234 E01234 E01234 E01234 E01234 E01234 E01234 E01234444`
If you need the second `-`: ``` SELECT LEFT(STRING, CHARINDEX('-', STRING, CHARINDEX('-', STRING) + 1) - 1) STRIPPED_STRING FROM @TABLE ``` Explanation: `CHARINDEX` will get you the index of the `-` - doing it twice (+ 1) specifies that the outer `CHARINDEX` should start at the spot after the first `-` in the string. If you want to chop off everything after the last `-` instead (regardless of whether it's second or not): ``` SELECT LEFT(STRING, LEN(STRING) - CHARINDEX('-', REVERSE(STRING))) STRIPPED_STRING FROM @table ``` This time, you get the `CHARINDEX` of the last (reverse the string) `-`, and subtract that from the length of the whole string.
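If it helps to verify the nested-`CHARINDEX` logic outside the database, here is a rough Python equivalent using `str.find` with a start offset (the function name is made up for the sketch):

```python
def before_second(s, sep="-"):
    # position of the first separator
    first = s.find(sep)
    if first == -1:
        return s
    # search again, starting just past the first hit -- this mirrors
    # CHARINDEX('-', STRING, CHARINDEX('-', STRING) + 1)
    second = s.find(sep, first + 1)
    return s if second == -1 else s[:second]

for s in ["E01234-1-1", "E01234444-1-800000000"]:
    print(before_second(s))  # -> E01234-1, then E01234444-1
```

Strings with fewer than two separators are returned unchanged, which is also a useful edge case to decide on for the SQL version.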
Try this: ``` DECLARE @STR NVARCHAR(MAX) = 'E01234444-1-800000000'; SELECT LEFT(@STR, CHARINDEX('-', @STR, CHARINDEX('-', @STR) + 1) - 1) ```
Get everything before a certain character in SQL
[ "sql", "sql-server", "t-sql" ]
So I have two Tables: `Customers` and `Calls`. There is a `one to many` relationship between these tables. *i.e. One Customer can have Many Calls* I am trying to create a left join so that I have an output where the `Customers` are listed **only once** with the most recent `CallDate` from the `Calls` table. **Using this diagram:** [![enter image description here](https://i.stack.imgur.com/nP5TR.jpg)](https://i.stack.imgur.com/nP5TR.jpg) I have constructed the following `SQL` statement: ``` Select Customers.*, Calls.CallDate From Customers Left Join Calls on Customers.Id=Calls.CustomerId ``` But this gives me a separate `Customer` row for each `Call`. How do I get **just one** row for each `Customer` based on the most recent `CallDate`?
A simple way is to use [`Outer Apply`](https://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx): ``` Select c.*, ca.* From Customers c outer apply (select top 1 ca.* from Calls ca where c.id = ca.CustomerId order by CallDate desc ) ca; ``` However, if you just want the most recent call date, then aggregation is the typical approach. One method: ``` select c.*, max_callDate from customers c left join (select CustomerId, max(CallDate) as max_callDate from calls group by CustomerId ) ca on c.id = ca.CustomerId; ```
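The aggregation variant is easy to test in any engine; a small sqlite3 sketch with invented customers and call dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE calls (customer_id INTEGER, call_date TEXT);
INSERT INTO customers VALUES (1,'Ann'),(2,'Bob'),(3,'Cy');
INSERT INTO calls VALUES (1,'2016-01-05'),(1,'2016-02-01'),(2,'2016-01-20');
""")
# one row per customer; customers without calls keep a NULL date
rows = cur.execute("""
    SELECT c.name, ca.max_call_date
    FROM customers c
    LEFT JOIN (SELECT customer_id, MAX(call_date) AS max_call_date
               FROM calls GROUP BY customer_id) ca
      ON c.id = ca.customer_id
    ORDER BY c.id
""").fetchall()
print(rows)  # -> [('Ann', '2016-02-01'), ('Bob', '2016-01-20'), ('Cy', None)]
```

Note the `LEFT JOIN` is what keeps the customer with no calls in the result.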
You can use [**`ROW_NUMBER`**](https://msdn.microsoft.com/en-us/library/ms186734.aspx) window function: ``` Select Customers.*, c.CallDate From Customers Left Join ( SELECT CustomerId, CallDate, ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY CallDate DESC) AS rn FROM Calls ) AS c on Customers.Id = c.CustomerId AND c.rn = 1 ``` `ROW_NUMBER` with a `PARTITION BY` clause enumerates records within `CustomerId` partitions. Number 1 is assigned to the record having the maximum `CallDate` value, due to `ORDER BY CallDate DESC` clause.
How do I select Just one row for each row in a Left Join
[ "sql", "sql-server", "left-join" ]
Hey I have this query, ``` SELECT item_type.id, item_type.item_type, (SELECT COUNT(*) FROM item WHERE item.sale_transaction_id IS NULL) as stock_qty, (SELECT COUNT(*) FROM item WHERE item.sale_transaction_id IS NOT NULL) as sold_qty FROM item JOIN item_type ON item.item_type_id = item_type.id GROUP BY item.item_type_id ``` This gives me a result: ``` | id | item_type | stock_qty | sold_qty| ---------------------------------------- | 1 | Book | 12 | 12 | | 2 | Pencil | 12 | 12 | | ........... # etc ``` But this does not work as intended, I need to do it like this to make it work: ``` SELECT item_type.id, item_type.item_type, COUNT(item.purchase_transaction_id) - COUNT(item.sale_transaction_id) as stock_qty, COUNT(item.sale_transaction_id) as sold_qty FROM item JOIN item_type ON item.item_type_id = item_type.id GROUP BY item.item_type_id ``` and the result is what I want and this is the **correct/expected** output: ``` | id | item_type | stock_qty | sold_qty| ---------------------------------------- | 1 | Book | 1 | 0 | | 2 | Pencil | 0 | 5 | | ........... # etc ``` In my Table structure, each item that has `sale_transaction_id` is marked as sold. My question is why the first one is not working as intended? and how do I make it to work as 2nd one? Is it actually possible using subquery for this type of query?
You need to add correlation to the subqueries: ``` SELECT item_type.id, item_type.item_type, (SELECT COUNT(item.purchase_transaction_id) - COUNT(item.sale_transaction_id) FROM item WHERE item.item_type_id = i.item_type_id) as stock_qty, (SELECT COUNT(item.sale_transaction_id) FROM item WHERE item.item_type_id = i.item_type_id ) as sold_qty FROM item AS i JOIN item_type ON i.item_type_id = item_type.id GROUP BY i.item_type_id ``` The subqueries are now correlated: they are executed for each `item_type_id` of the outer query and return results for this exact value each time. But this seems like overkill, since you can get the same result by applying aggregation in the outer query, just like you do in the second query of your question.
``` SELECT item_type.id, item_type.item_type, SUM(case when item.sale_transaction_id IS NULL then 1 else 0 end) as stock_qty, SUM(case when item.sale_transaction_id IS NOT NULL then 1 else 0 end) as sold_qty FROM item JOIN item_type ON item.item_type_id = item_type.id GROUP BY item_type.id, item_type.item_type ``` Is this what you need?
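This conditional-`SUM` pattern is portable; a self-contained sqlite3 check with a made-up schema matching the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE item_type (id INTEGER PRIMARY KEY, item_type TEXT);
CREATE TABLE item (item_type_id INTEGER, sale_transaction_id INTEGER);
INSERT INTO item_type VALUES (1,'Book'),(2,'Pencil');
-- one unsold book; five sold pencils
INSERT INTO item VALUES (1,NULL),(2,1),(2,2),(2,3),(2,4),(2,5);
""")
rows = cur.execute("""
    SELECT t.id, t.item_type,
           SUM(CASE WHEN i.sale_transaction_id IS NULL THEN 1 ELSE 0 END) AS stock_qty,
           SUM(CASE WHEN i.sale_transaction_id IS NOT NULL THEN 1 ELSE 0 END) AS sold_qty
    FROM item i JOIN item_type t ON i.item_type_id = t.id
    GROUP BY t.id, t.item_type
    ORDER BY t.id
""").fetchall()
print(rows)  # -> [(1, 'Book', 1, 0), (2, 'Pencil', 0, 5)]
```

Unlike the uncorrelated subqueries in the question, the `CASE` expressions are evaluated per joined row, so the `GROUP BY` splits the counts per item type.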
Sql Count Where Groupby SubQuery
[ "sql", "sqlite" ]
I have a model: ``` class M(Model): re = CharField(max_length=50, blank=True) ``` For example, the table contains: ``` table m ---------------- | id | re | ---------------- | 1 | \d+ | | 2 | \:abc | ---------------- ``` I want to find the objects whose regexp stored in the `re` field matches my input string (`inp`), see example: ``` inp = ":abc" for m in M.objects.all(): if re.match(m.re, inp): print("{} matched".format(m.id)) # 2 matched ``` But is it possible to perform the `match` on the DB server? So replace `.all()` with `.filter` with some expression?
``` for m in M.objects.filter().extra(where=["'{}' RLIKE `m`.`re`".format(inp)]): print("{} matched".format(m.id)) ```
For the regex matching you need to use `__iregex` after the field name in the `filter` call: ``` M.objects.filter(re__iregex=inp) ``` Take a look at the [official documentation](https://docs.djangoproject.com/en/1.9/ref/models/querysets/#iregex) to get more information --- **EDIT** If you want the reverse operation (to check whether any regex saved in the database matches your value) you cannot use a simple `filter`, but you can define your custom [Manager](https://docs.djangoproject.com/en/1.9/topics/db/managers/): ``` import re class RegexManager(models.Manager): def match(self, value): objects = super(RegexManager, self).get_queryset().all() # keep only the objects whose stored pattern matches the value objects = [o for o in objects if re.match(o.re, value)] return objects class M(Model): re = CharField(max_length=50, blank=True) objects = RegexManager() # usage matched = M.objects.match('123') ``` Take a look at this [question](https://stackoverflow.com/questions/17778561/django-custom-manager-with-filter-parameter) also.
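Stripped of the Django plumbing, the reverse match is just filtering ids by stored patterns; a plain-Python sketch of that core step (the helper name and the sample patterns are invented, the patterns mirror the question's table):

```python
import re

# stored patterns, as they might appear in the `re` column
stored = {1: r"\d+", 2: r"\:abc"}

def match_ids(patterns, value):
    # keep the ids whose saved regex matches the input value
    return [pk for pk, pat in patterns.items() if re.match(pat, value)]

print(match_ids(stored, ":abc"))  # -> [2]
print(match_ids(stored, "123"))   # -> [1]
```

This runs one `re.match` per stored row, which is exactly the cost profile of the custom-manager approach: fine for small tables, but it pulls every row out of the database.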
Django regexp in field
[ "mysql", "sql", "regex", "django", "mariadb" ]
The sequence should return the values 1, 2, 3, etc., starting from 1 for every day. `current_date` should be used for day determination. For example, calling it today the first time should return 1, the second time 2, etc. Tomorrow, the first call should again return 1, the second call 2, etc. Postgres 9.1 is used.
Use a table to keep the sequence: ``` create table daily_sequence ( day date, s integer, primary key (day, s) ); ``` This function will retrieve the next value: ``` create or replace function daily_sequence() returns int as $$ insert into daily_sequence (day, s) select current_date, coalesce(max(s), 0) + 1 from daily_sequence where day = current_date returning s ; $$ language sql; select daily_sequence(); ``` Be prepared to retry in case of an improbable `duplicate key value` error. If previous days' sequences are not necessary delete them to keep the table and the index as light as possible: ``` create or replace function daily_sequence() returns int as $$ with d as ( delete from daily_sequence where day < current_date ) insert into daily_sequence (day, s) select current_date, coalesce(max(s), 0) + 1 from daily_sequence where day = current_date returning s ; $$ language sql; ```
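A rough sqlite3 sketch of the same counter-table idea (this dialect has no `RETURNING` in older versions, so the new value is computed first; there is also no concurrency handling here, so it is only a single-user illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE daily_sequence (day TEXT, s INTEGER, PRIMARY KEY (day, s))")

def next_value(day):
    # next value = current per-day maximum + 1 (1 on the first call of the day)
    cur.execute("SELECT COALESCE(MAX(s), 0) + 1 FROM daily_sequence WHERE day = ?", (day,))
    s = cur.fetchone()[0]
    cur.execute("INSERT INTO daily_sequence (day, s) VALUES (?, ?)", (day, s))
    return s

print([next_value("2016-02-22"), next_value("2016-02-22"), next_value("2016-02-23")])
# -> [1, 2, 1]
```

The composite primary key on `(day, s)` plays the same role as in the answer: two concurrent callers computing the same value would collide on the insert instead of silently duplicating a number.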
You just need to think of a cronjob as running a shell command at a specified time or day. Shell command for running the job: psql --host host.domain.com --port 32098 --dbname databaseName < my.sql You can then just add this to your crontab (I recommend you use crontab -e to avoid breaking things) ``` # It will run your command at 00:00 every day # min hour wday month mday command-to-run 0 0 * * * psql --host host.domain.com --port 32098 --dbname databaseName < my.sql ```
How to create a sequence which starts from 1 each day
[ "sql", "postgresql", "plpgsql", "postgresql-9.1" ]
I am trying to get information on the 2 lowest salary holders from the default "scott" table. This is the query I am trying: ``` SELECT TOP 2 * FROM emp ORDER BY sal ASC ``` But I'm getting this error: > ORA-00923: FROM keyword not found where expected Screenshot: [![enter image description here](https://i.stack.imgur.com/TaaQY.png)](https://i.stack.imgur.com/TaaQY.png)
In the most recent versions of Oracle, you can use the ANSI standard: ``` SELECT emp.* FROM emp ORDER BY sal ASC FETCH FIRST 2 ROWS ONLY; ``` In older versions: ``` SELECT e.* FROM (SELECT emp.* FROM emp ORDER BY sal ASC ) e WHERE rownum <= 2; ```
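For comparison, engines with `LIMIT` express the same "two lowest" query directly; a sqlite3 sketch with invented salaries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
cur.executemany("INSERT INTO emp VALUES (?, ?)",
                [("SMITH", 800), ("ALLEN", 1600), ("JAMES", 950)])
# sort ascending, keep the first two rows
rows = cur.execute("SELECT ename, sal FROM emp ORDER BY sal ASC LIMIT 2").fetchall()
print(rows)  # -> [('SMITH', 800), ('JAMES', 950)]
```

`FETCH FIRST 2 ROWS ONLY` in Oracle 12c+, `LIMIT 2` here, and `TOP 2` in SQL Server are three dialect spellings of the same row-limiting step applied after the sort.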
You can use ROWNUM in Oracle to get two first rows of your query results. ``` SELECT EMP1.* FROM (SELECT * FROM EMP ORDER BY SAL ASC) EMP1 WHERE ROWNUM < 3; ```
Select 2 lowest values from a query in SQL
[ "sql", "oracle" ]
The database is here: <http://sqlfiddle.com/#!9/bf0171> I want to find all students in "Prof.David" class. However, the lookup result is: ``` Select Student.name from Student, Teacher where Teacher.s_id =Student.id and Teacher.name="Prof. David"; +------+ | name | +------+ | Tom | | John | | Mary | | Tom | | John | | Mary | +------+ ``` I suppose the result should be "Tom" and "John" only. What's the problem?
Without a join criterion between `Student` and `Teacher` you'll get a [Cartesian product](http://www.tutorialspoint.com/sql/sql-cartesian-joins.htm) (all records from `Student` combined with all records from `Teacher`). You probably meant: ``` SELECT s.name FROM Student s JOIN Teacher t ON t.s_id = s.id WHERE t.name = 'Prof. David'; ``` For new learners of SQL, I would strongly recommend explicitly using `JOIN`, `LEFT JOIN`, etc. instead of implicitly joining in the `WHERE` clause. It will help reduce the number of accidental Cartesian products you execute.
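The difference between the implicit cross join and the proper join is easy to reproduce; a sqlite3 sketch using the question's row counts (two teacher rows invented so the product is visible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE teacher (id INTEGER PRIMARY KEY, s_id INTEGER, name TEXT);
INSERT INTO student VALUES (1,'Tom'),(2,'John'),(3,'Mary');
INSERT INTO teacher VALUES (1,1,'Prof. David'),(2,2,'Prof. David');
""")
# No join predicate: every student paired with every matching teacher (3 x 2 rows)
cartesian = cur.execute(
    "SELECT s.name FROM student s, teacher t WHERE t.name = 'Prof. David'").fetchall()
# Proper join: only the students the teacher rows actually reference
joined = cur.execute("""
    SELECT s.name FROM student s
    JOIN teacher t ON t.s_id = s.id
    WHERE t.name = 'Prof. David'""").fetchall()
print(len(cartesian), [n for (n,) in joined])  # -> 6 ['Tom', 'John']
```

The duplicated names in the question are exactly this 3 x 2 product.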
The issue is that you have two tables and you are performing a cartesian join on those tables with your query. Therefore, you are getting 3x2 = 6 rows in your results, where for each teacher you are showing the names of all 3 students. You must join your tables in a logical way based on the foreign key relationships in your schema. For example: `Select A.field1, B.field2 from A join B on A.id = B.a_id`
A SQL lookup on two tables;
[ "mysql", "sql" ]
I figured out this code which does half the job: ``` SELECT @SEQ = Isnull(@SEQ,0) ``` But how can I make this set the `@SEQ` to be incremented by one if it is not `null`?
You can just do ``` SELECT @SEQ = Isnull(@SEQ+1,0) ``` As adding 1 to null would still yield null.
Incrementing a `null` will result in `null`, so this can be accomplished with a `coalesce` expression: ``` SELECT @SEQ = COALESCE(@SEQ + 1, 0) ```
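The null-propagation rule that both the `ISNULL` and `COALESCE` versions rely on can be verified directly in any SQL engine, for instance with sqlite3:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# NULL + 1 evaluates to NULL, so the fallback value 0 is used...
first_call = cur.execute("SELECT COALESCE(NULL + 1, 0)").fetchone()[0]
# ...while a non-null value is simply incremented
next_call = cur.execute("SELECT COALESCE(5 + 1, 0)").fetchone()[0]
print(first_call, next_call)  # -> 0 6
```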
How can I set a variable to be equal to 0 if null or increment it if it is not null?
[ "sql", "sql-server", "t-sql" ]
I need to query a taken room which is present in all the dates from the table. My table data is ``` ----------------------------------- |Room No | Date | Type | ----------------------------------- |1 | 1 JAN 2016 | AC | |2 | 1 JAN 2016 | AC | |3 | 1 JAN 2016 | Non AC | |1 | 2 JAN 2016 | AC | |3 | 2 JAN 2016 | AC | |1 | 3 JAN 2016 | AC | |2 | 3 JAN 2016 | AC | |3 | 3 JAN 2016 | Non AC | ``` Now I want the result Like ``` ------------------------ | RoomNo | Type | ------------------------ | 1 | AC | | 3 | NON AC | ``` See the above example room no 2 is not present on 2nd jan so it's not required.
One way to approach this problem is to compare the number of unique dates a room has to the total number of unique dates: ``` SELECT room_no, type FROM (SELECT room_no, type, COUNT(DISTINCT date) AS cnt FROM rooms GROUP BY room_no, type) r JOIN (SELECT COUNT(DISTINCT date) AS cnt FROM rooms) c ON r.cnt = c.cnt ```
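A quick sanity check of the compare-distinct-counts idea in sqlite3, using the question's rows (grouping by room only here, because room 3's type changes across dates in the sample data, which would split its per-type counts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE rooms (room_no INTEGER, date TEXT, type TEXT)")
cur.executemany("INSERT INTO rooms VALUES (?, ?, ?)", [
    (1, "2016-01-01", "AC"), (2, "2016-01-01", "AC"), (3, "2016-01-01", "Non AC"),
    (1, "2016-01-02", "AC"), (3, "2016-01-02", "AC"),
    (1, "2016-01-03", "AC"), (2, "2016-01-03", "AC"), (3, "2016-01-03", "Non AC"),
])
# a room qualifies when its distinct-date count equals the table's distinct-date count
rows = cur.execute("""
    SELECT r.room_no
    FROM (SELECT room_no, COUNT(DISTINCT date) AS cnt
          FROM rooms GROUP BY room_no) r
    JOIN (SELECT COUNT(DISTINCT date) AS cnt FROM rooms) c
      ON r.cnt = c.cnt
    ORDER BY r.room_no
""").fetchall()
present = [r[0] for r in rows]
print(present)  # -> [1, 3]
```

Room 2 is missing on 2 Jan, so its count (2) falls short of the total (3) and it drops out, matching the expected result.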
There are many ways; as this represents an islands-and-gaps problem, assuming you have three rooms, you just want to give some value if there is a gap: ``` select row_number() over (partition by date order by roomno) as rn, roomno,type from yourtable ``` now you can select all the rooms which are booked each day using ``` with cte as ( select row_number() over (partition by date order by roomno) as rn, roomno,type from yourtable ) select * from cte where rn=3 ``` not booked rooms using ``` with cte as ( select row_number() over (partition by date order by roomno) as rn, roomno,type from yourtable ) select * from cte where rn<3 ```
Sql Query Filter
[ "sql", "sql-server", "sql-server-2008", "select" ]
I have an ERP database and it is big. One of the tables gets updated by an SP, TRIGGER, FUNCTION or something else. I watched the Profiler to find UPDATE or INSERT statements, but I couldn't find any UPDATE or INSERT. Therefore, the table must be updated by an SP, TRIGGER, FUNCTION or something else. Is there a helper to find in which SPs, FUNCTIONs or TRIGGERs the table is used? I want to give the table name and have it tell me where the table name is used.
In SSMS do the following > Server->Database->Tables-> tablename ->right click -> view > dependencies Select the **Objects that depend on** radio button to view the objects that use your table
Export all script objects to a file and search the file. You can do this from SQL Server Management Studio. Right Click the database and go to Tasks > Generate Scripts.
How to find an object in all T-SQL scripts
[ "sql", "sql-server" ]
``` Product Tier1 Tier2 Tier3 value A ANZ - - 2 A ANZ DE - 3 A ANZ DE Outlet 9 B EMEA - - 4 B EMEA IT - 6 ``` I am always interested in the last value defined for a Product. For example, for Product A I should get the value 9 and for Product B I should get the value 6. Is it possible to get the result directly using a SQL query, or should I load all the product rows and perform the operations in backend code to get the final value? What is preferable? I am using MySQL. ``` ANZ 2 | DE 3 | OUTLET - 9 last defined level value is 9 EMEA 4 | IT 6 | ? No value defined so last defined level value is 6 ``` cases can be case 1 ``` Product Tier1 Tier2 Tier3 value A ANZ - - 2 A ANZ DE - 3 A ANZ DE Outlet 9 Desired result = 9 ``` case2 ``` Product Tier1 Tier2 Tier3 value A ANZ DE Outlet 9 Desired result = 9 ``` case3 ``` Product Tier1 Tier2 Tier3 value A ANZ - - 2 A ANZ DE Outlet 9 Desired result = 9 ``` case4 ``` Product Tier1 Tier2 Tier3 value A ANZ - - 2 A ANZ DE - 3 Desired result = 3 ``` case5 ``` Product Tier1 Tier2 Tier3 value A ANZ - - 2 Desired result = 2 ```
Here comes a solution verified with data provided. SQL: ``` SELECT product, tier1, tier2, tier3, value FROM tbl tbl2 WHERE ( tbl2.product, ((tier1 IS NOT NULL) + (tier2 IS NOT NULL ) + (tier3 IS NOT NULL)) ) IN ( SELECT product, MAX( (tier1 IS NOT NULL) + (tier2 IS NOT NULL ) + (tier3 IS NOT NULL) ) maxTierCnt FROM tbl WHERE tbl.product = tbl2.product ); ``` Output: ``` mysql> SELECT * FROM tbl; +---------+-------+-------+--------+-------+ | product | tier1 | tier2 | tier3 | value | +---------+-------+-------+--------+-------+ | A | ANZ | NULL | NULL | 2 | | A | ANZ | DE | NULL | 3 | | A | ANZ | DE | Outlet | 9 | | B | EMEA | NULL | NULL | 4 | | B | EMEA | IT | NULL | 6 | +---------+-------+-------+--------+-------+ 5 rows in set (0.00 sec) mysql> mysql> SELECT -> product, -> tier1, -> tier2, -> tier3, -> value -> FROM -> tbl tbl2 -> WHERE -> ( tbl2.product, ((tier1 IS NOT NULL) + (tier2 IS NOT NULL ) + (tier3 IS NOT NULL)) ) IN -> ( -> SELECT -> product, -> MAX( (tier1 IS NOT NULL) + (tier2 IS NOT NULL ) + (tier3 IS NOT NULL) ) maxTierCnt -> FROM -> tbl -> WHERE -> tbl.product = tbl2.product -> ); +---------+-------+-------+--------+-------+ | product | tier1 | tier2 | tier3 | value | +---------+-------+-------+--------+-------+ | A | ANZ | DE | Outlet | 9 | | B | EMEA | IT | NULL | 6 | +---------+-------+-------+--------+-------+ 2 rows in set (0.00 sec) ```
I think the following should do what you want: ``` SELECT value FROM product WHERE product.Product = 'A' ORDER BY Tier1 DESC, Tier2 DESC, Tier3 DESC LIMIT 1 ``` …using whatever product you are actually looking for in place of `'A'`, of course.
Retrieve last value defined for a tree data in Mysql
[ "mysql", "sql" ]
I have a table whose values are **`NUMERIC(16,4)`**. Example: ``` 12.4568 13.2 14.05 ``` I want to display the value with only 2 digits after the dot, without rounding. The expected result is: ``` 12.45 13.2 14.05 ``` What I did is: ``` Select price::Numeric(16,2) from prices ``` It works; however, I am not sure if this is the correct way to do it. I think it's better to use some kind of display edit rather than casting?
you can do this in the following way: ``` select trunc(cast(your_float_column as numeric), 2) from your_table ``` If you just want to truncate a literal without rounding, then ``` select trunc(12333.347, 2) ``` hope this will work for you
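If the truncation is done in application code instead of SQL, Python's `decimal` module does the same digit-chopping with `ROUND_DOWN` (the helper name is invented; note that quantizing keeps trailing zeros, so `13.2` comes back as `13.20`):

```python
from decimal import Decimal, ROUND_DOWN

def truncate_2(value):
    # quantize to 2 decimal places, always rounding toward zero (no rounding up)
    return Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_DOWN)

for v in ["12.4568", "13.2", "14.05"]:
    print(truncate_2(v))  # -> 12.45, 13.20, 14.05
```

Going through `Decimal` rather than `float` avoids binary floating-point surprises on values like `14.05`.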
Did you try: `Select round(price,2) from prices`
How to display 2 digits after dot in PostgreSQL?
[ "sql", "postgresql", "number-formatting", "numeric" ]
FOR Example if I have: ``` DECLARE @Day int = 25 DECLARE @Month int = 10 DECLARE @Year int = 2016 ``` I want to return ``` 2016-10-25 ``` As Date or datetime
In SQL Server 2012+, you can use `datefromparts()`: ``` select datefromparts(@year, @month, @day) ``` In earlier versions, you can cast a string. Here is one method: ``` select cast(cast(@year*10000 + @month*100 + @day as varchar(255)) as date) ```
In SQL Server 2012+, you can use [`DATEFROMPARTS`](https://learn.microsoft.com/en-us/sql/t-sql/functions/datefromparts-transact-sql)(): ``` DECLARE @Year int = 2016, @Month int = 10, @Day int = 25; SELECT DATEFROMPARTS (@Year, @Month, @Day); ``` In earlier versions, one method is to create and convert a string. There are a few string date formats which SQL Server reliably interprets regardless of the date, language, or internationalization settings. > A six- or eight-digit string is always interpreted as ymd. The month > and day must always be two digits. <https://learn.microsoft.com/en-us/sql/t-sql/data-types/datetime-transact-sql> So a string in the format 'yyyymmdd' will always be properly interpreted. (ISO 8601-- `YYYY-MM-DDThh:mm:ss`-- also works, but you have to specify time and therefore it's more complicated than you need.) While you can simply `CAST` this string as a date, you must use `CONVERT` in order to specify a style, and you [must specify a style in order to be deterministic](https://msdn.microsoft.com/en-us/library/ms178091(SQL.90).aspx) (if that matters to you). The "yyyymmdd" format is style 112, so your conversion looks like this: ``` DECLARE @Year int = 2016, @Month int = 10, @Day int = 25; SELECT CONVERT(date,CONVERT(varchar(50),(@Year*10000 + @Month*100 + @Day)),112); ``` And it results in: > 2016-10-25 Technically, the ISO/112/yyyymmdd format works even with other styles specified. For example, using that text format with style 104 (German, dd.mm.yyyy): ``` DECLARE @Year int = 2016, @Month int = 10, @Day int = 25; SELECT CONVERT(date,CONVERT(varchar(50),(@Year*10000 + @Month*100 + @Day)),104); ``` Also still results in: > 2016-10-25 Other formats are not as robust. 
For example this: ``` SELECT CASE WHEN CONVERT(date,'01-02-1900',110) = CONVERT(date,'01-02-1900',105) THEN 1 ELSE 0 END; ``` Results in: > 0 As a side note, with this method, beware that nonsense inputs can yield valid but incorrect dates: ``` DECLARE @Year int = 2016, @Month int = 0, @Day int = 1025; SELECT CONVERT(date,CONVERT(varchar(50),(@Year*10000 + @Month*100 + @Day)),112); ``` Also yields: > 2016-10-25 `DATEFROMPARTS` protects you from invalid inputs. This: ``` DECLARE @Year int = 2016, @Month int = 10, @Day int = 32; SELECT DATEFROMPARTS (@Year, @Month, @Day); ``` Yields: > Msg 289, Level 16, State 1, Line 2 Cannot construct data type date, > some of the arguments have values which are not valid. Also beware that this method does not work for dates prior to 1000-01-01. For example: ``` DECLARE @Year int = 900, @Month int = 1, @Day int = 1; SELECT CONVERT(date,CONVERT(varchar(50),(@Year*10000 + @Month*100 + @Day)),112); ``` Yields: > Msg 241, Level 16, State 1, Line 2 Conversion failed when converting > date and/or time from character string. That's because the resulting string, '9000101', is not in the 'yyyymmdd' format. To ensure proper formatting, you'd have to pad it with leading zeroes, at the sacrifice of some small amount of performance. For example: ``` DECLARE @Year int = 900, @Month int = 1, @Day int = 1; SELECT CONVERT(date,RIGHT('000' + CONVERT(varchar(50),(@Year*10000 + @Month*100 + @Day)),8),112); ``` Results in: > 0900-01-01 There are other methods aside from string conversion. Several are provided in answers to "[Create a date with T-SQL](https://stackoverflow.com/questions/266924/create-a-date-with-t-sql)". [A notable example](https://stackoverflow.com/a/267016/2266979) involves creating the date by adding years, months, and days to the "zero date". 
(This answer was inspired by [Gordon Linoff's](https://stackoverflow.com/users/1144035/gordon-linoff) [answer](https://stackoverflow.com/a/35577009/2266979), which I expanded on and provided additional documentation and notes.)
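The same parts-to-date construction, with the same validation behavior, exists in most languages; for instance Python's `datetime.date` also rejects impossible parts the way `DATEFROMPARTS` does:

```python
from datetime import date

# valid parts build the expected date
print(date(2016, 10, 25).isoformat())  # -> 2016-10-25

# invalid parts are rejected, analogous to DATEFROMPARTS raising Msg 289
try:
    date(2016, 10, 32)
    raised = False
except ValueError:
    raised = True
print(raised)  # -> True
```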
How to create a Date in SQL Server given the Day, Month and Year as Integers
[ "sql", "sql-server" ]
I have a table "move" with one column "move_doc" which is a CLOB. The json stored inside has the structure: ``` { moveid : "123", movedate : "xyz", submoves: [ { submoveid: "1", ... }, { submoveid : "2", ... } ] } ``` I know I can run an Oracle 12c query to access the submoves list with: `select move.move_doc.submoves from move move` How do I access particular submoves of the array? And the attributes inside a particular submove?
You have to use Oracle functions `json_query` and/or `json_value` like this: ``` SELECT json_value(move_doc, '$.submoves[0].submoveid' RETURNING NUMBER) FROM move; ``` returns `1`. ``` SELECT json_query(move_doc, '$.submoves[1]') FROM move; ``` would return the second JSON element, i.e. something like ``` { submoveid : "2", ... } ``` `json_value` is used to retrieve a scalar value, `json_query` is used to retrieve JSON values. You might also want to have a look at `json_table` which returns an SQL result table and thus can be used in Joins. See [this Oracle Doc](https://docs.oracle.com/database/121/ADXDB/json.htm#ADXDB6277) for more examples
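Outside the database, those JSON paths correspond to plain dict/list indexing; a quick Python check of the two accesses above (document trimmed to the fields used):

```python
import json

doc = json.loads("""
{"moveid": "123",
 "submoves": [{"submoveid": "1"}, {"submoveid": "2"}]}
""")

# $.submoves[0].submoveid -- a scalar, as json_value would return
print(doc["submoves"][0]["submoveid"])

# $.submoves[1] -- a JSON object, as json_query would return
print(doc["submoves"][1])
```

The scalar-vs-object split is the same reason Oracle separates `json_value` from `json_query`.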
Beda here from the Oracle JSON team. We have added a new multi-value index in release 21c allowing you to index values from a JSON array. Obviously, 21c is brand new and you want to know how to do this in older releases: Functional indexes (using the JSON_Value function) are limited to a single value per JSON document and therefore are not capable of indexing array values. But: there is a 'JSON search index' which indexes your entire JSON document and therefore also values in the array. Another solution is to use a materialized view using JSON_Table. This will expand the array values into separate rows. Then you can add a regular B-Tree index on that column. Sample code here: JSON indexing with functional indexes and JSON search index <https://livesql.oracle.com/apex/livesql/file/content_HN507PELCEEJGVNW4Q61L34DS.html> JSON and materialized views <https://livesql.oracle.com/apex/livesql/file/content_HYMB1YBP4CPMG6T6MXY5G9X5L.html>
How do you index an array inside a JSON with an Oracle 12c query?
[ "sql", "json", "oracle", "indexing", "oracle12c" ]
I have a practical question regarding naming table fields in a database. For example, I have two tables: ``` student (id int; name varchar(30)) teacher (id int, s_id int; name varchar(30)) ``` Both 'id' and 'name' appear in the two tables. In a SQL statement, the two will be ambiguous if no table name is prefixed. Two options: 1. use the table name as a prefix for a field in the SQL 'where' clause 2. use prefixed field names in the tables so that no prefix is needed in the 'where' clause. Which one is better?
For queries that aren't ad-hoc, you should *always* prefix every field with either the table name or table alias, even if the field name isn't ambiguous. This prevents the query from breaking later if someone adds a new column to one of the tables that introduces ambiguity. So that would make "id" and "name" unambiguous. But I still recommend naming the primary key with something more specific than "id". In your example, I would use `student_id` and `teacher_id`. This helps prevent mistakes in joins. You will need more specific names anyway when you run into tables with more than one unique key, or multi-part keys. It's worth thinking these things through, but in the end consistency may be the more important factor. I can deal with tables built around `id` instead of `student_id`, but I'm currently working with an inconsistent schema that uses all of the following: `id`, `sid`, `systemid` and specific names like `taskid`. That's the worst of both worlds.
Without a doubt, go with option 1. This is valid sql in any type of database and considered the proper and most readable format. It's good habit to prefix the table name to a column, and very necessary when doing a join. The only exception to this I've most often seen is prefixing the id column with the table name, but I still wouldn't do that. If you go with option 2, seasoned DBAs will probably point and laugh at you. For further proof, see #2 here: <https://www.periscopedata.com/blog/better-sql-schema.html> And here. Rule 1b - <http://www.isbe.net/ILDS/pdf/SQL_server_standards.pdf> As TT mentions, you'll make your life much easier if you learn how to use an [alias](https://en.wikipedia.org/wiki/Alias_(SQL)) for the table name. It's as simple as using `SomeTableNameThatsWayTooLong AS LT` in your query, such as: ``` SELECT LT.Id FROM SomeTableNameThatsWayTooLong AS LT ```
Table field naming convention and SQL statements
[ "sql" ]
I have a table: ``` id name 1 a 2 a 3 a 4 b 5 b 6 c ``` I am looking for an update statement that will update `name` column to: ``` id name 1 a 2 a-2 3 a-3 4 b 5 b-2 6 c ``` In `SQL Server` I would use: ``` ;with cte as(select *, row_number() over(partition by name order by id) rn from table) update cte set name = name + '-' + cast(rn as varchar(10)) where rn <> 1 ``` I am not strong in `MySQL` nonstandard queries. Can I do something like this in `MySQL`?
You can do this: ``` UPDATE YourTable p JOIN(SELECT t.id,t.name,count(*) as rnk FROM YourTable t INNER JOIN YourTable s on(t.name = s.name and t.id >= s.id) GROUP BY t.id,t.name) f ON(p.id = f.id) SET p.name = concat(p.name,'-',f.rnk) WHERE rnk > 1 ``` This counts, for each row, how many rows share its name with an id no greater than its own, which gives the same ascending rank as ROW_NUMBER(), and then updates only the rows ranked 2 and higher (the second, third, etc., excluding the first).
In MySQL you can use variables in order to simulate `ROW_NUMBER` window function: ``` SELECT id, CONCAT(name, IF(rn = 1, '', CONCAT('-', rn))) AS name FROM ( SELECT id, name, @rn := IF(name = @n, @rn + 1, IF(@n := name, 1, 1)) AS rn FROM mytable CROSS JOIN (SELECT @rn := 0, @n := '') AS vars ORDER BY name, id) AS t ``` To `UPDATE` you can use: ``` UPDATE mytable AS t1 SET name = ( SELECT CONCAT(name, IF(rn = 1, '', CONCAT('-', rn))) AS name FROM ( SELECT id, name, @rn := IF(name = @n, @rn + 1, IF(@n := name, 1, 1)) AS rn FROM mytable CROSS JOIN (SELECT @rn := 0, @n := '') AS vars ORDER BY name, id) AS t2 WHERE t1.id = t2.id) ``` [**Demo here**](http://sqlfiddle.com/#!9/b0e2d/1) You can also use `UPDATE` with `JOIN` syntax: ``` UPDATE mytable AS t1 JOIN ( SELECT id, rn, CONCAT(name, IF(rn = 1, '', CONCAT('-', rn))) AS name FROM ( SELECT id, name, @rn := IF(name = @n, @rn + 1, IF(@n := name, 1, 1)) AS rn FROM mytable CROSS JOIN (SELECT @rn := 0, @n := '') AS vars ORDER BY name, id) AS x ) AS t2 ON t2.rn <> 1 AND t1.id = t2.id SET t1.name = t2.name; ``` The latter is probably faster than the former because it performs less `UPDATE` operations.
Update duplicate rows
[ "", "mysql", "sql", "" ]
SQL in Oracle: Dataset Example: ``` Search_date Value 01-Feb-2016 5 02-Feb-2016 4 03-Feb-2016 9 04-Feb-2016 10 05-Feb-2016 12 06-Feb-2016 10 07-Feb-2016 7 ``` So, if I set a where condition of search\_date = '04-Feb-2016', it would return the following: **Search\_date, Before\_Period, After\_Period** **04-Feb-2016**, **18** (9 + 4 + 5), **29** (10 + 12 + 7). But I would like to pull this for every distinct date.
Use the `SUM()` [analytical (windowed) function](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm#SQLRF06174): **Oracle Setup**: ``` CREATE TABLE DataSet ( Search_date, Value ) AS SELECT DATE '2016-02-01', 5 FROM DUAL UNION ALL SELECT DATE '2016-02-02', 4 FROM DUAL UNION ALL SELECT DATE '2016-02-03', 9 FROM DUAL UNION ALL SELECT DATE '2016-02-04', 10 FROM DUAL UNION ALL SELECT DATE '2016-02-05', 12 FROM DUAL UNION ALL SELECT DATE '2016-02-06', 10 FROM DUAL UNION ALL SELECT DATE '2016-02-07', 7 FROM DUAL; ``` **Query**: ``` SELECT search_date, COALESCE( SUM( value ) OVER ( ORDER BY Search_date ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING ), 0 ) AS before_period, COALESCE( SUM( value ) OVER ( ORDER BY Search_date ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING ), 0 ) AS after_period FROM DataSet; ``` **Output**: ``` SEARCH_DATE BEFORE_PERIOD AFTER_PERIOD ----------- ------------- ------------ 01-FEB-16 0 52 02-FEB-16 5 48 03-FEB-16 9 39 04-FEB-16 18 29 05-FEB-16 28 17 06-FEB-16 40 7 07-FEB-16 50 0 ``` If you want just the 3 days before/after then change `UNBOUNDED` to `3`.
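The unbounded before/after frames are easy to cross-check with plain Python sums over the question's value list; this just verifies the expected output independent of Oracle.

```python
# Values in search_date order, taken from the question's dataset
vals = [5, 4, 9, 10, 12, 10, 7]

# before_period: sum of everything strictly before row i
before = [sum(vals[:i]) for i in range(len(vals))]
# after_period: sum of everything strictly after row i
after = [sum(vals[i + 1:]) for i in range(len(vals))]

print(before)  # [0, 5, 9, 18, 28, 40, 50]
print(after)   # [52, 48, 39, 29, 17, 7, 0]

# The 3-row bounded variant (ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING)
before3 = [sum(vals[max(0, i - 3):i]) for i in range(len(vals))]
print(before3[3])  # 18, matching 9 + 4 + 5 for 04-Feb-2016
```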
Try [aggregate (windowing clause)](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm#i97640) functions: ``` select dt, sum(val) over(order by dt rows between 3 preceding and 1 preceding) as prev, sum(val) over(order by dt rows between 1 following and 3 following) as next from t order by dt; ``` Output: ``` DT PREV NEXT 02/01/2016 18 02/01/2016 5 23 02/02/2016 10 31 02/03/2016 14 32 02/04/2016 18 29 02/05/2016 23 17 02/06/2016 31 7 02/07/2016 32 ```
Multiple dates in column 'date'. Numbers in column 'Value'. Pull a sum of 3 days of 'Value' before and after EACH specific 'Date'
[ "", "sql", "oracle", "" ]
I need a query which will output a list of dates between two given dates. For example, if my start date is 23/02/2016 and end date is 02/03/2016, I am expecting the following output: ``` Date ---- 23/02/2016 24/02/2016 25/02/2016 26/02/2016 27/02/2016 28/02/2016 29/02/2016 01/03/2016 02/03/2016 ``` Also, I need the above using SQL only (without the use of 'WITH' statement or tables). Please help.
I am using mostly DB2 for iSeries, so I will give you an SQL-only solution that works on it. Currently I don't have access to the server, so at first the query was untested. **EDIT** The query has now been tested and is working. ``` SELECT d.min + num.n DAYS FROM -- create inline table with min max date (VALUES(DATE('2015-02-28'), DATE('2016-03-01'))) AS d(min, max) INNER JOIN -- create inline table with numbers from 0 to 999 ( SELECT n1.n + n10.n + n100.n AS n FROM (VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n) CROSS JOIN (VALUES(0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n) CROSS JOIN (VALUES(0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n) ) AS num ON d.min + num.n DAYS<= d.max ORDER BY num.n; ``` If you are going to run this more than once, you should consider creating a real table with the values for the loop: ``` CREATE TABLE dummy_loop AS ( SELECT n1.n + n10.n + n100.n AS n FROM (VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n) CROSS JOIN (VALUES(0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n) CROSS JOIN (VALUES(0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n) ) WITH DATA; ALTER TABLE dummy_loop ADD PRIMARY KEY (dummy_loop.n); ``` It depends on what you want to use it for, but you could even create a table for, let's say, 100 years. It would be only 100\*365 = 36500 rows with just a date field, so the table would be quite small and fast for joins.
``` CREATE TABLE dummy_dates AS ( SELECT DATE('1970-01-01') + (n1.n + n10.n + n100.n) DAYS AS date FROM (VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS n1(n) CROSS JOIN (VALUES(0),(10),(20),(30),(40),(50),(60),(70),(80),(90)) AS n10(n) CROSS JOIN (VALUES(0),(100),(200),(300),(400),(500),(600),(700),(800),(900)) AS n100(n) ) WITH DATA; ALTER TABLE dummy_dates ADD PRIMARY KEY (dummy_dates.date); ``` And the select query could look like: ``` SELECT * FROM dummy_dates WHERE date BETWEEN :startDate AND :endDate; ``` **EDIT 2**: Thanks to @Lennart's suggestion I have changed TABLE(VALUES(..,..,..)) to VALUES(..,..,..) because, as he said, TABLE is a synonym for LATERAL — that was a real surprise for me. **EDIT 3**: Thanks to @godric7gt I have removed TIMESTAMPDIFF and will remove it from all my scripts, because as the documentation says: > These assumptions are used when converting the information in the second argument, which is a timestamp duration, to the interval type specified in the first argument. The **returned estimate may vary by a number of days**. For example, if the number of days (interval 16) is requested for the difference between '1997-03-01-00.00.00' and '1997-02-01-00.00.00', the result is 30. This is because the difference between the timestamps is 1 month, and the assumption of 30 days in a month applies. It was a real surprise, because I had always trusted this function for day differences.
Generating rows requires recursive SQL. Usually this looks like this in DB2: ``` with temp (date) as ( select date('23.02.2016') as date from sysibm.sysdummy1 union all select date + 1 day from temp where date < date('02.03.2016') ) select * from temp ``` If for whatever reason a CTE (using WITH) must be avoided, a possible workaround is setting ``` db2set DB2_COMPATIBILITY_VECTOR=8 ``` which enables the use of Oracle-style recursion with CONNECT BY ``` SELECT date('22.02.2016') + level days as dt FROM sysibm.sysdummy1 CONNECT BY date('22.02.2016') + level days <= date('02.03.2016') ``` Please note: after setting DB2\_COMPATIBILITY\_VECTOR an instance restart is necessary.
IBM DB2: Generate list of dates between two dates
[ "", "sql", "db2", "db2-400", "db2-luw", "" ]
I can find the items common between two given users similar to the answer [here](https://stackoverflow.com/questions/10359903/oracle-sql-find-common-items-purchased-between-two-users): ``` SELECT items.name FROM users JOIN requests ON users.user_id = requests.user_id JOIN items ON requests.item_id = items.item_id WHERE users.name = 'jane' INTERSECT SELECT items.name FROM users JOIN requests ON users.user_id = requests.user_id JOIN items ON requests.item_id = items.item_id WHERE users.name = 'zaku'; ``` I guess I could keep adding more intersect statements to include additional users and that's hardly a good solution. How would I find any and all item(s) common among ALL users? In my e.g., the common item among all users is "pc" but it could as well be any other item(s). [See my code on SQL Fiddle](http://www.sqlfiddle.com/#!7/78aab/2). Thanks.
To get the items, you could simply do: ``` select r.item_id from requests r group by r.item_id having count(distinct r.user_id) = (select count(*) from users); ``` Getting the name is essentially the same thing: ``` select i.name from requests r join items i on r.item_id = i.item_id group by i.name having count(distinct r.user_id) = (select count(*) from users); ```
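The question is tagged sqlite, so the `HAVING COUNT(DISTINCT ...)` pattern can be exercised directly with Python's built-in `sqlite3` module. The schema and data below are invented to mirror the question's description (the original fiddle isn't reproduced here), with "pc" as the only item requested by every user:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE items (item_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE requests (user_id INTEGER, item_id INTEGER);
    INSERT INTO users VALUES (1, 'jane'), (2, 'zaku'), (3, 'mira');
    INSERT INTO items VALUES (1, 'pc'), (2, 'phone'), (3, 'desk');
    -- everyone requested the pc; the other items are only partially shared
    INSERT INTO requests VALUES (1, 1), (1, 2), (2, 1), (2, 3), (3, 1);
""")

# An item qualifies when its distinct requester count equals the user count
rows = con.execute("""
    SELECT i.name
    FROM requests r
    JOIN items i ON r.item_id = i.item_id
    GROUP BY i.name
    HAVING COUNT(DISTINCT r.user_id) = (SELECT COUNT(*) FROM users)
""").fetchall()
print(rows)  # [('pc',)]
```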
You can try something like this: ``` SELECT items.name FROM users JOIN requests ON users.user_id = requests.user_id JOIN items ON requests.item_id = items.item_id GROUP BY items.name having count(items.name) = (select count(distinct user_id) from users) ``` The idea is to compare the count of each item from your query with the total count of users. If they are equal, it means that all users have the item.
sql - find items that ALL users have in common
[ "", "sql", "sqlite", "" ]
I have 2 tables as follows: Table 1: ``` ID FName 1 Basics 2 Machine1 3 master 4 Machine2 15 Machine3 16 Machine16 ``` Table 2: ``` ParentID Name InitialValue 1 Active 1 2 MachineName Entrylevel 2 Active 1 3 Active 1 4 MachineName Midlevellevel 4 Active 1 15 MachineName Endlevel 15 Active 1 16 MachineName Miscellenious 16 Active 0 ``` Here, the ID of Table 1 is referenced as ParentID in Table 2. I want the "InitialValue" of Table 2 for the MachineName rows, provided the "InitialValue" of the corresponding Active rows (with the same ParentID) is 1. The result should look like: ``` ID InitialValue 2 Entrylevel 4 Midlevellevel 15 Endlevel ```
You could join the second table twice, once for MachineName, and once for Active: ``` SELECT t.ID, machine.InitialValue FROM table1 t INNER JOIN table2 machine ON t.ID = machine.ParentId AND machine.Name = 'MachineName' INNER JOIN table2 active ON t.ID = active.ParentId AND active.Name = 'Active' AND active.InitialValue = 1; ``` ## About Joins The `JOIN` syntax allows you to link records to the previous table in your `FROM` list, most of the time via a relationship of foreign key - primary key. In a distant past, we used to do that with a `WHERE` condition, but that really is outdated syntax. In the above query, that relationship of primary key - foreign key is expressed with `t.ID = machine.ParentId` in the first case. Note the alias that was defined for table2, so we can refer to it with `machine`. Some extra condition(s) are added to the join condition, such as `machine.Name = 'MachineName'`. Those could just as well have been placed in a `WHERE` clause, but I like it this way. Then the same table is joined again, this time with another alias. This time it filters the "Active" 1 records. Note that if the *ID* in table1 does not have a matching record with those conditions, that parent record will be excluded from the results. So now we have the table1 records with a matching "MachineName" record and are sure there is an "Active" 1 record for it as well. This is what needs to be output.
``` SELECT t1.ID, t2.InitialValue FROM table1 t1 join table2 t2 on t1.ID=t2.ParentID WHERE t2.name LIKE 'MachineName'AND t1.ID= ANY(SELECT t22.ParentID FROM table2 t22 WHERE t22.InitialValue=1) ``` I think this should work // I slightly changed the condition in the WHERE clause (t2.parentID changed to t1.ID)
After Table Joining, need specific row value based on another Row with same ID
[ "", "sql", "" ]
Can any TSQL expert help me with the following issue: I have a database with a table named "Data" and this table has following columns: ``` ID| RunId| AccountStaf_1 |AccountStaf_2 |AccountNr| ------------------------------------------- ``` The table will contain following data: ``` ID| RunId| AccountStaf_1 | AccountStaf_2 | AccountNr| ---------------------------------------------- 1 | A | xxx |NULL | 123456 | 2 | A | yyy |NULL | 123456 | 3 | A | | zzz | 123456 | 4 | A | fff | NULL | 123444 | 5 | B | NULL | hhh | 666666 | 6 | B | bbb | NULL | 666666 | ``` Can anyone help me define a TSQL query to find all the AccountNr values that have both an AccountStaf\_1 and an AccountStaf\_2 under the same "RunId"? The outcome of the result should be this: ``` ID| RunId| AccountStaf_1 | AccountStaf_2 | AccountNr| ---------------------------------------------- 1 | A | xxx |NULL | 123456 | 2 | A | yyy |NULL | 123456 | 3 | A | | zzz | 123456 | 5 | B | NULL | hhh | 666666 | 6 | B | bbb | NULL | 666666 | ``` E.g if you say "Where RunId = A", the outcome of the result should be this: ``` ID| RunId| AccountStaf_1 | AccountStaf_2 | AccountNr| ---------------------------------------------- 1 | A | xxx |NULL | 123456 | 2 | A | yyy |NULL | 123456 | 3 | A | | zzz | 123456 | ``` I appreciate any help that you can provide.
If you are wanting to return results if there are two unique AccountStaff regardless of them being in AccountStaf\_1 or AccountStaf\_2 fields, then you can combine the two fields using a UNION and then count UNIQUE values. If greater than 1 then INNER JOIN on the RunId and AccountNr fields. ``` ;WITH AccountStaff AS ( SELECT RunId, AccountStaf_1 AS AccountStaff, AccountNr FROM Data WHERE AccountStaf_1 IS NOT NULL UNION SELECT RunId, AccountStaf_2 AS AccountStaff, AccountNr FROM Data WHERE AccountStaf_2 IS NOT NULL ), AccountList AS ( SELECT RunID, AccountNr FROM AccountStaff GROUP BY RunID, AccountNr HAVING COUNT(DISTINCT AccountStaff) > 1 ) SELECT Data.ID, Data.RunId, Data.AccountStaf_1, Data.AccountStaf_2, Data.AccountNr FROM Data INNER JOIN AccountList ON Data.RunId = AccountList.runId AND Data.AccountNr = AccountList.AccountNr ```
You can group your data by RunId and AccountNr and use an aggregate to see if your columns contain any values: ``` WITH Accounts AS ( SELECT RunId, AccountNr FROM Data WHERE RunId = 'A' GROUP BY RunId, AccountNr HAVING MAX(AccountStaf_1) IS NOT NULL AND MAX(AccountStaf_2) IS NOT NULL ) SELECT D.ID, D.RunId, D.AccountStaf_1, D.AccountStaf_2, D.AccountNr FROM Data D INNER JOIN Accounts A ON A.AccountNr = D.AccountNr AND A.RunId = D.RunId; ```
TSQL query for data control in different columns
[ "", "sql", "sql-server", "database", "t-sql", "sql-server-2008-r2", "" ]
I have a table of `coupons` and a second table called `coupon_uses`. `coupons` has a `max_uses` integer attribute that limits how many times a coupon can be used. Each time a coupon is used, a `coupon_uses` record is created. ``` ------------ --------------- | coupons | | coupon_uses | |----------| |-------------| | id | | coupon_id | | code | | ... | | max_uses | --------------- ----------- ``` How would I construct a query whereby I could filter the coupons table based on how many corresponding coupon\_uses there are for a particular coupon? For example, I want all the coupons that have been used fewer times than their `max_uses` attribute permits.
``` --- The coupons which have already been used to their max limit SELECT count(cu.*), cp.id, cp.max_uses FROM coupons cp LEFT JOIN coupon_uses cu ON cp.id = cu.coupon_id GROUP BY cp.id, cp.max_uses HAVING count(cu.*) = cp.max_uses -- The coupons which still have uses remaining SELECT count(cu.*), cp.id, cp.max_uses , cp.max_uses - count(cu.*) AS remaining_usage FROM coupons cp LEFT JOIN coupon_uses cu ON cp.id = cu.coupon_id GROUP BY cp.id, cp.max_uses HAVING count(cu.*) < cp.max_uses ```
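The question is tagged postgresql, but the LEFT JOIN + HAVING pattern is portable. A runnable sketch with SQLite via Python follows; `COUNT(cu.coupon_id)` stands in for Postgres's `count(cu.*)` (it is NULL-safe across the LEFT JOIN), and the table contents are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE coupons (id INTEGER PRIMARY KEY, code TEXT, max_uses INTEGER);
    CREATE TABLE coupon_uses (coupon_id INTEGER);
    INSERT INTO coupons VALUES (1, 'SAVE10', 2), (2, 'SAVE20', 1), (3, 'NEW', 3);
    -- coupons 1 and 2 are exhausted, coupon 3 is unused
    INSERT INTO coupon_uses VALUES (1), (1), (2);
""")

# Keep only coupons whose use count is below max_uses
rows = con.execute("""
    SELECT cp.id, cp.code, cp.max_uses - COUNT(cu.coupon_id) AS remaining
    FROM coupons cp
    LEFT JOIN coupon_uses cu ON cp.id = cu.coupon_id
    GROUP BY cp.id, cp.code, cp.max_uses
    HAVING COUNT(cu.coupon_id) < cp.max_uses
""").fetchall()
print(rows)  # [(3, 'NEW', 3)]
```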
This should give you what you are looking for: ``` SELECT c.max_uses , c.id , c.code , COUNT(cu.coupon_id) AS 'uses' FROM coupon_uses cu LEFT JOIN coupons c ON c.id = cu.coupon_id GROUP BY c.max_uses , c.id , c.code Having c.max_uses > count(cu.coupon_id) ```
How to filter one table by group by count in another table
[ "", "sql", "postgresql", "" ]
I have got a table that looks like this: ``` ID | name | details --------------------------- 1.3.1-3 | Jack | a 5.4.1-2 | John | b 1.4.5 | Alex | c ``` And what to split it like this: ``` ID | name | details --------------------------- 1.3.1 | Jack | a 1.3.2 | Jack | a 1.3.3 | Jack | a 5.4.1 | John | b 5.4.2 | John | b 1.4.5 | Alex | c ``` How can I do it in postgresql?
``` CREATE TABLE tosplit ( id text NOT NULL , name text , details text ); INSERT INTO tosplit( id , name , details ) VALUES ( '1.3.1-3' , 'Jack' , 'a' ) ,( '5.4.1-2' , 'John' , 'b' ) ,( '1.4.5' , 'Alex' , 'c' ); WITH zzz AS ( SELECT id , regexp_replace(id, '([0-9\.]+\.)([0-9]+)-([0-9]+)', e'\\1', e'g') AS one , regexp_replace(id, '([0-9\.]+\.)([0-9]+)-([0-9]+)', e'\\2', e'g') AS two , regexp_replace(id, '([0-9\.]+\.)([0-9]+)-([0-9]+)', e'\\3', e'g') AS three , name , details FROM tosplit ) SELECT z1.id -- , z1.one , z1.one || generate_series( z1.two::integer, z1.three::integer)::text AS four , z1.name, z1.details FROM zzz z1 WHERE z1.two <> z1.one UNION ALL SELECT z0.id -- , z0.one , z0.one AS four , z0.name, z0.details FROM zzz z0 WHERE z0.two = z0.one ; ``` --- Result: ``` CREATE TABLE INSERT 0 3 id | four | name | details ---------+-------+------+--------- 1.3.1-3 | 1.3.1 | Jack | a 1.3.1-3 | 1.3.2 | Jack | a 1.3.1-3 | 1.3.3 | Jack | a 5.4.1-2 | 5.4.1 | John | b 5.4.1-2 | 5.4.2 | John | b 1.4.5 | 1.4.5 | Alex | c ```
``` with elements as ( select id, regexp_split_to_array(id, '(\.)') as id_elements, name, details from the_table ), bounds as ( select id, case when strpos(id, '-') = 0 then 1 else split_part(id_elements[cardinality(id_elements)], '-', 1)::int end as start_value, case when strpos(id, '-') = 0 then 1 else split_part(id_elements[cardinality(id_elements)], '-', 2)::int end as end_value, case when strpos(id, '-') = 0 then id else array_to_string(id_elements[1:cardinality(id_elements)-1], '.') end as base_id, name, details from elements ) select b.base_id||'.'||c.cnt as new_id, b.name, b.details, count(*) over (partition by b.base_id) as num_rows from bounds b cross join lateral generate_series(b.start_value, b.end_value) as c (cnt) order by num_rows desc, c.cnt; ``` The first CTE simply splits the ID based on the `.`. The second CTE then calculates the start and end value for each ID and "strips" the range definition from the actual ID value to get the base that can be concatenated with the actual row index in the final select statement. 
With this test data: ``` insert into the_table values ('1.3.1-3', 'Jack', 'details 1'), ('5.4.1-2', 'John', 'details 2'), ('1.4.5', 'Alex', 'details 3'), ('10.11.12.1-5', 'Peter', 'details 4'), ('1.4.10-13', 'Arthur','details 5'), ('11.12.13.14.15.16.2-7','Zaphod','details 6'); ``` The following result is returned: ``` new_id | name | details | num_rows --------------------+--------+-----------+--------- 11.12.13.14.15.16.2 | Zaphod | details 6 | 6 11.12.13.14.15.16.3 | Zaphod | details 6 | 6 11.12.13.14.15.16.4 | Zaphod | details 6 | 6 11.12.13.14.15.16.5 | Zaphod | details 6 | 6 11.12.13.14.15.16.6 | Zaphod | details 6 | 6 11.12.13.14.15.16.7 | Zaphod | details 6 | 6 10.11.12.1 | Peter | details 4 | 5 10.11.12.2 | Peter | details 4 | 5 10.11.12.3 | Peter | details 4 | 5 10.11.12.4 | Peter | details 4 | 5 10.11.12.5 | Peter | details 4 | 5 1.4.10 | Arthur | details 5 | 4 1.4.11 | Arthur | details 5 | 4 1.4.12 | Arthur | details 5 | 4 1.4.13 | Arthur | details 5 | 4 1.3.1 | Jack | details 1 | 3 1.3.2 | Jack | details 1 | 3 1.3.3 | Jack | details 1 | 3 5.4.1 | John | details 2 | 2 5.4.2 | John | details 2 | 2 1.4.5.1 | Alex | details 3 | 1 ``` --- The use of `cardinality(id_elements)` requires Postgres 9.4. For earlier versions this needs to be replaced with `array_length(id_elements, 1)` --- A final note: This would be a **lot** easier if you stored the start and end value in separate (integer) columns, rather than appending them to the ID itself. This model violates basic database normalization (first normal form). This solution (or any solution in the answers given) will fail badly if an ID is stored that contains e.g. `10.12.13.A-Z` (non-numeric values), which can be prevented by properly normalizing the data.
PostgreSQL- splitting rows
[ "", "sql", "regex", "postgresql", "" ]
I have a Power table that stores building circuit details. A circuit can be 1 phase or 3 phase but is always represented as 1 row in the circuit table. I want to insert the details of the circuits into a join table which joins panels to circuits. My current circuit table has the following details ``` CircuitID | Voltage | Phase | PanelID | Cct | 1 | 120 | 1 | 1 | 1 | 2 | 208 | 3 | 1 | 3 | 3 | 208 | 2 | 1 | 8 | ``` Is it possible to create a select whereby, when it sees a 3-phase row, it returns 3 rows (or 2 rows for a 2-phase circuit) and increments the Cct column by 1 each time, or do I have to create a loop? ``` CircuitID | PanelID | Cct | 1 | 1 | 1 | 2 | 1 | 3 | 2 | 1 | 4 | 2 | 1 | 5 | 3 | 1 | 8 | 3 | 1 | 9 | ```
You can do this with a recursive CTE. ``` WITH cte AS ( SELECT [CircuitID], [Voltage], [Phase], [PanelID], [Cct], [Cct] AS [Ref] FROM [Power] UNION ALL SELECT [CircuitID], [Voltage], [Phase], [PanelID], [Cct] + 1, [Ref] FROM cte WHERE [Cct] + 1 < [Phase] + [Ref] ) SELECT [CircuitID], [PanelID], [Cct] FROM cte ORDER BY [CircuitID] ```
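The same expansion logic can be verified with the question's data in SQLite (which requires the `RECURSIVE` keyword), via Python's built-in `sqlite3` module — a sketch, not the SQL Server original:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE power
               (CircuitID INTEGER, Voltage INTEGER, Phase INTEGER,
                PanelID INTEGER, Cct INTEGER)""")
con.executemany("INSERT INTO power VALUES (?, ?, ?, ?, ?)",
                [(1, 120, 1, 1, 1), (2, 208, 3, 1, 3), (3, 208, 2, 1, 8)])

# Seed each circuit with its starting Cct, then keep adding rows with
# Cct + 1 until Phase rows have been produced for that circuit.
rows = con.execute("""
    WITH RECURSIVE cte AS (
        SELECT CircuitID, Phase, PanelID, Cct, Cct AS Ref FROM power
        UNION ALL
        SELECT CircuitID, Phase, PanelID, Cct + 1, Ref FROM cte
        WHERE Cct + 1 < Phase + Ref
    )
    SELECT CircuitID, PanelID, Cct FROM cte
    ORDER BY CircuitID, Cct
""").fetchall()
print(rows)
# [(1, 1, 1), (2, 1, 3), (2, 1, 4), (2, 1, 5), (3, 1, 8), (3, 1, 9)]
```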
Here is one way to do it. First, generate numbers using a tally table (the best-performing approach). Here is an excellent article about generating numbers without loops: [Generate a set or sequence without loops](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1) Then join the numbers table with your table, keeping a row for every sequence number that is less than or equal to the record's `Phase` value: ``` ;WITH e1(n) AS ( SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 ), -- 10 e2(n) AS (SELECT 1 FROM e1 CROSS JOIN e1 AS b), -- 10*10 e3(n) AS (SELECT 1 FROM e1 CROSS JOIN e2), -- 10*100 numbers as ( SELECT n = ROW_NUMBER() OVER (ORDER BY n) FROM e3 ) SELECT CircuitID, PanelID, Cct = Cct + ( n - 1 ) FROM Yourtable a JOIN numbers b ON a.Phase >= b.n ```
Select statement that duplicates rows based on N value of column
[ "", "sql", "sql-server", "t-sql", "select", "" ]
I have the following table employee: ``` Name Percentage --------------- rad 80% deepak 20% kavita 30% ``` I want to write a SQL select query using SQL Server 2008 to returns these results: ``` rad 80 deepak 20 kavita 30 ``` How can this be achieved?
**To replace a single character in a select query:** You can simply use the `REPLACE` function whenever you want to strip a character. The general syntax looks like this: ``` REPLACE ( StringExpression, StringPattern, ReplacementString) ``` so in your case it's going to be something like the following: ``` SELECT name, REPLACE(percentage,'%','') FROM employee; GO ``` Note that you can also use another method for the same purpose, i.e. a COLLATE clause: ``` SELECT REPLACE(percentage COLLATE Latin1_General_BIN,'%', '') FROM employee; GO ``` If you want to know more about using COLLATE, take a look at this [LINK](https://msdn.microsoft.com/en-us/library/ms184391.aspx) Hope my answer helps you achieve what you want.
Not sure what you need, Replace will not work SELECT REPLACE('rad 80% deepak 20% kavita 30%','%',''); **Edit** ``` Select name, REPLACE(percentage,'%',''); ```
How to remove percentage symbol while extracting records
[ "", "sql", "sql-server-2008", "" ]
I need to only pull records that have have a date span of 2 or more days. I have over 8K records I need to search through. What's the best way to do this please? I'm using SQL Server 2014. [SQL Fiddle](http://sqlfiddle.com/#!6/011e4/1/0/ "SQL Fiddle") In case SQL Fiddle doesn't work: (I've had issues getting it to work lately.) ``` create table #DD ( Event varchar(100), ResponseBegin date, ResponseEnd date ) insert into #DD (Event, ResponseBegin, ResponseEnd) values ('Det', '20150201', '20150202'), ('Adm', '20160201', '20160204'), ('MM', '20120201', '20120205'), ('Det', '20160201', '20160207'), ('Det', '20160201', '20160201') ``` Data [![enter image description here](https://i.stack.imgur.com/Gu0kU.png)](https://i.stack.imgur.com/Gu0kU.png) Desired Results [![enter image description here](https://i.stack.imgur.com/DKEed.png)](https://i.stack.imgur.com/DKEed.png) Thank you.
You can do the following: ``` SELECT * FROM DD WHERE DateAdd(DAY, 2, ResponseBegin) <= ResponseEnd ```
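The same predicate translates to other engines. Here is a runnable sketch in SQLite via Python's `sqlite3` module — note it needs ISO-format dates and uses `date(..., '+2 days')` in place of SQL Server's `DATEADD`; the column names are shortened from the question's table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dd (event TEXT, begin_d TEXT, end_d TEXT)")
con.executemany("INSERT INTO dd VALUES (?, ?, ?)", [
    ("Det", "2015-02-01", "2015-02-02"),  # 1-day span: excluded
    ("Adm", "2016-02-01", "2016-02-04"),
    ("MM",  "2012-02-01", "2012-02-05"),
    ("Det", "2016-02-01", "2016-02-07"),
    ("Det", "2016-02-01", "2016-02-01"),  # same day: excluded
])

# Keep rows whose span is 2 or more days
rows = con.execute("""
    SELECT event, begin_d, end_d FROM dd
    WHERE date(begin_d, '+2 days') <= end_d
    ORDER BY begin_d, end_d
""").fetchall()
print([r[0] for r in rows])  # ['MM', 'Adm', 'Det']
```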
Try ``` SELECT * FROM DD WHERE ResponseEnd <= DATEADD(day, 2, ResponseBegin) ```
How do I return results of a date range of >=2 days?
[ "", "sql", "sql-server", "t-sql", "" ]
Is there any performance difference between the following? ``` NAME LIKE '%EXPRESSION_1%' OR NAME LIKE '%EXPRESSION_2%' ... OR NAME LIKE '%EXPRESSION_N%' ``` VS ``` NAME IN (ACTUAL_VALUE_1,ACTUAL_VALUE_2,.., ACTUAL_VALUE_N) ```
The `IN` version is potentially much, much faster. The two versions do *not* do the same thing. But, if either meets your needs, the `IN` version can take advantage of an index on `NAME`. The `LIKE` version cannot, because the pattern starts with a wildcard. You could write this as: ``` WHERE NAME LIKE 'EXPRESSION_%' ``` If this meets your needs, it can also take advantage of an index on `NAME`.
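The semantic difference is easy to demonstrate with a tiny table. A sketch using Python's built-in `sqlite3` module (the data is made up — any value merely *containing* the expression shows the divergence):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("EXPRESSION_1",), ("XEXPRESSION_1X",), ("OTHER",)])

# LIKE with leading/trailing wildcards matches substrings...
like_rows = [r[0] for r in con.execute(
    "SELECT name FROM t WHERE name LIKE '%EXPRESSION_1%' ORDER BY name")]
# ...while IN matches only exact values (and can use an index on name)
in_rows = [r[0] for r in con.execute(
    "SELECT name FROM t WHERE name IN ('EXPRESSION_1') ORDER BY name")]

print(like_rows)  # ['EXPRESSION_1', 'XEXPRESSION_1X']
print(in_rows)    # ['EXPRESSION_1']
```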
You can simply try to use ``` NAME LIKE '%EXPRESSION_%' ``` As far as performance is concerned, IN is comparatively faster than OR. You can confirm this by comparing the execution plans of your two queries. Also, as commented above, the two queries you are showing are different. The first query: ``` NAME LIKE '%EXPRESSION_1%' OR NAME LIKE '%EXPRESSION_2%' ... OR NAME LIKE '%EXPRESSION_N%' ``` will fetch results whose data merely contains the pattern, such as ``` EXPRESSION_123 XXXEXPRESSION_1234 EXPRESSION_2323 EXPRESSION_2...... ``` whereas your second query will only match records that exactly equal ``` ACTUAL_VALUE_1,ACTUAL_VALUE_2..... ```
SQL: LIKE with OR vs IN
[ "", "sql", "oracle11g", "hql", "query-performance", "" ]
I have a dataframe that looks like this: ``` city <- c("Alkmaar", "Den Haag") df <- data.frame(city) ``` What I would like to do now is write the values into a MySQL database. Therefore I do the following: ``` library(DBI) con <- dbConnect(RMySQL::MySQL(), dbname = "match_predictions", host = "109.237.221.34", port = 3306, user = "**", password = "***") for(i in 1:nrow(df)){ var1 <- city[i] dbGetQuery(con, "INSERT INTO weather(city) VALUES(var1)") } ``` But this throws an error: ``` Error in .local(conn, statement, ...) : could not run statement: Unknown column 'var1' in 'field list' ``` Any thoughts on how I can insert a variable into the database?
Just use paste0 to build the query for each city. I added in the single quotes; you then just need to make sure that you escape any single quotes that occur in the city names. ``` for(i in 1:nrow(df)){ var1 <- city[i] # escape single quotes var2 <- gsub("\\'", "\\'\\'", var1) dbGetQuery(con, paste0("INSERT INTO weather(city) VALUES('", var2, "')")) } ```
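As an aside, hand-escaping quotes is fragile; most database drivers support parameterized statements that handle quoting for you (R's DBI also offers parameters, e.g. a `params` argument on `dbExecute` — worth checking its documentation for your driver). The idea, sketched with Python's built-in `sqlite3` module and a hypothetical in-memory table mirroring the question's `weather` table:

```python
import sqlite3

cities = ["Alkmaar", "Den Haag", "'s-Hertogenbosch"]  # note the embedded quote

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weather (city TEXT)")
# '?' placeholders: the driver quotes/escapes values, no manual gsub needed
con.executemany("INSERT INTO weather (city) VALUES (?)",
                [(c,) for c in cities])

rows = [r[0] for r in con.execute("SELECT city FROM weather ORDER BY rowid")]
print(rows)  # ["Alkmaar", "Den Haag", "'s-Hertogenbosch"]
```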
try this: ``` library(DBI) con <- dbConnect(RMySQL::MySQL(), dbname = "match_predictions", host = "109.237.221.34", port = 3306, user = "**", password = "***") for(i in 1:nrow(df)){ var1 <- city[i] dbGetQuery(con, cat("INSERT INTO weather(city) VALUES(", var1 <- city[i], ")")) } ```
Insert a variable into a SQL statement
[ "", "mysql", "sql", "r", "" ]
I have a table of users who have certain roles. A user is entered into the table one time for each role they have. I need to get a count of all users who have certain roles but I need to exclude any duplicate record that also has another role. Below is what is populated in my table ``` Name Role Steve ROLE_8 Steve ROLE_9 Steve ROLE_1 ``` And this is the query I have to select users who have certain roles. What I need to do is check to see if a user has ROLE\_1 but also check if there is another instance of that user who has a role that I do not wish to include and exclude that user from the return set. ``` SELECT COUNT(DISTINCT c.user_id) FROM users c WHERE email_addr != '' AND email_addr IS NOT NULL AND EXISTS (SELECT r.role_id FROM ROLE r, user_role ur WHERE c.user_id = ur.user_id AND ur.role_id = r.role_id AND (r.name = 'ROLE_1' OR r.name = 'ROLE_2' OR r.name = 'ROLE_3' OR r.name = 'ROLE_4' OR r.name = 'ROLE_5' OR r.name = 'ROLE_6')) ```
Using the `DISTINCT` in your `COUNT` makes the `EXISTS` unnecessary - you can just join to the `user_role` table. At that point you just need to exclude those users who also have one of the roles that you don't want: ``` SELECT COUNT(DISTINCT U.user_id) FROM Users U INNER JOIN User_Role UR ON UR.user_id = U.user_id INNER JOIN Role R ON R.role_id = UR.role_id AND R.name IN ('ROLE_USER_ADMIN', 'ROLE_1'...) WHERE U.email_addr IS NOT NULL AND U.email_addr <> '' AND NOT EXISTS ( SELECT * FROM User_Role UR2 INNER JOIN Role R2 ON R2.role_id = UR2.role_id AND R2.name IN ('Some_Excluded_Role') WHERE UR2.user_id = U.user_id ) ``` If you want to exclude any user who has **any** role outside of your list then you can do the following: ``` SELECT COUNT(DISTINCT U.user_id) FROM Users U INNER JOIN User_Role UR ON UR.user_id = U.user_id INNER JOIN Role R ON R.role_id = UR.role_id AND R.name IN ('ROLE_USER_ADMIN', 'ROLE_1'...) WHERE U.email_addr IS NOT NULL AND U.email_addr <> '' AND NOT EXISTS ( SELECT * FROM User_Role UR2 INNER JOIN Role R2 ON R2.role_id = UR2.role_id AND R2.name NOT IN ('ROLE_USER_ADMIN', 'ROLE_1'...) WHERE UR2.user_id = U.user_id ) ```
I believe this will get you what you're looking for (without knowing exactly what you expect the results to look like). The concept being to first find the ones that match the 'good' roles and then filter with the other sub-select the ones that also have 'bad' roles. ``` SELECT COUNT(DISTINCT c.[user_id]) FROM users AS c INNER JOIN user_role AS ur ON ur.[user_id] = c.[user_id] WHERE c.email_addr != '' AND c.email_addr IS NOT NULL AND ur.role_id IN (SELECT sub_r.role_id FROM [role] AS sub_r WHERE sub_r.name IN ('ROLE_USER_ADMIN', 'ROLE_1') ) AND c.[user_id] NOT IN (SELECT sub2_ur.[user_id] FROM user_role AS sub2_ur INNER JOIN [role] AS sub2_r ON sub2_r.role_id = sub2_ur.role_id WHERE sub2_r.name IN ('ROLE_NOT_TO_USE','ANOTHER_NOT_TO_USE') AND sub2_ur.[user_id] = c.[user_id] ) ```
Exclude record if duplicate record has certain value
[ "", "sql", "sql-server-2012", "" ]
How can I pivot a table with a single column? The result set of my select query contains a single column and 3 rows, i.e. my select query looks like this: ``` contactList table have following columns. ContactID, PhNumbers, PhTYpe, ContactPersonID select PhNumbers,PhType from contactList where ContactPersonID=3 PhNumbers PhType 1234567890 1 3456789013 2 4545466756 3 ``` These 3 rows correspond to 3 types of phone numbers. I need output like this: ``` homePhone MobilePhone WorkPhone 1234567890 3456789013 4545466756 ```
Got the answer. Thanks for all your help guys. ``` SELECT [1] as HomePhone, [2] as MobilePhone, [3] as WorkPhone FROM (select PhNumbers,PhType from contactList where ContactPersonID=3) AS SourceTable PIVOT ( MAX(PhNumbers) FOR PhType IN ([1],[2],[3]) ) AS PivotTable; ```
``` DECLARE @SQL varchar(MAX), @ColumnList varchar(MAX) SELECT @ColumnList= COALESCE(@ColumnList + ',','') + QUOTENAME(PhNumbers) FROM ( SELECT DISTINCT PhNumbers FROM contactList ) T SET @SQL = ' WITH PivotData AS ( SELECT PhNumbers FROM contactList ) SELECT ' + @ColumnList + ' FROM PivotData PIVOT ( MAX(PhNumbers) FOR PhNumbers IN (' + @ColumnList + ') ) AS PivotResult' EXEC (@SQL) ```
Pivot table with custom Name as columnNames
[ "", "sql", "sql-server", "pivot", "" ]
I need to split the results of a union statement into different columns according to the value of the 'Label' attribute. I can't use 'JOIN' statements because I need to get data and counts for all cases, not only for cases where the join conditions are met. I have a query like this: ``` Select 'Label_1' as Label, ATTR_1, ATTR2, ATTR3, COUNT (AATR1) from table where ATTR4 = '10' GROUP by 'Label_1' as Label, ATTR_1, ATTR2, ATTR3 UNION ALL Select 'Label_2' as Label, ATTR_1, ATTR2, ATTR3, COUNT (AATR1) from table where ATTR4 = '20' GROUP by 'Label_2' as Label, ATTR_1, ATTR2, ATTR3 ``` So I get results like: ``` Label | ATTR_1| ATTR2| ATTR3| COUNT (AATR1)| ------------------------------------------- Label_1 | xxxxx |xxxxxx|xxxxxx|xxxxxxx | Label_2 |yyyyyy |yyyyyy|yyyyyy|yyyyyyy | ``` And I want to get: ``` Label | ATTR_1_1| ATTR2_1| ATTR3_1| COUNT (AATR1)_1|Label | ATTR_1_2| ATTR2_2| ATTR3_2| COUNT (AATR1)_2| Label_1 | xxxxx |xxxxxx |xxxxxx |xxxxxxx |Label_2|yyyyyy |yyyyyy |yyyyyy |yyyyyyy | ```
If you have two result sets with 1 row each, you can simply do a `CROSS JOIN` to set them side-by-side, like this: ``` select Label_1, ATTR_1_1, ATTR_2_1, ATTR_3_1, AATR1_Count_1, Label_2, ATTR_1_2, ATTR_2_2, ATTR_3_2, AATR1_Count_2 from ( Select 'Label_1' as Label_1, ATTR_1 as ATTR_1_1, ATTR2 as ATTR2_1, ATTR3 as ATTR3_1, COUNT(AATR1) as AATR1_Count_1 from table where ATTR4='10' GROUP by ATTR_1, ATTR2, ATTR3 ) t1 cross join ( Select 'Label_2' as Label_2, ATTR_1 as ATTR_1_2, ATTR2 as ATTR2_2, ATTR3 as ATTR3_2, COUNT(AATR1) as AATR1_Count_2 from table where ATTR4='20' GROUP by ATTR_1, ATTR2, ATTR3 ) t2 ``` The other option to translate rows into columns (if cross join is not suitable) is to do a pivot query. Do a google search for "Oracle pivot query example" and you'll see plenty of tactics.
Try ``` SELECT firstTable.*, secondTable.* FROM ( Select 'Label_1' as Label, row_number() OVER (ORDER BY ATTR_1, ATTR2, ATTR3) n, ATTR_1, ATTR2, ATTR3, COUNT(AATR1) AS cnt from table where ATTR4='10' GROUP BY ATTR_1, ATTR2, ATTR3 ) firstTable FULL OUTER JOIN ( Select 'Label_2' as Label, row_number() OVER (ORDER BY ATTR_1, ATTR2, ATTR3) m, ATTR_1, ATTR2, ATTR3, COUNT(AATR1) AS cnt from table where ATTR4='20' GROUP BY ATTR_1, ATTR2, ATTR3 ) secondTable ON firstTable.n = secondTable.m ``` This will give each row a number, then join on the numbers, and so will work regardless of how many rows you have.
Display union statement in columns rather than rows
[ "sql", "oracle" ]
I tried searching this forum for an answer but could not find one that fits my dilemma exactly. I have a list of claims that can be in different statuses. I want a distinct count of claims where the status is open. The example below shows three columns: Claim, ClaimLine, and Status ``` Claim | ClaimLine | Status ------+-----------+-------- 1 | 1 | Open 1 | 2 | Open 1 | 3 | Open 2 | 1 | Enroute 2 | 2 | Enroute 3 | 1 | Closed 4 | 1 | Open 5 | 1 | Open 5 | 2 | Open 5 | 3 | Open ``` Desired Output: ``` Open 3 ```
This should do it: Sample Data: ``` CREATE TABLE #temp (Claim int , Claim_Line int , Status VARCHAR(20)) INSERT INTO #temp VALUES (1 ,1 ,'Open'), (1 ,2 ,'Open'), (1 ,3 ,'Open'), (2 ,1 ,'En-route'), (2 ,2 ,'En-route'), (3 ,1 ,'Closed'), (4 ,1 ,'Open'), (5 ,1 ,'Open'), (5 ,2 ,'Open'), (5 ,3 ,'Open') ``` Query: ``` SELECT Status, COUNT(DISTINCT Claim) FROM #temp WHERE Status = 'Open' GROUP BY Status ``` Results: [![enter image description here](https://i.stack.imgur.com/QGJGl.png)](https://i.stack.imgur.com/QGJGl.png)
This way you don't have to group. A self-contained example with inline data, followed by a simplified version: ``` select count(distinct claim),'Open' from ( select 1 as Claim, 1 as Claim_Line, 'Open' as Status union all select 1, 2, 'Open' union all select 1, 3, 'Open' union all select 2, 1, 'En-route' union all select 2, 2, 'En-route' union all select 3, 1, 'Closed' union all select 4, 1, 'Open' union all select 5, 1, 'Open' union all select 5, 2, 'Open' union all select 5, 3, 'Open') subquery where status = 'Open' ``` simplified version: ``` select count(distinct claim),'Open' from Claims where status = 'Open' ```
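Both answers hinge on `COUNT(DISTINCT ...)` collapsing the claim lines so each claim counts once. A quick runnable check of that pattern, as a sketch using SQLite from Python with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (Claim INTEGER, ClaimLine INTEGER, Status TEXT)")
con.executemany("INSERT INTO claims VALUES (?, ?, ?)", [
    (1, 1, "Open"), (1, 2, "Open"), (1, 3, "Open"),
    (2, 1, "Enroute"), (2, 2, "Enroute"),
    (3, 1, "Closed"),
    (4, 1, "Open"),
    (5, 1, "Open"), (5, 2, "Open"), (5, 3, "Open"),
])

# COUNT(DISTINCT Claim) counts claims 1, 4 and 5 once each.
row = con.execute(
    "SELECT Status, COUNT(DISTINCT Claim) FROM claims WHERE Status = 'Open' GROUP BY Status"
).fetchone()
print(row)  # ('Open', 3)
```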
Distinct Count SQL
[ "sql", "distinct" ]
Before this design, my database model didn't have `orderProducts`, which meant that there was a many-to-many relationship between `products` and `orders`, as all of the fields in `orderProducts` were in orders. Have I resolved this properly using the junction table? Also, I am unsure how to associate the `userType` table with the other tables. Thanks. [![enter image description here](https://i.stack.imgur.com/WYkwi.png)](https://i.stack.imgur.com/WYkwi.png)
The resolution with the orderProducts table makes good sense, though you can leave productName out of it (since that can come from the Product table) to reduce data redundancy. Likewise, price can be in the product table. As for relating users to the existing tables, consider the relation between users and orders - presumably each user can have multiple orders, but each order will only have one user, so you have a one-to-many relationship from user to order, and can have user\_ID as a FK in the order table.
Add something like the UserID into the orderProducts Table then you should be able to use SQL JOIN as needed. [![enter image description here](https://i.stack.imgur.com/tjmjq.png)](https://i.stack.imgur.com/tjmjq.png)
Validity of my database model
[ "mysql", "sql", "database-design", "relational-database" ]
I've written this bit of SQL: it returns the Items sold past the month of November. However, I want to show *all* items that were sold and just to stick a 0 next to the one where it didn't sell in that month. Would appreciate some guidance: ``` SELECT ItemID, SUM(Quantity) FROM orderitems WHERE OrderNumber IN( SELECT OrderNumber FROM `order` WHERE OrderDate > "2015-11-31") GROUP BY ItemID; ```
Try this: ``` SELECT ItemID, COALESCE(SUM(CASE WHEN o2.OrderDate > '2015-11-31' THEN Quantity END), 0) AS Sum_Of_Quantity FROM orderitems AS o1 LEFT JOIN (SELECT OrderNumber, OrderDate FROM `order` WHERE OrderDate > '2015-11-31') AS o2 ON o1.OrderNumber = o2.OrderNumber GROUP BY ItemID; ```
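The key to the "keep the zeroes" requirement is the combination of `LEFT JOIN` (so every item survives) and `COALESCE` (so the missing sums become 0). A sketch of that pattern using SQLite from Python with hypothetical data; note that November has only 30 days, so the cutoff below uses `2015-11-30` rather than the question's `2015-11-31`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (OrderNumber INTEGER, OrderDate TEXT)")
con.execute("CREATE TABLE orderitems (ItemID INTEGER, OrderNumber INTEGER, Quantity INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "2015-12-01"), (2, "2015-10-01")])
con.executemany("INSERT INTO orderitems VALUES (?, ?, ?)",
                [(10, 1, 5), (10, 2, 3), (20, 2, 7)])

# Item 20 was only sold before the cutoff, so the LEFT JOIN finds no match
# and COALESCE turns its NULL sum into 0.
rows = con.execute("""
    SELECT o1.ItemID,
           COALESCE(SUM(CASE WHEN o2.OrderDate > '2015-11-30' THEN o1.Quantity END), 0)
    FROM orderitems o1
    LEFT JOIN (SELECT OrderNumber, OrderDate FROM orders
               WHERE OrderDate > '2015-11-30') o2
      ON o1.OrderNumber = o2.OrderNumber
    GROUP BY o1.ItemID
""").fetchall()
result = dict(rows)
print(result)  # {10: 5, 20: 0}
```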
``` SELECT ItemID, IFNULL(SUM(CASE WHEN o2.OrderDate > '2015-11-31' THEN Quantity END), 0) Sum_Of_Quantity FROM orderitems AS o1 LEFT JOIN (SELECT OrderNumber, OrderDate FROM `order` WHERE OrderDate > '2015-11-31') AS o2 ON o1.OrderNumber = o2.OrderNumber GROUP BY ItemID; ``` (MySQL's single-argument `ISNULL()` only tests for NULL; `IFNULL()` or `COALESCE()` is the two-argument form here.)
Sort orders by quantity and even show where quantity was 0
[ "mysql", "sql" ]
I am having a problem getting my "Rounding to 2 decimal places" to behave as I would expect. Try this trivial example: ``` declare @num numeric (18,2) declare @vat numeric (18,2) set @num = 11729.83 set @vat = 1.14 select round(@num/@vat,2) ``` I am getting an answer of 10289.320000 but I should be getting 10289.33. The full unrounded number is 10289.324561403508771929824561404 (unless my maths is completely off)
Try this ``` select cast(round(@num/@vat,3) as decimal(18,2)) ```
Round can either return a value lower than the original, or a value upper than the original. In fact it returns the value closest to the original. If you want to systematically round a number to its lower or upper value, you could then use FLOOR or CEILING (Thanks @GarethD for refreshing my memory on CEILING...) ``` select round(floor(100*@num/@vat)/100,2) -> lower value select round(ceiling(100*@num/@vat)/100,2) -> upper value ``` Otherwise round will indeed return 10289.32 when the value is strictly lower than 10289.325 (which is the case here)
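It is worth seeing the two behaviours side by side: rounding the full-precision quotient straight to 2 places gives .32 (the third decimal digit is 4), while the accepted answer's round-to-3-then-cast trick produces .33 via double rounding. A sketch with Python's `decimal` module, using half-up rounding to stand in for T-SQL's `ROUND`:

```python
from decimal import Decimal, ROUND_HALF_UP

num = Decimal("11729.83")
vat = Decimal("1.14")
quotient = num / vat  # 10289.3245614035... at full precision

# Straight to 2 places: the digit after the 2nd decimal is 4, so it stays .32.
single = quotient.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Round to 3 places first (-> ...325), then to 2: the trailing 5 pushes it to .33.
double = quotient.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP) \
                 .quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(single, double)  # 10289.32 10289.33
```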
Sql server rounding issue down to 2 decimal places
[ "sql", "sql-server" ]
I am doing a date comparison in oracle. When I execute the code below ``` select tr.x, tbi.y, to_char(tr.UPDATED_AT,'dd-mon-yyyy') from tbi, tb, tr where tbi.id=tb.id and tb.id=tr.id and tr.updated_at = to_date('23/02/2016', 'dd/MM/yyyy') ``` No row is selected But when I execute the following query ``` select tr.x, tbi.y, to_char(tr.UPDATED_AT,'dd-mon-yyyy') from tbi, tb, tr where tbi.id=tb.id and tb.id=tr.id and tr.updated_at > to_date('23/02/2016', 'dd/MM/yyyy') ``` I get this result ``` trx.x tbi.y TO_CHAR(TR.UPDATED_AT,'DD-MM-YYYY') 123456 0 23-02-2016 12345 0 23-02-2016 123 0 23-02-2016 123123 0 23-02-2016 ``` Why is the `>` operator showing dates which are equal and `=` is not showing the equal dates ?
You should use TRUNC() ``` select tr.x, tbi.y, to_char(tr.UPDATED_AT,'dd-mon-yyyy') from tbi, tb, tr where tbi.id=tb.id and tb.id=tr.id and trunc(tr.updated_at) = to_date('23/02/2016', 'dd/MM/yyyy') ``` The problem with your query (I'm guessing by the look of it) is that your updated_at values carry a time component, i.e. dd/mm/yyyy hh24:mi:ss. So 23/02/2016 is not equal to 23/02/2016 20:00:05 (a date with no explicit time defaults to 00:00:00). TRUNC() sets the time portion back to 00:00:00, so the comparison ignores the hours.
**problem** ``` tr.updated_at = to_date('23/02/2016', 'dd/MM/yyyy') ``` returns you only the results where `updated_at` equals `23/02/2016 00:00:00` **solution** Try the following instead: ``` trunc(tr.updated_at) = to_date('23/02/2016', 'dd/MM/yyyy') ``` cf [trunc function documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions201.htm).
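The effect both answers describe can be demonstrated outside Oracle. In this sketch SQLite's `date()` stands in for Oracle's `TRUNC()`, using hypothetical timestamp values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tr (x INTEGER, updated_at TEXT)")
con.executemany("INSERT INTO tr VALUES (?, ?)", [
    (123456, "2016-02-23 20:00:05"),
    (12345,  "2016-02-23 03:15:00"),
])

# A bare equality comparison implies midnight, so nothing matches...
eq = con.execute(
    "SELECT COUNT(*) FROM tr WHERE updated_at = '2016-02-23'"
).fetchone()[0]

# ...while stripping the time portion first matches both rows.
trunc = con.execute(
    "SELECT COUNT(*) FROM tr WHERE date(updated_at) = '2016-02-23'"
).fetchone()[0]

print(eq, trunc)  # 0 2
```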
Issue with date comparison in Oracle
[ "sql", "oracle", "date" ]
I am stuck on a requirement. It might be simple, but I am not getting through. I have one audit table, Audit_Info, which captures the audit information of all tables. A table could be run multiple times on the same business date. My requirement is to get the latest business date record for each month, up to the last 5 months. It may happen that for one particular month the table was not run. The table looks like ``` table_name business_date src_rec tgt_rec del_rec load_timestamp abc 25/10/2015 10 10 0 23/01/2016 03:06:56 abc 25/10/2015 10 10 0 23/01/2016 05:23:34 abc 07/09/2015 10 10 0 23/10/2015 05:37:30 abc 05/08/2015 10 10 0 23/09/2015 05:23:34 abc 15/06/2015 10 10 0 23/07/2015 05:23:34 abc 25/04/2015 10 10 0 23/05/2015 05:23:34 ``` Similarly there are other tables in this. I need it for 5 tables. Thanks for your help. Regards, Amit [![Please see the highlighted](https://i.stack.imgur.com/fDTwM.jpg)](https://i.stack.imgur.com/fDTwM.jpg)
Based on your expected result this should be close: ``` select * from tab where -- last five months business_date >= add_months(trunc(current_date),-5) qualify row_number() over (partition by trunc(business_date) -- every month order by business_date desc, load_timestamp desc) -- latest date ```
Hmmm, if I understand correctly you can use `row_number()` with some date arithmetic: ``` select ai.* from (select ai.*, row_number() over (partition by table_name, extract(year from business_date), extract(month from business_date) order by business_date desc ) as seqnum from audit_info ai where timestamp >= current timestamp - interval '5' month ) ai where seqnum = 1; ```
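`QUALIFY` is Teradata-specific; the second answer's `ROW_NUMBER()` in a subquery is the portable equivalent. A sketch in SQLite (3.25+ for window functions) from Python, using the question's data converted to ISO dates and omitting the relative 5-month filter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE audit_info (business_date TEXT, load_timestamp TEXT)")
con.executemany("INSERT INTO audit_info VALUES (?, ?)", [
    ("2015-10-25", "2016-01-23 03:06:56"),
    ("2015-10-25", "2016-01-23 05:23:34"),
    ("2015-09-07", "2015-10-23 05:37:30"),
    ("2015-08-05", "2015-09-23 05:23:34"),
    ("2015-06-15", "2015-07-23 05:23:34"),
    ("2015-04-25", "2015-05-23 05:23:34"),
])

# Number the rows within each calendar month, newest first, and keep only rn = 1.
rows = con.execute("""
    SELECT business_date, load_timestamp FROM (
        SELECT business_date, load_timestamp,
               ROW_NUMBER() OVER (
                   PARTITION BY strftime('%Y-%m', business_date)
                   ORDER BY business_date DESC, load_timestamp DESC
               ) AS rn
        FROM audit_info
    ) WHERE rn = 1
""").fetchall()

print(len(rows))  # 5 months, one row each; October keeps the later load
```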
Need to pick latest 5 records among the partitions
[ "sql", "teradata" ]
*I currently have the following two tables in my **Oracle** database.* ``` Table Continent with fields: | CONTINENT | CONTINENTNAME | Table Land with fields: | LANDCODE | LANDNAME | CONTINENT | NUMBEROFLANGUAGES | ``` I want to display the continent name, the land name and the number of languages as the result, under the following condition: **display, for each continent, the land name with the highest number of languages**. **Current result** *I have this query, which shows the country with the most languages in the world, like so:* ``` CONTINENTNAME | LANDNAME | NUMBEROFLANGUAGES -------------------------------------------- Asia | India | 26 ``` Only I end up here with only one continent instead of all continents in the world. Am I using the wrong approach here, or am I close to solving this query? I would love to know how to solve this SQL puzzle. **Used Query** ``` SELECT c.CONTINENTNAME, l.LANDNAME, l.NUMBEROFLANGUAGES FROM land l INNER JOIN continent c ON c.CONTINENT = l.CONTINENT WHERE l.NUMBEROFLANGUAGES = ( SELECT MAX(l.NUMBEROFLANGUAGES) FROM land l ); ```
You were close in concept. I just moved your SELECT MAX() query in as a join and joined on based on that to the original land table. ``` SELECT c.CONTINENTNAME, l.LANDNAME, l.NUMBEROFLANGUAGES FROM land l INNER JOIN continent c ON l.CONTINENT = c.CONTINENT INNER JOIN ( SELECT l2.CONTINENT, MAX(l2.NUMBEROFLANGUAGES) maxLang FROM land l2 group by l2.CONTINENT) preQuery ON l.CONTINENT = preQuery.CONTINENT AND l.NUMBEROFLANGUAGES = preQuery.maxLang ```
Sorry, I had to correct ``` with max_language as ( SELECT l.CONTINENT, max(l.NUMBEROFLANGUAGES) language_count FROM land l group by l.CONTINENT ) select * from land l1, continent c, max_language ml where c.CONTINENT = l1.CONTINENT and l1.continent = ml.continent and l1.NUMBEROFLANGUAGES = ml.language_count; ```
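The join-back-to-the-per-group-MAX pattern from the first answer can be checked on any engine. A sketch using SQLite from Python, with made-up land and language counts (only India's 26 comes from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE land (landname TEXT, continent TEXT, numberoflanguages INTEGER)")
con.executemany("INSERT INTO land VALUES (?, ?, ?)", [
    ("India", "Asia", 26), ("China", "Asia", 10),
    ("Spain", "Europe", 5), ("France", "Europe", 3),
])

# Compute one MAX per continent, then join back to find the matching land.
rows = con.execute("""
    SELECT l.continent, l.landname, l.numberoflanguages
    FROM land l
    JOIN (SELECT continent, MAX(numberoflanguages) AS maxlang
          FROM land GROUP BY continent) p
      ON l.continent = p.continent AND l.numberoflanguages = p.maxlang
""").fetchall()

print(sorted(rows))  # one top land per continent
```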
SQL get max values under certain conditions when using two tables
[ "sql", "oracle" ]
I have a table with columns `ID` and `Val`. For each value of `ID` we can have either same or different values of `Val`. ``` ID Val 1 A 1 NULL 2 00 2 00 2 00 2 00 3 00 3 A 4 A 5 00 5 00 5 A 6 A 6 A 6 NULL 6 00 ``` From above table, I am looking for IDs which has different values in Val column. If for any given ID all values of Val column are same then it should not come in result. So result would be something like. ``` D Val 1 A 1 NULL 3 00 3 A 5 00 5 00 5 A 6 A 6 A 6 NULL 6 00 ``` Id 2 should not come in result because for Id 2, Val column has same data. Similarly ID 4 will not come in result as ID 4 has only one row. For each ID if we have more than one value in Val column then is it should show in result. Thanks for the Help!
Try this: ``` SELECT ID, Val FROM mytable WHERE ID IN (SELECT ID FROM mytable GROUP BY ID HAVING COUNT(DISTINCT CASE WHEN Val IS NULL THEN '' ELSE Val END) > 1) ``` I've made the assumption that the `Val` field is of type `VARCHAR` and that it can be either `NULL` or `<> ''`.
For the ids that meet the condition of having different values: ``` select id from t group by id having min(val) <> max(val); ``` You can then incorporate this into a query as: ``` select t.* from t join (select id from t group by id having min(val) <> max(val) ) tt on t.id = tt.id; ``` Or, you can use window functions: ``` select t.id, t.val from (select t.*, min(val) over (partition by id) as minval, max(val) over (partition by id) as maxval from t ) t where minval <> maxval; ``` Note that MIN() and MAX() ignore NULLs, so an id whose only variation is between a value and NULL will not be picked up by this approach.
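Since the sample data contains NULLs, and both `COUNT(DISTINCT)` and `MIN`/`MAX` ignore NULLs, the NULL substitution in the first answer is what keeps ids 1 and 6 in the result. A sketch checking that against the question's data with SQLite from Python (assuming `''` is never a real value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "A"), (1, None),
    (2, "00"), (2, "00"), (2, "00"), (2, "00"),
    (3, "00"), (3, "A"),
    (4, "A"),
    (5, "00"), (5, "00"), (5, "A"),
    (6, "A"), (6, "A"), (6, None), (6, "00"),
])

# Map NULL to '' before counting distinct values, so a NULL counts as
# a different value rather than vanishing from the aggregate.
ids = [r[0] for r in con.execute("""
    SELECT id FROM t
    GROUP BY id
    HAVING COUNT(DISTINCT COALESCE(val, '')) > 1
    ORDER BY id
""")]
print(ids)  # [1, 3, 5, 6]
```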
Different values of column 1 based on column 2 SQL Server
[ "sql", "sql-server" ]
I need to know how to fetch the numerical values from the outputformula column, which has a varchar datatype. I use this outputformula column in a dynamic SQL query to perform mathematical calculations. I have the code below for the table and the select query to fetch the values. ``` Create table #table1 ( IDNUM int, outputformula varchar(60) ) insert into #table1 values (1, '67349.000 +78343.000'), (2, 'a +78343.000'), (3, '45783.00+b'), (4, '6152.000 +1524.000'), (5, 'cda +7151.000'), (6, '67349.000 +78343.000') Select outputformula from #table1 where ISNUMERIC(outputformula) =1 ``` When I execute the code I get no values. Is there a way to get as output only the purely numerical formulas, excluding those containing letters? I should get the output column as ``` 67349.000 +78343.000 6152.000 +1524.000 67349.000 +78343.000 ```
Unfortunately, `ISNUMERIC` does not evaluate formulae. You can try to write your own simple parser in a SQL function, but functions don't allow for any kind of error handling, so when you try to evaluate one of the non-numeric equations you'll run into issues. Your best bet is likely to use whatever you're using to evaluate these equations to also perform this task - I assume that's in some sort of front end. Since you aren't using a front end, here's the best that I can come up with for a straight SQL approach. First, create a stored procedure that can evaluate numeric only equations. This would be better as a function, but unfortunately functions don't allow `TRY..CATCH` blocks, which is crucial for handling non-numeric equations. Here's an example of such a stored procedure: ``` CREATE PROCEDURE dbo.Check_Dynamic_Formula @formula VARCHAR(60), @result DECIMAL(10, 4) OUTPUT AS BEGIN DECLARE @sql NVARCHAR(100) BEGIN TRY SELECT @sql = 'SELECT @inner_result = ' + @formula EXEC sp_executesql @sql, N'@inner_result DECIMAL(10, 4) OUTPUT', @inner_result = @result OUTPUT END TRY BEGIN CATCH SELECT @result = NULL END CATCH END ``` Once you have that you can set up a `CURSOR` to go through your table one row at a time (which is why a scalar function would have been much better since you could then avoid a `CURSOR`). Check the output variable for each row in your table. If it's a `NULL` then the stored procedure couldn't evaluate it. Some important caveats... There may be some instances where the evaluation becomes a numeric unintentionally - see @Sean Lange's comment to your question. **IMPORTANT** This is highly susceptible to injection. I would not run this against any data that was available for a user to generate. For example, if you have users entering the formulae then they could make a SQL injection attack. Finally, if any other error occurs in the `TRY..CATCH` block, it will make it appear as if the row was a non-numeric. 
We're counting on the code failing to prove that it's non-numeric, which is a brittle approach. Any error could give you a false negative.
If all your strings are in the format {token1}blank{token2} one way is to define a [Split](https://stackoverflow.com/a/21992461/2780791) function in your database and use the following code: ``` Select T.*, T1.StringValue, ISNUMERIC(T1.StringValue), T2.StringValue, ISNUMERIC(T2.StringValue), ISNUMERIC(T1.StringValue) * ISNUMERIC(T2.StringValue) AS IsNumeric from #table1 T cross apply dbo.Split(T.outputformula, ' ') T1 cross apply dbo.Split(T.outputformula, ' ') T2 where T1.Ordinal = 1 and T2.Ordinal = 2 ``` However, you should take into consideration that `SQL ISNUMERIC` function has some limitations and provides [some false positives](https://www.simple-talk.com/blogs/2009/06/11/the-hidden-aspects-of-isnumeric/). One way to have more specific numeric checking is to use [TRY\_CONVERT](https://msdn.microsoft.com/en-us/library/hh230993.aspx) function.
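When the data can leave the database, the classification itself does not need dynamic SQL or `ISNUMERIC` at all. A sketch in Python that filters the question's sample formulas with a pattern check instead of evaluating them, assuming every formula has the shape `number + number`:

```python
import re

formulas = [
    "67349.000 +78343.000",
    "a +78343.000",
    "45783.00+b",
    "6152.000 +1524.000",
    "cda +7151.000",
    "67349.000 +78343.000",
]

# Anything containing a letter fails the pattern, with no need to execute
# the formula and catch errors.
numeric = re.compile(r"^\s*\d+(\.\d+)?\s*\+\s*\d+(\.\d+)?\s*$")
purely_numeric = [f for f in formulas if numeric.match(f)]
print(purely_numeric)
```

This avoids both the injection risk and the brittle rely-on-failure approach called out above, at the cost of only handling the one formula shape the pattern describes.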
How to use Isnumeric in dynamic sql?
[ "sql", "sql-server", "sql-server-2012" ]
I have the following table ``` CREATE TABLE #tbl ( RoomTypeId INT NOT NULL, DayNo TINYINT NOT NULL, RoomNo TINYINT NOT NULL, IsDormitory BIT, AdultsNo TINYINT NOT NULL, Tmp TINYINT) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2486, 98, 1, 1, 1, 0) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2486, 99, 1, 1, 1, 0) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2486, 98, 2, 1, 2, 0) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2486, 99, 2, 1, 2, 0) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2487, 98, 1, 0, 2, 0) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2487, 99, 1, 0, 2, 0) ``` Only for those rows with IsDormitory = 1, I want to update the field Tmp with values as follows: for each room having a certain RoomNo, the column Tmp = sum of all AdultsNo of all rooms having RoomNo <= current RoomNo. Example: * for room with RoomNo = 1, Tmp = 1 adult, * for room with RoomNo = 2, Tmp = 3 adults, i.e. 1 adult (corresponding to RoomNo = 1) + 2 adults (corresponding to RoomNo = 2). The result will be: ``` RoomTypeId DayNo RoomNo IsDormitory AdultsNo Tmp 2486 98 1 1 1 1 2486 99 1 1 1 1 2486 98 2 1 2 3 2486 99 2 1 2 3 2487 98 1 0 2 0 2487 99 1 0 2 0 ``` The following did not work: ``` update #tmp2 set TmpAdults = Adults from #tmp2 t inner join (select RoomTypeId, DayNo, sum(AdultsNo) as Adults from #tmp2 x where x.RoomNo < t.RoomNo) s on t.RoomTypeId = s.RoomTypeId and t.DayNo = s.DayNo ;with CTE as ( select RoomTypeId, DayNo, sum(AdultsNo) as Adults from #tmp2 group by RoomTypeId, DayNo ) update #tmp2 set TmpAdults = c.Adults from #tmp2 t inner join CTE c on t.RoomTypeId = c.RoomTypeId and t.DayNo = c.DayNo ``` I am using Microsoft SQL Server 2014. I do not know how to use CTEs, but from what I read, this should be the answer. Thank you. 
UPDATE: I added 2 new rows: ``` insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2486, 98, 3, 1, 1, 0) insert into #tbl (RoomTypeId, DayNo, RoomNo, IsDormitory, AdultsNo, Tmp) values(2486, 99, 3, 1, 1, 0) ``` but unfortunately the answer is not as I expect. I expect for RoomNo = 3 the Tmp = 1+2+1=4, instead I get 2 ``` RoomTypeId DayNo RoomNo IsDormitory AdultsNo Tmp 2486 98 3 1 1 4 2486 99 3 1 1 4 ```
This CTE should get you your results then it's simple enough to `JOIN` and perform the update. Your use of multiple temporary tables concerns me though... ``` ;WITH MyCTE AS ( SELECT RoomTypeID, DayNo, RoomNo, IsDormitory, AdultsNo, SUM(CASE WHEN IsDormitory = 1 THEN AdultsNo ELSE 0 END) OVER (PARTITION BY RoomTypeID, DayNo ORDER BY RoomNo ROWS UNBOUNDED PRECEDING) AS Tmp FROM #tbl ) SELECT * FROM MyCTE ```
You can use a `CTE` in order to update: ``` ;WITH ToUpdate AS ( SELECT t1.AdultsNo + COALESCE(t3.AdultsNo,0) AS AdultsNo, Tmp FROM #tbl AS t1 OUTER APPLY ( SELECT DISTINCT AdultsNo FROM #tbl AS t2 WHERE t2.RoomNo < t1.RoomNo) AS t3 WHERE IsDormitory = 1 ) UPDATE ToUpdate SET Tmp = AdultsNo ``` The `CTE` uses an `OUTER APPLY` operation in order to fetch the `AdultsNo` values of previous records (if any).
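The windowed `SUM` from the first answer reproduces the OP's expected running total (1 + 2 + 1 = 4 for RoomNo 3 in the updated data). A sketch in SQLite (3.25+ for window functions) from Python, using the question's sample rows plus the two added in the update:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tbl (RoomTypeId INTEGER, DayNo INTEGER, RoomNo INTEGER,
                                 IsDormitory INTEGER, AdultsNo INTEGER)""")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?, ?)", [
    (2486, 98, 1, 1, 1), (2486, 99, 1, 1, 1),
    (2486, 98, 2, 1, 2), (2486, 99, 2, 1, 2),
    (2486, 98, 3, 1, 1), (2486, 99, 3, 1, 1),
    (2487, 98, 1, 0, 2), (2487, 99, 1, 0, 2),
])

# Running total of AdultsNo per (RoomTypeId, DayNo), ordered by RoomNo,
# counting only dormitory rooms.
rows = con.execute("""
    SELECT RoomNo,
           SUM(CASE WHEN IsDormitory = 1 THEN AdultsNo ELSE 0 END)
               OVER (PARTITION BY RoomTypeId, DayNo
                     ORDER BY RoomNo ROWS UNBOUNDED PRECEDING) AS Tmp
    FROM tbl WHERE RoomTypeId = 2486 AND DayNo = 98
""").fetchall()
print(dict(rows))  # {1: 1, 2: 3, 3: 4}
```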
sql sum previous rows based on value of current row
[ "sql", "sql-update", "common-table-expression", "sql-server-2014" ]
I have this table: ``` CREATE TABLE `test1` ( `id` int(4) NOT NULL, `company_Id_Company` varchar(10) NOT NULL, `head_Id_Head` int(4) NOT NULL, `detail_Id_Detail` int(4) NOT NULL, PRIMARY KEY (`id`,`company_Id_Company`,`head_Id_Head`), KEY `fk_test1_detail` (`id`,`head_Id_Head`,`detail_Id_Detail`), CONSTRAINT `fk_test1_detail` FOREIGN KEY (`id`, `head_Id_Head`, `detail_Id_Detail`) REFERENCES `detail` (`id`, `head_Id_Head`, `id_Detail`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `fk_test1_company` FOREIGN KEY (`id`, `company_Id_Company`) REFERENCES `company` (`id`, `id_Company`) ON DELETE NO ACTION ON UPDATE NO ACTION ) ENGINE=InnoDB ``` and I have this data in the table: ``` id | company_Id_Company | head_Id_Head | detail_Id_Detail ----------------------------------------------------------- 1 01 1 7 1 02 1 7 1 01 3 2 1 03 2 9 1 05 4 1 1 01 5 4 1 02 5 4 1 03 5 4 ``` **I need to find all data that meets the two mandatory conditions simultaneously:** ``` select * from test1 where head_Id_Head = 1 and detail_Id_Detail = 7 AND select * from test1 where head_Id_Head = 5 and detail_Id_Detail = 4 ``` If I run the two queries separately, the companies common to both results should be: 01 and 02. How can I write this query?
This would assume you don't have duplicates: ``` SELECT * FROM test1 t1 WHERE company_Id_Company IN (SELECT company_Id_Company FROM test1 t2 WHERE ((t2.head_Id_Head = 1 AND t2.detail_Id_Detail = 7) OR (t2.head_Id_Head = 5 AND t2.detail_Id_Detail = 4)) GROUP BY t2.company_Id_Company HAVING COUNT(*) = 2) ```
You can't combine two queries with `AND`; instead, join the result of the second condition back onto the first: ``` select your_table.company_Id_Company from your_table inner join ( select company_Id_Company from your_table where head_Id_Head = 5 AND detail_Id_Detail = 4) as temp on temp.company_Id_Company = your_table.company_Id_Company where head_Id_Head = 1 AND detail_Id_Detail = 7 ```
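This "match all of several (head, detail) pairs" requirement is the classic relational-division pattern: keep rows matching either condition, then demand both per company. A sketch using SQLite from Python with the question's data (column names shortened for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test1 (id INTEGER, company TEXT, head INTEGER, detail INTEGER)")
con.executemany("INSERT INTO test1 VALUES (?, ?, ?, ?)", [
    (1, "01", 1, 7), (1, "02", 1, 7), (1, "01", 3, 2), (1, "03", 2, 9),
    (1, "05", 4, 1), (1, "01", 5, 4), (1, "02", 5, 4), (1, "03", 5, 4),
])

# COUNT(DISTINCT ...) makes this safe even if a (head, detail) pair
# appears more than once per company.
companies = [r[0] for r in con.execute("""
    SELECT company FROM test1
    WHERE (head = 1 AND detail = 7) OR (head = 5 AND detail = 4)
    GROUP BY company
    HAVING COUNT(DISTINCT head || '-' || detail) = 2
    ORDER BY company
""")]
print(companies)  # ['01', '02']
```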
Multi-select same table
[ "mysql", "sql", "select" ]
I have a durations table with two columns, sd_stime (start time) and sd_etime (end time). I want to select all rows whose times do not start and end between 11:30 and 12:05. So if sd_stime = 12:00 and sd_etime = 1:00 it will be returned, but if sd_stime = 11:00 and sd_etime = 12:00 it will not be returned. ``` SELECT * FROM d_services_type_durations dsd WHERE (dsd.sd_stime <= '11:30' OR dsd.sd_stime >= '12:05') AND (dsd.sd_etime <= '11:30' OR dsd.sd_etime >= '12:05') ``` and this is the schema http://sqlfiddle.com/#!9/55f09/20
Try this: ``` SELECT * FROM d_services_type_durations dsd WHERE dsd.sd_stime < '11:30' OR dsd.sd_stime > '12:05' OR dsd.sd_etime < '11:30' OR dsd.sd_etime > '12:05' ``` [**Demo here**](http://sqlfiddle.com/#!9/55f09/3)
Is this what you mean? ``` SELECT * FROM d_services_type_durations AS dsd WHERE (dsd.sd_stime > '00:00' AND dsd.sd_stime < '11:30' AND dsd.sd_etime > '00:00' AND dsd.sd_etime < '11:30') OR (dsd.sd_stime > '12:05' AND dsd.sd_stime < '23:59' AND dsd.sd_etime > '12:05' AND dsd.sd_etime < '23:59') ```
How to get accurate time from start to end
[ "mysql", "sql" ]
I'm trying to add one more column that is a count of the rows left; I don't want to include rows where my planned quantity matches the completed quantity (plan_qty != cmp_qty) ``` SELECT od_f.ob_oid Order, Sum(plan_qty) Plan_Units, sum(cmp_qty) Completed_Units, Round(((Sum(cmp_qty)/sum(plan_qty)) * 100.00),2)Percent_Cmp, total_value Value, SUM(round(cmp_qty * unit_price,2)) cmp_value FROM od_f, om_f WHERE od_f.ob_oid = om_f.ob_oid GROUP BY od_f.ob_oid, total_value ORDER BY Percent_Cmp desc ``` Here's a query that returns the new column I want: ``` SELECT count(od_rid) FROM od_f WHERE od_f.plan_qty != od_f.cmp_qty GROUP BY od_f.ob_oid ``` I can't just add the above where clause to the first query because it affects my results. I'm really not sure what to do to combine these queries: a subquery? Some sort of union? I'm lost on how to do this. Thank you for any help
For what I understand you only want a count of rows where `plan_qty` is not equal to `cmp_qty`, try this: ``` SELECT t.ob_oid AS order, SUM(plan_qty) AS plan_units, SUM(cmp_qty) AS completed_units, ROUND(((SUM(cmp_qty)/SUM(plan_qty)) * 100),2) AS percent_cmp, total_value AS value, SUM(ROUND(cmp_qty * unit_price,2)) AS cmp_value, SUM(DECODE(plan_qty, cmp_qty, 0, 1)) AS rows_left FROM od_f t INNER JOIN om_f ON t.ob_oid = om_f.ob_oid GROUP BY t.ob_oid, total_value ORDER BY percent_cmp DESC; ```
You can use conditional aggregation for what you want. Although `COUNT()` with `CASE` might be sufficient, my best guess is `COUNT(DISTINCT)`: ``` SELECT od_f.ob_oid Order, Sum(plan_qty) Plan_Units, sum(cmp_qty) Completed_Units, Round(((Sum(cmp_qty)/sum(plan_qty)) * 100.00),2)Percent_Cmp, total_value Value, SUM(round(cmp_qty * unit_price,2)) cmp_value, COUNT(DISTINCT CASE WHEN od_f.plan_qty <> od_f.cmp_qty THEN od_f. od_rid END) as newcol FROM od_f JOIN om_f ON od_f.ob_oid = om_f.ob_oid GROUP BY od_f.ob_oid, total_value ORDER BY Percent_Cmp desc; ``` You should also learn to use proper, explicit `JOIN` syntax. Simple rule: *Never* use commas in the `FROM` clause.
Trying to add two queries together
[ "sql", "informix" ]
I have tried some of the various solutions posted on Stack for this issue, but none of them keep null values (and it seems like the entire query is built off that assumption). I have a table with 1 million rows. There are 10 columns. The first column is the id. Each id is unique to an "item" (in my case a sales order) but has multiple rows. Each row is either completely null or has a single value in one of the columns. No two rows with the same ID have data for the same column. I need to merge these multiple rows into a single row based on the ID. However, I need to keep the null values: if a column is null in all rows for an id, I need to keep that null in the final data. Can someone please help me with this query? I've been stuck on it for 2 hours now. ``` id - Age - firstname - lastname 1 13 null null 1 null chris null ``` should output ``` 1 13 chris null ```
As some others have mentioned, you should use an aggregation query to achieve this. ``` select t1.id, max(t1.col1), max(t1.col2) from tableone t1 group by t1.id ``` This should return nulls. If you're having issues handling your nulls, maybe implement some logic using ISNULL(). Make sure your data fields really are nulls and not empty strings. If nulls aren't being returned, check to make sure that EVERY single row that has a particular ID has ONLY nulls. If one of them returns an empty string, then yes, it will drop the null and return anything else over the null.
It sounds like you want an aggregation query: ``` select id, max(col1) as col1, max(col2) as col2, . . . from t group by id; ``` If all values are `NULL`, then this will produce `NULL`. If one of the rows (for an `id`) has a value, then this will produce that value.
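The aggregation approach in both answers relies on `MAX` ignoring NULLs, so each column keeps its single real value and a column that is NULL in every row stays NULL. A sketch of that behaviour using SQLite from Python with the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, age INTEGER, firstname TEXT, lastname TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?, ?)", [
    (1, 13, None, None),
    (1, None, "chris", None),
])

# MAX skips NULLs within each group; lastname is NULL in every row,
# so its aggregate stays NULL.
row = con.execute("""
    SELECT id, MAX(age), MAX(firstname), MAX(lastname)
    FROM people GROUP BY id
""").fetchone()
print(row)  # (1, 13, 'chris', None)
```

Note that an empty string `''` is not NULL and would win over NULL in the aggregate, which is why the answers suggest checking that the "null" fields really are NULLs.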
Turning multiple rows into single row based on ID, and keeping null values
[ "sql", "sql-server", "t-sql" ]
Is there any difference, with regards to performance, when there are many queries running with (different) constant values inside a where clause, as opposed to having a query with declared parameters on top, where instead the parameter value is changing? Sample query with with constant value in where clause: ``` select * from [table] where [guid_field] = '00000000-0000-0000-000000000000' --value changes ``` Proposed (improved?) query with declared parameters: ``` declare @var uniqueidentifier = '00000000-0000-0000-000000000000' --value changes select * from [table] where [guid_field] = @var ``` Is there any difference? I'm looking at the execution plans of something similar to the two above queries and I don't see any difference. However, I seem to recall that if you use constant values in SQL statements that SQL server won't reuse the same query execution plans, or something to that effect that causes worse performance -- but is that actually true?
It is important to distinguish between parameters and variables here. Parameters are passed to procedures and functions, variables are declared. Addressing variables, which is what the SQL in the question has, when compiling an ad-hoc batch, SQL Server compiles each statement within it's own right. So when compiling the query with a variable it does not go back to check any assignment, so it will compile an execution plan optimised for an unknown variable. On first run, this execution plan will be added to the plan cache, then future executions can, and will reuse this cache for all variable values. When you pass a constant the query is compiled based on that specific value, so can create a more optimum plan, but with the added cost of recompilation. So to specifically answer your question: > However, I seem to recall that if you use constant values in SQL statements that SQL server won't reuse the same query execution plans, or something to that effect that causes worse performance -- but is that actually true? Yes it is true that the same plan cannot be reused for different constant values, but that does not necessarily cause worse performance. It is possible that a more appropriate plan can be used for that particular constant (e.g. choosing bookmark lookup over index scan for sparse data), and this query plan change may outweigh the cost of recompilation. So as is almost always the case regarding SQL performance questions. The answer is **it depends**. For parameters, the default behaviour is that the execution plan is compiled based on when the parameter(s) used when the procedure or function is first executed. 
I have answered similar questions before in much more detail with examples, that cover a lot of the above, so rather than repeat various aspects of it I will just link the questions: * [Does assigning stored procedure input parameters to local variables help optimize the query?](https://stackoverflow.com/questions/14468603/does-assigning-stored-procedure-input-parameters-to-local-variables-help-optimiz/14469603#14469603) * [Ensure cold cache when running query](https://stackoverflow.com/questions/19133537/ensure-cold-cache-when-running-query/19134216#19134216) * [Why is SQL Server using index scan instead of index seek when WHERE clause contains parameterized values](https://stackoverflow.com/questions/27564852/why-is-sql-server-using-index-scan-instead-of-index-seek-when-where-clause-conta/27566881#27566881)
There are many things involved in your question, and all of them have to do with statistics. SQL Server compiles an execution plan even for ad-hoc queries and stores it in the plan cache for reuse, if it is deemed safe. ``` select * into test from sys.objects select schema_id,count(*) from test group by schema_id --schema_id 1 has 15 rows --schema_id 4 has 44 rows ``` **First ask:** we try a different literal every time, and SQL Server saves the plan if it deems it safe. You can see the second query's estimates are the same as for literal 4, since SQL Server saved the plan for 4: ``` --lets clear cache first--not for prod dbcc freeproccache select * from test where schema_id=4 ``` **output:** [![enter image description here](https://i.stack.imgur.com/k9OLx.png)](https://i.stack.imgur.com/k9OLx.png) ``` select * from test where schema_id=1 ``` **output:** [![enter image description here](https://i.stack.imgur.com/Yf4Ap.png)](https://i.stack.imgur.com/Yf4Ap.png) **Second ask:** passing a local variable; let's use the same value of 4: ``` --lets pass 4 which we know has 44 rows; estimates were 44 when we used literals declare @id int set @id=4 select * from test where schema_id=@id ``` As you can see in the screenshot below, using a local variable produces an estimate of roughly 29.5 rows, which has to do with statistics (the plan is built for an unknown value).
**output:**

[![enter image description here](https://i.stack.imgur.com/AvbKg.png)](https://i.stack.imgur.com/AvbKg.png)

In summary, statistics are crucial in choosing a query plan (nested loops vs. a scan or a seek); from the examples you can see how the estimates differ for each method. From a plan-cache-bloat perspective, you might also wonder what happens if you pass many ad-hoc queries, since SQL Server generates a new plan for the same query text even if only whitespace changes. The links below will help:

**Further reading:**

<http://www.sqlskills.com/blogs/kimberly/plan-cache-adhoc-workloads-and-clearing-the-single-use-plan-cache-bloat/>

<http://sqlperformance.com/2012/11/t-sql-queries/ten-common-threats-to-execution-plan-quality>
SQL - any performance difference using constant values vs parameters?
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am running this query in MySQL:

```
SELECT sequence from prices
WHERE match_description <> ''
AND 'SIP Trunk: 123456 (1 Channel)' LIKE CONCAT(match_description, '%')
ORDER BY length(match_description) desc
LIMIT 1;
```

I have rows that have a `match_description` of:

```
SIP Trunk (1 Channel)
SIP Trunk (2 Channels)
SIP Trunk (3 Channels)
```

etc... When running the above query, the string looks like `SIP Trunk: 123456 (x Channels)`. How can I match these with my rows correctly? Bear in mind, the query I have works with other strings, so I cannot change it too much. For example, other queries that run are like:

```
SELECT sequence from prices
WHERE match_description <> ''
AND 'Seat 200' LIKE CONCAT(match_description, '%')
ORDER BY length(match_description) desc
LIMIT 1;
```

This will match a row with a `match_description` of `Seat`. The queries are run when I upload a CSV file. This includes product names that are supplied to me, so they are uploaded exactly as they are sent to me. Some example rows would be:

```
Seat 200
Seat 201
Seat 202
Call Queue 200
Call Group 201
Geographic Number 01234 567890
SIP Trunk: 123456 (2 Channels)
SIP Trunk: 654321 (5 Channels)
```

So, using my query, I am able to match all except for the last two. For the ones I can match, I have the following in `match_description` in my table:

```
Seat
Seat
Seat
Call Queue
Call Group
Geographic Number
```

I am just unsure how to match these? (The 123456 and 654321 are different for each product.)

```
SIP Trunk: 123456 (2 Channels)
SIP Trunk: 654321 (5 Channels)
```
You could match the first 11 characters to `SIP Trunk: `, then the last 9 or 10 characters to `channel)` or `channels)`; that way, if your match\_description is of the form `SIP Trunk: yyyyyy (x Channels)` it will pick it up. The `MID` checks make sure that `x` and `yyyyyy` are integers and that the opening bracket is there. Also, thanks to @dwjv, who points out you won't be able to use an index on the match\_description column.

```
SELECT sequence
FROM prices
WHERE match_description <> ''
  AND LEFT(match_description, 11) = 'SIP Trunk: '
  AND (RIGHT(match_description, 9) = ' channel)'
       OR RIGHT(match_description, 10) = ' channels)')
  AND MID(match_description, 12, 6) REGEXP '^-?[0-9]+$'
  AND MID(match_description, 18, 2) = ' ('
  AND MID(match_description, 20, 1) REGEXP '^-?[0-9]+$'
ORDER BY length(match_description) desc
LIMIT 1;
```

(Note the positions: characters 1-11 are `SIP Trunk: `, 12-17 are the six-digit number, 18-19 are ` (`, and 20 is the channel count.)
Try something like this ``` SELECT sequence from prices WHERE match_description <> '' AND match_description LIKE '%SIP Trunk: 123456 (% Channel%)%' ORDER BY length(match_description) desc LIMIT 1; ```
SQL Query to find rows based on descriptions
[ "", "mysql", "sql", "" ]
**Student table** ``` Student Id Student Name 1 Vijay 2 Ram ``` **Student Detail Table** ``` Student ID Code StudentIdentityNumber 1 Primary 143 1 Secondary 143 1 Teritary 143 2 Primary 123 2 Secondary 123 2 Teritary 126 ``` Output required ``` StudentID PrimaryIdentity SecondaryIdentity TeritaryIdentity 2 123 123 126 ``` I just want this output. The output doesnt have StudentID 1 because for him primary secondary and teritary Numbers are same. Hope it is clear Need simple solution. Yes Code column is Only three. Static only
Please find the below query: Hope it helps you. ``` WITH cte as (SELECT StudentID , [Primary],[Secondary],[Teritary] FROM (SELECT StudentID , Code, StudentIdentityNumber FROM StudentDetail) s Pivot ( max(StudentIdentityNumber) for Code in ( [Primary],[Secondary],[Teritary]) )as pvt ) SELECT * FROM cte where cte.[Primary]<>cte.[Secondary] or cte.[Primary]<> cte.Teritary ```
``` SELECT sd1.Student, sd1.StudentIdentityNumber as Primary, sd2.StudentIdentityNumber as Secondary, sd3.StudentIdentityNumber as Teritary FROM StudentDetail sd1 JOIN StudentDetail sd2 ON sd1.StudentID = sd2.StudentID AND sd1.Code = 'Primary' AND sd2.Code = 'Secondary' JOIN StudentDetail sd3 ON sd2.StudentID = sd3.StudentID AND sd2.Code = 'Teritary' WHERE sd1.Primary <> sd2.Secondary or sd1.Primary <> sd3.Teritary ```
Display Mismatched Rows belonging to Same table
[ "", "sql", "sql-server", "" ]
Below is my code. ``` DECLARE @msg NVARCHAR(MAX) = NULL ;WITH CTE AS ( SELECT 'A' AS Message UNION SELECT 'B' AS Message UNION SELECT 'C' AS Message UNION SELECT 'D' AS Message ) SELECT @msg = COALESCE(ISNULL(@msg,'Attachements') + ', ','') + Message FROM CTE SELECT @msg + ' are missing.' ``` It is generating output :- ``` Attachments, A, B, C, D are missing. ``` How can I avoid first comma after word "`Attachments`" ? Please help. Other techniques to satisfy the requirement would also be welcome. Thanks.
Instead of using the [undocumented/unsupported way of concatenating strings](https://sqlblog.org/2011/03/08/t-sql-tuesday-16-this-is-not-the-aggregate-youre-looking-for), use `FOR XML PATH` instead: ``` DECLARE @msg NVARCHAR(MAX) = NULL ;WITH CTE AS ( SELECT 'A' AS Message UNION SELECT 'B' AS Message UNION SELECT 'C' AS Message UNION SELECT 'D' AS Message ) SELECT @msg = 'Attachments ' + STUFF(( SELECT ', ' + Message FROM CTE ORDER BY Message FOR XML PATH(''), TYPE).value('.[1]', 'NVARCHAR(MAX)') , 1, 2, ' ') SELECT @msg + ' are missing.' ```
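For reference, most engines now expose this kind of string aggregation as a built-in aggregate function (`GROUP_CONCAT` in MySQL/SQLite, `STRING_AGG` in PostgreSQL and SQL Server 2017+). As a rough illustration of the idea — not of the `FOR XML PATH` trick itself — here is a sketch using Python's built-in `sqlite3` (SQLite standing in for SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cte (message TEXT);
    INSERT INTO cte VALUES ('A'), ('B'), ('C'), ('D');
""")

# GROUP_CONCAT joins all values with the given separator in one aggregate call
joined = conn.execute(
    "SELECT 'Attachments ' || GROUP_CONCAT(message, ', ') || ' are missing.' FROM cte"
).fetchone()[0]
print(joined)
```

This produces a sentence like `Attachments A, B, C, D are missing.` without any `STUFF`/`COALESCE` bookkeeping (the exact element order is engine-defined unless you add an `ORDER BY` inside the aggregate).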
Try this: ``` SELECT @msg = ISNULL(@msg + ', ', 'Attachements ') + Message FROM CTE ``` and check out `FOR XML` approach for concatenating string
Appending messages in a variable using COALESCE function
[ "", "sql", "sql-server", "" ]
I have got a table A and B. Table A has got: ``` Id | artist | artist_id(that is empty now) -------------------------------------------- 1 | John | 2 | Jack | ``` Table B has: ``` artist_id | artist ------------------- 34 | John 56 | Jack 57 | Mike ``` I would like to set artist\_id in table A where artist is the same as in B. So result would be: ``` Id | artist | artist_id -------------------------------------------- 1 | John | 34 2 | Jack | 56 ``` How to do it in postgresql?
Switching from names to ids is a good idea. Assuming that you have no duplicates, this is easily done using a `from` clause (or a correlated subquery):

```
update a
    set artist_id = b.artist_id
    from b
    where a.artist = b.artist;
```
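The same logic can be sanity-checked outside PostgreSQL with a correlated subquery, which also works on engines that lack `UPDATE ... FROM`. A sketch using Python's built-in `sqlite3` (table names `table_a`/`table_b` are stand-ins, since the question leaves the tables unnamed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INTEGER, artist TEXT, artist_id INTEGER);
    CREATE TABLE table_b (artist_id INTEGER, artist TEXT);
    INSERT INTO table_a VALUES (1, 'John', NULL), (2, 'Jack', NULL);
    INSERT INTO table_b VALUES (34, 'John'), (56, 'Jack'), (57, 'Mike');
""")

# Correlated-subquery form of the update; the EXISTS guard keeps rows
# with no match from being set to NULL
conn.execute("""
    UPDATE table_a
       SET artist_id = (SELECT b.artist_id FROM table_b b
                        WHERE b.artist = table_a.artist)
     WHERE EXISTS (SELECT 1 FROM table_b b WHERE b.artist = table_a.artist)
""")

rows = conn.execute("SELECT id, artist, artist_id FROM table_a ORDER BY id").fetchall()
print(rows)  # [(1, 'John', 34), (2, 'Jack', 56)]
```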
this should answer, using `UPDATE` yourtable A and `FROM` the table to JOIN (B) ``` UPDATE TABLEA AS A SET A.artist_id = B.artist_id FROM TABLEB as B WHERE A.artist = B.artist ```
Changing record value into its id from another table
[ "", "sql", "postgresql", "" ]
I've got these tables: ``` home_table ---------- home_id | home_name user_table --------- user_id | user_name user_home_rel_table ---------- id | user_id | home_id ``` (Many users may belong to many homes) I have an api call called /homes/2/users which i want to return all the users of the home with id 2. Along with the call, is the user id of the current user. If the user belongs to the home (in this case id 2), then he is allowed to see the other users. The user belongs to the home if he exists in user\_home\_rel\_table as such: ``` user_home_rel_table ---------- id | user_id | home_id ---------------------- 1 | $currentID | 2 ```
I believe that this will give you what you are looking for. I put in `$currentid` and `$home_id` as placeholders for your parameters. I also used slightly different names for the tables since I refuse to use table names with `_table` appended on the end... ;) ``` SELECT U.user_id, U.user_name FROM User_Homes UH INNER JOIN User_Homes UH2 ON UH2.home_id = UH.home_id INNER JOIN Users U ON U.user_id = UH2.user_id WHERE UH.user_id = $currentid AND UH.home_id = $home_id -- 2 ```
This will only return results if the `$desiredUserID` is part of the home requested with `$desiredHomeID`: ``` SELECT u.user_id, u.user_name FROM home_table h LEFT JOIN user_home_rel_table uh ON (uh.home_id = h.home_id) LEFT JOIN user_table u ON (uh.user_id = u.user_id) LEFT JOIN user_home_rel_table up ON (uh.home_id = up.home_id AND up.user_id = $desiredUserID) WHERE h.home_id = $desiredHomeID AND up.user_id IS NOT NULL ```
Is there any way to do this with a single query?
[ "", "mysql", "sql", "join", "" ]
I have a query to update a field on some conditions.

Conditions

> The time difference is not more than 1 hour, and the date can be the same.

```
select * from Table where user_cd = 'HARSHIT' and to_char(sysdate, 'dd/mm/yyyy') = to_char(brth_dt, 'dd/mm/yyyy');
```

But there is one more condition: at night the user tries to update at 23:30, and then tries again the next day at 00:15. The difference is only 45 minutes, so the update must still execute.

```
select brth_dt from Table where user_cd = 'HARSHIT';

select sysdate from dual;

select brth_dt from Table where user_cd = 'HARSHIT' and sysdate-(1/24) < BRTH_DT;
```

Results of the above queries:

```
BRTH_DT
-------------------
25/02/2016 12:30:00

1 row selected.

SYSDATE
-------------------
24/02/2016 16:7:58

1 row selected.

BRTH_DT
-------------------
25/02/2016 12:30:00

1 row selected.
```
I see no reason to convert a date to a string ... if you need to check that two dates are within an hour of each other, just do the math on the date and compare:

```
select * from sir_people where user_cd = 'HARSHIT' and BRTH_DT BETWEEN sysdate-(1/24) AND sysdate;
```

Using `to_char` on a date for purposes of comparison is fundamentally flawed logic and should be avoided.

[edit] Based on the example provided: it appears you want to exclude future dates and only include those dates between now and an hour earlier. The query has been updated to accommodate that additional requirement.
``` to_char(col_name, 'yyyy-mm-dd hh24:mi:ss') ``` just use 24-hour format, I think that should do the work.
Update query with compare of date and time
[ "", "sql", "oracle", "" ]
I have two tables: The first one `table1` is created on the fly (as the user submits data from a web server) and is usually ~50K rows. The second table is a lookup table `table2` and has ~10Mil rows. I'm trying to join both tables on four columns as follows: ``` SELECT t.id FROM table1 t JOIN table2 m ON (t.name = m.name AND t.pos = m.pos AND t.ref = m.ref AND t.alt = m.alt); ``` I've indexed columns `name` (VARCHAR), `pos` (INT), `ref` (CHAR) and `alt` (CHAR) in `table2`, but the query still takes way too long to complete. Any pointers on what could be going wrong here? Thanks --- Output of `EXPLAIN`: ``` id select_type table partitions type possible_keys key key_len ref rows filtered Extra 1 SIMPLE t1 NULL ALL NULL NULL NULL NULL 49329 100.00 Using where 1 SIMPLE t2 NULL ref table2_name,table2_pos,table2_ref,table2_alt table2_name 32 my_db.t1.NAME 2488 0.00 Using index condition; Using where ```
Create a compound index on `name, pos, ref, alt` like ``` INDEX theIndex (name,pos,ref, alt) ``` Also, 4 single indices will help a little bit - see <http://dev.mysql.com/doc/refman/5.7/en/index-merge-optimization.html> - but not as much as a compound index.
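You can see the effect of the compound index in the query plan. Here is a small sketch using Python's built-in `sqlite3` (SQLite standing in for MySQL — the plan output format differs, but the principle of one composite index covering all four join columns is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t2 (name TEXT, pos INTEGER, ref TEXT, alt TEXT);
    CREATE INDEX idx_compound ON t2 (name, pos, ref, alt);
    CREATE TABLE t1 (id INTEGER, name TEXT, pos INTEGER, ref TEXT, alt TEXT);
""")

# With the composite index in place, the planner scans t1 once and
# probes t2 through idx_compound instead of scanning it per row
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT t1.id FROM t1
    JOIN t2 ON t1.name = t2.name AND t1.pos = t2.pos
           AND t1.ref = t2.ref AND t1.alt = t2.alt
""").fetchall()

plan_text = " ".join(row[-1] for row in plan)
print(plan_text)
```

The plan text should mention `idx_compound`, confirming the index is used for the four-column equality join.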
Two things to try here: 1. The first is the easiest -- change the order of your join clause to move the varchar column to the end. > SELECT t.id FROM table1 t JOIN table2 m ON (t.pos = m.pos AND t.ref = > m.ref AND t.alt = m.alt AND t.name = m.name); 2. This is a little more work, but add a new computed column that generates a numeric hash based on the 4 columns. Remove the indexes on pos, ref, alt and name and add a new index to the the hash column. Then include the hash column in your join clause. > SELECT t.id FROM table1 t JOIN table2 m ON (t.hash = m.hash AND t.pos = m.pos AND t.ref = > m.ref AND t.alt = m.alt AND t.name = m.name); Edit: Without looking at your database and the query execution plan, it's hard to troubleshoot this, but my guess is that MySQL is having a hard time joining on the VARCHAR column. Can you update your question with the results of > EXPLAIN SELECT t.id FROM table1 t JOIN table2 m ON (t.name = m.name > AND t.pos = m.pos AND t.ref = m.ref AND t.alt = m.alt)
Joining two tables MySQL is slow even with indexing
[ "", "mysql", "sql", "database", "join", "" ]
I've got a table with columns: ``` Acct_no, PSTL_CODE, NAME, phone ``` I'm trying to get rid of **all** rows that share the same PSTL\_CODE **and** phone (i.e. dump the ones where there's his & hers accounts, and similar scenarios) I've pulled together the following which I *think* should only return rows with a unique PSTL\_CODE: ``` SELECT * FROM Sheet1 WHERE PSTL_CODE IN (SELECT PSTL_CODE FROM SHEET1 GROUP BY PSTL_CODE HAVING COUNT(PSTL_CODE) =1) ORDER BY phone ``` and it's *close-ish*, but it's still returning one row where there are two accounts at the same PSTL\_CODE. and I'm stuck with Access 2007, so I can't do: ``` SELECT * FROM Sheet1 EXCEPT (SELECT PSTL_CODE FROM SHEET1 GROUP BY PSTL CODE HAVING COUNT(*) >1) ORDER BY phone ``` in order to just scythe away the multiples. Help!
Try using `Exists` Considering that your unique combination is `PSTL_CODE` and `phone` ``` SELECT * FROM sheet1 a WHERE EXISTS (SELECT 1 FROM sheet1 b WHERE a.pstl_code = b.pstl_code AND a.phone = b.phone HAVING Count(*) = 1) ORDER BY phone ``` It is better to define a `Unique Constraint` on `PSTL_CODE` and `phone` columns which helps you to avoid duplicate records in table
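The keep-only-the-singletons logic can also be written as a join against a grouped subquery, which is easy to test. A sketch using Python's built-in `sqlite3` (column names lower-cased; sample data is made up to show one shared household and one solo account):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sheet1 (acct_no INTEGER, pstl_code TEXT, name TEXT, phone TEXT);
    INSERT INTO sheet1 VALUES
        (1, 'AB1', 'His',  '555-0001'),
        (2, 'AB1', 'Hers', '555-0001'),
        (3, 'CD2', 'Solo', '555-0002');
""")

# Keep only rows whose (pstl_code, phone) pair occurs exactly once;
# accounts 1 and 2 share a postcode AND a phone, so both are dropped
rows = conn.execute("""
    SELECT s.acct_no
    FROM sheet1 s
    JOIN (SELECT pstl_code, phone FROM sheet1
          GROUP BY pstl_code, phone
          HAVING COUNT(*) = 1) u
      ON u.pstl_code = s.pstl_code AND u.phone = s.phone
""").fetchall()
print(rows)  # [(3,)]
```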
If I understand the business logic correctly you are attempting to eliminate couples living together from the query. you are grouping by phone also which seems to be causing the error. Is there a logical business reason for two people to live in the same place but have different phone numbers. eg mobile phones?
SQL: Removing ALL rows that have repeated values
[ "", "sql", "ms-access-2007", "" ]
I am trying to fetch record from database by getting and parsing year and month from date parameter. Here is the code in `Where` clause in my stored procedure: ``` (MONTH(ai.InstallmentDueDate)) = (MONTH(@DueDate)) AND (YEAR(ai.InstallmentDueDate)) = (YEAR(ai.InstallmentDueDate)) ``` The month clause works perfectly, but it returns all the years data of that month. Please tell me what to do. So that it returns data of that specific year.
Is this what you required? ``` (MONTH(ai.InstallmentDueDate)) = (MONTH(@DueDate)) AND (YEAR(ai.InstallmentDueDate)) = (YEAR(@DueDate)) ``` But I think simply passing the parameter without applying any function can achieve what you want. ``` ai.InstallmentDueDate = @DueDate ``` (Be careful when you use functions in where clause. It's a performance hit)
You are not using the parameter in the year, that's why it's not working. Check your original post, you're comparing the ai.InstallmentDueDate to itself. You need to compare it against the year of the passed parameter ``` (MONTH(ai.InstallmentDueDate)) = (MONTH(@DueDate)) AND (YEAR(ai.InstallmentDueDate)) = (YEAR(@DueDate)) ```
Fetch record from database using stored procedure with month and year condition
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I have a query to find duplicates in my data, but that query is also catching data that are not duplicates, because my query reads them as the same. For example 'AABWcFABmAAAyWJAAb' and 'AABWcFABmAAAyWJAAB' — in reality they are unique, since these two ids hold different data. I have used the Collate function but it didn't help. Please let me know if there is any built-in function I can use, or any logic. Thank you in advance for the help.

```
select distinct npa, npanxx_row_id, count(*)
from kjm.audit_309
where npanxx_row_id COLLATE Latin1_General_CS_AS in (npanxx_row_id)
--and NPANXX_ROW_ID = 'AABWcFABmAAAyWJAAB'
group by npa, npanxx_row_id
having count(*) > 1
order by npa
```
Another option in SQL Server would potentially be BINARY\_CHECKSUM(). This will detect differences in case. ``` select your_column , BINARY_CHECKSUM(your_column) , COUNT(*) FROM your_table GROUP BY your_column , BINARY_CHECKSUM(your_column) HAVING count(*) >1 ```
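The collation point generalizes to other engines: in SQLite, for instance, the default `BINARY` collation already compares case-sensitively, and you can see the difference by grouping with and without `COLLATE NOCASE`. A sketch via Python's built-in `sqlite3` (illustrating the concept only — `BINARY_CHECKSUM()` itself is SQL Server-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE audit (row_id TEXT);
    INSERT INTO audit VALUES ('AABWcFABmAAAyWJAAb'), ('AABWcFABmAAAyWJAAB');
""")

# Default BINARY collation: the two ids are distinct groups
case_sensitive = conn.execute(
    "SELECT COUNT(*) FROM (SELECT row_id FROM audit GROUP BY row_id)"
).fetchone()[0]

# NOCASE collation: the two ids collapse into one group (false duplicate)
case_insensitive = conn.execute(
    "SELECT COUNT(*) FROM (SELECT row_id FROM audit GROUP BY row_id COLLATE NOCASE)"
).fetchone()[0]

print(case_sensitive, case_insensitive)  # 2 1
```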
If you're looking for duplicates, then this should work: ``` CREATE TABLE #case_sensitivity_training (my_str VARCHAR(40) NOT NULL) INSERT INTO #case_sensitivity_training (my_str) VALUES ('AABWcFABmAAAyWJAAb'), ('AABWcFABmAAAyWJAAB') SELECT my_str COLLATE SQL_Latin1_General_CP1_CS_AS, COUNT(*) FROM #case_sensitivity_training GROUP BY my_str COLLATE SQL_Latin1_General_CP1_CS_AS HAVING COUNT(*) > 1 ```
How can I differentiate small and big letters in sql
[ "", "sql", "sql-server", "reporting-services", "" ]
I need Oracle SQL function which sums comma separated values. For example, this function needs to return 100 from string: ``` 0,4,2,88,6 ```
a solution without regular expressions: ``` create or replace function calc(i_str in varchar2) return number is l_result number; begin execute immediate 'select ' || i_str || ' from dual' into l_result; return l_result; exception when others then return null; end; select calc(replace('0,4,2,88,6', ',', '+')) from dual --> 100 ```
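Note that `execute immediate` on a caller-supplied string is open to SQL injection, so the input should be validated. Outside Oracle, the same split-and-sum can be done without dynamic SQL at all, e.g. with a recursive CTE. A sketch in SQLite via Python's built-in `sqlite3` (illustrating the approach, not Oracle syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Peel one comma-separated token off per recursion step, then SUM the tokens
total = conn.execute("""
    WITH RECURSIVE split(val, rest) AS (
        SELECT NULL, '0,4,2,88,6' || ','
        UNION ALL
        SELECT substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT SUM(CAST(val AS INTEGER)) FROM split WHERE val IS NOT NULL
""").fetchone()[0]
print(total)  # 100
```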
``` select sum(regexp_substr('0,4,2,88,6', '[^,]+', 1, level)) as result from dual connect by regexp_substr('0,4,2,88,6', '[^,]+', 1, level) is not null; ```
Sum comma separated values in Oracle SQL
[ "", "sql", "oracle", "" ]
I'm querying in PostgreSQL and I want to remove the `+09` UTC offset in my query results. **For example:** in the to\_timestamp column, I want to remove `+09` and keep only the `2016-02-26 00:23:44` value. This is my query: `select t,to_timestamp(v) from c1.hour where f = 22 and date_trunc('day',t - '1 hour'::INTERVAL) = '2016-02-26 00:00:00' order by t ;` Here is my result: [![enter image description here](https://i.stack.imgur.com/idWw4.png)](https://i.stack.imgur.com/idWw4.png) And this is my result when I didn't use to\_timestamp: [![enter image description here](https://i.stack.imgur.com/ZhnOn.png)](https://i.stack.imgur.com/ZhnOn.png) Can I get some help on this please?
Use [`to_char()`](http://www.postgresql.org/docs/9.1/static/functions-formatting.html) function to format `timestamp` to any desired string representation. Example: ``` SELECT to_char(now(), 'YYYY-MM-DD HH24:MI:SS') ``` will return `2016-02-26 09:37:49` (without a timezone) Your query have to be this: ``` SELECT t, to_char(to_timestamp(v), 'YYYY-MM-DD HH24:MI:SS') FROM c1.hour WHERE f = 22 AND date_trunc('day',t - '1 hour'::INTERVAL) = '2016-02-26 00:00:00' ORDER BY t ; ```
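The same idea — format the timestamp to a string so the zone offset disappears — exists in other engines under different names; SQLite, for example, uses `strftime`. A sketch via Python's built-in `sqlite3` (shown only to illustrate the formatting approach, not PostgreSQL's `to_char`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Format Unix epoch 0 as a plain date-time string with no zone suffix
ts = conn.execute(
    "SELECT strftime('%Y-%m-%d %H:%M:%S', 0, 'unixepoch')"
).fetchone()[0]
print(ts)  # 1970-01-01 00:00:00
```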
Specify UTC: ``` select to_timestamp(0)::timestamptz at time zone 'UTC' ``` I am in timezone +08 and get the following result: > 1970-01-01 00:00:00 With the time zone specification, I get: > 1970-01-01 08:00:00+08 An advantage is you don't have to specify the date-time format, which can cause unexpected results for users. The system specified formatting is preserved.
How to remove TimeZone in query to_timestamp Postgresql
[ "", "sql", "postgresql", "postgresql-9.1", "" ]
I will post only the part of the SQL statement that contains the `WHERE` clause:

```
//Together
WHERE `energy_extra`.`PacketFk` IN('7')
OR `energy_extra`.`house_extra_id` IN('4')
//Not together (separate where statements)
AND `People_from` <= '4'
AND `People_to` >= '4'
AND `Size` >= '100'
AND `Size` <= '200'
AND `houseconstructiontypelist`.`ConstructionTypeFk` IN('1', '2')
```

I can't change the order of these lines because they are part of a much larger query written in CodeIgniter active record with conditional statements. What I can change is the SQL itself. I don't get the desired result because `WHERE foo IN ('7')` `OR bar IN ('4')` is followed by `AND`, so SQL treats those further conditions as part of the `OR bar IN()`, which they are not. They need to start with a new `WHERE` statement, and every line after that can start with `AND` again. I tried this but it doesn't work:

```
//Together
WHERE `energy_extra`.`PacketFk` IN('7')
OR `energy_extra`.`house_extra_id` IN('4')
//Not together (separate where statements)
WHERE `People_from` <= '4'
AND `People_to` >= '4'
AND `Size` >= '100'
AND `Size` <= '200'
AND `houseconstructiontypelist`.`ConstructionTypeFk` IN('1', '2')
```

Logic is as follows:

```
find rows where energy_extra.PacketFk is equal to 7 or where energy_extra.house_extra_id is equal to 4
then check all the other conditions
```
The logical `OR` operator has lower precedence than the `AND` operator — that is why you're getting this result. You just need to enclose your first condition in brackets:

```
WHERE
(
    `energy_extra`.`PacketFk` IN('7')
    OR `energy_extra`.`house_extra_id` IN('4')
)
AND ....
```
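You can demonstrate the precedence rule with a one-liner; `AND` binds tighter than `OR`, so `1 OR 0 AND 0` evaluates as `1 OR (0 AND 0)`. A sketch using Python's built-in `sqlite3` (the precedence is the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AND binds tighter: 1 OR (0 AND 0) = 1, not (1 OR 0) AND 0 = 0
no_parens = conn.execute("SELECT 1 OR 0 AND 0").fetchone()[0]
with_parens = conn.execute("SELECT (1 OR 0) AND 0").fetchone()[0]
print(no_parens, with_parens)  # 1 0
```

If the two results were equal, precedence would not matter; since they differ, the parentheses around the `OR` branch are what change the query's meaning.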
You could also put the first condition (containing or) to last as below : - ``` WHERE `People_from` <= '4' AND `People_to` >= '4' AND `Size` >= '100' AND `Size` <= '200' AND `houseconstructiontypelist`.`ConstructionTypeFk` IN('1', '2') And `energy_extra`.`PacketFk` IN('7') OR `energy_extra`.`house_extra_id` IN('4') ``` Or just enclose the or statement in bracket to separate it from other statements as commented in comment section.
WHERE clause followed by OR then WHERE again
[ "", "mysql", "sql", "" ]
I need a SQL query that searches column 2 through column 3 if a record is found (either an exact match or a value between the value of column 2 and column 3) return corresponding value from column 1 for example ``` Column1 Column2 Column3 Jane Doe 123456 123459 John Doe 123460 123460 Frank Doe 123461 123482 ``` if I type in `123457` I need it to show me `Jane Doe` if I type in `123460` I need it to show me `John Doe` thank you,
``` SELECT Column1 FROM TableName WHERE 123460 BETWEEN column2 AND column3; ```
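This range lookup can be verified directly against the question's sample rows. A sketch using Python's built-in `sqlite3` (the `lookup` helper is just for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (column1 TEXT, column2 INTEGER, column3 INTEGER);
    INSERT INTO t VALUES
        ('Jane Doe', 123456, 123459),
        ('John Doe', 123460, 123460),
        ('Frank Doe', 123461, 123482);
""")

def lookup(n):
    # BETWEEN is inclusive on both ends, so exact boundary matches work too
    row = conn.execute(
        "SELECT column1 FROM t WHERE ? BETWEEN column2 AND column3", (n,)
    ).fetchone()
    return row[0] if row else None

print(lookup(123457), lookup(123460))  # Jane Doe John Doe
```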
To take care of the case when column3 < column2, do `BETWEEN SYMMETRIC`: ``` select Column1 from tablename where 123460 between symmetric column2 and column3 ``` Will also return the row (Matt Doe, 123465, 123455)!
SQL query that searches two columns (2 & 3) if record is found display column 3
[ "", "sql", "" ]
Hi, let's say I have the following two tables. The first table is `Orders`, which consists of the following columns:

```
Order_ID OrderDate
1 2016-01-20
2 2016-01-21
```

and the second table is `OrderDetails`, which consists of the following columns:

```
Order_ID ProductID UnitPrice Quantity
1 1 5.00 1
1 3 20.00 3
2 2 10.00 2
```

How can I get the following output?

```
Order_ID OrderDate OrderTotalamount
1 2016-01-20 65.00
2 2016-01-21 20.00
```
Would you mind trying something like this out?

```
SELECT ORD.Order_ID,
       ORD.OrderDate,
       SUM(ORDD.UnitPrice * ORDD.Quantity) AS OrderTotalAmount
FROM Orders ORD
INNER JOIN OrderDetails ORDD
    ON ORDD.Order_ID = ORD.Order_ID
GROUP BY ORD.Order_ID, ORD.OrderDate
```
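The join-and-group approach can be checked against the question's sample data. A sketch using Python's built-in `sqlite3` (names lower-cased for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, order_date TEXT);
    CREATE TABLE order_details (order_id INTEGER, product_id INTEGER,
                                unit_price REAL, quantity INTEGER);
    INSERT INTO orders VALUES (1, '2016-01-20'), (2, '2016-01-21');
    INSERT INTO order_details VALUES
        (1, 1, 5.00, 1), (1, 3, 20.00, 3), (2, 2, 10.00, 2);
""")

# One output row per order; SUM runs over that order's detail lines
rows = conn.execute("""
    SELECT o.order_id, o.order_date, SUM(d.unit_price * d.quantity) AS total
    FROM orders o
    JOIN order_details d ON d.order_id = o.order_id
    GROUP BY o.order_id, o.order_date
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [(1, '2016-01-20', 65.0), (2, '2016-01-21', 20.0)]
```

Order 1 totals 5.00×1 + 20.00×3 = 65.00, and order 2 totals 10.00×2 = 20.00, matching the expected output.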
You can try this ``` SELECT a.Order_ID, a.OrderDate, SUM(Quantity*UnitPrice) as OrderTotalamout FROM Orders a JOIN OrderDetains ON (a.Order_ID = OrderDetails.Order_ID) GROUP BY 1, 2; ``` I hope it works fine for you.
Getting total amount of orders using sql query
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm fairly new to SQL Server and am running into what seems like a basic problem. Essentially my table `COPY_TYPE` has a primary key `C_NAME` and a constraint on `C_NAME`: `VT, VCD, DVD`. How would I go about having that written out? My bad attempt is what follows: ``` create table COPY_TYPE ( C_NAME char(5) primary key, constraint PK_C_NAME check ( 'VT', 'VCD', 'DVD') ); ``` I know this is wrong, but I don't yet know how to start thinking in this language.
You're very close - I tend to like to *explicitly name* my constraint, so I'd write it out like this: ``` CREATE TABLE dbo.COPY_TYPE ( C_NAME char(5) NOT NULL CONSTRAINT PK_Copy_Type PRIMARY KEY CLUSTERED CONSTRAINT CHK_C_NAME CHECK (C_NAME IN ('VT', 'VCD', 'DVD')), ... (other columns to follow) .... ); ```
Here is an example for you to follow: ``` CREATE TABLE Persons ( P_Id int NOT NULL, LastName varchar(255) NOT NULL, FirstName varchar(255), Address varchar(255), City varchar(255), CONSTRAINT pk_PersonID PRIMARY KEY (P_Id, LastName) ) ```
Making a new Primary Key also have constraints
[ "", "sql", "constraints", "primary-key", "sql-server-2014", "" ]
I am using Microsoft SQL Server Management Studio, I have minimal expertise in SQL Server but I have used MySQL in the past and pretty good at it. My question is: I have a rather large database with over 130m+ records. I wish to only get the URL column but every single record in this column. Microsoft SQL Server Management Studio jams up when saving to a file due to one error or another so I have come to the conclusion that I need to do this in chunks. I am running this command at the moment and saving to file ``` SELECT TOP 20000000 [URL] FROM [dbo].[siteentry] WHERE [Content] LIKE '' ``` However when this command finishes I have 0 clue how to skip the first 20,000,000 and continue onto the next batch of 20,000,000 without killing the server. Any help would be great.
With SQL Server **2012**, you can use the `OFFSET...FETCH` commands:

```
SELECT [URL]
FROM [dbo].[siteentry]
WHERE [Content] LIKE ''
ORDER BY (some column)
OFFSET 20000000 ROWS FETCH NEXT 20000000 ROWS ONLY
```

For this to work, you **must** order by some column in your table - which you should *anyway*, since a `TOP ....` without an `ORDER BY` is useless - you'll get back arbitrary rows. For each subsequent chunk, increase the `OFFSET` by another 20,000,000 rows until no rows come back.
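The chunking loop itself looks like the following sketch, shown here with Python's built-in `sqlite3` and its `LIMIT ... OFFSET` (standing in for SQL Server's `OFFSET ... FETCH`; table name and toy sizes are illustrative). Note that for very large tables, keyset pagination (`WHERE id > last_seen_id ORDER BY id LIMIT n`) scales better than an ever-growing `OFFSET`, which the server must re-skip on every call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE siteentry (id INTEGER PRIMARY KEY, url TEXT);")
conn.executemany("INSERT INTO siteentry (url) VALUES (?)",
                 [(f"http://example.com/{i}",) for i in range(10)])

chunk_size = 4  # stand-in for 20,000,000
chunks = []
offset = 0
while True:
    rows = conn.execute(
        "SELECT url FROM siteentry ORDER BY id LIMIT ? OFFSET ?",
        (chunk_size, offset)).fetchall()
    if not rows:
        break  # no rows left: we've paged through the whole table
    chunks.append([r[0] for r in rows])
    offset += chunk_size

print([len(c) for c in chunks])  # [4, 4, 2]
```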
You can use an "offset" to skip the rows. Check <https://technet.microsoft.com/en-us/library/gg699618(v=sql.110).aspx> for more information.
SQL Server : large DB Query In Chunks
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Say you have a table "table\_with\_100\_columns." And you want to add one more column with a simple join... without changing all of the column names. In other words, you wish to write something like ``` SELECT a.* as <a's columns without prefix>, additional_field FROM [table_with_100_columns] a JOIN [table_with_2_columns] b ON a.col1 = b.key ``` You should be able to do this to generate a new table with 101 columns, without having to rename every single column by hand. Right now the only way I know how to do this as follows: ``` SELECT a.col1 as col1, a.col2 as col2, a.col3 as col3, ... a.col100 as col100, b.additional_field as additional_field FROM [table_with_100_columns] a JOIN [table_with_2_columns] b ON a.col1 = b.key ``` Having to write 100 unnecessary lines of code simply to add one more column to a table is unbelievably inefficient - so I'm hoping there is a better way to preserve column names while joining? ## UPDATE It appears this is not yet possible in BigQuery. It is very easy to implement and I suggest the following to the Google BigQuery team: ``` if no fields share a name in SELECT clause: if no subtable reference names given: Do not rename fields after JOIN ``` This will not break any current functionality and adds simple support for a very useful feature.
I think this problem is specific to BigQuery Legacy SQL. If you will use Big Standard SQL - you will not have this issue - see example below ``` #standardSQL WITH table_with_100_columns AS ( SELECT 11 AS col1, 21 AS col2, 31 AS col3 UNION ALL SELECT 12 AS col1, 22 AS col2, 32 AS col3 UNION ALL SELECT 13 AS col1, 23 AS col2, 33 AS col3 UNION ALL SELECT 14 AS col1, 24 AS col2, 34 AS col3 UNION ALL SELECT 15 AS col1, 25 AS col2, 35 AS col3 ), table_with_2_columns AS ( SELECT 11 AS key, 17 AS additional_field UNION ALL SELECT 12 AS key, 27 AS additional_field UNION ALL SELECT 13 AS key, 37 AS additional_field UNION ALL SELECT 14 AS key, 47 AS additional_field UNION ALL SELECT 15 AS key, 57 AS additional_field ) SELECT a.*, additional_field FROM `table_with_100_columns` AS a JOIN `table_with_2_columns` AS b ON a.col1 = b.key ``` See [Migrating from legacy SQL](https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql) in case if you need rewrite the rest of the query to be in Standard SQL The output will be as below with original column names (w/o prefixes) ``` col1 col2 col3 additional_field 13 23 33 37 11 21 31 17 15 25 35 57 12 22 32 27 14 24 34 47 ```
I don’t know of any option available now other than having those `100 unnecessary lines` as part of the code. So you are down to `how to actually make it in the most optimal way` for your particular use case.

There can be many ways, I think, but I see the two most obvious ones below – they are more or less trivial, but I put them here for the sake of completeness of my answer:

> Option 1 – one-off action/need

Just take the output of the statement below into any spreadsheet, transpose it, and dress it up into the expected SQL (at least the portion of it between SELECT and FROM in the second query of your question):

```
SELECT * FROM table_with_100_columns WHERE false
```

In other words - you do this quite manually with whatever office tool for such manipulation you have at hand.

> Option 2 – you need this on a more or less frequent basis, or as a part of some process

Generate the SQL code using any [language/client](https://cloud.google.com/bigquery/client-libraries) of your choice by retrieving the schema with the [Tables:get API](https://cloud.google.com/bigquery/docs/reference/v2/tables/get) and looking at [schema.fields[]](https://cloud.google.com/bigquery/docs/reference/v2/tables#resource).

After the SQL code is assembled - execute it using the [API of your choice](https://cloud.google.com/bigquery/docs/reference/v2/jobs). It can be `get` or `insert` or whatever fits your implementation logic.

> Option 3 – BigQuery Mate “Add Fields” Button

Step 1 – select the table in the navigation bar so you can see the table’s schema in the content panel

Step 2 – set the cursor within the Query Editor at the position where the fields need to be inserted

[![enter image description here](https://i.stack.imgur.com/hC9HS.png)](https://i.stack.imgur.com/hC9HS.png)

Step 3 – click on the “Add Fields” button

[![enter image description here](https://i.stack.imgur.com/DEDWh.png)](https://i.stack.imgur.com/DEDWh.png)

Deployed Option 3 with support for alias use. Available now in the web store.
Google BigQuery SQL: Prevent column prefix renaming after join
[ "", "sql", "join", "google-bigquery", "prefix", "" ]
Pretty new to SQL and I'm not completely familiar with the proper syntax. I have a table that has a column which has a one to many relationship. I need to filter the column for one value, but not the others. Here's what the data looks like: ``` | USER KEY | USER ID | | ------ | ------- | | 11672 | SSO | | 11672 | SSO | | 11672 | DIRECT | | 11203 | SSO | | 11205 | SSO | | 11206 | DIRECT | ``` So you can see a user key can have both SSO and DIRECT or just SSO or just Direct . I need to filter the user key so it: A. Includes those USER KEYS with only DIRECT B. Exclude all those USER KEYS that have both SSO and DIRECT I started with this select statement: ``` SELECT * FROM USER_ID WHERE USER_ID_TYPE = 'DIRECT' OR (USER_ID_TYPE != 'SSO'); ``` This gives me back ALL the users with Direct regardless of whether they have both SSO and DIRECT. I don't know what the syntax is to filter out the other USER\_KEY that has both. Most of the tutorials I've read are pretty basic and don't go this deep into complex filtering like this. Any help would be appreciated. P
Your query is performing both tests against a single row, not across all the rows with the same `USER_KEY`. You need something like this:

```
SELECT *
FROM USER_ID AS u1
WHERE USER_ID_TYPE = 'DIRECT'
AND NOT EXISTS (
    SELECT *
    FROM USER_ID AS u2
    WHERE u1.USER_KEY = u2.USER_KEY
    AND u2.USER_ID_TYPE = 'SSO')
```

or the equivalent `LEFT JOIN`

```
SELECT u1.*
FROM USER_ID AS u1
LEFT JOIN USER_ID AS u2
    ON u1.USER_KEY = u2.USER_KEY
    AND u2.USER_ID_TYPE = 'SSO'
WHERE u1.USER_ID_TYPE = 'DIRECT'
AND u2.USER_KEY IS NULL
```
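The `NOT EXISTS` anti-join can be verified against the question's sample rows. A sketch using Python's built-in `sqlite3` (names lower-cased; `DISTINCT` added because the demo selects only the key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_id (user_key INTEGER, user_id_type TEXT);
    INSERT INTO user_id VALUES
        (11672, 'SSO'), (11672, 'SSO'), (11672, 'DIRECT'),
        (11203, 'SSO'), (11205, 'SSO'), (11206, 'DIRECT');
""")

# Keep DIRECT users, then throw out any key that also has an SSO row;
# 11672 has both types, so only 11206 survives
rows = conn.execute("""
    SELECT DISTINCT u1.user_key
    FROM user_id u1
    WHERE u1.user_id_type = 'DIRECT'
      AND NOT EXISTS (SELECT 1 FROM user_id u2
                      WHERE u2.user_key = u1.user_key
                        AND u2.user_id_type = 'SSO')
""").fetchall()
print(rows)  # [(11206,)]
```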
You can achieve your desired result using some aggregation also ``` select USER_KEY from USER_ID group by USER_KEY having sum(USER_ID_TYPE = 'DIRECT') > 0 and sum(USER_ID_TYPE = 'SSO') = 0 ``` [`DEMO`](http://sqlfiddle.com/#!9/55f49e/4)
SQL filtering on single column with multiple conditions
[ "mysql", "sql" ]
I have a problem with supplying a parameter to a stored procedure with `odbc`. This is my procedure; `cmd` is declared at module level as `Public cmd As OdbcCommand`: ``` Private Sub cmdapprove_Click(sender As Object, e As EventArgs) Handles cmdapprove.Click cmd = New OdbcCommand("select * from mk_cuti where mk_nik='" & txtnik.Text & "'", conn) rd = cmd.ExecuteReader rd.Read() rd.Close() Call opendb() If txtstatus.Text = 1 Then Using (conn) cmd.Connection = conn cmd.CommandType = CommandType.StoredProcedure cmd.CommandText = "sp_update_data_trans_cuti_terbawa" cmd.Parameters.AddWithValue("@mk_nik", Me.txtnik.Text) cmd.ExecuteNonQuery() End Using Dim updatestatus_hrd As String = "Update input_cuti set status_hrd=1 " & _ "where no_input='" & txtnoinput.Text & "'" cmd = New OdbcCommand(updatestatus_hrd, conn) cmd.ExecuteNonQuery() Call datacutikaryawan() Else Dim updatestatus_hrd As String = "Update input_cuti set status_hrd=1 " & _ "where no_input='" & txtnoinput.Text & "'" cmd = New OdbcCommand(updatestatus_hrd, conn) cmd.ExecuteNonQuery() Call datacutikaryawan() End If End Sub ``` When I run this procedure, I get this message: > ERROR [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Procedure > or function 'sp\_update\_data\_trans\_cuti\_terbawa' expects parameter > '@mk\_nik', which was not supplied. I hope someone can help me, please.
Here's one with your class: ``` Dim conn As New OdbcConnection(sConnString) conn.Open() Dim sqlCommand As String = "sp_update_data_trans_cuti_terbawa @mk_nik='" & Me.txtnik.Text & "'" Dim command As New OdbcCommand(sqlCommand) command.CommandType = CommandType.StoredProcedure command.Connection = conn command.ExecuteNonQuery() ``` Aight, I'm off to the nearest pub.
``` conn.execute("sp_update_data_trans_cuti_terbawa @mk_nik='" & Me.txtnik.Text & "'") ```
Parameter with odbc error
[ "sql", "sql-server", "vb.net", "odbc" ]
My two tables are: ``` PkID | HouseFk | house_extra_id | Price | discount_id 1 | 5 | 6 | 1205 | 0 PkID | HouseFk | PacketFk | Price | discount_id 1 | 6 | 7 | 500 | 0 ``` How can I combine those two into a new table which is not stored in a database but only used for output. I already tried `join` and `union all` but I can't get the desired table structure. I want the columns of the first and the second table, with `NULL` values where needed like so: ``` PkID | HouseFk | house_extra_id | Price | discount_id | PacketFk 1 | 5 | 6 | 1205 | 0 | NULL 1 | 6 | NULL | 500 | 0 | 7 ``` If I use `join` on `HouseFk` I only get combined rows where `HouseFk` value is present in both tables and `union all` leaves out some of my columns!
use `union all`, and select `NULL` value where you want to add extra values like this: ``` select PkID , HouseFk , house_extra_id , Price , discount_id, NULL AS PacketFk from table_1 union all select PkID, HouseFk, NULL AS house_extra_id , Price , discount_id, PacketFk from table_2 ```
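A minimal sketch of the padded `union all` (hypothetical table names `table_1`/`table_2`; SQLite in-memory here rather than MySQL, with the rows taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (PkID INT, HouseFk INT, house_extra_id INT, Price INT, discount_id INT);
CREATE TABLE table_2 (PkID INT, HouseFk INT, PacketFk INT, Price INT, discount_id INT);
INSERT INTO table_1 VALUES (1, 5, 6, 1205, 0);
INSERT INTO table_2 VALUES (1, 6, 7, 500, 0);
""")

rows = conn.execute("""
    SELECT PkID, HouseFk, house_extra_id, Price, discount_id, NULL AS PacketFk
    FROM table_1
    UNION ALL
    SELECT PkID, HouseFk, NULL AS house_extra_id, Price, discount_id, PacketFk
    FROM table_2
""").fetchall()
rows.sort(key=lambda r: r[1])  # sort by HouseFk for a deterministic check
# Columns a branch doesn't have come back as NULL (None in Python).
```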
it will work ``` select pkid,housefk,house_extra_id,price,discount_id,null as packetfk from T1 union all select pkid,housefk,null as house_extra_id,price,discount_id,packetfk from t2 ```
Combine two tables in a new table
[ "mysql", "sql", "database" ]
I have 3 tables: a, b and c. Tables a and b will be standalone, while table c will receive the id from both table a and b. What I would like is a query that would select the id from both table a and b, but it wouldn't select them **IF** they already (both) exist in table c. Note: There would be multiple entries in all tables. Edit: Tables a and b will have one column, whilst table c will have two (thought it was implied, my bad). A will have id, b will have id, but table c will have a\_id and b\_id. Example tables: <http://sqlfiddle.com/#!9/714a6/11>
``` Select a.id ,b.id from A a,B b WHERE NOT EXISTS( SELECT 1 FROM C WHERE a_id = a.id AND b_id = b.id ) ```
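Since the question's fiddle data isn't reproduced here, the following sketch uses made-up ids (an assumption, not the asker's data) to show the `NOT EXISTS` anti-join over the cross product, run in in-memory SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (id INT);
CREATE TABLE B (id INT);
CREATE TABLE C (a_id INT, b_id INT);
INSERT INTO A VALUES (1), (2);
INSERT INTO B VALUES (10), (20);
INSERT INTO C VALUES (1, 10);   -- this (a, b) pair already exists
""")

pairs = sorted(conn.execute("""
    SELECT a.id, b.id
    FROM A a, B b
    WHERE NOT EXISTS (SELECT 1 FROM C
                      WHERE C.a_id = a.id AND C.b_id = b.id)
""").fetchall())
# Every (a, b) combination survives except the one recorded in C.
```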
I think you mean this: ``` SELECT `id`, 'a' AS `tName` FROM `a` WHERE `ID` NOT IN ( SELECT `ID` FROM `c`) UNION ALL SELECT `id`, 'b' AS `tName` FROM `b` WHERE `ID` NOT IN ( SELECT `ID` FROM `c`); ```
MySql - Select 3 tables and exclude if
[ "mysql", "sql" ]
I have the following query: ``` SELECT WC_WARD_CATEGORY, MAX(CASE WHEN months = 'JAN' THEN BOR END) JAN, MAX(CASE WHEN months = 'FEB' THEN BOR END) FEB, MAX(CASE WHEN months = 'MAR' THEN BOR END) MAR, MAX(CASE WHEN months = 'APR' THEN BOR END) APR, MAX(CASE WHEN months = 'MAY' THEN BOR END) MAY, MAX(CASE WHEN months = 'JUN' THEN BOR END) JUN, MAX(CASE WHEN months = 'JUL' THEN BOR END) JUL, MAX(CASE WHEN months = 'AUG' THEN BOR END) AUG, MAX(CASE WHEN months = 'SEP' THEN BOR END) SEP, MAX(CASE WHEN months = 'OCT' THEN BOR END) OCT, MAX(CASE WHEN months = 'NOV' THEN BOR END) NOV, MAX(CASE WHEN months = 'DEC' THEN BOR END) DEC FROM bor2 GROUP BY WC_WARD_CATEGORY ``` It is giving me the following output: [![enter image description here](https://i.stack.imgur.com/86rhG.png)](https://i.stack.imgur.com/86rhG.png) All I want is the NULL values to go away and show me only the columns which have proper data. So I want OCT,NOV and DEC to be present in the output but not the other columns. Is it possible using SQL?
Also using dynamic SQL, but with a pivot table: ``` declare @pc varchar(max) select @pc = isnull(@pc + ',', '') + mnths from ( select distinct '[' + months + ']' mnths, convert(date, '1 ' + months + ' 1') ord from (select months from bor2 group by months, WC_WARD_CATEGORY having max(bor) is not null) as a ) as b order by ord declare @sql varchar(max) select @sql = ' select * from ( select WC_WARD_CATEGORY, months, bor from bor2 ) as SourceTable PIVOT ( max(bor) for months in (' + @pc + ') ) as PivotTable;' execute(@sql) ```
If `sql-server` then just copy the result set what you are getting to a temp table and then try the following query. Use dynamic sql. Compare the total count of rows with total rows having null. If both the counts are same then exclude that particular column else include. **Query** ``` SELECT WC_WARD_CATEGORY, MAX(CASE WHEN months = 'JAN' THEN BOR END) JAN, MAX(CASE WHEN months = 'FEB' THEN BOR END) FEB, MAX(CASE WHEN months = 'MAR' THEN BOR END) MAR, MAX(CASE WHEN months = 'APR' THEN BOR END) APR, MAX(CASE WHEN months = 'MAY' THEN BOR END) MAY, MAX(CASE WHEN months = 'JUN' THEN BOR END) JUN, MAX(CASE WHEN months = 'JUL' THEN BOR END) JUL, MAX(CASE WHEN months = 'AUG' THEN BOR END) AUG, MAX(CASE WHEN months = 'SEP' THEN BOR END) SEP, MAX(CASE WHEN months = 'OCT' THEN BOR END) OCT, MAX(CASE WHEN months = 'NOV' THEN BOR END) NOV, MAX(CASE WHEN months = 'DEC' THEN BOR END) DEC INTO #temp FROM bor2 GROUP BY WC_WARD_CATEGORY; ``` Then, ``` declare @strsql varchar(max) set @strsql = 'select ' set @strsql += (select case when (select COUNT(*) from #temp where JAN is null ) <> (select count(*) from #temp ) then 'JAN, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where FEB is null) <> (select count(*) from #temp ) then 'FEB, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where MAR is null) <> (select count(*) from #temp ) then 'MAR, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where APR is null) <> (select count(*) from #temp ) then 'APR, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where MAY is null) <> (select count(*) from #temp ) then 'MAY, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where JUN is null) <> (select count(*) from #temp ) then 'JUN, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where JUL is null) <> (select count(*) from #temp ) then 'JUL, ' else '' end) set @strsql += (select 
case when (select COUNT(*) from #temp where AUG is null) <> (select count(*) from #temp ) then 'AUG, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where SEP is null) <> (select count(*) from #temp ) then 'SEP, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where OCT is null) <> (select count(*) from #temp ) then 'OCT, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where NOV is null) <> (select count(*) from #temp ) then 'NOV, ' else '' end) set @strsql += (select case when (select COUNT(*) from #temp where DEC is null) <> (select count(*) from #temp ) then 'DEC, ' else '' end) set @strsql = LEFT(@strsql,len(@strsql) -1) set @strsql += ' from #temp' exec (@strsql) ```
How to ignore a column in select statement if it has null vales
[ "sql", "sql-server" ]
My department parameter may contain a value of one or more department ids. ``` SELECT DEPARTMENT,c.DEPTNAME as DEPARTMENT_DESCRIPTION, c.LOCATION as LOCATION, sum(b.qty) as QTY, sum(b.cost) as 'COST', sum(b.nbv) AS 'NBV' FROM FIX_ASSET A right join .SAMPLING b on ASSET_NUMBER=b.ASSET_NUMBER left join DEPARTMENT C on A.DEPARTMENT=C.DEPTID where BUSINESS_UNIT=1227 and b.year_audit=2016 and taggable='YES' group by department, c.DEPTNAME, c.LOCATION ``` What I want is to add a case/if statement inside the `where` clause ``` case when (select count(*) from #tempdept) > 0 then and fix_asset.department in (select item from #tempdept) ``` If the row count of #tempdept is greater than 0, I want to add the condition `And department in (select item from #tempdept)`
You don't need a case statement for this. Just make your WHERE clause ``` where BUSINESS_UNIT=1227 and b.year_audit=2016 and taggable='YES' and ( ( (select count(*) from #tempdept) > 0 and fix_asset.department in (select item from #tempdept) ) or (select count(*) from #tempdept) = 0 ) ```
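The trick can be sketched with throwaway tables (SQLite here instead of SQL Server's `#temp` tables, with hypothetical rows): when the filter table is empty the second branch of the `OR` lets every row through; once it has rows, only matching departments survive.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fix_asset (id INT, department INT);
INSERT INTO fix_asset VALUES (1, 10), (2, 20), (3, 30);
CREATE TABLE tempdept (item INT);   -- stands in for #tempdept
""")

query = """
    SELECT id FROM fix_asset
    WHERE ((SELECT COUNT(*) FROM tempdept) > 0
           AND department IN (SELECT item FROM tempdept))
       OR (SELECT COUNT(*) FROM tempdept) = 0
"""
no_filter = sorted(r[0] for r in conn.execute(query))  # tempdept empty: keep all
conn.execute("INSERT INTO tempdept VALUES (20)")
filtered = sorted(r[0] for r in conn.execute(query))   # tempdept populated: keep matches
```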
You can try with this syntax, with two nested `CASE` statements: ``` case when (select count(*) from #tempdept) > 0 then case when fix_asset.department in (select item from #tempdept) then 'dosomething' else 'dosomething' end else 'dosomething' end AS 'something' ```
if/case statement in where clause
[ "sql" ]
I have a table of driver licenses per person and know when a person acquired the drivers license. The validity of the drivers license can either be X days from the day you acquired it or a specific date. ``` acquired relative specific_date valid_type expiration_date ---------------------------------------------------------------------------------- 2015-02-05 500 null days 2015-02-05 null 2016-03-05 date 2015-02-05 200 null days 2015-02-05 null 2016-02-22 date ``` My query right now would be: ``` SELECT acquired, relative_date, specific_date, valid_type FROM person_drivers_license WHERE (valid_type = 'days' AND (EXTRACT(epoch FROM acquired) - EXTRACT(epoch FROM now()))/86400 + relative_date > 0) OR (valid_type = 'DATE' AND specific_date >= now()); ``` I am trying to add an expiration\_date column with the select statement above. If it is a specific date, just take the date and put it in expiration\_date, and if it is a relative date, calculate the expiration date with the help of the acquired date. Is this possible in PSQL?
First - there is a simpler way to do date math in postgres. You can use something like: ``` acquired + relative_date * interval '1 day' >= current_date ``` or ``` acquired + relative_date >= current_date -- any integer can be treated as an interval in days for date mathematics in SQL ``` For the question - try one of these: ``` CASE WHEN valid_type = 'days' THEN acquired + relative_date * interval '1 day' WHEN valid_type = 'date' THEN specific_date --ELSE ??? you may specify something here END ``` or ``` COALESCE(specific_date, acquired + relative_date * interval '1 day') ``` The query may look like: ``` SELECT acquired, relative_date, specific_date, valid_type, COALESCE(specific_date, acquired + relative_date * interval '1 day') as valid_date FROM person_drivers_license WHERE COALESCE(specific_date, acquired + relative_date * interval '1 day') >= current_date ```
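The `COALESCE` idea carries over to other engines too; here is a sketch in SQLite (which spells the interval arithmetic differently, `date(acquired, '+N days')` instead of Postgres's `interval` math), using rows from the question and cross-checking the relative branch with Python's own date arithmetic:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person_drivers_license (
    acquired TEXT, relative_date INT, specific_date TEXT, valid_type TEXT);
INSERT INTO person_drivers_license VALUES
    ('2015-02-05', 500, NULL, 'days'),
    ('2015-02-05', NULL, '2016-03-05', 'date');
""")

rows = conn.execute("""
    SELECT valid_type,
           COALESCE(specific_date,
                    date(acquired, '+' || relative_date || ' days')) AS expiration_date
    FROM person_drivers_license
""").fetchall()

# Independent check: 500 days after 2015-02-05, computed in Python.
expected_relative = str(date(2015, 2, 5) + timedelta(days=500))
```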
Try this: ``` SELECT acquired, relative_date, specific_date, valid_type CASE valid_type WHEN 'days' THEN acquired + relative_date WHEN 'date' THEN specific_date ELSE NULL END AS expiration_date FROM person_drivers_license; ```
Add additional column in a SELECT statement based on values of other columns PSQL
[ "sql", "postgresql", "select", "aggregate-functions", "psql" ]
I am new to SQL and I am no good with more advanced queries and functions. So, I have this 1 table with sales: ``` id date seller_name buyer_name ---- ------------ ------------- ------------ 1 2015-02-02 null Adrian 1 2013-05-02 null John B 1 2007-11-15 null Chris F 2 2014-07-12 null Jane A 2 2011-06-05 null Ted D 2 2010-08-22 null Maryanne A 3 2015-12-02 null Don P 3 2012-11-07 null Chris T 3 2011-10-02 null James O ``` I would like to update the `seller_name` for each id, by putting the `buyer_name` from the previous sale as the `seller_name` of the newer sale. For example, for id 1, John B would then be the seller on `2015-02-02` and the buyer on `2013-05-02`. Does that make sense? P.S. This is the perfect case; the table is big and the ids are not ordered so neatly.
``` merge into your_table a using ( select rowid rid, lead(buyer_name, 1) over (partition by id order by date desc) seller from your_table ) b on (a.rowid = b.rid ) when matched then update set a.seller_name= b.seller; ``` Explanation: the `MERGE INTO` statement performs different operations based on matched / not-matched criteria. Here you merge into your own table, with the `USING` clause supplying the new values that you want to take and also the rowid, which will be your matching key. The `LEAD` function gets the result from a following row, depending on the offset you specify after the comma. After specifying how many rows to jump, you also specify what window to work on, which in your case is partitioned by `id` and ordered by date descending, so you get the seller, who was the previous buyer. Hope this clears it up a bit.
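SQLite has no `MERGE`, but the `LEAD` half of the idea can be previewed on its own (window functions need SQLite ≥ 3.25; sample rows from the question, column renamed to `sale_date` as an assumption; this is a read-only sketch, the actual update would still be the Oracle `MERGE` above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (id INT, sale_date TEXT, buyer_name TEXT);
INSERT INTO sales VALUES
    (1, '2015-02-02', 'Adrian'),
    (1, '2013-05-02', 'John B'),
    (1, '2007-11-15', 'Chris F');
""")

rows = conn.execute("""
    SELECT sale_date, buyer_name,
           LEAD(buyer_name, 1) OVER (PARTITION BY id
                                     ORDER BY sale_date DESC) AS seller_name
    FROM sales
    ORDER BY sale_date DESC
""").fetchall()
# The buyer of the previous (older) sale becomes the seller of the newer one;
# the oldest sale has no predecessor, so its seller stays NULL.
```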
Either of the below query can be used to perform the desire action ``` merge into sandeep24nov16_2 table1 using(select rowid r, lag(buyer_name) over (partition by id order by "DATE" asc) update_value from sandeep24nov16_2 ) table2 on (table1.rowid=table2.r) when matched then update set table1.seller_name=table2.update_value; ``` or ``` merge into sandeep24nov16_2 table1 using(select rowid r, lead(buyer_name) over (partition by id order by "DATE" desc) update_value from sandeep24nov16_2 ) table2 on (table1.rowid=table2.r) when matched then update set table1.seller_name=table2.update_value; ```
update in oracle sql : multiple rows in 1 table
[ "sql", "oracle", "sql-update" ]
I'm trying to solve a SQL problem online that has 2 tables as following, **Employee** table The Employee table holds all employees. ``` +----+-------+--------+--------------+ | Id | Name | Salary | DepartmentId | +----+-------+--------+--------------+ | 1 | Joe | 70000 | 1 | | 2 | Henry | 80000 | 2 | | 3 | Sam | 60000 | 2 | | 4 | Max | 90000 | 1 | | 5 | Janet | 69000 | 1 | | 6 | Randy | 85000 | 1 | +----+-------+--------+--------------+ ``` **Department** table [![enter image description here](https://i.stack.imgur.com/xeQhS.png)](https://i.stack.imgur.com/xeQhS.png) The SQL should return the following data [![enter image description here](https://i.stack.imgur.com/JvkTx.png)](https://i.stack.imgur.com/JvkTx.png) I have the SQL query as follows, ``` SELECT D.Name AS Department, E.Name AS Employee, E.Salary AS Salary FROM Employee E INNER JOIN Department D ON E.DepartmentId = D.Id WHERE (SELECT COUNT(DISTINCT(Salary)) FROM Employee WHERE DepartmentId = E.DepartmentId AND Salary > E.Salary) < 3 ORDER by E.DepartmentId, E.Salary DESC; ``` The SQL works, but I'm a little confused about the `< 3` part. Shouldn't it be `= 3`, as the question asks about the top 3 salaries? I would appreciate an explanation of the SQL.
If you added a column `Count employees who earn more` your table would look like this ``` +----+-------+--------+--------------+-------------------------------+ | Id | Name | Salary | DepartmentId | Count employees who earn more | +----+-------+--------+--------------+-------------------------------+ | 1 | Joe | 70000 | 1 | 2 | | 2 | Henry | 80000 | 2 | 0 | | 3 | Sam | 60000 | 2 | 1 | | 4 | Max | 90000 | 1 | 0 | | 5 | Janet | 69000 | 1 | 3 | | 6 | Randy | 85000 | 1 | 1 | +----+-------+--------+--------------+-------------------------------+ ``` Then to find the top 3 per dept. your WHERE would be ``` WHERE `Count employees who earn more` < 3 ``` If you had `=3` it would return only the employees that were the 4th highest Since you don't have that column, that's what this SQL does ``` (SELECT COUNT(DISTINCT(Salary)) FROM Employee WHERE DepartmentId = E.DepartmentId AND Salary > E.Salary) ``` If you wanted to produce the table described above you could do the following ``` SELECT D.Name AS Department, E.Name AS Employee, E.Salary AS Salary, Count(E2.Salary) as Count_employees_who_earn_more FROM Employee E INNER JOIN Department D ON E.DepartmentId = D.Id LEFT JOIN Employee E2 ON e2.DepartmentId = E.DepartmentId AND E2.Salary > E.Salary GROUP BY D.Name , E.Name , E.Salary ``` [Demo](http://sqlfiddle.com/#!9/c75705/25)
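The counting argument can be verified directly; an illustrative run of the `< 3` filter in in-memory SQLite, using the Employee data from the question (department names omitted since the Department table is only shown as an image):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (Id INT, Name TEXT, Salary INT, DepartmentId INT);
INSERT INTO Employee VALUES
    (1, 'Joe',   70000, 1), (2, 'Henry', 80000, 2),
    (3, 'Sam',   60000, 2), (4, 'Max',   90000, 1),
    (5, 'Janet', 69000, 1), (6, 'Randy', 85000, 1);
""")

names = [r[0] for r in conn.execute("""
    SELECT Name FROM Employee e1
    WHERE 3 > (SELECT COUNT(DISTINCT Salary) FROM Employee e2
               WHERE e2.Salary > e1.Salary
                 AND e2.DepartmentId = e1.DepartmentId)
    ORDER BY DepartmentId, Salary DESC
""")]
# Janet is the only one filtered out: 3 distinct higher salaries in dept 1.
```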
I was working on the same SQL problem. Just in case someone may need help. Here's the answer I came up with. ``` SELECT dpt.Name AS Department, e1.Name AS Employee, e1.Salary AS Salary FROM Employee AS e1 INNER JOIN Department dpt ON e1.DepartmentID = dpt.Id WHERE 3 > ( SELECT COUNT(DISTINCT Salary) FROM Employee AS e2 WHERE e2.Salary > e1.Salary AND e1.DepartmentID = e2.DepartmentID ) ORDER BY Department ASC, Salary DESC; ``` 1. The hard part is to get the top 3 salaries of each department. I first count the **[number of employees with a higher salary]**. After that, I use **3 > [number of employees with a higher salary]** to keep the top 3 salaries only. (If there are more than 3 employees in top 3, which is to say some of them have the same salary, all of them will be included.) **Query** ``` SELECT * FROM Employee e1 WHERE 3 > ( SELECT COUNT(DISTINCT Salary) FROM Employee e2 WHERE e2.Salary > e1.Salary AND e1.DepartmentID = e2.DepartmentID ); ``` **Output** ``` +------+-------+--------+--------------+ | Id | Name | Salary | DepartmentId | +------+-------+--------+--------------+ | 1 | Joe | 70000 | 1 | | 2 | Henry | 80000 | 2 | | 3 | Sam | 60000 | 2 | | 4 | Max | 90000 | 1 | | 6 | Randy | 85000 | 1 | +------+-------+--------+--------------+ ``` 2. Then it's the easy part. You can just join this table with Department on DepartmentID to get the department name. **Final Output** ``` +------------+----------+--------+ | Department | Employee | Salary | +------------+----------+--------+ | IT | Max | 90000 | | IT | Randy | 85000 | | IT | Joe | 70000 | | Sales | Henry | 80000 | | Sales | Sam | 60000 | +------------+----------+--------+ ```
How to select the top 3 salaries of the department?
[ "mysql", "sql" ]
In SQL, how do I check whether the due date is 3 days away from today? For example, I have a tax table in which there is a field called duedate, and I want to find all the records for which the due date is 3 days away from today.
If your `duedate` column is `date datatype` then use [`trunc`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions201.htm), ``` select * from tax where trunc(sysdate) - trunc(duedate) = 3 ```
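An engine-neutral sketch of the same day-difference filter (SQLite's `julianday` standing in for Oracle date arithmetic, and a fixed hypothetical "today" instead of `sysdate` so the check is reproducible; note the answer's `sysdate - duedate` form matches due dates 3 days in the past, while this sketch looks 3 days ahead, so flip the subtraction for the other direction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tax (id INT, duedate TEXT);
INSERT INTO tax VALUES (1, '2016-11-04'),   -- 3 days after 'today'
                       (2, '2016-10-29'),   -- 3 days before 'today'
                       (3, '2016-11-10');
""")

today = '2016-11-01'  # stand-in for trunc(sysdate)
due_in_3_days = [r[0] for r in conn.execute(
    "SELECT id FROM tax WHERE julianday(duedate) - julianday(?) = 3", (today,))]
```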
You can try like this: ``` select * from tax where trunc(sysdate) - to_date(myDate, 'yyyy-mm-dd') = 3 ```
Sql query to check the due date is 3 days away from today?
[ "sql", "oracle" ]
I have 5 tables: 1. tblProvince 2. tblDivison 3. tblDistrict 4. tblCity 5. tblconstituency `tblProvince`: * ID * Name * Area `TblDivision`: * ID * Name * Area * Province\_id `tblDistrict`: * ID * Name * Area * Province\_id * Division\_id `tblCity`: * ID * Name * Area * Province\_id * Division\_id * District\_id `TblConstituency`: * ID * Name * Area * Province\_id * Division\_id * District\_id * City\_id Is this the right way to store data, adding the complete detail of the area? Or should I just save the id of the parent area (a city is the container of a constituency), so to store the detail of a constituency I should add city\_id to it, like this... ``` tblCity +-------------+ | ID | +-------------+ | Name | +-------------+ | Area | +-------------+ | District_id | +-------------+ TblConstituency +-------------+ | ID | +-------------+ | Name | +-------------+ | Area | +-------------+ | City_id | +-------------+ ``` If I want to get the complete detail about a constituency, I could get the id of the city, from the city I could get the district, from the district the division, and so on. But I feel it's a massive overkill; it will be a long query every time I need to get the detail. So, what's the best way? P.S: apologies for the poor way of describing my question.
Never store the same information twice. This means you would have to *manually* keep that information synchronized, and synchronization is difficult and error-prone. Basically, any time you have multiple sources of truth, you have zero sources of truth. Consider your tables here: ``` Division +-------------+ | ID | +-------------+ | Name | +-------------+ | Area | +-------------+ | Province_id | +-------------+ District +-------------+ | ID | +-------------+ | Name | +-------------+ | Area | +-------------+ | Province_id | +-------------+ | Division_id | +-------------+ ``` `Division` is already storing the `Province_id`. So why does `District` need to store it as well? What happens if `District` stores a *different* `Province_id` than its corresponding `Division` record? Which one is correct for that `District`? Just link to the direct parent record: ``` District +-------------+ | ID | +-------------+ | Name | +-------------+ | Area | +-------------+ | Division_id | +-------------+ ``` The information, by relation to the `Division` table, already exists and can be queried. (Basically, that's what the `JOIN` keyword is for.) Since you already have the information, you don't need to repeat it.
Normalization through BCNF is based on functional dependencies. What are the functional dependencies in data like this? What are the candidate keys? ``` Cities State County City -- Alabama Pike Troy Arkansas Pike Delight Florida Bay Springfield Maine Penobscot Springfield ``` Here there's only one (trivial) functional dependency and only one candidate key. The only FD is State, County, City -> State, County, City. The only candidate key is {State, County, City}. This relation is in at least 5NF. You *can't* improve this relation, but you *can* improve the database. The database doesn't know that there's no county named "Los Angeles" in Alabama. So it will let you insert this invalid row. ``` Cities State County City -- Alabama Los Angeles Troy ``` To fix *that* problem, add a relation containing all the valid counties, and set a foreign key reference. ``` Counties State -- Alabama Autauga Alabama Baldwin ... Alabama Pike ... California Los Angeles ... ``` The relation "Counties" is all key, and it has no non-prime attributes. "Counties" is also in at least 5NF. The database still doesn't know that it shouldn't allow rows like this. ``` Cities State County City -- Wales Pike Troy ``` There's no state named *Wales* in the USA. Fix this problem the same way as the last problem. ``` States -- Alabama Arkansas ... California ... ``` And set a foreign key reference from Counties to States. Here's what it would look like in standard SQL, except that I didn't supply all 50 states or all 3000+ counties. ``` create table states ( state varchar(100) primary key ); insert into states values ('Alabama'), ('Arkansas'), ('California'), ('Florida'), ('Maine'); -- and more . . . 
create table counties ( county varchar(100) not null, state varchar(100) not null, primary key (county, state), foreign key (state) references states (state) on update restrict on delete restrict ); insert into counties values ('Autauga', 'Alabama'), ('Baldwin', 'Alabama'), ('Pike', 'Alabama'), ('Pike', 'Arkansas'), ('Los Angeles', 'California'), ('Bay', 'Florida'), ('Penobscot', 'Maine'); -- and more . . . create table cities ( city varchar(100) not null, county varchar(100) not null, state varchar(100) not null, primary key (city, county, state), foreign key (county, state) references counties (county, state) on update restrict on delete restrict ); insert into cities values ('Troy', 'Pike', 'Alabama'), ('Delight', 'Pike', 'Arkansas'), ('Springfield', 'Penobscot', 'Maine'), ('Springfield', 'Bay', 'Florida'); -- and more . . . ``` Now you'll find that it's impossible to insert the invalid tuples {Troy, Los Angeles, Alabama} and {Troy, Pike, Wales}. Using surrogate ID numbers instead of natural keys doesn't change the normal forms. But it *does* change how the database works. And not necessarily in a good way. Using the SQL tables above, this update will fail. ``` update states set state = 'Wibble' where state = 'Alabama'; ``` And that's a Good Thing. Let's build these tables with surrogate ID numbers instead. ``` create table states ( state_id integer primary key, state varchar(100) not null unique ); insert into states values (1, 'Alabama'), (2, 'Arkansas'), (3, 'California'), (4, 'Florida'), (5, 'Maine'); -- and more . . . create table counties ( county_id integer not null, county varchar(100) not null, state_id integer not null, foreign key (state_id) references states (state_id) on update restrict on delete restrict, primary key (county_id, state_id), unique (county, state_id) ); insert into counties values (1, 'Autauga', 1), (2, 'Baldwin', 1), (3, 'Pike', 1), (4, 'Pike', 2), (5, 'Los Angeles', 3), (6, 'Bay', 4), (7, 'Penobscot', 5); -- and more . . . 
create table cities ( city_id integer not null, city varchar(100) not null, county_id integer not null, state_id integer not null, foreign key (county_id, state_id) references counties (county_id, state_id) on update restrict on delete restrict, primary key (city_id, county_id, state_id), unique (city, county_id, state_id) ); insert into cities values (1, 'Troy', 3, 1), (2, 'Delight', 4, 2), (3, 'Springfield', 7, 5), (4, 'Springfield', 6, 4); -- and more . . . ``` All three of these tables are still in at least 5NF. But this (invalid) update will now succeed. ``` update states set state = 'Wibble' where state = 'Alabama'; ``` That's a Bad Thing. Using surrogate ID numbers makes every foreign key reference have the same behavior as declaring them `on update cascade`. To recover part of the semantics of `on update restrict`, you have to take the extra, unintuitive step of revoking update permissions on the referenced tables. Almost *nobody* gets that part right. --- There's no realtional principle that justifies splitting primary keys in order to have a path you *can* follow to recover the original relation. In other words, there's no *relational* principle that justifies changing this... ``` Cities city_id city county_id state_id -- 1 Troy 3 2 ``` ...to this. ``` Cities city_id city county_id -- 1 Troy 3 ``` ``` Counties county_id county state_id -- 3 Pike 1 ``` Not only is there *no* relational principle that justifies splitting primary keys, it *creates* one of the problems the relational model of data intended to solve. Look up "IMS", a hierarchical database management system that requires the user to follow paths through the data files.
Database Normalization confusion?
[ "sql", "normalization" ]
I need to execute insert and delete queries in a single statement like this ``` INSERT INTO COMPANY (ID,NAME) VALUES (1, 'Paul');DELETE FROM COMPANY WHERE ID='12'; ``` This is my node.js code for executing the query ``` pg.connect(pgConString, function (err, client, done) { if (err) { callBack("DB connection failed. " + err, null); return; } var query = client.query({ text: "INSERT INTO COMPANY (ID,NAME) VALUES (1, 'Paul');DELETE FROM COMPANY WHERE ID='12';", values: [1, "Poul1"], name: "insertQuery" }); query.on("error", function (err) { callBack("DB insertion failed. Error Message: " + err, null); return; }); query.on('end', function (result) { done(); return; }); }); ``` I got an error message like this > error: cannot insert multiple commands into a prepared statement Is it possible to execute multiple queries against a PostgreSQL database using node.js?
Try like this ``` pg.connect(pgConString, function (err, client, done) { if (err) { callBack("DB connection failed. " + err, null); return; } client.query({ text: "INSERT INTO COMPANY (ID,NAME) VALUES (1, 'Paul');", values: [1, "Poul1"], name: "insertQuery" }); client.query({ text: "DELETE FROM COMPANY WHERE ID='12';", name: "deleteQuery" }); client.on("error", function (err) { callBack("DB insertion failed. Error Message: " + err, null); return; }); }); ```
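The same restriction exists in other drivers' prepared-statement paths, so splitting the commands is the general fix. As an illustrative parallel (not node-postgres), Python's `sqlite3` module draws the identical line between a single prepared `execute` and separately issued statements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (id INT, name TEXT)")

# A single prepared execute refuses multiple commands...
try:
    conn.execute("INSERT INTO company VALUES (1, 'Paul'); "
                 "DELETE FROM company WHERE id = 12;")
    multi_rejected = False
except (sqlite3.Warning, sqlite3.ProgrammingError):
    multi_rejected = True

# ...while issuing the statements one at a time works fine.
conn.execute("INSERT INTO company VALUES (?, ?)", (1, 'Paul'))
conn.execute("DELETE FROM company WHERE id = ?", (12,))
count = conn.execute("SELECT COUNT(*) FROM company").fetchone()[0]
```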
Although there is an accepted answer, it's a bit obsolete. For now node-postgres handles multiple queries in one call and returns a neat little array to you, like: ``` const db.query('select 1; select 2; select 3;') results.map(r => (r.rows[0]['?column?'])) // [ 1, 2, 3 ] ``` There is also an alternative 'opinionated' library, called [pg-promise](https://www.npmjs.com/package/pg-promise), which also accepts query chains in one call and works with `sql` files as well.
Execute multiple queries in a single statement using postgres and node js
[ "sql", "node.js", "postgresql" ]
``` SELECT DISTINCT c.ID FROM tbl_Case c INNER JOIN tbl_RequestBaseRequest b ON CaseId = c.ID WHERE AreCalculationsCompleted = 0 AND b.IsApplicantRequest = 1 and c.IsArchived=0 AND (b.ID IN (SELECT DISTINCT ClientRequestId FROM tbl_Response) OR b.OldClientRequestId IN (SELECT DISTINCT ClientRequestId FROM tbl_Response)) ``` > What should be the alternative of OR, this OR is making this query > really slow.
You might try removing the `distinct` inside the subqueries and being sure you have an index on `tbl_Response(ClientRequestId)`: ``` SELECT DISTINCT c.ID FROM tbl_Case c INNER JOIN tbl_RequestBaseRequest b ON CaseId = c.ID WHERE AreCalculationsCompleted = 0 AND b.IsApplicantRequest = 1 and c.IsArchived = 0 AND (b.ID IN (SELECT ClientRequestId FROM tbl_Response) OR b.OldClientRequestId IN (SELECT ClientRequestId FROM tbl_Response) ); ``` Removing the outer `DISTINCT` (if it is not necessary) will also boost performance. Other indexes might help, but it is not possible to be specific because you haven't qualified `AreCalculationsCompleted`.
``` SELECT DISTINCT c.ID FROM tbl_Case c INNER JOIN tbl_RequestBaseRequest b ON CaseId = c.ID WHERE AreCalculationsCompleted = 0 AND b.IsApplicantRequest = 1 and c.IsArchived=0 AND exists (SELECT 1 FROM tbl_Response t WHERE t.ClientRequestId = b.ID OR t.ClientRequestId = b.OldClientRequestId ) ```
SQL server alternative of OR in where condition
[ "sql", "sql-server" ]
I have the following table '`S3results`: ``` +-----------+----------+------------------+-------+ | Studentno | Fullname | Subject | Fmagg | +-----------+----------+------------------+-------+ | 100509 | Terry | Accounts | 1 | | 100509 | Terry | Art | 6 | | 100509 | Terry | Biology | 3 | | 100509 | Terry | Chemistry | 2 | | 100509 | Terry | Commerce | 2 | | 100509 | Terry | Computer Studies | 4 | | 100509 | Terry | English | 6 | | 100509 | Terry | Geography | 1 | | 100509 | Terry | History | 1 | | 100509 | Terry | Mathematics | 3 | | 100509 | Terry | Physics | 1 | | 100510 | Sena | Accounts | 4 | | 100510 | Sena | Art | 1 | | 100510 | Sena | Biology | 5 | | 100510 | Sena | Chemistry | 1 | | 100510 | Sena | Commerce | 3 | | 100510 | Sena | Computer Studies | 3 | | 100510 | Sena | English | 4 | | 100510 | Sena | Geography | 1 | | 100510 | Sena | History | 4 | | 100510 | Sena | Mathematics | 1 | | 100510 | Sena | Physics | 2 | | 100511 | Cristen | Accounts | 2 | | 100511 | Cristen | Art | 1 | | 100511 | Cristen | Biology | 2 | | 100511 | Cristen | Chemistry | 1 | | 100511 | Cristen | Commerce | 5 | | 100511 | Cristen | Computer Studies | 3 | | 100511 | Cristen | English | 6 | | 100511 | Cristen | Geography | 1 | | 100511 | Cristen | History | 1 | | 100511 | Cristen | Mathematics | 2 | | 100511 | Cristen | Physics | 6 | +-----------+----------+------------------+-------+ ``` What i want is to select 8 subjects with the lowest scores for each student in the `fmagg` column but English should be included in the results irrespective of their score. 
Below is the result I want:

```
+-----------+----------+------------------+-------+
| Studentno | Fullname | Subject          | Fmagg |
+-----------+----------+------------------+-------+
|    100509 | Terry    | Accounts         |     1 |
|    100509 | Terry    | Geography        |     1 |
|    100509 | Terry    | History          |     1 |
|    100509 | Terry    | Physics          |     1 |
|    100509 | Terry    | Chemistry        |     2 |
|    100509 | Terry    | Commerce         |     2 |
|    100509 | Terry    | Biology          |     3 |
|    100509 | Terry    | English          |     6 |
|    100510 | Sena     | Art              |     1 |
|    100510 | Sena     | Chemistry        |     1 |
|    100510 | Sena     | Geography        |     1 |
|    100510 | Sena     | Mathematics      |     1 |
|    100510 | Sena     | Physics          |     2 |
|    100510 | Sena     | Commerce         |     3 |
|    100510 | Sena     | Computer Studies |     3 |
|    100510 | Sena     | English          |     4 |
|    100511 | Cristen  | Art              |     1 |
|    100511 | Cristen  | Chemistry        |     1 |
|    100511 | Cristen  | Geography        |     1 |
|    100511 | Cristen  | History          |     1 |
|    100511 | Cristen  | Accounts         |     2 |
|    100511 | Cristen  | Biology          |     2 |
|    100511 | Cristen  | Mathematics      |     2 |
|    100511 | Cristen  | English          |     6 |
+-----------+----------+------------------+-------+
```

Someone proposed the solution below:

```
;WITH cte AS
(
    SELECT Studentno, Fullname, [Subject], Fmagg,
           CASE WHEN [Subject] IN ('Mathematics', 'English') THEN 0
                ELSE ROW_NUMBER() OVER (PARTITION BY Studentno ORDER BY Fmagg ASC)
           END AS Ranking
    FROM S3Results
)
SELECT Studentno, Fullname, [Subject], Fmagg
FROM cte
WHERE Ranking < 8
ORDER BY Studentno, Fmagg
```

The above code worked for me until I realized that it's excluding values equal to or greater than 5 unless it's English. For example, given a student who scored the following: 5,2,3,5,2,3,3,4,7,7,1, running that query will produce: 3,1,2,2,3,3,4. As you can see those are 7 rows and not 8 as I wanted. Thanks in advance.
Try this: ``` SELECT Studentno, Fullname, Subject, Fmagg FROM ( SELECT Studentno, Fullname, Subject, Fmagg, ROW_NUMBER() OVER (PARTITION BY Studentno ORDER BY CASE WHEN Subject = 'English' THEN -1 ELSE Fmagg END, Subject) AS rn FROM S3results) AS t WHERE t.rn <= 8 ORDER BY Studentno, Fullname, Fmagg, Subject ``` `ROW_NUMBER` will enumerate records within each student partition placing `'English'` subject always on the first place. After that the rest of the subjects will follow in ascending score order. **Output:** ``` Studentno Fullname Subject Fmagg ===================================== 100509 Terry Accounts 1 100509 Terry Geography 1 100509 Terry History 1 100509 Terry Physics 1 100509 Terry Chemistry 2 100509 Terry Commerce 2 100509 Terry Biology 3 100509 Terry English 6 100510 Sena Art 1 100510 Sena Chemistry 1 100510 Sena Geography 1 100510 Sena Mathematics 1 100510 Sena Physics 2 100510 Sena Commerce 3 100510 Sena Computer Studies 3 100510 Sena English 4 100511 Cristen Art 1 100511 Cristen Chemistry 1 100511 Cristen Geography 1 100511 Cristen History 1 100511 Cristen Accounts 2 100511 Cristen Biology 2 100511 Cristen Mathematics 2 100511 Cristen English 6 ```
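A quick way to convince yourself that the `CASE`-based `ROW_NUMBER` ordering keeps English plus the seven lowest other scores is to run it against a small in-memory copy of the data. This sketch uses SQLite via Python's `sqlite3` (window functions need SQLite 3.25+, bundled with recent Python versions) and only Terry's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE S3results (Studentno INT, Fullname TEXT, Subject TEXT, Fmagg INT)")
rows = [
    (100509, "Terry", "Accounts", 1), (100509, "Terry", "Art", 6),
    (100509, "Terry", "Biology", 3), (100509, "Terry", "Chemistry", 2),
    (100509, "Terry", "Commerce", 2), (100509, "Terry", "Computer Studies", 4),
    (100509, "Terry", "English", 6), (100509, "Terry", "Geography", 1),
    (100509, "Terry", "History", 1), (100509, "Terry", "Mathematics", 3),
    (100509, "Terry", "Physics", 1),
]
conn.executemany("INSERT INTO S3results VALUES (?, ?, ?, ?)", rows)

# English gets sort key -1 so it always ranks first; everything else by score.
picked = conn.execute("""
    SELECT Subject, Fmagg FROM (
        SELECT Subject, Fmagg,
               ROW_NUMBER() OVER (
                   PARTITION BY Studentno
                   ORDER BY CASE WHEN Subject = 'English' THEN -1 ELSE Fmagg END,
                            Subject) AS rn
        FROM S3results) AS t
    WHERE t.rn <= 8
""").fetchall()
print(picked)
```

Note the alphabetical tie-break: Biology (3) beats Mathematics (3), matching the desired output for Terry.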
``` select b.Studentno, b.Fullname, b.Subject, b.Fmagg from ( select row_number() over(partition by c.Studentno order by case when c.Subject = 'English' then 0 else c.Fmagg end asc) rn ,c.Studentno ,c.Subject ,c.Fullname ,c.Fmagg from S3results c ) as b where b.rn <= 8 order by b.Studentno, b.Fullname, b.Fmagg ```
How to return the lowest 8 values in a table of scores in sql with a particular value included in each result
[ "", "sql", "sql-server", "" ]
I created a little SQL fiddle: <http://sqlfiddle.com/#!9/1cbee/1>

I would like to sort these values with SQL:

```
INSERT INTO ForgeRock
    (`id`, `title`, `startdate`, `enddate`)
VALUES
    (1, 'Test 1', '2015-12-01 13:30:00', '0000-00-00 00:00:00'),
    (2, 'Test 2', '2015-12-01 14:30:00', '0000-00-00 00:00:00'),
    (3, 'Test 3', '2016-01-01 10:30:00', '2016-01-01 14:30:00'),
    (4, 'Test 4', '2016-01-01 09:30:00', '2016-01-01 15:30:00');
```

The problem is: I would like to sort by the start date - if there is an end date, I would like to sort by the end date instead. The result should be:

```
(2, 'Test 2', '2015-12-01 14:30:00', '0000-00-00 00:00:00'),
(1, 'Test 1', '2015-12-01 13:30:00', '0000-00-00 00:00:00'),
(4, 'Test 4', '2016-01-01 09:30:00', '2016-01-01 15:30:00'),
(3, 'Test 3', '2016-01-01 10:30:00', '2016-01-01 14:30:00')
```

So the first items should be shown in descending order. All items with an end date should be displayed after (!) the items with no end date. Any ideas how to solve that?
Try this: ``` SELECT title, startdate, enddate FROM ForgeRock ORDER BY CASE WHEN enddate = '0000-00-00 00:00:00' THEN 0 ELSE 1 END, enddate DESC, startdate DESC ``` This will display all items without an end date first, followed by all items with an end date.
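Here is a quick check of that `ORDER BY CASE` trick, sketched with SQLite via Python's `sqlite3`. Dates are stored as plain text, which is an assumption, but the comparisons behave the same way as with MySQL's zero-date sentinel:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ForgeRock (id INT, title TEXT, startdate TEXT, enddate TEXT)")
conn.executemany("INSERT INTO ForgeRock VALUES (?, ?, ?, ?)", [
    (1, "Test 1", "2015-12-01 13:30:00", "0000-00-00 00:00:00"),
    (2, "Test 2", "2015-12-01 14:30:00", "0000-00-00 00:00:00"),
    (3, "Test 3", "2016-01-01 10:30:00", "2016-01-01 14:30:00"),
    (4, "Test 4", "2016-01-01 09:30:00", "2016-01-01 15:30:00"),
])

# The CASE puts no-enddate rows first; within each group the DESC sorts apply.
order = [r[0] for r in conn.execute("""
    SELECT id FROM ForgeRock
    ORDER BY CASE WHEN enddate = '0000-00-00 00:00:00' THEN 0 ELSE 1 END,
             enddate DESC, startdate DESC
""")]
print(order)  # [2, 1, 4, 3]
```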
You can use the query below, given you don't have invalid dates:

```
SELECT title, startdate, enddate
FROM ForgeRock
ORDER BY startdate, enddate
```
SQL: Sort for start date if there is no end date
[ "", "mysql", "sql", "sorting", "" ]
I have this table:

| id | Reader id | Book id | Taken date | Return date |

And, for example, 3 rows:

```
id  Reader_id  Book_id  Taken_date   Return_date
1   1          1        1999-01-08   NULL
2   2          2        2015-03-09   2015-04-10
3   1          3        2013-01-01   2014-01-01
```

I need to get the ids of the readers who have returned all of their books, so all the rows with that id in this table need to have a non-NULL `Return_date`. In the case above the query is supposed to return only the second row and not the last row, because the reader from the last row has not returned a book. How can I achieve that?
First identify the `Reader_id`s who still have books to return:

```
SELECT Reader_id
FROM yourtable
WHERE Return_date IS NULL
```

Then select the readers whose `Reader_id` is not present in the above query result.

Use `NOT IN`:

```
SELECT *
FROM yourtable
WHERE Reader_id NOT IN (SELECT Reader_id
                        FROM yourtable
                        WHERE Return_date IS NULL)
```

Or use `NOT EXISTS`, which can handle `NULL` values from the sub-query:

```
SELECT *
FROM yourtable a
WHERE NOT EXISTS (SELECT 1
                  FROM yourtable b
                  WHERE b.Return_date IS NULL
                    AND a.Reader_id = b.Reader_id)
```
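If you want to verify the `NOT EXISTS` variant against the sample rows from the question, here is a minimal sketch using SQLite via Python's `sqlite3` (the table name `loans` is made up, since the question doesn't name it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INT, Reader_id INT, Book_id INT, Taken_date TEXT, Return_date TEXT)")
conn.executemany("INSERT INTO loans VALUES (?, ?, ?, ?, ?)", [
    (1, 1, 1, "1999-01-08", None),
    (2, 2, 2, "2015-03-09", "2015-04-10"),
    (3, 1, 3, "2013-01-01", "2014-01-01"),
])

# Reader 1 has an outstanding book (row 1), so rows 1 and 3 are filtered out.
rows = conn.execute("""
    SELECT * FROM loans a
    WHERE NOT EXISTS (SELECT 1 FROM loans b
                      WHERE b.Return_date IS NULL
                        AND a.Reader_id = b.Reader_id)
""").fetchall()
print(rows)  # only row id=2 (reader 2) survives
```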
Try this... ``` select reader_id from tablename where id not in (SELECT id FROM tablename WHERE Return_date IS NULL) ```
SQL Select only rows where foreign key has an attribute
[ "", "sql", "sql-server", "" ]
The table structure is as below:

```
ID    READ_ID    READ_TYPE
101   201        30
102   201        35
103   201        40
104   201        60
105   202        50
106   202        60
```

I need to select the READ_TYPE based on the following conditions:

Condition 1: Check for each READ_ID if either 30, 35 or 40 is present. If present, select the maximum READ_TYPE among 30, 35 and 40. For instance READ_ID 201 has 30, 35, 40 and 60; the result must be 40.

Condition 2: If 30, 35 or 40 is not present, fetch the maximum READ_TYPE. For instance READ_ID 202 has 50 and 60; the result must be 60.

How can this be achieved in a single Oracle SQL query?
You can do this using conditional aggregation: ``` select read_id, (case when sum(case when read_type in (30, 35, 40) then 1 else 0 end) > 0 then max(case when read_type in (30, 35, 40) then read_type end) else max(read_type) end) as themax from t group by read_id; ```
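A runnable sanity check of the conditional aggregation, using SQLite via Python's `sqlite3` with the sample rows from the question (the logic is plain SQL, so it behaves the same in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, read_id INT, read_type INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (101, 201, 30), (102, 201, 35), (103, 201, 40), (104, 201, 60),
    (105, 202, 50), (106, 202, 60),
])

# The inner SUM counts how many preferred values (30/35/40) a group has;
# if any exist, take the max of those, otherwise the overall max.
best = dict(conn.execute("""
    SELECT read_id,
           CASE WHEN SUM(CASE WHEN read_type IN (30, 35, 40) THEN 1 ELSE 0 END) > 0
                THEN MAX(CASE WHEN read_type IN (30, 35, 40) THEN read_type END)
                ELSE MAX(read_type) END
    FROM t GROUP BY read_id
""").fetchall())
print(best)  # {201: 40, 202: 60}
```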
You can use the `KEEP` clause to get what you want: ``` select read_id , max(read_type) keep (dense_rank last order by case when read_type not in (30,35,40) then 1 else 2 end , read_type) max_read_type from Your_Table group by read_id; ```
Select different values from table based on condition
[ "", "sql", "oracle", "plsql", "oracle-sqldeveloper", "" ]
This one is hard to explain and I'm sure I will facepalm when I see the solution, but I just can't get it right... I have three tables: **Table A** contains new records that I want to do something with. **Table B** contains all activities from Table C *of a specific type* (done beforehand). **Table C** is sort of a "master" table that contains all activities as well as a customer id and a lot of other stuff. I need to select all activities that is in Table A from Table B. So far so good. The part I can't get together is that I also need all the activities from Table B *that has the same customer id as an activity contained in Table A*. **This is what I'm after:** ``` activity 2 3 4 5 6 ``` The trick here is to get activity 2, because activity 2 is also done by *customer* 2, even though it is not in Table A. Here are the tables: **Table A** (new records) ``` activity 3 4 5 6 ``` **Table B** (all records of a specific type from Table C) ``` activity 1 2 <-- How do I get this record as well? 3 4 5 6 ``` **Table C** (all records) ``` activity customer 1 1 2 2 3 2 4 3 5 3 6 4 7 5 ``` I tried something like this... ``` SELECT * FROM table_b b INNER JOIN table_c c ON c.activity = b.activity INNER JOIN table_a a ON a.activity = b.activity ``` ... but of course it only yields: ``` activity 3 4 5 6 ``` How can I get activity 2 as well here?
To do this returning one column I would recommend staging the customer_ids of activities in Table_b that are in Table_a into a CTE (common table expression, [MSDN CTE](https://msdn.microsoft.com/en-us/library/ms175972.aspx)), then selecting activities in table_c and joining to the CTE to get only activities with a valid customer_id.

Example of the CTE: (Note the semi-colon `;` before the `WITH` keyword is a workaround for an issue in SQL 2005 with multiple statements. It is not necessary if you are on a newer version, or not running batch statements.)

```
;WITH cte_1 AS
(
SELECT distinct c.customer --(only need a distinct result of customer ids)
from table_b b
join table_a a on b.activity = a.activity --(gets only activities in b that are in a)
join table_c c on b.activity = c.activity --(gets customer id of activities in table b)
)
SELECT a.activity
FROM table_c a
JOIN cte_1 b ON a.customer = b.customer
```

Alternatively you could do this in three joins with a select distinct. However I find the CTE to be an easier way to develop and think about this problem, regardless of the way you decide to implement your solution. Although the three-join solution will most likely scale and perform better over time with a growing data-set.

Example:

```
SELECT distinct d.activity
from table_b b
join table_a a on b.activity = a.activity --(gets only activities in b that are in a)
join table_c c on b.activity = c.activity --(gets customer id of activities in table b)
join table_c d ON c.customer = d.customer
```

Both would output:

```
2
3
4
5
6
```
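Both variants are easy to test locally. Here is the CTE version run against the question's sample data, sketched with SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_a (activity INT);
CREATE TABLE table_b (activity INT);
CREATE TABLE table_c (activity INT, customer INT);
INSERT INTO table_a VALUES (3),(4),(5),(6);
INSERT INTO table_b VALUES (1),(2),(3),(4),(5),(6);
INSERT INTO table_c VALUES (1,1),(2,2),(3,2),(4,3),(5,3),(6,4),(7,5);
""")

# Customers of the new activities are {2, 3, 4}; activity 2 comes back
# because customer 2 also did activity 3.
acts = [r[0] for r in conn.execute("""
    WITH cte_1 AS (
        SELECT DISTINCT c.customer
        FROM table_b b
        JOIN table_a a ON b.activity = a.activity
        JOIN table_c c ON b.activity = c.activity)
    SELECT a.activity FROM table_c a
    JOIN cte_1 b ON a.customer = b.customer
    ORDER BY a.activity
""")]
print(acts)  # [2, 3, 4, 5, 6]
```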
Here is one way to do it ``` SELECT * FROM TableB b1 WHERE EXISTS (SELECT 1 FROM Tablec c1 WHERE EXISTS (SELECT 1 FROM TableA a INNER JOIN Tablec c ON a.activity = c.activity WHERE c.customer = c1.customer) AND c1.activity = b1.activity) ```
How do I select group wise from a second table
[ "", "sql", "sql-server", "" ]
I'm using SSMS to export some tables from a database on one server to a different database on a different server. I'm using the SQL Server Import and Export Wizard to Export from the old one to the new one. It works great but it does not keep the auto increment. Is there a way to export with it keeping the auto increment **or is there a simple SQL code to tell all tables in the new database that the column "id" should be the primary key & auto inc?** I can't export SQL scripts since they'll become too big. Thank you.
Use Database Scripting. ![enter image description here](https://i.stack.imgur.com/FpfIs.png) **Where to find Advance Scripting options** ![enter image description here](https://i.stack.imgur.com/t9Fai.png) **To select data when generating scripts** ![enter image description here](https://i.stack.imgur.com/hsD4H.png)
I created a sample table for this question.

**DDL**

```
CREATE TABLE [dbo].[Brands](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](200) NOT NULL,
 CONSTRAINT [PK_dbo.Merchants] PRIMARY KEY CLUSTERED
(
    [Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```

**DML**

SSMS creates data scripts with `Id` fields. For example:

```
GO
SET IDENTITY_INSERT [dbo].[Brands] ON

INSERT [dbo].[Brands] ([Id], [Name]) VALUES (1, N'Kitchenette')
INSERT [dbo].[Brands] ([Id], [Name]) VALUES (5, N'Mezzaluna')
INSERT [dbo].[Brands] ([Id], [Name]) VALUES (7, N'ESRA-MARKA')
SET IDENTITY_INSERT [dbo].[Brands] OFF
```

You need to change the insert scripts like this:

```
INSERT [dbo].[Brands] ([Name]) VALUES (N'Kitchenette')
INSERT [dbo].[Brands] ([Name]) VALUES (N'Mezzaluna')
INSERT [dbo].[Brands] ([Name]) VALUES (N'ESRA-MARKA')
```

So I deleted the `Id` in the inserts. If you don't supply an `Id`, auto increment will work.
Export database with tables issue. Not keeping auto increment
[ "", "sql", "sql-server", "database", "ssms", "" ]
I have two tables. Template table (`templ_id` and `obj_id` form the primary key):

```
templ_id    obj_id
TP000002    PE554959
TP000003    PE555867
TP000006    PE555102
TP000009    PE554994
TP000009    PE554956
TP000009    PE555176
TP000009    PE555598
TP000009    PE555256
TP000010    PE555297
TP000010    PE555286
```

Business table (`bsn_no` is the primary key):

```
bsn_no  obj_id      templ_id
1       PE554959    null
2       PE555867    null
3       PE555102    null
4       PE554994    null
5       PE554956    null
6       PE555176    null
7       PE555598    null
8       PE555256    null
9       PE555297    null
10      PE555286    null
```

I want to update the business table's `templ_id` from the template table's `templ_id` based on the `obj_id`, using a single update query.
```
UPDATE BusinessTable
SET BusinessTable.templ_id = (SELECT TemplateTable.templ_id
                              FROM TemplateTable
                              WHERE BusinessTable.obj_id = TemplateTable.obj_id
                             )
```

If the IDs are the same you can group the subquery:

```
UPDATE BusinessTable
SET BusinessTable.templ_id = (SELECT TemplateTable.templ_id
                              FROM TemplateTable
                              WHERE BusinessTable.obj_id = TemplateTable.obj_id
                              GROUP BY TemplateTable.templ_id
                             )
```
You can do this with a correlated subquery (the question doesn't name the tables, so `business_table` and `template_table` stand in here):

```
update business_table
    set templ_id = (select t.templ_id
                    from template_table t
                    where t.obj_id = business_table.obj_id
                   );
```

This is standard SQL and should work in any database (although if you have duplicate `obj_id` values in `template_table`, it will return errors). Specific databases have other syntax for combining tables in an `update`.

EDIT: If this returns multiple rows, the simplest solution is an aggregation *without* `group by` or using `where rownum = 1`:

```
update business_table
    set templ_id = (select t.templ_id
                    from template_table t
                    where t.obj_id = business_table.obj_id and rownum = 1
                   );
```

This avoids the error by choosing an (arbitrary) matching value.
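A minimal runnable sketch of the correlated-subquery update, using SQLite via Python's `sqlite3` (only two sample rows, and `business_table`/`template_table` are assumed names since the question doesn't give real ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE template_table (templ_id TEXT, obj_id TEXT);
CREATE TABLE business_table (bsn_no INT, obj_id TEXT, templ_id TEXT);
INSERT INTO template_table VALUES ('TP000002','PE554959'),('TP000003','PE555867');
INSERT INTO business_table VALUES (1,'PE554959',NULL),(2,'PE555867',NULL);
""")

# Each business row looks up its template id by obj_id.
conn.execute("""
    UPDATE business_table
    SET templ_id = (SELECT t.templ_id FROM template_table t
                    WHERE t.obj_id = business_table.obj_id)
""")
updated = conn.execute("SELECT templ_id FROM business_table ORDER BY bsn_no").fetchall()
print(updated)  # [('TP000002',), ('TP000003',)]
```

Note that a business row with no matching `obj_id` would be set to NULL by this form, which is worth checking on your real data.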
How to update a table's column value from a different table's same column values?
[ "", "sql", "" ]
``` SELECT T.xhrs, T.eq_id, mt.CATEGORY, mt.MODEL_NAME, ROUND((SUM (T.TOT_AVAIL_TIME-T.maintenance_TIME) / SUM(T.TOT_AVAIL_TIME))*100) AVAILABILITY, ROUND((SUM(UTIL_TIME) / nullif(SUM(T.TOT_AVAIL_TIME-T.maintenance_TIME ),0) )*100) UTILIZATION, ROUND(SUM(T.failedcmds)/ SUM(T.total_failedcmds),2)*100 failedcmds, AVG(MAX_QL) MAX_QL, AVG(AVG_QL) avg_ql FROM db1 T, db2 mt WHERE T.eq_model = mt.eq_model and TOT_AVAIL_TIME != 0 AND TOT_AVAIL_TIME IS NOT NULL GROUP BY T.xhrs, T.eq_id, mt.CATEGORY, mt.MODEL_NAME ``` This query returns 2550 records and takes 12-15 s to run in SQL Developer and in mybatis it takes 30-35 secs. Is there any thing wrong with existing query? Is there any way to optimize above query and bring down the execution time to <5 secs in sql developer and <15 secs in ORM? Explain Plan [![enter image description here](https://i.stack.imgur.com/ZllX6.png)](https://i.stack.imgur.com/ZllX6.png)
Let me bring up a few points for us to think about performance. The fact of the matter is that "MyBatis or any other ORM is going to be slower than native SQL execution". Some of the reasons for this are:

* R1. It is clear that MyBatis adds overhead to database calls. You gain flexibility, maintenance and encapsulation, but you lose in performance.
* R2. In most cases, MyBatis is going to parse the ResultSet to Java objects; this adds additional overhead. SQL clients can work straight with cursors, which are faster, but in the long run this may be harder to maintain.
* R3. In most cases, MyBatis is going to create a transaction for you; this adds additional overhead.
* R4. In most cases, MyBatis is going to create and manage a cache for you; this adds additional overhead for the first select but speeds up the process for the next selects.
* R5. MyBatis can also help you with lazy loading and other data retrieval strategies.
* R6. We should compare MyBatis executions against JDBC executions, instead of comparing MyBatis against a SQL client tool (such as SQL Developer), because there are variables which can obscure your results. For example, the client may not fetch all rows at once.

That being said, MyBatis may give you flexibility, better maintenance, an easy way to handle transactions and an easy way to parse tables to Java objects, but it takes some performance from you. So, if you want to speed up your queries there are a few things you should consider. See the [MyBatis Documentation](http://www.mybatis.org/mybatis-3/sqlmap-xml.html):

* C1. Use fetchSize in your queries.
* C2. Use cache wisely; see what kind of caching is needed. In some cases it makes sense to not use cache at all. Example: `<select ... useCache="false">`
* C3. Be aware of the "N+1 Selects Problem". The documentation has some insights about this characteristic of many ORMs.
* C4. Try to use a lightweight transactionManager such as "JDBC"; keep in mind that aspect-oriented transactions (such as Spring @Transactional) will add a little bit more overhead.
Put `T.eq_model` and `T.TOT_AVAIL_TIME` in one index (in this order). Note: as you also check for NULL-ness of `TOT_AVAIL_TIME`, I assume it can be NULL by column definition. If you index it in a simple index, NULL-s are not indexed, and a `TOT_AVAIL_TIME IS NOT NULL` won't use the index later. So either combine it with `eq_model`, or remove that filter and change the column to `NOT NULL`. I used this workaround on nullable columns on JIRA: `(PROJECT_CAN_HAS_NULL ASC, '1')`.
High query execution time
[ "", "sql", "oracle", "orm", "mybatis", "" ]
I have a table TAB_1 which has 230 rows.

```
CREATE TABLE TAB_1 (audit_id bigint NOT NULL PRIMARY KEY)
```

I have another table TAB_2 which also has 230 rows.

```
CREATE TABLE TAB_2 (employee_id bigint NOT NULL PRIMARY KEY,
                    first_name varchar(50) NOT NULL,
                    last_name varchar(50) NOT NULL)
```

Both these tables have nothing in common. I want to write a query which will give me all the columns of these 2 tables 1 on 1 (i.e. the 1st row of TAB_1 with the 1st row of TAB_2, the 2nd row of TAB_1 with the 2nd row of TAB_2 and so on). That query will also have 230 rows. How can I do that?

I tried these queries, but they return every row in TAB_1 times every row in TAB_2:

```
select a.audit_id, b.employee_id, b.first_name, b.last_name
from TAB_1 a
inner join TAB_2 b on 1 = 1

select a.audit_id, b.employee_id, b.first_name, b.last_name
from TAB_1 a
cross join TAB_2 b
```
You need a key to join on. You can get this using `row_number()`: ``` select t1.*, t2.* from (select t1.*, row_number() over (order by audit_id) as seqnum from tab_1 t1 ) t1 full outer join (select t2.*, row_number() over (order by employee_id) as seqnum from tab_2 t2 ) t2 on t1.seqnum = t2.seqnum; ``` This assumes that the ordering is based on the first column. The `full outer join` will return all rows, regardless of whether they have the same number of rows or not.
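When both tables have exactly the same number of rows, the pairing reduces to an inner join on the computed row number; the `FULL OUTER JOIN` only matters when the counts can differ. A small sketch using SQLite via Python's `sqlite3` (needs SQLite 3.25+ for `ROW_NUMBER`; the sample values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab_1 (audit_id INT);
CREATE TABLE tab_2 (employee_id INT, first_name TEXT, last_name TEXT);
INSERT INTO tab_1 VALUES (10),(20),(30);
INSERT INTO tab_2 VALUES (1,'A','X'),(2,'B','Y'),(3,'C','Z');
""")

# Number each table by its key order, then join position to position.
pairs = conn.execute("""
    SELECT t1.audit_id, t2.employee_id
    FROM (SELECT audit_id,
                 ROW_NUMBER() OVER (ORDER BY audit_id) AS seqnum FROM tab_1) t1
    JOIN (SELECT employee_id,
                 ROW_NUMBER() OVER (ORDER BY employee_id) AS seqnum FROM tab_2) t2
      ON t1.seqnum = t2.seqnum
""").fetchall()
print(pairs)  # [(10, 1), (20, 2), (30, 3)]
```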
You don't want to join on "nothing" because you'll end up with a cartesian join -- and that will give you way more than 230 rows. Instead, you want to join on row number. [Here's someone that did it for you already.](https://stackoverflow.com/questions/12972320/how-to-do-an-inner-join-on-row-number-in-sql-server)
Create Query to join 2 tables 1 on 1 with nothing in common
[ "", "sql", "sql-server", "join", "" ]
I'm struggling to write a DELETE query against a MariaDB 5.5.44 database. The first of the two following code samples works great, but I need to add a WHERE statement to it. That is shown in the second code sample. I need to delete only the rows from *polozkyTransakci* where *puvodFaktury* <> *FAKTURA VO CZ* in the *transakce_tmp* table. I thought that my WHERE statement in the second sample would work with the inner SELECT, but it takes forever to process (about 40 minutes in my cloud-based ETL tool) and even then it does not leave the rows I want untouched.

1.

```
DELETE FROM polozkyTransakci
WHERE typPolozky = 'odpocetZalohy';
```

2.

```
DELETE FROM polozkyTransakci
WHERE typPolozky = 'odpocetZalohy'
  AND idTransakce NOT IN (
      SELECT idTransakce
      FROM transakce_tmp
      WHERE puvodFaktury = 'FAKTURA VO CZ');
```

Thanks a million for any help.

David
`IN` is very bad for performance. Try using `NOT EXISTS()`:

```
DELETE FROM polozkyTransakci
WHERE typPolozky = 'odpocetZalohy'
  AND NOT EXISTS (SELECT 1
                  FROM transakce_tmp r
                  WHERE r.puvodFaktury = 'FAKTURA VO CZ'
                    AND r.idTransakce = polozkyTransakci.idTransakce);
```
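To see the `NOT EXISTS` delete keep exactly the intended rows, here is a small sketch with SQLite via Python's `sqlite3`. The data set is made up: row 1 has a matching 'FAKTURA VO CZ' transaction, row 2 does not, and row 3 has a different `typPolozky`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE polozkyTransakci (idTransakce INT, typPolozky TEXT);
CREATE TABLE transakce_tmp (idTransakce INT, puvodFaktury TEXT);
INSERT INTO polozkyTransakci VALUES (1,'odpocetZalohy'),(2,'odpocetZalohy'),(3,'jina');
INSERT INTO transakce_tmp VALUES (1,'FAKTURA VO CZ'),(2,'JINY PUVOD');
""")

conn.execute("""
    DELETE FROM polozkyTransakci
    WHERE typPolozky = 'odpocetZalohy'
      AND NOT EXISTS (SELECT 1 FROM transakce_tmp r
                      WHERE r.puvodFaktury = 'FAKTURA VO CZ'
                        AND r.idTransakce = polozkyTransakci.idTransakce)
""")
left = [r[0] for r in conn.execute("SELECT idTransakce FROM polozkyTransakci ORDER BY 1")]
print(left)  # [1, 3]: row 1 kept by the matching invoice, row 3 by its type
```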
Before you can performance tune, you need to figure out why it is not deleting the correct rows. So first start with doing selects until you have identified the right rows. Build your select a bit at a time, checking the results at each stage, to see if you are getting the results you want.

Once you have the select, you can convert it to a delete. When testing the delete, do it in a transaction and run some tests on the data that is left behind to ensure it deleted properly, before rolling back or committing. Since you likely want to performance tune, I would suggest rolling back, so that you can try again on the performance-tuned version and ensure you get the same results. Of course, you only want to do this on a dev server!

Now while I agree that NOT EXISTS may be faster, some of the other things you want to look at are:

* Do you have cascade deletes happening? If you end up deleting many child records, that could be part of the problem.
* Are there triggers affecting the delete? Especially look to see if someone set one up to run through things row by row instead of as a set. Row-by-row triggers are a very bad thing when you delete many records. For instance, suppose you are deleting 50K records and you have a delete trigger to an audit table. If it inserts to that table one record at a time, it is executed 50K times. If it inserts all the deleted records in one step, that insert individually might take a bit longer but the total execution is much shorter.
* What indexing do you have and is it helping the delete?
* You will want to examine the explain plan for each of your queries to see the details of how the query will be performed.

Performance tuning is a complex thing, and it is best to read up on it in detail in some of the performance tuning books available for your specific database.
Improve delete with IN performance
[ "", "mysql", "sql", "mariadb", "" ]
I know this question has been asked before, but I might have a different case. I have this table:

```
| PK_DATA | EVENT_TYPE |  DATE  |
---------------------------------
|  123    |    D       | 12 DEC |
|  123    |    I       | 11 DEC |
|  123    |    U       | 10 DEC |
|  124    |    D       | 11 JAN |
|  124    |    U       | 12 JAN |
|  125    |    I       | 1 JAN  |
---------------------------------
```

I want a query that gives `max(DATE)` grouped by `PK_DATA` and at the same time gives the corresponding `EVENT_TYPE`, i.e.:

```
|  123  |  D  | 12 DEC |
|  124  |  U  | 12 JAN |
|  125  |  I  | 1 JAN  |
```

I thought to group by `PK_DATA` and select `max(DATE)`, but then the `EVENT_TYPE` won't be displayed unless I either apply an aggregate function to it or add it to the group clause, and neither will do what I want. Any help?

By the way, I want to avoid any nested query. I know it can be done in two steps: a nested query to group, then joining the main table with the query result.
I found a solution, but I am not sure yet if it's better than Husqiv's in terms of performance, so I will post it to spread the knowledge:

```
WITH data (PK_DATA, EVENT_TYPE, "DATE") AS (
  SELECT 123, 'D', DATE'2015-12-12' FROM DUAL UNION ALL
  SELECT 123, 'I', DATE'2015-12-11' FROM DUAL UNION ALL
  SELECT 123, 'U', DATE'2015-12-10' FROM DUAL UNION ALL
  SELECT 124, 'D', DATE'2015-01-11' FROM DUAL UNION ALL
  SELECT 124, 'U', DATE'2015-01-12' FROM DUAL UNION ALL
  SELECT 125, 'I', DATE'2015-01-01' FROM DUAL)
select EVENT_TYPE, "DATE", PK_DATA
from (select EVENT_TYPE, "DATE", PK_DATA,
             max("DATE") over (PARTITION BY PK_DATA) max_date
      from data)
where "DATE" = max_date;
```
You can use `KEEP` clause, it's significantly faster and less resource intensive than running window function (if your data set is larger): ``` WITH data (PK_DATA, EVENT_TYPE, "DATE") AS ( SELECT 123, 'D', DATE'2015-12-12' FROM DUAL UNION ALL SELECT 123, 'I', DATE'2015-12-11' FROM DUAL UNION ALL SELECT 123, 'U', DATE'2015-12-10' FROM DUAL UNION ALL SELECT 124, 'D', DATE'2015-01-11' FROM DUAL UNION ALL SELECT 124, 'U', DATE'2015-01-12' FROM DUAL UNION ALL SELECT 125, 'I', DATE'2015-01-01' FROM DUAL) SELECT PK_DATA, MAX(EVENT_TYPE) KEEP (DENSE_RANK LAST ORDER BY "DATE") EVENT_TYPE, MAX("DATE") "DATE" FROM data GROUP BY PK_DATA ``` EDIT: Here is comparison between `ROW_NUMBER` and `KEEP`: ``` PANELMANAGEMENT@panel_management> set autot trace stat PANELMANAGEMENT@panel_management> SELECT 2 INVOICEDATE, 3 MAX(CREATED) V1, 4 MAX(TOTALCOST) KEEP (DENSE_RANK LAST ORDER BY ORDER_ID) V2 5 FROM 6 ORDERS 7 GROUP BY 8 INVOICEDATE 9 ORDER BY 10 INVOICEDATE; 269 rows selected. Elapsed: 00:00:05.03 PANELMANAGEMENT@panel_management> SELECT 2 INVOICEDATE, 3 CREATED V1, 4 TOTALCOST V2 5 FROM ( 6 SELECT 7 INVOICEDATE, 8 CREATED, 9 TOTALCOST, 10 ROW_NUMBER() OVER (PARTITION BY INVOICEDATE ORDER BY ORDER_ID DESC) FILTER 11 FROM 12 ORDERS) 13 WHERE 14 FILTER = 1 15 ORDER BY 16 INVOICEDATE; 269 rows selected. Elapsed: 00:00:21.82 ``` The `ORDERS` table has around 10 million records and 1 GB of data. The main difference is that analytic function needs to allocate much more memory because it needs to assign row number to all 10 million rows that are filtered afterwards to resulting 269 rows. Using `KEEP` Oracle knows that it needs to allocate just one row per `INVOICEDATE`. Also when you sort 10 million rows you need the memory for storing all of them. But if you need to sort 10 million rows and keep only single record for each group you can just allocate single record and when you are sorting you just replace it with the one that is greater/smaller. 
In this case the analytic function required around 100 MB of memory, whereas `KEEP` required essentially none.
Show columns not in GROUP BY clause without applying aggregate function on it
[ "", "sql", "oracle", "group-by", "aggregate-functions", "" ]
This is a pretty common case, but I'm not sure if I'm doing something wrong.

```
TABLE POSTS [ ID, USER_ID, TOPIC_ID, TIME ]
```

I want to get only the last post by each user on topic_id '1':

```
SELECT p.*
FROM posts p,
     (
      SELECT user_id, max(time) mtime
      FROM posts
      WHERE topic_id = 1
      GROUP BY user_id
     ) pm
WHERE p.user_id = pm.user_id
  AND p.time = pm.mtime
```

Is this correct? Is there a way to do this without the subquery? Is there an option to get this data with spring-data queries, or is this a JDBC/stored-procedure-only thing?
You can get the latest post for each user **without** using a subquery by using a join:

```
SELECT p.*
FROM posts p
LEFT OUTER JOIN posts t
  ON(p.user_id = t.user_id and p.time < t.time and t.topic_id = 1)
WHERE p.topic_id = 1 and t.topic_id is null
```
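Here is that join-based "latest post per user" pattern run on a few made-up rows, sketched with SQLite via Python's `sqlite3` (`time` stored as an integer for simplicity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INT, user_id INT, topic_id INT, time INT)")
conn.executemany("INSERT INTO posts VALUES (?, ?, ?, ?)", [
    (1, 1, 1, 10),   # user 1, superseded by post 2
    (2, 1, 1, 20),   # user 1's latest on topic 1
    (3, 2, 1, 5),    # user 2's only post on topic 1
    (4, 2, 2, 99),   # different topic, must not interfere
])

# A row survives only if no later post by the same user exists on topic 1.
latest = conn.execute("""
    SELECT p.id FROM posts p
    LEFT OUTER JOIN posts t
      ON p.user_id = t.user_id AND p.time < t.time AND t.topic_id = 1
    WHERE p.topic_id = 1 AND t.topic_id IS NULL
    ORDER BY p.id
""").fetchall()
print(latest)  # [(2,), (3,)]
```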
I am not sure how to avoid a sub-query here, but another way to do this would be:

```
SELECT p.*
FROM posts p
WHERE p.time = (SELECT Max(time) mtime
                FROM posts pm
                WHERE pm.topic_id = 1
                  AND p.user_id = pm.user_id)
```

The window function `ROW_NUMBER` would be very helpful in such cases; unfortunately MySQL does not support it.
Latest data group-by without a subquery
[ "", "mysql", "sql", "spring-data", "spring-data-jpa", "" ]
I am wondering how to process a SQL comparison without doing the same query twice in the statement. Here is what I am looking for:

```
SELECT columnName10,
  IF(
    SELECT columnName20 FROM OtherExampleTable WHERE id = 15 IS NULL,
    'nothing',
    SELECT columnName20 FROM OtherExampleTable WHERE id = 15
  ) AS AliasColumn
FROM ExampleTable
```

As you can see, the query **SELECT columnName20 FROM OtherExampleTable WHERE id = 15** is made 2 times. How could I do the same thing without the duplicate query? Thank you guys.
You could use [`COALESCE`](http://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce): ``` SELECT column10, COALESCE( (SELECT columnName10 FROM OtherExampleTable WHERE id=15), 'nothing') FROM ExampleTable; ```
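A tiny runnable check of the `COALESCE` approach, using SQLite via Python's `sqlite3` (SQLite implements `COALESCE` the same way; the `NULL` stored for id 15 triggers the fallback):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ExampleTable (columnName10 TEXT);
CREATE TABLE OtherExampleTable (id INT, columnName20 TEXT);
INSERT INTO ExampleTable VALUES ('row');
INSERT INTO OtherExampleTable VALUES (15, NULL);
""")

# The scalar subquery runs once; COALESCE substitutes 'nothing' for NULL.
val = conn.execute("""
    SELECT COALESCE(
        (SELECT columnName20 FROM OtherExampleTable WHERE id = 15),
        'nothing')
    FROM ExampleTable
""").fetchone()[0]
print(val)  # nothing
```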
Two solutions.

> Using IFNULL()

```
SELECT columnName10,
  IFNULL(
    (SELECT columnName20 FROM OtherExampleTable WHERE id = 15),
    'nothing'
  ) AS AliasColumn
FROM ExampleTable
```

> Using a user variable

```
SELECT columnName10,
  IF(
    (@value := (SELECT columnName20 FROM OtherExampleTable WHERE id = 15)) IS NULL,
    'nothing',
    @value
  ) AS AliasColumn
FROM ExampleTable
```
SQL : Conditional result used in the same conditional outputs
[ "", "mysql", "sql", "" ]
I need to transfer `tbl_products` from system 1 to system 2. Both systems have the same DB and the same `tbl_products`; I need to get the data from system 1's `tbl_products` into system 2's `tbl_products`. I used the COPY method in PostgreSQL to export and import the data as csv.

In system 1:

```
Copy (select * from tbl_products) to 'D:\AppFolder\DataTran\tbl_products.csv' with csv headers;
```

Then I transfer `tbl_products.csv` via the internet, and in system 2:

```
Delete from tbl_products;
copy tbl_products from 'D:\AppFolder\DataTran\tbl_products.csv' with csv headers;
```

Now I need to implement this same method using SQL Server.

**Note: the two computers are not connected.**
We use an OLE Automation object to write the csv:

```
DECLARE @Var as xml --or nvarchar(max)
SELECT @Var= --some content you want to write in the file

DECLARE @FSO int
DECLARE @hr int
DECLARE @src varchar(255)
DECLARE @desc varchar(255)
DECLARE @oFile int
DECLARE @filename nvarchar(max)='D:\Reports\our.csv';

-- Create OLE Automation Object
EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @FSO OUT
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @FSO, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END

-- Create the file
EXEC @hr = sp_OAMethod @FSO, 'CreateTextFile', @oFile OUT, @filename, 8, True
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @FSO, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END

-- Here we put the content into the file
EXEC @hr = sp_OAMethod @oFile, 'Write', NULL, @Var
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @FSO, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END

-- Clear used objects
EXEC @hr = sp_OADestroy @FSO
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @FSO, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END

EXEC @hr = sp_OADestroy @oFile
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @oFile, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END
```

So you just need to put the whole table in `@Var`, or write a `while` loop taking every row and putting it into the file, or... whatever you want.
To load the csv data on the server you can use:

```
INSERT INTO dbo.YourTable
SELECT a.*
FROM OPENROWSET(
    BULK 'D:\our.csv',
    FORMATFILE = 'D:\our.fmt') AS a;
```

A sample of our.fmt (the file that describes the fields in the csv):

```
9.0
4
1  SQLCHAR  0  50   ";"     1  Field1  SQL_Latin1_General_Cp437_BIN
2  SQLCHAR  0  50   ";"     2  Field2  SQL_Latin1_General_Cp437_BIN
3  SQLCHAR  0  50   ";"     3  Field3  SQL_Latin1_General_Cp437_BIN
4  SQLCHAR  0  500  "\r\n"  4  Field4  SQL_Latin1_General_Cp437_BIN
```

You can find a description of \*.fmt files [here](https://msdn.microsoft.com/en-us/library/ms190393.aspx).
In SQL Server, you would more likely set up a db link between the two servers (this is easy, although the [documentation](https://msdn.microsoft.com/en-us/library/ff772782.aspx) is rather long). Then, you simply do: ``` select t.* into <local_table> from <remote_link>.<database>.<schema>.<table>; ``` There is no need to go through CSV for this purpose.
Copy data from one table to another
[ "sql", "sql-server", "sql-server-2008" ]
This is the database data:

```
Name  id  Col1  Col2  Col3  Col4  Total  Balance
Row1  1   6     1     A     Z     -      -
Row2  2   2     3     B     Z     -      -
Row3  3   9     5     B     Y     -      -
Row4  4   16    8     C     Y     -      -
```

I want to update the columns "Total" and "Balance" from Row2 to Row4 with conditions. This is the logic for the Total column:

> Total = Col1+Col2 if Col3 = A and Col4 <> Z
> OR
> Total = Col1-Col2 if Col3 = B and Col4 <> Z
> OR
> Total = Col1\*Col2 if Col3 = C and Col4 <> Z

And also update the Balance column:

> Balance = previous row's Balance + current row's Total
Here comes a solution with the assistance of one user variable. The result is verified with the full demo attached.

SQL:

```
-- data preparation for demo
create table tbl(Name char(100), id int, Col1 int, Col2 int, Col3 char(20), Col4 char(20), Total int, Balance int);
insert into tbl values
('Row1',1, 6,1,'A','Z',0,0),
('Row2',2, 2,3,'B','Z',0,0),
('Row3',3, 9,5,'B','Y',0,0),
('Row4',4,12,8,'C','Y',0,0);
SELECT * FROM tbl;

-- Query needed
SET @bal = 0;
UPDATE tbl
SET
    Total = CASE WHEN Col3 = 'A' and Col4 <> 'Z' THEN Col1+Col2
                 WHEN Col3 = 'B' and Col4 <> 'Z' THEN Col1-Col2
                 WHEN Col3 = 'C' and Col4 <> 'Z' THEN Col1*Col2
                 ELSE 0 END,
    Balance = (@bal:=@bal + Total);
SELECT * FROM tbl;
```

Output (as expected):

```
mysql> SELECT * FROM tbl;
+------+------+------+------+------+------+-------+---------+
| Name | id   | Col1 | Col2 | Col3 | Col4 | Total | Balance |
+------+------+------+------+------+------+-------+---------+
| Row1 |    1 |    6 |    1 | A    | Z    |     0 |       0 |
| Row2 |    2 |    2 |    3 | B    | Z    |     0 |       0 |
| Row3 |    3 |    9 |    5 | B    | Y    |     0 |       0 |
| Row4 |    4 |   12 |    8 | C    | Y    |     0 |       0 |
+------+------+------+------+------+------+-------+---------+
4 rows in set (0.00 sec)

mysql> -- Query needed
mysql> SET @bal = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> UPDATE tbl
    ->  SET
    ->  Total = CASE WHEN Col3 = 'A' and Col4 <> 'Z'
    ->               THEN Col1+Col2
    ->               WHEN Col3 = 'B' and Col4 <> 'Z'
    ->               THEN Col1-Col2
    ->               WHEN Col3 = 'C' and Col4 <> 'Z'
    ->               THEN Col1*Col2
    ->               ELSE 0 END,
    ->  Balance = (@bal:=@bal + Total);
Query OK, 2 rows affected (0.00 sec)
Rows matched: 4  Changed: 2  Warnings: 0

mysql> SELECT * FROM tbl;
+------+------+------+------+------+------+-------+---------+
| Name | id   | Col1 | Col2 | Col3 | Col4 | Total | Balance |
+------+------+------+------+------+------+-------+---------+
| Row1 |    1 |    6 |    1 | A    | Z    |     0 |       0 |
| Row2 |    2 |    2 |    3 | B    | Z    |     0 |       0 |
| Row3 |    3 |    9 |    5 | B    | Y    |     4 |       4 |
| Row4 |    4 |   12 |    8 | C    | Y    |    96 |     100 |
+------+------+------+------+------+------+-------+---------+
4 rows in set (0.00 sec)
```
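The user-variable trick above is MySQL-specific. On engines with window functions (MySQL 8+, PostgreSQL, SQLite 3.25+), the same running balance is a plain cumulative `SUM() OVER`. A sketch using Python's bundled `sqlite3` (assuming a SQLite build new enough for window functions), seeded with the Total values the demo above computes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, total INTEGER)")
# Total values as computed above: rows 1-2 are 0, row 3 is 4, row 4 is 96.
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 0), (2, 0), (3, 4), (4, 96)])

# Cumulative sum of total ordered by id: each row sees the balance so far.
rows = conn.execute(
    "SELECT id, total, SUM(total) OVER (ORDER BY id) AS balance FROM t ORDER BY id"
).fetchall()
```

Unlike the user-variable version, the window-function form does not depend on evaluation order inside an `UPDATE`.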
```
SET @prev_id = 1;

UPDATE tableName t1
SET t1.Total = CASE WHEN Col3 = 'A' and Col4 <> 'Z' THEN Col1 + Col2
                    WHEN Col3 = 'B' and Col4 <> 'Z' THEN Col1 - Col2
                    WHEN Col3 = 'C' and Col4 <> 'Z' THEN Col1 * Col2
               END,
    t1.Balance = CASE WHEN Col3 = 'A' and Col4 <> 'Z'
                      THEN (SELECT b FROM (SELECT id, Balance AS b FROM tableName) p WHERE p.id = @prev_id) + (Col1 + Col2)
                      WHEN Col3 = 'B' and Col4 <> 'Z'
                      THEN (SELECT b FROM (SELECT id, Balance AS b FROM tableName) p WHERE p.id = @prev_id) + (Col1 - Col2)
                      WHEN Col3 = 'C' and Col4 <> 'Z'
                      THEN (SELECT b FROM (SELECT id, Balance AS b FROM tableName) p WHERE p.id = @prev_id) + (Col1 * Col2)
                 END,
    -- variable assignments must be embedded in a column assignment in UPDATE
    t1.id = (@prev_id := t1.id)
WHERE t1.id > 1;
```

Note: the subquery reads the same table being updated, so MySQL requires wrapping it in a derived table as above.
Update the total based on the previous row of balance
[ "mysql", "sql" ]
I need to get the value of a column as three characters instead of one. This is my query:

`select rank() over (order by 1,2,32 ) from dual`

and this is the result:

[![enter image description here](https://i.stack.imgur.com/U4vEc.png)](https://i.stack.imgur.com/U4vEc.png)

I need to get `001` instead of `1`. Thanks for your help
Use [`LPAD()`](https://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_2017.htm):

```
SELECT LPAD( '1', 3, '0' ) FROM DUAL;
```

Outputs:

```
001
```

So wrapping it around your query:

```
SELECT LPAD( rank() over (order by 1,2,32 ), 3, '0' ) FROM DUAL;
```
Probably the most straight-forward solution:

```
select to_char(rank() over (order by 1,2,32 ),'fm000') from dual
```
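For what it's worth, if the padding can happen outside the database, the same left zero-padding is a one-liner in most languages. A Python sketch of what `LPAD('1', 3, '0')` and `to_char(..., 'fm000')` produce:

```python
# Pad a rank to three characters, as the Oracle answers above do.
for n in (1, 42, 100):
    padded = f"{n:03d}"  # equivalently: str(n).zfill(3)
    print(padded)
```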
How to get a result with three characters
[ "sql", "oracle" ]
I have a table with different columns of the same field, like:

```
ID | Genre1  | Genre2
1  | Sci-fi  | Drama
2  | Musical | Sci-fi
```

How could I get the top 5 genres, taking into account both columns? I think a good approach would be to get something like:

```
Genre   Count
Sci-fi  13
Drama   11
```

and then use "TOP" on this. Right now I am using:

```
SELECT TOP 5 Genre1, count(Genre1) AS times
FROM Customer_Profile
GROUP BY Genre1
ORDER BY count(Genre1) DESC;
```

It works for one of the columns, but how could I apply this so that both genre columns are taken into account? (I might need to use UNION ALL, but I don't know how.) Thank you
Use UNION ALL like this (note that the derived table needs an alias in SQL Server):

```
SELECT TOP 5 Genre, count(Genre) as Times
FROM (select Genre1 as Genre FROM Customer_Profile
      UNION ALL
      select Genre2 FROM Customer_Profile) g
GROUP BY Genre
ORDER BY count(Genre) DESC;
```
Try with `union all`:

```
SELECT TOP 5 Genre, count(Genre) AS times
FROM (
    select Genre1 Genre from Customer_Profile
    union all
    select Genre2 Genre from Customer_Profile
) x
GROUP BY Genre
ORDER BY count(Genre) DESC;
```
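Both answers rest on the same idea: stack the two columns with UNION ALL, then group and order. The pattern is easy to verify from Python with sqlite3 (SQLite has no TOP, so LIMIT stands in for it; the sample rows here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer_Profile (id INTEGER, Genre1 TEXT, Genre2 TEXT)")
conn.executemany(
    "INSERT INTO Customer_Profile VALUES (?, ?, ?)",
    [(1, "Sci-fi", "Drama"), (2, "Musical", "Sci-fi"), (3, "Sci-fi", "Drama")],
)

# Stack both genre columns with UNION ALL, then count per genre.
top = conn.execute("""
    SELECT Genre, COUNT(*) AS times
    FROM (SELECT Genre1 AS Genre FROM Customer_Profile
          UNION ALL
          SELECT Genre2 FROM Customer_Profile) x
    GROUP BY Genre
    ORDER BY times DESC
    LIMIT 5
""").fetchall()
```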
Unify different fields and get the most repeated values
[ "sql", "union" ]
I have a table called users that looks like this, for example:

Table: users

```
username  id
Simon     6
Ida       7
```

And a relationships table:

Table: relationships

```
me  partner
6   7
```

For every relationship that is created, the me and partner columns get the ids depending on which of the users sent the request. If Ida sends a request to Simon, the row looks like this: me: 7, partner: 6, because Ida is id number 7 and Simon is number 6. But if Simon sends the request, then the row looks like this: me: 6, partner: 7.

What I want to do is write a query that gets the right id from the relationships table for every user with a relationship, and then uses that id to join the users table, get the partner's username and print it out. My problem is that the me and partner columns can have different values depending on which of the two users sent the request first. How do I write a query that prints out the right information for each and every user and gives them the right id of their partner?

How the output should look:

* From Simon's point of view: You are in a relationship with Ida.
* From Ida's point of view: You are in a relationship with Simon.
I want to offer an alternative. Join the user table to the relationship table either on the `me` column or the `partner` column. Then join the user table back onto the relationship table using the same criteria, excluding the user themselves, so that the second copy of the user table yields the partner:

```
SELECT u2.username
FROM users u
JOIN relationships r
  ON u.id = r.me OR u.id = r.partner
JOIN users u2
  ON (r.me = u2.id OR r.partner = u2.id) AND u2.id <> u.id
WHERE u.id = 'THE USER YOU CARE ABOUT'
```

Untested. If you throw up some `CREATE TABLE`s and `INSERT INTO`s we can all have a play.
If you want to get all users which have a relationship with some user you specify (e.g. id 2), I would use something like this (using the question's `me`/`partner` columns):

```
SELECT id, username
FROM users,
     (SELECT partner as rel_id FROM relationships WHERE me = 2
      union
      SELECT me as rel_id FROM relationships WHERE partner = 2) tab
WHERE id = rel_id
```

The inner statement selects all the relationships in which 2 was the sender and all the relationships in which 2 was the receiver, stacking them in the same column (union). After that, the users table is used to get the username from the id.
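Spelled out against the question's exact schema (me/partner), the union approach can be checked end to end with sqlite3; `partner_of` is a hypothetical helper written for this demo, not part of either answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, username TEXT);
    INSERT INTO users VALUES (6, 'Simon'), (7, 'Ida');
    CREATE TABLE relationships (me INTEGER, partner INTEGER);
    INSERT INTO relationships VALUES (7, 6);
""")

def partner_of(user_id):
    # Look at both directions of the row, whichever side the user is on.
    row = conn.execute("""
        SELECT u.username
        FROM users u
        JOIN (SELECT partner AS other FROM relationships WHERE me = :uid
              UNION
              SELECT me FROM relationships WHERE partner = :uid) r
          ON u.id = r.other
    """, {"uid": user_id}).fetchone()
    return row[0] if row else None
```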
How do I get relationships between users?
[ "mysql", "sql" ]
I'm used to writing queries with SPARQL, but I have to do this one with MySQL, and I don't really know the syntax. I have these three tables:

Products:

```
id  o
1   p1
2   p2
```

Specification:

```
id  o
3   sp1
4   sp2
```

Source:

```
id  o
1   s1
3   s1
2   s2
4   s2
```

As we can see, the products and the specifications can be connected through the source's different ids, so I guess I have to make two selects with two inner joins (source > product and source > specification) and an inner join between the two selects:

```
SELECT *
FROM
  (SELECT pt.o as po, st.id as psID, st.o as pso
   FROM source_test st
   inner join products_test pt on st.id = pt.id) x
INNER JOIN
  (SELECT st2.o as spo, st1.id as spsID, st1.o as spso
   FROM source_test st1
   inner join specification_test st2 on st1.id = st2.id) y
```

This gives:

```
po  psID  pso  spo  spsID  spso
p1  1     s1   sp1  3      s1
p2  2     s2   sp1  3      s1
p1  1     s1   sp2  4      s2
p2  2     s2   sp2  4      s2
```

Now I need to filter so that the product (po) and the specification (spo) have the same source (pso = spso). Am I doing things the right way? What could be the continuation, or a good query to get:

```
po  spo  spso
p1  sp1  s1
p2  sp1  s2
```

Thanks in advance.
The only thing I had to do was filter y with `ON x.pso = y.spso`:

```
SELECT *
FROM
  (SELECT product.o as po, source.s as psID, source.o as pso
   FROM source
   inner join product on source.s = product.s) x
INNER JOIN
  (SELECT specification.o as spo, source.s as spsID, source.o as spso
   FROM source
   inner join specification on source.s = specification.s) y
ON x.pso = y.spso
```

result:

```
po  psID  pso  spo  spsID  spso
p1  1     s1   sp1  3      s1
p2  2     s2   sp2  4      s2
```
Add `ON x.pso = y.spso` to your query:

```
SELECT x.po, y.spo, y.spso
FROM
  (SELECT products_test.o as po, source_test.id as psID, source_test.o as pso
   FROM source_test
   inner join products_test on source_test.id = products_test.id) x
INNER JOIN
  (SELECT specification_test.o as spo, source_test.id as spsID, source_test.o as spso
   FROM source_test
   inner join specification_test on source_test.id = specification_test.id) y
ON x.pso = y.spso
```
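Both answers join the two derived tables on the shared source value. The same result can also be reached with a single chain of joins through the source table; a sqlite3 sketch of that variant, using the table data from the question (the join shape here is an alternative, not a quote of either answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER, o TEXT);
    INSERT INTO products VALUES (1, 'p1'), (2, 'p2');
    CREATE TABLE specification (id INTEGER, o TEXT);
    INSERT INTO specification VALUES (3, 'sp1'), (4, 'sp2');
    CREATE TABLE source (id INTEGER, o TEXT);
    INSERT INTO source VALUES (1, 's1'), (3, 's1'), (2, 's2'), (4, 's2');
""")

# Walk product -> its source row -> other source rows with the same value
# -> the specification carrying that other id.
rows = conn.execute("""
    SELECT p.o AS po, sp.o AS spo, sp_src.o AS spso
    FROM products p
    JOIN source p_src  ON p_src.id = p.id
    JOIN source sp_src ON sp_src.o = p_src.o AND sp_src.id <> p_src.id
    JOIN specification sp ON sp.id = sp_src.id
    ORDER BY po
""").fetchall()
```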
mysql inner join 3 tables and union
[ "mysql", "sql" ]
I have a query which converts more than one row into a single row, and I wanted to know if there is any better technique. To illustrate our case I have taken a simple users/cars relation and simulated queries similar to our application.

```
table: Users
PrimaryKey: UserId
---------------------------------------
| UserId | UserDetails1 | UserDetails2|
---------------------------------------
|   1    |    name1     |    Addr1    |
|   2    |    name2     |    Addr2    |
---------------------------------------

table: UserCars
Unique Constraint (UserId, CarType)
Index on UserId, CarType
-------------------------------------------
| UserId | CarType | RedCount | BlueCount |
-------------------------------------------
|   1    |   SUV   |    1     |     0     |
|   1    |  sedan  |    1     |     2     |
|   2    |  sedan  |    1     |     0     |
-------------------------------------------

Consider CarType as an enum type with values SUV and sedan only
```

The application needs to fetch UserDetails1, sum(RedCount), sum(BlueCount), SUV's RedCount, SUV's BlueCount, sedan's RedCount and sedan's BlueCount for every user in a single query.
For the above example, the result should be like:

```
--------------------------------------------------------------------------------
| UserId | UserDetails1 | TotalRed |TotalBlue|SUVRed|SUVBlue|sedanRed|sedanBlue|
--------------------------------------------------------------------------------
|   1    |    name1     |    2     |    2    |  1   |   2   |   1    |    0    |
|   2    |    name2     |    1     |    0    |  0   |   0   |   1    |    0    |
--------------------------------------------------------------------------------
```

Currently, our query is like below:

```
SELECT
    --User Information
    u.UserId,
    u.UserDetails1,
    --Total Counts by color
    count_by_colour.TotalRed,
    count_by_colour.TotalBlue,
    -- Counts by type
    COALESCE(suv.red, 0) AS SUVRed,
    COALESCE(suv.blue, 0) AS SUVBlue,
    COALESCE(sedan.red, 0) AS sedanRed,
    COALESCE(sedan.blue, 0) AS sedanBlue
FROM Users u
JOIN (
    SELECT c.UserId, SUM(RedCount) as TotalRed, SUM(BlueCount) AS TotalBlue
    FROM UserCars c
    GROUP BY UserId
) count_by_colour ON (u.UserId = count_by_colour.UserId)
LEFT JOIN (
    SELECT UserId, RedCount AS red, BlueCount AS blue
    FROM UserCars
    WHERE CarType = 'SUV') suv ON (u.UserId = suv.UserId)
LEFT JOIN (
    SELECT UserId, RedCount AS red, BlueCount AS blue
    FROM UserCars
    WHERE CarType = 'sedan') sedan ON (u.UserId = sedan.UserId)
```

Though the query fetches data as expected, I wanted to know if there is any better technique. In this example I have given only two types (SUV and sedan), but our original application, which is related to marketing, has more types, which means more left joins.

Note: the tables cannot be altered, as other applications use the same ones.

Thanks,
Ravi
As @Giorgos Betsos pointed out, conditional aggregation can be used to avoid the left joins in my initial query. Thanks to Giorgos Betsos for suggesting that. The reason for not accepting his answer as the answer to the original question is that grouping on the Users table using all of its columns takes more time. In the real case there will be more columns to fetch from the Users table, so that should be avoided. I slightly modified his query as follows:

```
SELECT
    --User Information
    u.UserId,
    u.UserDetails1,
    --Total Counts by color
    temp.TotalRed,
    temp.TotalBlue,
    -- Counts by type
    temp.SUVRed,
    temp.SUVBlue,
    temp.sedanRed,
    temp.sedanBlue
FROM Users u
JOIN (SELECT userid,
             SUM(RedCount) AS TotalRed,
             SUM(BlueCount) AS TotalBlue,
             COALESCE(SUM(CASE WHEN CarType = 'SUV' THEN RedCount END), 0) AS SUVRed,
             COALESCE(SUM(CASE WHEN CarType = 'SUV' THEN BlueCount END), 0) AS SUVBlue,
             COALESCE(SUM(CASE WHEN CarType = 'sedan' THEN RedCount END), 0) AS SedanRed,
             COALESCE(SUM(CASE WHEN CarType = 'sedan' THEN BlueCount END), 0) AS SedanBlue
      FROM usercars
      GROUP BY userid) temp ON (temp.userid = u.userid)
```

I ran both queries against the same dataset; the query plans are as follows.

For the query in Giorgos Betsos's answer:

```
GroupAggregate  (cost=34407.59..41848.99 rows=99999 width=25) (actual time=477.323..644.976 rows=99999 loops=1)
  ->  Sort  (cost=34407.59..34903.09 rows=198197 width=25) (actual time=477.303..513.956 rows=199974 loops=1)
        Sort Key: u.userid, u.userdetails1
        Sort Method: external merge  Disk: 7608kB
        ->  Hash Right Join  (cost=3375.98..12227.15 rows=198197 width=25) (actual time=83.339..265.419 rows=199974 loops=1)
              Hash Cond: (uc.userid = u.userid)
              ->  Seq Scan on usercars uc  (cost=0.00..3176.51 rows=199951 width=16) (actual time=0.009..48.687 rows=199951 loops=1)
              ->  Hash  (cost=1636.99..1636.99 rows=99999 width=13) (actual time=83.137..83.137 rows=99999 loops=1)
                    Buckets: 4096  Batches: 8  Memory Usage: 570kB
                    ->  Seq Scan on users u  (cost=0.00..1636.99 rows=99999 width=13) (actual time=0.009..34.343 rows=99999 loops=1)
Total runtime: 649.600 ms
```

For the modified query given in this answer:

```
Hash Join  (cost=3376.40..23359.86 rows=100884 width=61) (actual time=87.938..392.103 rows=99976 loops=1)
  Hash Cond: (temp.userid = u.userid)
  ->  Subquery Scan on temp  (cost=0.42..15883.52 rows=100884 width=52) (actual time=0.064..231.107 rows=99976 loops=1)
        ->  GroupAggregate  (cost=0.42..14874.68 rows=100884 width=16) (actual time=0.063..216.605 rows=99976 loops=1)
              ->  Index Scan using user_cartype on usercars  (cost=0.42..8367.18 rows=199951 width=16) (actual time=0.036..44.917 rows=199951 loops=1)
  ->  Hash  (cost=1636.99..1636.99 rows=99999 width=13) (actual time=87.635..87.635 rows=99999 loops=1)
        Buckets: 4096  Batches: 8  Memory Usage: 570kB
        ->  Seq Scan on users u  (cost=0.00..1636.99 rows=99999 width=13) (actual time=0.008..36.204 rows=99999 loops=1)
Total runtime: 395.397 ms
```

Once again thanks to Giorgos Betsos for his suggestion.

Thanks,
Ravi
You can use *conditional aggregation*:

```
SELECT u.UserId, u.UserDetails1,
       SUM(RedCount) AS TotalRed,
       SUM(BlueCount) AS TotalBlue,
       COALESCE(SUM(CASE WHEN CarType = 'SUV' THEN RedCount END), 0) AS SUVRed,
       COALESCE(SUM(CASE WHEN CarType = 'SUV' THEN BlueCount END), 0) AS SUVBlue,
       COALESCE(SUM(CASE WHEN CarType = 'sedan' THEN RedCount END), 0) AS SedanRed,
       COALESCE(SUM(CASE WHEN CarType = 'sedan' THEN BlueCount END), 0) AS SedanBlue
FROM Users AS u
LEFT JOIN UserCars AS uc ON u.UserId = uc.UserId
GROUP BY u.UserId, u.UserDetails1
```

[**Demo here**](http://sqlfiddle.com/#!15/2784f/5)
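Conditional aggregation is portable: the same `CASE WHEN ... END` inside `SUM` works in SQLite, which makes the technique easy to verify from Python. A sketch with the question's UserCars data (only a representative subset of the output columns is computed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE UserCars (UserId INTEGER, CarType TEXT, RedCount INTEGER, BlueCount INTEGER);
    INSERT INTO UserCars VALUES
        (1, 'SUV', 1, 0), (1, 'sedan', 1, 2), (2, 'sedan', 1, 0);
""")

# CASE routes each row into the right per-type bucket; SUM aggregates per user.
rows = conn.execute("""
    SELECT UserId,
           SUM(RedCount)  AS TotalRed,
           SUM(BlueCount) AS TotalBlue,
           COALESCE(SUM(CASE WHEN CarType = 'SUV'   THEN RedCount  END), 0) AS SUVRed,
           COALESCE(SUM(CASE WHEN CarType = 'sedan' THEN BlueCount END), 0) AS SedanBlue
    FROM UserCars
    GROUP BY UserId
    ORDER BY UserId
""").fetchall()
```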
PostgreSQL - Joining data from multiple rows to one row
[ "sql", "postgresql", "pivot" ]
I need to show the distribution of membership by year and quarter. I am using SQL Server 2012. The data look like this:

```
CREATE TABLE MyTable
(
    MyGroup nvarchar(10) NULL,
    StartTime nvarchar(10) NULL,
    EndTime nvarchar(10) NULL,
    Quantity int NULL
)

Insert into MyTable Values
('a', '7/1/2014', '6/30/2016', '10'),
('b', '12/1/2013', '11/30/2014', '8')
```

The desired result:

```
MyGroup StartTime  EndTime     Year  Quarter  Quantity
a       7/1/2014   6/30/2016   2014  2014-Q3  10
a       7/1/2014   6/30/2016   2014  2014-Q4  10
a       7/1/2014   6/30/2016   2015  2015-Q1  10
a       7/1/2014   6/30/2016   2015  2015-Q2  10
a       7/1/2014   6/30/2016   2015  2015-Q3  10
a       7/1/2014   6/30/2016   2015  2015-Q4  10
a       7/1/2014   6/30/2016   2016  2016-Q1  10
a       7/1/2014   6/30/2016   2016  2016-Q2  10
b       12/1/2013  11/30/2014  2013  2013-Q4  8
b       12/1/2013  11/30/2014  2014  2014-Q1  8
b       12/1/2013  11/30/2014  2014  2014-Q2  8
b       12/1/2013  11/30/2014  2014  2014-Q3  8
```
[![enter image description here](https://i.stack.imgur.com/unRAv.png)](https://i.stack.imgur.com/unRAv.png)

Try the query below. You will need to optimize it by introducing indexes on your table in case you are doing this operation with large data.

```
;With Quarters as
(
    select MyGroup, StartTime as StartTime1,
           DATEADD(quarter, DATEDIFF(quarter, 0, StartTime), 0) as StartTime,
           Endtime, Quantity
    from MyTable
    union all
    select MyGroup, StartTime1, DATEADD(quarter, 1, StartTime), Endtime, Quantity
    from Quarters
    where StartTime < DATEADD(quarter, DATEDIFF(quarter, 0, EndTime), 0)
)
select MyGroup,
       Convert(varchar(10), StartTime, 110) as StartTime,
       Convert(varchar(10), EndTime, 110) as EndTime,
       DATEPART(YEAR, StartTime) as Years,
       -- CONVERT(varchar(3),StartTime,109) + ' ' + CONVERT(varchar(4),StartTime,120) as QuarterMonth,
       CONVERT(varchar(4), StartTime, 120) + '-Q'
         + CAST(CEILING(CAST(month(StartTime) AS decimal(4,2)) / 3) AS char(1)) AS SelectQuarter,
       Quantity
from Quarters
order by Quantity desc
```
Here I use a [Numbers table](https://stackoverflow.com/questions/1393951/what-is-the-best-way-to-create-and-populate-a-numbers-table?lq=1) to explode the days as rows from your Start-->End range, group that day range by quarter, then extract the year:quarter.

```
declare @MyTable table
(
    MyGroup nvarchar(10) NULL,
    StartTime date NULL,
    EndTime date NULL,
    Quantity int NULL
);

Insert into @MyTable Values
('a', '7/1/2014', '6/30/2016', '10'),
('b', '12/1/2013', '11/30/2014', '8')

;with DaysBetween (MyGroup, MyDate) as
(
    select mt.MyGroup, dateadd(day, n-1, mt.StartTime)
    from @MyTable mt
    cross apply dbo.Number n
    where n.N <= datediff(day, mt.StartTime, mt.EndTime)
),
AsQuarters (MyGroup, StartOfQuarter) as
(
    select MyGroup, dateadd(quarter, datediff(quarter, 0, MyDate), 0)
    from DaysBetween
    group by MyGroup, dateadd(quarter, datediff(quarter, 0, MyDate), 0)
)
select MyGroup,
       datepart(year, StartOfQuarter),
       datepart(quarter, StartOfQuarter)
from AsQuarters
order by 1, 2, 3;
```

Returns:

```
MyGroup year quarter
------- ---- -------
a       2014 3
a       2014 4
a       2015 1
a       2015 2
a       2015 3
a       2015 4
a       2016 1
a       2016 2
b       2013 4
b       2014 1
b       2014 2
b       2014 3
b       2014 4   <-- Did you forget this one?
```
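The quarter arithmetic in both answers boils down to: find the quarter containing StartTime, then step forward one quarter at a time until the quarter containing EndTime. A plain-Python restatement of that loop, useful for cross-checking the expected row counts (like the Numbers-table answer, it includes the quarter containing EndTime):

```python
from datetime import date

def quarters_between(start, end):
    """Yield (year, quarter) for every quarter touched by [start, end]."""
    y, q = start.year, (start.month - 1) // 3 + 1
    end_y, end_q = end.year, (end.month - 1) // 3 + 1
    while (y, q) <= (end_y, end_q):  # tuples compare year first, then quarter
        yield y, q
        q += 1
        if q == 5:
            y, q = y + 1, 1

qs = list(quarters_between(date(2014, 7, 1), date(2016, 6, 30)))
```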
How to split a time frame in one row to different quarters in different rows in T-SQL
[ "sql", "sql-server-2012" ]
I have these tables:

```
PERSON              COURSE              PERSON_COURSE
+----+----------+   +----+----------+   +------+------+
| ID | Name     |   | ID | Name     |   | P_ID | C_ID |
+----+----------+   +----+----------+   +------+------+
| P1 | Person 1 |   | C1 | Course 1 |   | P1   | C1   |
| P2 | Person 2 |   | C2 | Course 2 |   | P1   | C2   |
| P3 | Person 3 |   | C3 | Course 3 |   | P3   | C2   |
+----+----------+   +----+----------+   | P3   | C3   |
                                        +------+------+
```

and I want to select *all persons who do not attend course C1*. So I'm using:

```
select p.id
from person p
where p.id not in (
    select pc.p_id
    from person_course pc
    where pc.c_id != 'C1')
```

Now, I'm wondering if it's possible to obtain the same result without using a subquery.
One option is a left join, trying to match people with the course and including only people where there is no match:

```
SELECT p.*
FROM person p
LEFT JOIN person_course pc
  ON p.id = pc.p_id AND pc.c_id = 'C1'
WHERE pc.c_id IS NULL;
```

[An SQLfiddle to test with](http://sqlfiddle.com/#!9/a58f5/2).
I just want to point out that you can do this using a subquery, but it is not the one in the question. Instead:

```
select p.id
from person p
where p.id not in (select pc.p_id
                   from person_course pc
                   where pc.c_id = 'C1'
                   ------------------^
                  );
```

(Although I prefer `NOT EXISTS` for this logic, I am keeping this as similar to your logic as possible.) Be careful with double negatives.
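The `LEFT JOIN ... IS NULL` pattern and a `NOT EXISTS` subquery are the two standard anti-join spellings, and both are easy to confirm against the question's data with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id TEXT);
    INSERT INTO person VALUES ('P1'), ('P2'), ('P3');
    CREATE TABLE person_course (p_id TEXT, c_id TEXT);
    INSERT INTO person_course VALUES ('P1','C1'), ('P1','C2'), ('P3','C2'), ('P3','C3');
""")

# Anti-join spelling 1: outer join the course, keep the non-matches.
left_join = conn.execute("""
    SELECT p.id FROM person p
    LEFT JOIN person_course pc ON pc.p_id = p.id AND pc.c_id = 'C1'
    WHERE pc.c_id IS NULL
    ORDER BY p.id
""").fetchall()

# Anti-join spelling 2: NOT EXISTS correlated subquery.
not_exists = conn.execute("""
    SELECT p.id FROM person p
    WHERE NOT EXISTS (SELECT 1 FROM person_course pc
                      WHERE pc.p_id = p.id AND pc.c_id = 'C1')
    ORDER BY p.id
""").fetchall()
```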
SQL exclude rows without subquery
[ "sql", "subquery" ]