Hi, I'm trying to build a SQL query that updates rows in a table when a left join against another table yields exactly 3 matches, for example when a vehicle has 3 photos. The query I've written so far seems to fail on the GROUP BY, though.

```
UPDATE domain.vehicle_listing AS t0
LEFT OUTER JOIN photo AS t1 ON t0.id = t1.vehicle_listing_id
SET t0.active = 0
WHERE `create_date` >= '2015-5-2'
  AND user_profile_id IS NOT NULL
  AND t0.active = 1
GROUP BY t1.vehicle_listing_id
HAVING COUNT(DISTINCT t1.id) = 3
ORDER BY create_date DESC;
```

Table structure: `Vehicle_Listing(id)` and `Photo(id, vehicle_listing_id, photo_url)`, a one-to-many relationship with photo.
It is silly to use a `left join` for this. You want `inner join`:

```
UPDATE cardaddy.vehicle_listing vl INNER JOIN
       (SELECT p.vehicle_listing_id, COUNT(*) AS cnt
        FROM photo p
        GROUP BY p.vehicle_listing_id
       ) p
       ON vl.id = p.vehicle_listing_id AND p.cnt = 3
SET vl.active = 0
WHERE vl.create_date >= '2015-05-02'
  AND vl.user_profile_id IS NOT NULL
  AND vl.active = 1;
```

(Assuming that `user_profile_id` is in `vehicle_listing`.)
You can also use `exists`:

```
UPDATE vehicle_listing AS t0
SET t0.active = 0
WHERE t0.`create_date` >= '2015-05-02'
  AND t0.user_profile_id IS NOT NULL
  AND t0.active = 1
  AND EXISTS (
        SELECT 1
        FROM photo
        WHERE vehicle_listing_id = t0.id
        GROUP BY vehicle_listing_id
        HAVING COUNT(DISTINCT id) = 3
      )
```

Sample data for `vehicle_listing`:

```
INSERT INTO vehicle_listing (`id`, `title`, `create_date`, `active`, user_profile_id)
VALUES
  (1, 'test', '2015-05-02 00:00:00', 1, 1),
  (2, 'test1', '2015-05-02 00:00:00', 1, 1);
```

Sample data for `photo`:

```
INSERT INTO photo (`id`, `vehicle_listing_id`, `photo_url`)
VALUES
  (1, 1, 'image.jpg'),
  (2, 1, 'image.jpg'),
  (3, 1, 'image.jpg'),
  (4, 2, 'image.jpg'),
  (5, 2, 'image.jpg');
```

Sample output:

```
id  title  create_date            active  user_profile_id
1   test   May, 02 2015 00:00:00  0       1
2   test1  May, 02 2015 00:00:00  1       1
```

[`DEMO`](http://www.sqlfiddle.com/#!9/2c624/1)
SQL UPDATE where LEFT JOIN count equals 3
[ "", "mysql", "sql", "" ]
I have a requirement where I have two tables, `tbl_bb` and `tbl_rr`, with data like this:

**tbl\_bb**:

```
isin     agency   rating   date
----------------------------------
142356   MDA      A        2010-01-20
142356   MDA      AA       2012-05-14
142356   MDA      BB       2013-07-04
```

**tbl\_rr**:

```
isin     agency   rr_rating   rr_date
------------------------------------
142356   MDA      A           2010-01-15
142356   MDA      AA+         2009-05-30
142356   MDA      BBB         2013-07-04
```

Now, the requirement is: for each record in tbl\_bb, look through the rows of tbl\_rr that match on isin and agency, take the date from tbl\_bb, check **every date** in `tbl_rr`, and pick the date closest to the tbl\_bb date, with the additional condition `rr_date (tbl_rr) <= date (tbl_bb)`.

**Output:**

```
isin     agency   rating   date         rr_rating   rr_date
-------------------------------------------------------
142356   MDA      A        2010-01-20   A           2010-01-15
142356   MDA      AA       2012-05-14   AA+         2010-01-15
142356   MDA      BB       2013-07-04   BBB         2013-07-04
```

Thanks!!
I would do this using [`APPLY`](https://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) to get the closest single record from `tbl_rr`:

```
SELECT bb.isin, bb.agency, bb.rating, bb.date, rr.rr_rating, rr.rr_date
FROM tbl_bb AS bb
OUTER APPLY (
    SELECT TOP 1 rr.rr_rating, rr.rr_date
    FROM tbl_rr AS rr
    WHERE rr.isin = bb.isin
      AND rr.agency = bb.agency
      AND rr.rr_date <= bb.date
    ORDER BY rr.rr_date DESC
) AS rr;
```

---

**FULL WORKING EXAMPLE**

```
-- SAMPLE DATA
WITH tbl_bb AS (
    SELECT isin, agency, rating, date
    FROM (VALUES
        (142356,'MDA','A','2010-01-20'),
        (142356,'MDA','AA','2012-05-14'),
        (142356,'MDA','BB','2013-07-04')
    ) AS bb(isin, agency, rating, date)
), tbl_rr AS (
    SELECT isin, agency, rr_rating, rr_date
    FROM (VALUES
        (142356,'MDA','A','2010-01-15'),
        (142356,'MDA','AA+','2012-05-30'),
        (142356,'MDA','BBB','2013-07-04')
    ) AS rr (isin, agency, rr_rating, rr_date)
)
-- SAMPLE DATA END
SELECT bb.isin, bb.agency, bb.rating, bb.date, rr.rr_rating, rr.rr_date
FROM tbl_bb AS bb
OUTER APPLY (
    SELECT TOP 1 rr.rr_rating, rr.rr_date
    FROM tbl_rr AS rr
    WHERE rr.isin = bb.isin
      AND rr.agency = bb.agency
      AND rr.rr_date <= bb.date
    ORDER BY rr.rr_date DESC
) AS rr;
```
What I would do is join these tables as described, compute the `DATEDIFF` between the two dates, and then take the lowest `DATEDIFF` value for each `ISIN` and `agency`.
Data comparison between two tables in SQL server
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
I have a table that has a column with data like below:

```
aaaa;1
aaaa;2
aaaa;3
bbbb;1
cccc;1
dddd;1
dddd;2
```

I need to select the data with the highest number after the semicolon (`;`), like this:

```
aaaa;3
bbbb;1
cccc;1
dddd;2
```

Can anyone give me ideas on how to do this?
As simple as this:

```
select
    -- Construct the string from the left part + max(right part)
    LEFT([column], CHARINDEX(';', [column], 0) - 1) + ';' +
    MAX(RIGHT([column], LEN([column]) - CHARINDEX(';', [column], 0)))
from [table]
group by LEFT([column], CHARINDEX(';', [column], 0) - 1) -- The left part before ';'
```

Note that `MAX` here compares strings; if the numbers can have more than one digit, cast the right part to `INT` before taking the maximum.
Use CharIndex to discover the location of the semicolon and then sort using the result. Example code below:

```
declare @table table ( col1 varchar(25) )

insert into @table (col1) values ('aaaa;1')
insert into @table (col1) values ('aaaa;2')
insert into @table (col1) values ('aaaa;3')
insert into @table (col1) values ('bbbb;1')
insert into @table (col1) values ('dddd;1')
insert into @table (col1) values ('dddd;2')

select top 1
    col1,
    charindex(';', col1, 0) as SemiColonLocation,
    substring(col1, 0, charindex(';', col1, 0) + 1) as TextVal,
    substring(col1, charindex(';', col1, 0) + 1, (len(col1) - charindex(';', col1, 0))) as AfterVal
from @table
order by substring(col1, charindex(';', col1, 0) + 1, (len(col1) - charindex(';', col1, 0))) desc
```
Comparing data from the same column in SQL Server
[ "", "sql", "sql-server", "" ]
Is it possible to delete all contents of a schema in Oracle? I found this script:

```
Begin
  for c in (select table_name from user_tables) loop
    execute immediate ('drop table "'||c.table_name||'" cascade constraints');
  end loop;
End;
/
```

But I would like to know if there is anything to drop everything in the schema (indexes, tables, constraints, ...) but not the schema itself (no `drop user ...`). Thanks.
Normally, it is simplest to drop and re-add the user. This is the preferred method if you have system or sysdba access to the database.

If you don't have system-level access and want to scrub your schema, the following SQL will produce a series of drop statements, which can then be executed:

```
select 'drop '||object_type||' '|| object_name||
       DECODE(OBJECT_TYPE, 'TABLE', ' CASCADE CONSTRAINTS', '') || ';'
from user_objects
```

Then, I normally purge the recycle bin to really clean things up. To be honest, I don't see a lot of use for Oracle's recycle bin, and wish I could disable it, but anyway:

```
purge recyclebin;
```

This will produce a list of drop statements. Not all of them will execute: if you drop with cascade, dropping the PK\_\* indices will fail. But in the end, you will have a pretty clean schema. Confirm with:

```
select * from user_objects
```

Also, just to add: the PL/SQL block in your question deletes only tables, not all the other objects.

ps: Copied from some website, was useful to me. Tested and working like a charm.
Found the following script on GitHub which worked out of the box (SQL\*Plus: Release 12.2.0.1.0 Production): <https://gist.github.com/rafaeleyng/33eaef673fc4ee98a6de4f70c8ce3657>

Thanks to the author Rafael Eyng. Just log into the schema whose objects you want to drop.

```
BEGIN
  FOR cur_rec IN (SELECT object_name, object_type
                  FROM user_objects
                  WHERE object_type IN ('TABLE', 'VIEW', 'PACKAGE', 'PROCEDURE',
                                        'FUNCTION', 'SEQUENCE', 'TYPE', 'SYNONYM',
                                        'MATERIALIZED VIEW'))
  LOOP
    BEGIN
      IF cur_rec.object_type = 'TABLE'
      THEN
        EXECUTE IMMEDIATE 'DROP ' || cur_rec.object_type || ' "' || cur_rec.object_name || '" CASCADE CONSTRAINTS';
      ELSE
        EXECUTE IMMEDIATE 'DROP ' || cur_rec.object_type || ' "' || cur_rec.object_name || '"';
      END IF;
    EXCEPTION
      WHEN OTHERS
      THEN
        DBMS_OUTPUT.put_line ('FAILED: DROP ' || cur_rec.object_type || ' "' || cur_rec.object_name || '"');
    END;
  END LOOP;
END;
/
```

There still might be PUBLIC SYNONYMS pointing to the just-dropped tables. The following script deletes these as well:

```
BEGIN
  FOR cur_syn IN (SELECT synonym_name
                  FROM all_synonyms
                  WHERE table_owner = 'MY_USER')
  LOOP
    BEGIN
      EXECUTE IMMEDIATE 'drop public synonym ' || cur_syn.synonym_name;
    EXCEPTION
      WHEN OTHERS
      THEN
        DBMS_OUTPUT.PUT_LINE ('Failed to drop the public synonym ' || cur_syn.synonym_name || '! ' || sqlerrm);
    END;
  END LOOP;
END;
/
```
Delete all contents in a schema in Oracle
[ "", "sql", "oracle", "plsql", "" ]
**Question 1:** regarding performance, which is best: subqueries or JOINs?

**Question 2:** is there any way to measure and compare the running time of a subquery vs. a JOIN?

**Edit**

I am a bit confused, though less than earlier. I found this: <http://www.akadia.com/services/sqlsrv_subqueries.html>. It wraps things up pretty well in combination with the answers below.
There are a lot of opinions about JOINs vs subqueries. Chris London has a great article on this subject:

> So it seems like the verdict is to do subqueries. The reason the subquery in the join is faster than the subquery in the where clause is, I believe, because when it’s in the where it has to run that condition for every row whereas it only has to run it once for the subquery/join. Like I said before different RDBMSs handle things differently but even if your RDBMS doesn’t handle subqueries as well others, to me, they are more readable. So now I recommend subqueries!

Source: <http://www.chrislondon.co/joins-vs-subqueries/>
There is no generic answer. It depends on the platform you are using (Microsoft SQL Server, Oracle, MySQL, etc.) as well as the query. As PM 77-1 said, optimizers these days are pretty good at resolving the relational algebra of a query to the most efficient execution plan. But each server and each query are different. In SQL Server you can look at the Estimated Execution plan to compare different queries and see how the optimizer will choose to execute it. Most platforms have a similar concept, like EXPLAIN. This topic is very broad and requires a lot of experimenting with each platform and each database and query. Getting good at performance tuning is an art worth exploring though.
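To make the "look at the plan" advice concrete, here is a small sketch using SQLite's `EXPLAIN QUERY PLAN` from Python. SQLite merely stands in for whatever RDBMS you use (the plan syntax differs per engine, and the table names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
""")

join_sql = "SELECT o.id FROM orders o JOIN customers c ON c.id = o.customer_id"
subquery_sql = "SELECT o.id FROM orders o WHERE o.customer_id IN (SELECT id FROM customers)"

# EXPLAIN QUERY PLAN returns one row per plan step; comparing the two
# outputs shows how the engine actually executes each formulation.
join_plan = conn.execute("EXPLAIN QUERY PLAN " + join_sql).fetchall()
sub_plan = conn.execute("EXPLAIN QUERY PLAN " + subquery_sql).fetchall()
print("JOIN plan:    ", join_plan)
print("Subquery plan:", sub_plan)
```

For the timing question, the usual approach is to run each variant repeatedly against realistic data volumes and compare wall-clock times, since plans alone don't capture caching effects.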
Performance of subqueries vs. JOINs?
[ "", "sql", "pdo", "" ]
I have a table with the following structure and sample data:

```
ITEM   LOC   STOCK
0001   KS5   10
0001   KS6   30
0002   KS5   10
0002   KS6   20
```

I need a cross-tab query so that I get:

```
ITEM   KS5   KS6
0001   10    30
0002   10    20
```

The LOC values (KS5 and KS6) can vary and new locations can be added. How can I get the desired result?
For dynamically generated results you need a dynamic PL/SQL solution, something like this procedure creating the view `v_list_loc`:

```
create or replace procedure p_list_loc is
  v_sql varchar2(32000) := '';
begin
  for c in (select distinct loc from test order by loc) loop
    v_sql := v_sql || '''' ||c.loc|| ''' '||c.loc||',';
  end loop;
  v_sql := 'create or replace view v_list_loc as '
        ||'select * from (select item, loc, stock from test) pivot (sum(stock) '
        ||'for (loc) in ('||rtrim(v_sql, ',')||'))';
  execute immediate v_sql;
end p_list_loc;
```

In the procedure code, replace `test` with your table name. Compile this procedure, run it, and select the results from the generated view `v_list_loc`:

```
SQL> exec p_list_loc;

PL/SQL procedure successfully completed

SQL> select * from v_list_loc;

ITEM         KS5        KS6
----- ---------- ----------
0001          10         30
0002          10         20
```

Every time new values appear in column `loc`, you need to execute the procedure again before selecting from the view.
Please try this query:

```
SELECT *
FROM (SELECT ITEM, LOC, STOCK FROM TABLE_NAME)
PIVOT (SUM(STOCK) FOR (LOC) IN ('KS5', 'KS6'))
ORDER BY ITEM;
```

Regards.
Oracle SQL Cross Tab Query
[ "", "sql", "oracle", "oracle11g", "report", "crosstab", "" ]
In my database I have a column where the value is separated by a comma (`firstname,lastname`). I am trying to search this entire field in the where clause using a LIKE condition. I have tried this:

```
SELECT fullname
FROM users
WHERE ( upper( fullname ) like upper('abc, xyz'))
```
I used `REGEXP_LIKE` instead of plain `LIKE` and it worked for me.
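For illustration, the pattern-matching idea behind `REGEXP_LIKE` can be sketched outside the database. In Oracle the call might look like `REGEXP_LIKE(fullname, 'abc,[[:space:]]*xyz', 'i')` (the exact pattern is an assumption; the answer doesn't show one). The same logic in Python's `re`, against hypothetical sample names:

```python
import re

# Hypothetical sample data standing in for the users.fullname column.
fullnames = ["ABC, XYZ", "abc,xyz", "abc,  xyz", "John, Doe"]

# Case-insensitive match allowing optional whitespace after the comma,
# mirroring what a REGEXP_LIKE(..., 'i') condition would test in Oracle.
pattern = re.compile(r"abc,\s*xyz", re.IGNORECASE)

matches = [name for name in fullnames if pattern.search(name)]
print(matches)
```

The advantage over `UPPER(...) LIKE UPPER('abc, xyz')` is that a regex tolerates variable spacing around the comma instead of requiring an exact string.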
Are you looking for something like this? It will retrieve field values that contain a comma (','):

```
select * from table_name where fieldName LIKE '%,%';
```

If this is not what you are looking for, please elaborate or explain more about your question.
How to use a LIKE clause for a comma-separated value in Oracle
[ "", "sql", "oracle10g", "sql-like", "" ]
This is my model with associations:

```
class ProjectLineThreshold < ActiveRecord::Base
  belongs_to :project_line
  belongs_to :source, class_name: 'Language'
  belongs_to :target, class_name: 'Language'
end
```

The ProjectLineThreshold table has these columns (:id, :source\_id, :target\_id, :score). I need to add the names of the languages referenced by source\_id and target\_id from the languages table. I came up with this statement:

```
thresholds = self.project_line_thresholds.joins(:source, :target)
  .select('project_line_thresholds.id,
           project_line_thresholds.source_id,
           project_line_thresholds.target_id,
           project_line_thresholds.score,
           languages.name as source_name,
           languages.name as target_name')
```

But I get the same names for target and source. What would be the proper join statement, or am I doing it wrong?
The following query will only hit the db once:

```
self.project_line_thresholds
    .joins(:source, :target)
    .includes(:source, :target)
    .map { |plt| [plt.id, plt.source_id, plt.target_id, plt.score, plt.source.name, plt.target.name] }
```
You don't need the select statement; just get the names through the associations:

```
ProjectLineThreshold.includes(:source, :target).each do |plt|
  puts "Source name: #{plt.source.name}"
  puts "Target name: #{plt.target.name}"
end
```

Note that `includes` just makes sure to preload the associated records; otherwise it would run separate queries to retrieve source and target during each iteration of the loop.
Rails SQL Join two tables, One table has two columns with ids of other table, I need to get names of these ids
[ "", "sql", "ruby-on-rails", "postgresql", "join", "" ]
I'm making a stored procedure where I need to reuse a value that has been computed earlier in the same SELECT. I'm pretty bad at explaining this, so I will use an example:

```
CASE
  WHEN ((select top 1 stuksweergeven from componenten where componentid = componentlink.componentid) = 1)
   and ((select opbrengstperkilo from componenten where componentid = componentlink.componentid) <> 0)
  THEN amount1 * (select opbrengstperkilo from componenten where componentid = componentlink.componentid)
  ELSE amount1
END AS Total,
Amount1 * Total * (SELECT dbo.SelectReceptenLinkGewicht(Componentid, 0)) AS TotalWeight
```

I made a `CASE` whose outcome is aliased as Total. After that, I would like to use Total to calculate TotalWeight. Sorry for my English.
The thing is that all expressions in the `SELECT` list are evaluated in an `all at once` manner, so one expression cannot reference a sibling column alias. That's why you would otherwise need to replicate your code. But you can create a `subquery` for that, or a `cte`, like:

```
with cte as (
    select Amount1,
           ComponentID,
           CASE
             WHEN ((select top 1 stuksweergeven from componenten where componentid = componentlink.componentid) = 1)
              and ((select opbrengstperkilo from componenten where componentid = componentlink.componentid) <> 0)
             THEN amount1 * (select opbrengstperkilo from componenten where componentid = componentlink.componentid)
             ELSE amount1
           END AS Total
    from SomeTable)
select Total,
       Amount1 * Total * (SELECT dbo.SelectReceptenLinkGewicht(Componentid, 0)) AS TotalWeight
from cte
```

Or:

```
select Total,
       Amount1 * Total * (SELECT dbo.SelectReceptenLinkGewicht(Componentid, 0)) AS TotalWeight
from (
    select Amount1,
           ComponentID,
           CASE
             WHEN ((select top 1 stuksweergeven from componenten where componentid = componentlink.componentid) = 1)
              and ((select opbrengstperkilo from componenten where componentid = componentlink.componentid) <> 0)
             THEN amount1 * (select opbrengstperkilo from componenten where componentid = componentlink.componentid)
             ELSE amount1
           END AS Total
    from SomeTable) t
```
You can totally use `CROSS APPLY` to make this work: computing the expression in an `APPLY` gives it a column alias you can reuse later in the query. A very informative article: <http://sqlmag.com/blog/tip-apply-and-reuse-column-aliases>
Use value of 'AS ColumnName' later in query
[ "", "sql", "sql-server", "stored-procedures", "" ]
Why doesn't the following code work in SQL?

```
SELECT *
FROM DATA
WHERE VALUE != NULL;
```
We cannot compare a NULL value with **=**. SQL has a special operator for comparing against NULL: the **IS** operator.

```
SELECT *
FROM DATA
WHERE VALUE IS NOT NULL;
```

NULL is not a value; SQL treats NULL as **unknown/absence of data**, so any comparison against it with `=` or `!=` evaluates to unknown rather than true.
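This is easy to see by running both forms. A minimal demonstration using SQLite via Python (table and column names follow the question; the three-valued-logic behavior is the same across SQL engines):

```python
import sqlite3

# "value != NULL" yields NULL (unknown) for every row, which the WHERE
# clause treats as not-true, so no rows come back at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (value TEXT)")
conn.executemany("INSERT INTO data VALUES (?)", [("a",), ("b",), (None,)])

wrong = conn.execute("SELECT * FROM data WHERE value != NULL").fetchall()
right = conn.execute("SELECT * FROM data WHERE value IS NOT NULL").fetchall()
print(wrong)  # []
print(right)  # [('a',), ('b',)]
```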
The condition you wrote is not in the proper format. If you want to select the non-null values from your table, you can use the following command:

```
select * from table_name where column_name IS NOT NULL
```
Selecting rows which are not NULL in SQL
[ "", "sql", "" ]
I have a problem in SQL which I'll try to explain briefly.

**TABLENAME:** EXAMPLE

```
Customer name | ProductClass
-----------------------------
A             | Accessory
B             | Accessory
B             | Bicycle
C             | Bicycle
```

**My goal:** show only the 2 rows of customer B, i.e. let the query show only the customers who have 2 values for ProductClass.

**If I attempt**

```
Select *
From Example
WHERE ProductClass LIKE 'Accessory'
  AND ProductClass LIKE 'Bicycle'
```

I get no results.

**If I attempt**

```
Select *
From Example
WHERE ProductClass LIKE 'Accessory'
  OR ProductClass LIKE 'Bicycle'
```

I get all 4 rows.
> show the customers who have 2 values for ProductClass

```
SELECT *
FROM Example e
INNER JOIN (
    select [customer name]
    from example
    group by [customer name]
    having count(*) = 2
) c on c.[customer name] = e.[customer name]
```
```
Select [Customer name]
From Example
group by [Customer name]
having count(distinct ProductClass) > 1
```

If you want to get the entire rows, then you could use:

```
SELECT *
FROM Example
WHERE [Customer name] in (
    Select [Customer name]
    From Example
    group by [Customer name]
    having count(distinct ProductClass) > 1
)
```
SQL: WHERE & the AND-statement problems
[ "", "sql", "sql-server", "" ]
I have two tables.

**table\_1:**

```
id,value1
2015,100
```

**table\_2:**

```
id,target
2015,200
```

I want to find the percentage of achievement: `table_1.value1 / table_2.target = ...%`. How do I create the `SQL` scripts in Microsoft Access and Oracle?
You need to join the tables. Assuming you mean per id:

```
SELECT table_1.id, table_1.value1 / table_2.target
FROM table_1
JOIN table_2 ON table_1.id = table_2.id
```
ORACLE - to get the percentage value with 2 decimal places (for example):

```
SELECT table_1.id,
       ROUND(table_1.value1 / table_2.target * 100, 2) percentage
FROM table_1, table_2
WHERE table_1.id = table_2.id
```
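A quick runnable check of the arithmetic, using SQLite via Python (the join and the division are the same idea in Access or Oracle; note the `100.0` literal to force non-integer division, and the column is `value1` as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (id INTEGER, value1 INTEGER);
CREATE TABLE table_2 (id INTEGER, target INTEGER);
INSERT INTO table_1 VALUES (2015, 100);
INSERT INTO table_2 VALUES (2015, 200);
""")

# 100 out of a target of 200 should come out as 50.0 percent.
row = conn.execute("""
SELECT table_1.id,
       ROUND(100.0 * table_1.value1 / table_2.target, 2) AS percentage
FROM table_1
JOIN table_2 ON table_1.id = table_2.id
""").fetchone()
print(row)  # (2015, 50.0)
```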
Calculation between fields in different tables
[ "", "sql", "oracle", "ms-access", "select", "" ]
I'm looking for a SQL statement to count the number of unique characters in a string, e.g.:

```
3333333333 -> returns 1
1113333333 -> returns 2
1112222444 -> returns 3
```

I did some tests with regular expressions and MySQL string functions, but I didn't find a solution.
There is no direct or easy way of doing it. You may need to write a stored function to do the job, looking at all the characters you may expect in the data. Here is an example for just digits, which could be extended to all characters in a stored function:

```
mysql> select * from test ;
+------------+
| val        |
+------------+
| 11111111   |
| 111222222  |
| 1113333222 |
+------------+

select val,
       sum(case when locate('1',val) > 0 then 1 else 0 end)
     + sum(case when locate('2',val) > 0 then 1 else 0 end)
     + sum(case when locate('3',val) > 0 then 1 else 0 end)
     + sum(case when locate('4',val) > 0 then 1 else 0 end) as occurence
from test
group by val

+------------+-----------+
| val        | occurence |
+------------+-----------+
| 11111111   |         1 |
| 111222222  |         2 |
| 1113333222 |         3 |
+------------+-----------+
```

Or, if you have enough time, create a lookup table with all the characters you can think of, and make the query in 2 lines:

```
mysql> select * from test ;
+------------+
| val        |
+------------+
| 11111111   |
| 111222222  |
| 1113333222 |
+------------+
3 rows in set (0.00 sec)

mysql> select * from look_up ;
+------+------+
| id   | val  |
+------+------+
|    1 | 1    |
|    2 | 2    |
|    3 | 3    |
|    4 | 4    |
+------+------+
4 rows in set (0.00 sec)

select t1.val,
       sum(case when locate(t2.val, t1.val) > 0 then 1 else 0 end) as occ
from test t1, (select * from look_up) t2
group by t1.val ;

+------------+------+
| val        | occ  |
+------------+------+
| 11111111   |    1 |
| 111222222  |    2 |
| 1113333222 |    3 |
+------------+------+
```
This is for fun, right? SQL is all about processing sets of rows, so if we can convert a 'word' into a set of characters as rows, then we can use the 'group' functions to do useful stuff. Using a relational database engine to do simple character manipulation feels wrong. Still, is it possible to answer your question with just SQL? Yes it is...

Now, I always have a table with one integer column holding the ascending sequence 1 .. 500 in about 500 rows. It is called 'integerseries'. It is a really small table that is used a lot, so it gets cached in memory. It is designed to replace the `select 1 ... union ...` text in queries. It is useful for generating sequential rows (a table) of anything you can calculate from an integer, by using it in a `cross join` (also any `inner join`). I use it for generating the days of a year, parsing comma-delimited strings, etc.

Now, the SQL *`mid`* function can be used to return the character at a given position. By using the 'integerseries' table I can 'easily' convert a 'word' into a characters table with one row per character. Then use the 'group' functions...

```
SET @word = 'Hello World';

SELECT charAtIdx,
       COUNT(charAtIdx)
FROM (SELECT charIdx.id,
             MID(@word, charIdx.id, 1) AS charAtIdx
      FROM integerseries AS charIdx
      WHERE charIdx.id <= LENGTH(@word)
      ORDER BY charIdx.id ASC
     ) wordLetters
GROUP BY wordLetters.charAtIdx
ORDER BY charAtIdx ASC
```

Output:

```
charAtIdx  count(charAtIdx)
---------  ----------------
           1
d          1
e          1
H          1
l          3
o          2
r          1
W          1
```

Note: the number of rows in the output is the number of different characters in the string. So, if the number of output rows is counted, then the number of 'different letters' is known. This observation is used in the final query.

*The final query:*

The interesting point here is to move the 'integerseries' 'cross join' restrictions (1 .. length(word)) into the actual `join` rather than doing it in the `where` clause. This provides the optimizer with clues as to how to restrict the data produced when doing the `join`.

```
SELECT wordLetterCounts.wordId,
       wordLetterCounts.word,
       COUNT(wordLetterCounts.wordId) AS letterCount
FROM (SELECT words.id AS wordId,
             words.word AS word,
             iseq.id AS charPos,
             MID(words.word, iseq.id, 1) AS charAtPos,
             COUNT(MID(words.word, iseq.id, 1)) AS charAtPosCount
      FROM words
      JOIN integerseries AS iseq
        ON iseq.id BETWEEN 1 AND words.wordlen
      GROUP BY words.id, MID(words.word, iseq.id, 1)
     ) AS wordLetterCounts
GROUP BY wordLetterCounts.wordId
```

Output:

```
wordId  word                  letterCount
------  --------------------  -----------
1       3333333333            1
2       1113333333            2
3       1112222444            3
4       Hello World           8
5       funny - not so much?  13
```

Word table and data:

```
CREATE TABLE `words` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `word` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL,
  `wordlen` int(11) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

/* Data for the table `words` */
insert into `words`(`id`,`word`,`wordlen`) values (1,'3333333333',10);
insert into `words`(`id`,`word`,`wordlen`) values (2,'1113333333',10);
insert into `words`(`id`,`word`,`wordlen`) values (3,'1112222444',10);
insert into `words`(`id`,`word`,`wordlen`) values (4,'Hello World',11);
insert into `words`(`id`,`word`,`wordlen`) values (5,'funny - not so much?',20);
```

Integerseries table (range 1 .. 30 for this example):

```
CREATE TABLE `integerseries` (
  `id` int(11) unsigned NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=500 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
```
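The same "explode the string into one row per character" idea can also be sketched with a recursive CTE instead of a physical integerseries table. Here is a runnable version using SQLite via Python (MySQL 8+ supports `WITH RECURSIVE` too, though the string functions differ slightly; the sample words follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT)")
conn.executemany("INSERT INTO words VALUES (?)",
                 [("3333333333",), ("1113333333",), ("1112222444",)])

# The recursive CTE generates the index sequence 1..max(length),
# standing in for the pre-built integer-series table.
rows = conn.execute("""
WITH RECURSIVE idx(i) AS (
    SELECT 1
    UNION ALL
    SELECT i + 1 FROM idx
    WHERE i < (SELECT MAX(LENGTH(word)) FROM words)
)
SELECT word, COUNT(DISTINCT SUBSTR(word, i, 1)) AS letter_count
FROM words
JOIN idx ON i <= LENGTH(word)
GROUP BY word
""").fetchall()
print(sorted(rows))
```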
Count number of unique characters in a string
[ "", "mysql", "sql", "database", "" ]
I am casting a `DateTime` field to time by using the `CAST` syntax:

```
select CAST([time] as time) as [CSTTime]
```

DateTime: `2015-03-19 00:00:00.000`

Present output: `03:05:36.0000000`

I need only `HH:MM:SS`, not the milliseconds or trailing zeros. How do I cast it to an exact `HH:MM:SS` format?
`Time` is not stored with its *display format* in SQL Server. Therefore, from the user perspective, you can say that it **has no format.**

Of course, that's not completely accurate, since it does have a *storage format*, but as an average user you can't really use it. This is true for all date and time data types: `Date`, `DateTimeOffset`, `DateTime2`, `SmallDateTime`, `DateTime` and `Time`.

If you need a format then you don't need to cast to `time` but to a `char`. Use [`Convert`](https://msdn.microsoft.com/en-us/library/ms187928.aspx) to get the `char` you need:

```
SELECT CONVERT(char(10), [time], 108) as CSTTime
```

Here is some background data if you're interested:

In [this article](http://sqlmag.com/sql-server/solving-datetime-mystery) published in 2000, the writer explains in depth how SQL Server treats dates and times. I doubt anything significant changed between 2000 and 2015 in the way SQL Server stores `date`, `time` and `datetime` values internally. Here are the relevant quotes, if you don't want to read all of it:

> **So how does SQL Server internally store the dates?** It uses 8 bytes to store a datetime value—the first 4 for the date and the second 4 for the time. SQL Server can interpret both sets of 4 bytes as integers.
> ........
> SQL Server stores the second integer for the time as the number of clock ticks after midnight. A second contains 300 ticks, so a tick equals 3.3 milliseconds (ms).

Since `time` is actually stored as a 4-byte integer, it really doesn't have a format as an integral part of the data type.

You might also want to check out [this article](http://ariely.info/Blog/tabid/83/EntryId/162/Examine-how-DateTime2-type-stored-in-the-data-file.aspx) for a more detailed explanation with code samples.
You can achieve it with `CAST`; just use the `TIME(0)` datatype:

```
SELECT CAST('2015-03-19 01:05:06.289' AS TIME(0))
```

**OUTPUT:** `01:05:06`
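Outside SQL Server, the same trimming is done with whatever time-formatting function the engine offers. As a runnable illustration, SQLite's `strftime` (called from Python here) formats a datetime down to `HH:MM:SS`, the counterpart of the `CONVERT ... 108` and `TIME(0)` approaches above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# strftime('%H:%M:%S', ...) drops the fractional seconds entirely.
(t,) = conn.execute(
    "SELECT strftime('%H:%M:%S', '2015-03-19 01:05:06.289')"
).fetchone()
print(t)  # 01:05:06
```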
How to cast the DateTime to Time
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I just switched from sqlite3 to postgresql-9.4. Before the switch, searching with my "typeahead" method returned results case-insensitively; now the method is suddenly case-sensitive. How can I make the method case-insensitive with PostgreSQL?

```
def typeahead
  q = params[:query]
  render json: Subject.where('name like ? OR second_name like ? OR keywords like ?', "%#{q}%", "%#{q}%", "%#{q}%")
end
```
I think you can use ILIKE instead of LIKE:

```
Subject.where('name ilike ? OR second_name ilike ? OR keywords ilike ?', "%#{q}%", "%#{q}%", "%#{q}%")
```

or, to simplify a little:

```
Subject.where('name ilike :name OR second_name ilike :name OR keywords ilike :name', name: "%#{q}%")
```
You can also use "ilike" keyword instead "like".
How to make postgresql search method case-insensitive?
[ "", "sql", "ruby-on-rails", "ruby", "postgresql", "" ]
I'm trying to get my head around this SQL question. A database for a hotel chain contains the following tables:

```
Hotel(HotelNo, HotelName, City)
Room(RoomNo, HotelNo, Type, Price)
Booking(HotelNo, GuestNo, DateFrom, DateTo, RoomNo)
Guest(GuestNo, GuestName, GuestAddress)
```

I want to list the details of all rooms at the Grosvenor Hotel, including the name of the guest staying in the room if the room is occupied.

I'm okay with joining 2 tables in SQL, but I don't know how to go about joining 4 tables. My attempt would probably be:

```
SELECT Room.*, Guest.GuestName
FROM Room
INNER JOIN Hotel, Booking, Guest
ON Hotel.HotelName = "Grosvenor Hotel",
   Hotel.HotelNo = Room.HotelNo,
   Booking.GuestNo = Guest.GuestNo;
```

I think that's completely wrong, but hopefully someone knows what I should be doing. Thanks in advance.
The correct syntax is:

```
SELECT Room.*, Guest.GuestName
FROM Room
INNER JOIN Hotel ON Hotel.HotelNo = Room.HotelNo
INNER JOIN Booking ON Booking.HotelNo = Hotel.HotelNo
INNER JOIN Guest ON Booking.GuestNo = Guest.GuestNo
WHERE Hotel.HotelName = 'Grosvenor Hotel'
```
Try this:

```
select b.roomno, g.guestname
from hotel h
join room r on h.hotelno = r.hotelno
join booking b on b.hotelno = r.hotelno and b.roomno = r.roomno
join guest g on g.guestno = b.guestno
where h.hotelname = 'Grosvenor Hotel';
```
Joining many tables in SQL
[ "", "mysql", "sql", "sql-server", "inner-join", "" ]
My table data looks like:

```
Sno   Componet   Subcomponent    IRNo
1     1          C1 to C100      001
2     1          C101 to C200    002
3     1          C201 to C300    003
4     1          C301,C400       004
5     1          C401,C500       005
```

If the user enters C50 into the textbox, the query should return the data from the first row, because C50 falls between C1 and C100. Likewise, if the user enters C340, it should return the data from Sno 4, because C340 falls between C301 and C400.

How can I write the query for this in SQL Server?
This is a terrible design and should be replaced with a better one if possible. If re-designing is not possible, then [this answer](https://stackoverflow.com/a/30013830/3094533) by Eduard Uta is a good one, but it still has one drawback compared to my suggested solution: it assumes that the Subcomponent will always contain exactly one letter and a number, and that the range specified in the table has the same letter on both sides. A range like `AB1 to AC100` might be possible (at least I don't think there's a way to prevent it using pure T-SQL). This is the only reason I present my solution as well. Eduard already got my vote up.

```
DECLARE @Var varchar(50) = 'C50' -- also try 'AB150' and 'C332'

;WITH CTE AS (
    SELECT Sno, Comp, SubComp,
           LEFT(FromValue, PATINDEX('%[0-9]%', FromValue)-1) As FromLetter,
           CAST(RIGHT(FromValue, LEN(FromValue) - (PATINDEX('%[0-9]%', FromValue)-1)) as int) As FromNumber,
           LEFT(ToValue, PATINDEX('%[0-9]%', ToValue)-1) As ToLetter,
           CAST(RIGHT(ToValue, LEN(ToValue) - (PATINDEX('%[0-9]%', ToValue)-1)) as int) As ToNumber
    FROM (
        SELECT Sno, Comp, SubComp,
               LEFT(SubComp,
                    CASE WHEN CHARINDEX(' to ', SubComp) > 0 THEN CHARINDEX(' to ', SubComp)-1
                         WHEN CHARINDEX(',', SubComp) > 0 THEN CHARINDEX(',', SubComp)-1
                    END) FromValue,
               RIGHT(SubComp,
                     CASE WHEN CHARINDEX(' to ', SubComp) > 0 THEN LEN(SubComp) - (CHARINDEX(' to ', SubComp) + 3)
                          WHEN CHARINDEX(',', SubComp) > 0 THEN CHARINDEX(',', SubComp)-1
                     END) ToValue
        FROM T
    ) InnerQuery
)
SELECT Sno, Comp, SubComp
FROM CTE
WHERE LEFT(@Var, PATINDEX('%[0-9]%', @Var)-1) BETWEEN FromLetter AND ToLetter
  AND CAST(RIGHT(@Var, LEN(@Var) - (PATINDEX('%[0-9]%', @Var)-1)) as int) BETWEEN FromNumber And ToNumber
```

[sqlfiddle here](http://sqlfiddle.com/#!6/df1a9/10)
No comments about the design. One solution for your question is using a CTE to sanitize the range boundaries and get them into a format that you can work with, like so:

```
DECLARE @inputVal varchar(100) = 'C340'

-- sanitize input:
SELECT @inputVal = RIGHT(@inputVal, (LEN(@inputVal)-1))

;WITH cte (Sno, SubcomponentStart, SubcomponentEnd, IRNo) AS (
    SELECT Sno,
           CASE WHEN Subcomponent LIKE '%to%'
                THEN REPLACE(SUBSTRING(Subcomponent, 2, CHARINDEX('to', Subcomponent)), 'to','')
                ELSE REPLACE(SUBSTRING(Subcomponent, 2, CHARINDEX(',', Subcomponent)), ',','')
           END as SubcomponentStart,
           CASE WHEN Subcomponent LIKE '%to%'
                THEN REPLACE(SUBSTRING(Subcomponent, CHARINDEX('to', Subcomponent)+4, LEN(Subcomponent)), 'to', '')
                ELSE REPLACE(SUBSTRING(Subcomponent, CHARINDEX(',', Subcomponent)+3, LEN(Subcomponent)), ',', '')
           END as SubcomponentEnd,
           IRNo
    from test
)
SELECT t.*
FROM test t
INNER JOIN cte c ON t.Sno = c.Sno
WHERE CAST(@inputVal as int) BETWEEN CAST(c.SubcomponentStart as INT) AND CAST(c.SubcomponentEnd as INT)
```

SQL Fiddle / tested here: <http://sqlfiddle.com/#!6/1b9f0/19>
How to split two words and number between two number?
[ "", "sql", "sql-server", "" ]
I want to get all dates between `StartDate` and `EndDate`. I used BETWEEN to get the desired result, but it is skipping the start date.

```
Declare @StartDate Datetime ='2014-04-01 11:13:37'
       ,@EndDate datetime ='2014-04-04 11:13:37'
```

Query:-

```
Select * from table
where date between @Startdate and @EndDate
```

Current Result:-

```
2014-04-02 11:13:37
2014-04-03 11:13:37
2014-04-04 11:13:37
```

Expected result:-

```
2014-04-01 11:13:37
2014-04-02 11:13:37
2014-04-03 11:13:37
2014-04-04 11:13:37
```
You can try this:

```
Declare @StartDate Date ='2014-04-01'
       ,@EndDate date ='2014-04-04'
```

or:

```
Declare @StartDate Datetime ='2014-04-01 00:00:00'
       ,@EndDate datetime ='2014-04-04 23:59:59'
```

Query:-

```
Select * from table
where date between @Startdate and @EndDate
```
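A common alternative, sketched below, sidesteps the end-of-day problem entirely by using a half-open range instead of `BETWEEN`. The table and column names follow the question; treat this as an illustration rather than a drop-in:

```
Declare @StartDate Datetime ='2014-04-01 11:13:37'
       ,@EndDate datetime ='2014-04-04 11:13:37'

Select * from table
where date >= CAST(@StartDate AS date)
  and date <  DATEADD(day, 1, CAST(@EndDate AS date))
```

This captures every time-of-day on both boundary dates without relying on a `23:59:59` literal.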
You could create a procedure like this: ``` CREATE PROCEDURE getAllDaysBetweenTwoDate ( @StartDate DATETIME, @EndDate DATETIME ) AS BEGIN DECLARE @TOTALCount INT SET @StartDate = DATEADD(DAY,-1,@StartDate) Select @TOTALCount= DATEDIFF(DD,@StartDate,@EndDate); WITH d AS ( SELECT top (@TOTALCount) AllDays = DATEADD(DAY, ROW_NUMBER() OVER (ORDER BY object_id), REPLACE(@StartDate,'-','')) FROM sys.all_objects ) SELECT AllDays From d RETURN END GO ``` Courtesy: [Find All the Days Between Two Dates](http://www.codeproject.com/Tips/639460/Find-All-the-Days-Between-Two-Dates)
How to get all dates between two dates in sql server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have a query like this: ``` DECLARE @Sortorder VARCHAR(5) = 'asc', @ColumnNumber INT = 9 SELECT SUBSTRING(csu.UserName, CHARINDEX(CHAR(92), csu.UserName) + 1, LEN(csu.UserName)) AS UserName, w.WorkItemId FROM [tasks].[WorkItems] w LEFT JOIN operations.CustomerServiceUser csu ON csu.UserId = w.AssignedToUserId WHERE w.[ShowInTaskList] = 1 AND UserName IS NOT NULL ORDER BY CASE WHEN @ColumnNumber = 9 AND @SortOrder = 'asc' THEN UserName END ASC, CASE WHEN @ColumnNumber = 9 AND @SortOrder = 'desc' THEN UserName END DESC ``` When I do that the data is not sorted by UserName in any order asc or desc, but when I do this: ``` SELECT SUBSTRING(csu.UserName, CHARINDEX(CHAR(92), csu.UserName) + 1, LEN(csu.UserName)) AS UserName, w.WorkItemId FROM [tasks].[WorkItems] w LEFT JOIN operations.CustomerServiceUser csu ON csu.UserId = w.AssignedToUserId WHERE w.[ShowInTaskList] = 1 AND UserName IS NOT NULL ORDER BY UserName ``` What am I doing wrong in the dynamic order by? The values are the same in the declared variables and in the case. To be frank I don't know what keywords I should pass to google ;) Many thanks for the answer.
You have the column `UserName` and an alias `UserName`. It is the **column** value that is being used to sort the results, not the alias. While it is perfectly acceptable to use an alias name inside the `ORDER BY` clause, it cannot be used inside `CASE WHEN` statement. The solution is to use a sub-query (or CTE): ``` DECLARE @Sortorder VARCHAR(5) = 'asc', @ColumnNumber INT = 9 SELECT * FROM ( SELECT SUBSTRING(csu.UserName, /* removed for readability */) AS UserNameCopy, w.WorkItemId FROM [tasks].[WorkItems] w LEFT JOIN operations.CustomerServiceUser csu ON csu.UserId = w.AssignedToUserId WHERE w.[ShowInTaskList] = 1 AND UserName IS NOT NULL ) AS SubQuery ORDER BY CASE WHEN @ColumnNumber = 9 AND @SortOrder = 'asc' THEN SubQuery.UserNameCopy END ASC, CASE WHEN @ColumnNumber = 9 AND @SortOrder = 'desc' THEN SubQuery.UserNameCopy END DESC ```
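The same fix can also be written with a CTE instead of a derived table. This is only a sketch of that variant, reusing the columns from the question:

```
DECLARE @Sortorder VARCHAR(5) = 'asc',
        @ColumnNumber INT = 9

;WITH SortedSource AS
(
    SELECT SUBSTRING(csu.UserName, CHARINDEX(CHAR(92), csu.UserName) + 1,
                     LEN(csu.UserName)) AS UserNameCopy,
           w.WorkItemId
    FROM [tasks].[WorkItems] w
    LEFT JOIN operations.CustomerServiceUser csu ON csu.UserId = w.AssignedToUserId
    WHERE w.[ShowInTaskList] = 1
          AND csu.UserName IS NOT NULL
)
SELECT *
FROM SortedSource
ORDER BY CASE WHEN @ColumnNumber = 9 AND @SortOrder = 'asc'  THEN UserNameCopy END ASC,
         CASE WHEN @ColumnNumber = 9 AND @SortOrder = 'desc' THEN UserNameCopy END DESC;
```

Either way, the point is the same: the expression gets a new name in an inner scope, so the `CASE` in the outer `ORDER BY` can reference it.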
You can use CROSS APPLY to make your code look more friendly. It does not affect performance:

```
DECLARE @Sortorder VARCHAR(5) = 'asc'
      , @ColumnNumber INT = 9;

SELECT u.UserName
     , w.WorkItemId
FROM [tasks].[WorkItems] w
LEFT JOIN operations.CustomerServiceUser csu ON csu.UserId = w.AssignedToUserId
CROSS APPLY (SELECT SUBSTRING(csu.UserName, CHARINDEX(CHAR(92), csu.UserName) + 1,
                              LEN(csu.UserName)) AS UserName
            ) u
WHERE w.[ShowInTaskList] = 1
      AND u.UserName IS NOT NULL
ORDER BY CASE WHEN @ColumnNumber = 9
                   AND @Sortorder = 'asc' THEN u.UserName
         END ASC
       , CASE WHEN @ColumnNumber = 9
                   AND @Sortorder = 'desc' THEN u.UserName
         END DESC;
```

And example with the data:

```
CREATE TABLE #a
    (
      aColumn INT,
      b INT
    );

INSERT  INTO #a
VALUES  ( 1, 1 ),
        ( 1, 2 ),
        ( 2, 1 ),
        ( 3, 1 ),
        ( 1, 3 ),
        ( 4, 4 );

DECLARE @Sortorder VARCHAR(5) = 'asc'
      , @ColumnNumber INT = 9;

SELECT  aColumn
      , b aColumn
FROM    #a tbl
        CROSS APPLY ( SELECT    CAST(( tbl.aColumn + 1 - 2 ) * 5 AS VARCHAR(100)) r
                      /*or any other kind of operation, such as substring etc*/
                    ) shortcut
ORDER BY CASE WHEN @ColumnNumber = 9
                   AND @Sortorder = 'asc' THEN shortcut.r
         END ASC
       , CASE WHEN @ColumnNumber = 9
                   AND @Sortorder = 'desc' THEN shortcut.r
         END DESC;

DROP TABLE #a;
```
ORDER BY clause is not sorting by alias in SQL Server
[ "", "sql", "sql-server", "t-sql", "sorting", "sql-order-by", "" ]
I want to check, inside an IF statement, whether the id I am about to insert into tableA exists in tableB. Can I do something like this?

```
if new.id exists (select id from tableB where stat = '0' )
then
some code here
end if;
```

When I try this I get an error message, any thoughts?
This is usually done with a trigger. A trigger function does the trick: ``` CREATE FUNCTION "trf_insert_tableA"() RETURNS trigger AS $$ BEGIN PERFORM * FROM "tableB" WHERE id = NEW.id AND stat = '0'; IF FOUND THEN -- Any additional code to go here, optional RETURN NEW; ELSE RETURN NULL; END IF; END; $$ LANGUAGE plpgsql; CREATE TRIGGER "tr_insert_tableA" BEFORE INSERT ON "tableA" FOR EACH ROW EXECUTE PROCEDURE "trf_insert_tableA"(); ``` A few notes: * Identifiers in PostgreSQL are case-insensitive. PostgreSQL by default makes them lower-case. To maintain the case, use double-quotes. To make your life easy, use lower-case only. * A [trigger](http://www.postgresql.org/docs/current/static/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER) needs a trigger function, this is always a two-step affair. * In an `INSERT` trigger, you can use the `NEW` implicit parameter to access the column values that are attempted to be inserted. In the trigger function you can modify these values and those values are then inserted. This only works in a `BEFORE INSERT` trigger, obviously; `AFTER INSERT` triggers are used for side effects such as logging, auditing or cascading inserts to other tables. * [The `PERFORM` statement](http://www.postgresql.org/docs/current/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-NORESULT) is a special form of a `SELECT` statement to test for the presence of data; it does not return any data, but it does set the `FOUND` implicit parameter that you can use in a conditional statement. * Depending on your logic, you may want the insert to succeed or to fail. `RETURN NEW` to make the insert succeed, `RETURN NULL` to make it fail. After you defined the trigger, you can simply issue an `INSERT` statement: the trigger function is invoked automatically.
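Once the trigger is installed, a plain `INSERT` exercises it. The session below is hypothetical and assumes `tableB` contains id 1 with `stat = '0'` but no row for id 2:

```
INSERT INTO "tableA" (id) VALUES (1);  -- kept: matching row found in tableB
INSERT INTO "tableA" (id) VALUES (2);  -- silently skipped: the trigger returned NULL
-- No error is raised for id 2; in psql the second statement reports "INSERT 0 0".
```

If you would rather fail loudly than skip the row, replace `RETURN NULL` in the trigger function with a `RAISE EXCEPTION` statement.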
Why not do it like this? I'm not very knowledgeable about PostgreSQL but this would work in T-SQL. ``` INSERT INTO TargetTable(ID) SELECT ID FROM TableB WHERE ID NOT IN (SELECT DISTINCT ID FROM TargetTable) ```
sql query inside if stage with exists
[ "", "sql", "postgresql", "" ]
I'm a new student to database and I've come across a question in my book that I'm having a hard time solving. Pretending I'm the DBA of this bookstore... I need to determine which books are generating less than 55% profit and how many copies of the books have been sold. Here is what I have tried doing: ``` SELECT title, COUNT(*) FROM books WHERE (retail-cost) >= .55 * cost group by title; ``` But when I run that, I just get this: ``` DATABASE IMPLEMENTATION 1 PAINLESS CHILD-REARING 1 HOW TO GET FASTER PIZZA 1 SHORTEST POEMS 1 BIG BEAR AND LITTLE DOVE 1 BODYBUILD IN 10 MINUTES A DAY 1 HOLY GRAIL OF ORACLE 1 HOW TO MANAGE THE MANAGER 1 COOKING WITH MUSHROOMS 1 BUILDING A CAR WITH TOOTHPICKS 1 ``` Clearly, it's not right. How do I do this? Here is my table structure: ``` CREATE TABLE Books (ISBN VARCHAR2(10), Title VARCHAR2(30), PubDate DATE, PubID NUMBER (2), Cost NUMBER (5,2), Retail NUMBER (5,2), Discount NUMBER (4,2), Category VARCHAR2(12), CONSTRAINT books_isbn_pk PRIMARY KEY(isbn), CONSTRAINT books_pubid_fk FOREIGN KEY (pubid) REFERENCES publisher (pubid)); CREATE TABLE ORDERITEMS ( Order# NUMBER(4), Item# NUMBER(2), ISBN VARCHAR2(10), Quantity NUMBER(3) NOT NULL, PaidEach NUMBER(5,2) NOT NULL, CONSTRAINT orderitems_pk PRIMARY KEY (order#, item#), CONSTRAINT orderitems_order#_fk FOREIGN KEY (order#) REFERENCES orders (order#) , CONSTRAINT orderitems_isbn_fk FOREIGN KEY (isbn) REFERENCES books (isbn) , CONSTRAINT oderitems_quantity_ck CHECK (quantity > 0) ); ```
You need to make a join to see how many of your books have been sold:

```
SELECT b.ISBN, b.Title, COUNT(oi.ISBN)
FROM books AS b
INNER JOIN ORDERITEMS AS oi ON oi.ISBN = b.ISBN
WHERE b.Retail >= .55 * b.Cost
GROUP BY b.ISBN, b.Title;
```
In your posted query, `... count(*)... group by title` returns the number of rows for each title in the Books table - that's why you always get 1. What you want is to sum the quantity in ORDERITEMS, matching each row to its title through the ISBN in the Books table. I would do something like this:

```
select Books.title, sum(ORDERITEMS.Quantity) as numberOfSales
from ORDERITEMS 
left join Books on ORDERITEMS.isbn = Books.isbn
   and Books.cost <= (retail-cost/.55)
group by title
```
How to write this select statement that counts if profit is < 55% of the cost of the item?
[ "", "sql", "database", "" ]
I have a table of `users` that may or may not have `photos` uploaded. I want to delete any users that *don't* have photos. Right now I can get the users with no photos like this: ``` SELECT a.id FROM users a LEFT JOIN images b ON a.id = b.user_id WHERE b.user_id is null ``` However, I'm a little stumped on how to actually make the delete happen. This doesn't work: ``` DELETE FROM users c WHERE c.id IN ( SELECT a.id FROM users a LEFT JOIN images b ON a.id = b.user_id WHERE b.user_id is null ) ``` And neither does this: ``` DELETE FROM users a LEFT JOIN images b ON a.id = b.user_id WHERE b.user_id is null ``` What's the right way to do a delete on the results of a JOIN?
I am guessing your 2nd example is failing because it is referencing the table in the subquery, but it shouldn't actually need to:

```
DELETE FROM users c
WHERE c.id NOT IN (
  SELECT DISTINCT user_id FROM images);
```

Otherwise, I would've done this:

```
DELETE FROM users
USING users AS u
LEFT JOIN images AS i ON u.id = i.user_id
WHERE i.user_id IS NULL;
```
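One caveat worth flagging: if `images.user_id` is nullable and any row actually holds a NULL, `NOT IN` matches nothing and the `DELETE` silently does no work. A defensive variant (a sketch, assuming the same schema) filters the NULLs out of the subquery:

```
DELETE FROM users
WHERE id NOT IN (
    SELECT user_id FROM images WHERE user_id IS NOT NULL);
```

The `LEFT JOIN ... IS NULL` form does not suffer from this, which is one reason many people prefer it.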
```
DELETE FROM users 
WHERE NOT EXISTS (
  SELECT 1
  FROM images i
  WHERE users.id = i.user_id)
```
delete results of a select with a JOIN
[ "", "mysql", "sql", "join", "" ]
How can I create a select query to select every month and year, column per column, so that the result looks like this yearly calendar: <http://www.calenweb.com/png/en/2015/2015-yearly-calendar.png>

Of course, the format, colors... don't matter.

Any ideas? Is the problem clear?
You can use PIVOT to arrange the data and a tally table to calculate it:

```
DECLARE @year CHAR(4) = 2015

;WITH N1 (N) AS (SELECT 1 FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1)) n (N)),
N2 (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N)-1 FROM N1 AS N1 CROSS JOIN N1 AS N2 CROSS JOIN N1 N3),
CTE as (
SELECT month(dateadd(d, n, @year)) mon, 
       day(dateadd(d, n, @year)) monthday, 
       convert(char(2), dateadd(d, n, @year), 5)+ ' ' + left(datename(weekday, dateadd(d, n, @year)), 1) day
FROM n2
WHERE @year < dateadd(year, 1, @year) - n
)
SELECT [1] JAN, [2] FEB, [3] MAR, [4] APR, [5] MAY, [6] JUN, [7] JUL, [8] AUG, [9] SEP, [10] OCT, [11] NOV, [12] [DEC]
FROM CTE
PIVOT (min([day]) FOR mon in([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])
)AS p
ORDER BY 1
```

[Fiddle](http://sqlfiddle.com/#!6/9eecb7d/39)
```
<?php
$monthNames = Array("January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December");
?>
<?php
if (!isset($_REQUEST["month"])) $_REQUEST["month"] = date("n");
if (!isset($_REQUEST["year"])) $_REQUEST["year"] = date("Y");
?>
<?php
$cMonth = $_REQUEST["month"];
$cYear = $_REQUEST["year"];

$prev_year = $cYear;
$next_year = $cYear;

$prev_month = $cMonth-1;
$next_month = $cMonth+1;

if ($prev_month == 0 ) {
    $prev_month = 12;
    $prev_year = $cYear - 1;
}
if ($next_month == 13 ) {
    $next_month = 1;
    $next_year = $cYear + 1;
}
?>
<table width="200">
<tr align="center">
<td bgcolor="#999999" style="color:#FFFFFF">
<table width="100%" border="0" cellspacing="0" cellpadding="0">
<tr>
<td width="50%" align="left"> <a href="<?php echo $_SERVER["PHP_SELF"] . "?month=". $prev_month . "&year=" . $prev_year; ?>" style="color:#FFFFFF">Previous</a></td>
<td width="50%" align="right"><a href="<?php echo $_SERVER["PHP_SELF"] . "?month=". $next_month . "&year=" . $next_year; ?>" style="color:#FFFFFF">Next</a> </td>
</tr>
</table>
</td>
</tr>
<tr>
<td align="center">
<table width="100%" border="0" cellpadding="2" cellspacing="2">
<tr align="center">
<td colspan="7" bgcolor="#999999" style="color:#FFFFFF"><strong><?php echo $monthNames[$cMonth-1].' '.$cYear; ?></strong></td>
</tr>
<tr>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>S</strong></td>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>M</strong></td>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>T</strong></td>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>W</strong></td>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>T</strong></td>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>F</strong></td>
<td align="center" bgcolor="#999999" style="color:#FFFFFF"><strong>S</strong></td>
</tr>
<?php
$timestamp = mktime(0,0,0,$cMonth,1,$cYear);
$maxday = date("t",$timestamp);
$thismonth = getdate ($timestamp);
$startday = $thismonth['wday'];
for ($i=0; $i<($maxday+$startday); $i++) {
    if(($i % 7) == 0 ) echo "<tr>\n";
    if($i < $startday) echo "<td></td>\n";
    else echo "<td align='center' valign='middle' height='20px'>". ($i - $startday + 1) . "</td>\n";
    if(($i % 7) == 6 ) echo "</tr>\n";
}
?>
</table>
</td>
</tr>
</table>
```
Select year overview like columns
[ "", "sql", "sql-server", "calendar", "" ]
This must be simple, but I think I'm lost. I have a table A: ``` name id Tom 1 Barbara 2 Gregory 3 ``` ...and table B: ``` id nickname preferred 1 Spiderman 0 1 Batman 1 2 Powerpuff 0 3 Donald Duck 0 3 Hulk 1 ``` How do I query the table to get a nickname when it is preferred (1), or any other nickname if preferred is not available. So the result for Tom would be "Batman", while the result for Barbara would be "Powerpuff".
Try the query below. For each user it first finds the highest `preferred` value (1 when a preferred nickname exists, otherwise 0), then joins back to pick the matching nickname, using **LEFT JOIN**s:

```
SELECT A.name, B.nickname 
FROM A 
LEFT JOIN 
(
   SELECT MAX(preferred) AS preferred, id
   FROM B
   GROUP BY id
)AS B1
ON A.id = B1.id
LEFT JOIN B ON B.preferred = B1.preferred AND B.id = B1.id
```
Just an immediate solution: ``` select a.id, b.nickname from a join b on a.id = b.id and b.prefered = 1 union all select a.id, b.nickname from a join b on a.id = b.id and b.prefered = 0 where a.id not in( select a.id from a join b on a.id = b.id and b.prefered = 1 ) ``` Fiddle <http://sqlfiddle.com/#!7/0b7db/1>
In a left join, select row with value A, if not select row with value B
[ "", "sql", "join", "sqlite", "" ]
I'm having some trouble joining the contents of two tables. Here's the current situation:

**Bought**: `Article`, `Bought`, `Year`

**Sold**: `Article`, `Sold`, `Year`

**Desired Result**: `Article`, `Bought`, `Sold`, `Year`

I've tried the following to achieve the desired result:

```
SELECT b.article, b.bought, s.sold, b.year
FROM Bought AS b
LEFT JOIN Sold as s ON s.article = b.article AND s.year = b.year
WHERE b.article = '1'
ORDER BY b.year
```

This only returns the result for 2011 (Where both values are present). Another try using a third table holding all articles returned the same bad result and it has two year columns which is not ideal:

```
SELECT art.article, b.bought, s.sold, b.year, s.year
FROM articles AS art
LEFT OUTER JOIN bought AS b ON art.article = b.article
LEFT OUTER JOIN Sold AS s ON art.article = s.article 
   AND (b.year = s.year OR b.year IS NULL OR s.year IS NULL)
WHERE art.article = '1'
```

I've tried using different kinds of joins with the last SQL statement but none of them seem to work. How can I achieve the desired result?
It sounds like you want a `full outer join`: ``` SELECT coalesce(b.article, s.article) as article, b.bought, s.sold, coalesce(b.year, s.year) as year FROM Bought b FULL OUTER JOIN Sold s ON s.article = b.article AND s.year = b.year WHERE (b.article = '1' OR s.article = '1') ORDER BY year ```
You should use `full outer join` : ``` select coalesce(b.article, s.article) , b.bought , s.sold , coalesce (b.year, s.year) as year from Bought b full join Sold s on s.article = b.article and s.year = b.year where coalesce(b.article, s.article) = '1' order by year ```
SQL join two tables with null values in either table
[ "", "sql", "postgresql", "" ]
```
SELECT [Code], [Due Date Calculation],
    --original string that is causing the error
    --SUBSTRING([Due Date Calculation], 1, LEN([Due Date Calculation]) - 1) as OG,
    --here are the tests that run fine by themselves
    LEN([Due Date Calculation]) - 1 as Test1, 
    SUBSTRING([Due Date Calculation], 1, 2) AS Test2,
    SUBSTRING([Due Date Calculation], 1, LEN([Due Date Calculation])) AS Test3
FROM [TEST]
```

Here is the error I am getting:

> ***Msg 537, Level 16, State 2, Line 1
> Invalid length parameter passed to the LEFT or SUBSTRING function.***

I know it has something to do with the way SQL is rendering the data. The data displays a small upside down 'L' when I query it using SQL, but it simply shows a 'D' in the front end. I don't have a good enough reputation to include the images.

Examples of typical Codes are `30D`, `60D`, `120D`, `365D`, etc. I need to drop the trailing D and display what is left. Thanks for the help.

Here are the results from the SQL query: <https://drive.google.com/file/d/0B1cL-bzbZ4IzU2ctYlRNZnJhZjg/view?usp=sharing>
If you want to cut off the last character, just do it like this. ``` DECLARE @String VARCHAR(100) = '123D' SELECT SUBSTRING(@String,0,LEN(@String)) ``` Or if you have multiple characters at the end, then try this which will grab until the numbers stop. ``` DECLARE @String VARCHAR(100) = '123D' SELECT SUBSTRING(@String,0,PATINDEX('%[^0-9]%',@String)) ``` Both have same results: ``` 123 ```
If I understand your problem correctly, you are looking for this:

```
Select SUBSTRING([Due Date Calculation],0,CHARINDEX('D',[Due Date Calculation])) As [Due Date Calculation] 
FROM Test
```

[**SQLFIDDLE**](http://sqlfiddle.com/#!6/17fbf/2)
SQL Issue using SUBSTRING and LEN
[ "", "sql", "sql-server", "t-sql", "substring", "special-characters", "" ]
I have a column with few different `ID`'s ``` abc_1234 abc_2345 bcd_3456/ cde_4567/ ``` And I want a new column that takes off the `/` if it exists ``` abc_1234 abc_2345 bcd_3456 cde_4567 ``` I know I'll be using a combination of IF/THEN, `LEFT`, and `LEN`, but I don't know the syntax. Help is appreciated! Thanks!
(*In case you are using SQL Server RDBMS*)

You can try the following combination of `right` and `left`:

```
case when right(col, 1) = '/' 
     then left(col, len(col)-1)
     else col
end
```

[**SQLFiddle**](http://sqlfiddle.com/#!6/855d2/1)

(*In case you are using MySQL RDBMS*)

```
trim(trailing '/' from col);
```

[**SQLFiddle**](http://sqlfiddle.com/#!2/469d4e/4)
If you're using SQL Server, try this:

```
SELECT REPLACE(col,'/','')
```

[Replace (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms186862.aspx)
SQL Take off last character if a certain one exists in a string
[ "", "mysql", "sql", "sql-server", "" ]
I'd like to select a particular value from a table while using an information from another database that is set based on a current database's value. So a select case to find the operator code and set the DB path.. then use the same path and collate the result. ``` DECLARE @DB varchar (1000) CASE WHEN @Operator= 1 THEN SET @DB = '{SERVERNAME\ENTITY\DBNAME}' WHEN @Operator= 2 THEN SET @DB = '{SERVERNAME2\ENTITY2\DBNAME2}' WHEN @Operator= 3 THEN SET @DB = '{SERVERNAME3\ENTITY3\DBNAME3}' Select transItem_item collate SQL_Latin1General_CI_AS FROM Group_Transactions INNER JOIN @DB.Table_Trans ON (transItem.item_id collate SQL_Latin1General_CI-AS = Table_Trans.item_id) Where ---Condition ```
Control flow method (likely to be the most efficient): ``` IF @Operator = 1 BEGIN SELECT stuff FROM Group_Transactions INNER JOIN "Server1\Instance1".Database1.Schema.Table_Trans ON Group_Transactions... = Table_Trans... WHERE things... ; END ELSE IF @Operator = 2 BEGIN SELECT stuff FROM Group_Transactions INNER JOIN "Server2\Instance2".Database2.Schema.Table_Trans ON Group_Transactions... = Table_Trans... WHERE things... ; END ELSE IF @Operator = 3 BEGIN SELECT stuff FROM Group_Transactions INNER JOIN "Server3\Instance3".Database3.Schema.Table_Trans ON Group_Transactions... = Table_Trans... WHERE things... ; END ; ``` Single [conditional] query method: ``` SELECT Group_Transactions.stuff , trans1.other_thing As other_thing1 , trans2.other_thing As other_thing2 , trans3.other_thing As other_thing3 , Coalesce(trans1.other_thing, trans2.other_thing, trans3.other_thing) As other_thing FROM Group_Transactions LEFT JOIN "Server1\Instance1".Database1.Schema.Table_Trans As trans1 ON trans1... = Group_Transactions... AND trans1.things... AND @Operator = 1 LEFT JOIN "Server2\Instance2".Database2.Schema.Table_Trans As trans2 ON trans2... = Group_Transactions... AND trans2.things... AND @Operator = 2 LEFT JOIN "Server3\Instance3".Database3.Schema.Table_Trans As trans3 ON trans3... = Group_Transactions... AND trans3.things... AND @Operator = 3 ; ```
If this is T-SQL (I am guessing from your collation names) then you are best trying out [OPENQUERY](https://msdn.microsoft.com/en-us/library/ms188427.aspx) to run your join against another database server. If you are querying a database on the same server you could build your query up as a parameter and then run it using [EXEC](https://stackoverflow.com/questions/5255463/exec-sp-executesql-sql-and-exec-sql-sql-server).
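To make both routes concrete, here is a rough sketch; `LINKEDSRV`, the database name, and the join columns are placeholders rather than details from the question:

```
-- Route 1: OPENQUERY against a linked server (the remote half runs remotely)
SELECT g.transItem_item
FROM Group_Transactions g
INNER JOIN OPENQUERY(LINKEDSRV,
        'SELECT item_id FROM DBNAME.dbo.Table_Trans') t
    ON g.item_id = t.item_id COLLATE SQL_Latin1_General_CP1_CI_AS;

-- Route 2: build the statement as a string and EXEC it (same server)
DECLARE @DB sysname = N'DBNAME';
DECLARE @sql nvarchar(max) =
    N'SELECT g.transItem_item
      FROM Group_Transactions g
      INNER JOIN ' + QUOTENAME(@DB) + N'.dbo.Table_Trans t
          ON g.item_id = t.item_id;';
EXEC (@sql);
```

`QUOTENAME` guards the concatenated object name; with dynamic SQL like this, validate `@DB` against a whitelist rather than trusting caller input.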
SELECT case using a variable which can be set based on a parameter
[ "", "sql", "sql-server", "" ]
What is the expected behaviour due to SQL Standard when we perform `UNION` on two tables with different data types: ``` create table "tab1" ("c1" varchar(max)); create table "tab2" ("c3" integer); insert into tab1 values(N'asd'), (N'qweqwe'); insert into tab2 values(123), (345); select c_newname as myname from ( select "c1" as c_newname from "tab1" union all select "c3" from "tab2" ) as T_UNI; ``` `MS SQL Server` gives > Conversion failed when converting the varchar value 'asd' to data type > int. but what is defined in the standard?
If you want to use `union all`, the columns in every query need to have the same type. `C3` must be converted to varchar because `c1` is varchar. Try the solution below:

```
create table "tab1" ("c1" varchar(max));
create table "tab2" ("c3" integer);

insert into tab1 values(N'asd'), (N'qweqwe');
insert into tab2 values(123), (345);

select c_newname as myname
from
(
  select "c1" as c_newname from "tab1"
  union all
  select cast("c3" as varchar(max)) from "tab2"
) as T_UNI;
```

I replaced `"tab3"` with `"tab1"` - I think it's a typo.
From [T-SQL UNION](https://msdn.microsoft.com/en-us/library/ms180026.aspx) page: > The following are basic rules for combining the result sets of two > queries by using UNION: > > * The number and the order of the columns must be the same in all queries. > * The data types must be compatible. When one datatype is `VARCHAR` and other is `INTEGER` then SQL Server will implicitly attempt to convert `VARCHAR` to `INTEGER` (the rules are described in the precedence table). If conversion fails for any row, the query fails. So this works: ``` INSERT INTO #tab1 VALUES(N'123'), (N'345'); INSERT INTO #tab2 VALUES(123), (345); SELECT C1 FROM #tab1 UNION ALL SELECT C2 FROM #tab2 ``` But this does not: ``` INSERT INTO #tab1 VALUES(N'ABC'), (N'345'); INSERT INTO #tab2 VALUES(123), (345); SELECT C1 FROM #tab1 UNION ALL SELECT C2 FROM #tab2 -- Conversion failed when converting the varchar value 'ABC' to data type int. ``` The rules for conversion are described here: [T-SQL Data Type Precedence](https://msdn.microsoft.com/en-us/library/ms190309.aspx) --- Having said that, you can *explicitly* convert your integer data to varchar in order to make the query work (the datatype of result would be varchar).
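Applied to the tables from the question, the explicit conversion mentioned above might look like this (the result column then takes the varchar type):

```
SELECT c1 FROM tab1
UNION ALL
SELECT CAST(c3 AS varchar(max)) FROM tab2;
-- Yields 'asd', 'qweqwe', '123', '345' (row order not guaranteed)
-- with no conversion error, since every value is presented as varchar.
```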
UNION ALL two SELECTs with different column types - expected behaviour?
[ "", "sql", "sql-server", "t-sql", "standards", "union-all", "" ]
I am having trouble writing a query for the following problem. I have tried some existing queries but cannot get the results I need. I have a results table like this: ``` userid score timestamp 1 50 5000 1 100 5000 1 400 5000 1 500 5000 2 100 5000 3 1000 4000 ``` The expected output of the query is like this: ``` userid score 3 1000 1 1000 2 100 ``` I want to select a top list where I have n best scores summed for each user and if there is a draw the user with the lowest timestamp is highest. I really tried to look at all old posts but could not find one that helped me. Here is what I have tried: ``` SELECT sum(score) FROM ( SELECT score FROM results WHERE userid=1 ORDER BY score DESC LIMIT 3 ) as subquery ``` This gives me the results for one user, but I would like to have one query that fetches all in order.
This is a pretty typical [greatest-n-per-group](https://stackoverflow.com/questions/tagged/greatest-n-per-group) problem. When I see those, I usually use a correlated subquery like this:

```
SELECT *
FROM myTable m
WHERE(
   SELECT COUNT(*)
   FROM myTable mT
   WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3;
```

This is not the whole solution, as it only gives you the top three scores for each user in its own row. To get the total, you can use `SUM()` wrapped around that subquery like this:

```
SELECT userId, SUM(score) AS totalScore
FROM(
   SELECT userId, score
   FROM myTable m
   WHERE(
      SELECT COUNT(*)
      FROM myTable mT
      WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3) tmp
GROUP BY userId;
```

Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/8e9c9/2) example.

**EDIT**

Regarding the ordering (which I forgot the first time through), you can just order by totalScore in descending order, and then by MIN(timestamp) in ascending order so that users with the lowest timestamp appear first in the list. Here is the updated query:

```
SELECT userId, SUM(score) AS totalScore
FROM(
   SELECT userId, score, timeCol
   FROM myTable m
   WHERE(
      SELECT COUNT(*)
      FROM myTable mT
      WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3) tmp
GROUP BY userId
ORDER BY totalScore DESC, MIN(timeCol) ASC;
```

and here is an updated [Fiddle](http://sqlfiddle.com/#!9/8e9c9/4) link.

**EDIT 2**

As JPW pointed out in the comments, this query will not work if the user has the same score for multiple questions.
To settle this, you can add an additional condition inside the subquery to order the users three rows by timestamp as well, like this: ``` SELECT userId, SUM(score) AS totalScore FROM( SELECT userId, score, timeCol FROM myTable m WHERE( SELECT COUNT(*) FROM myTable mT WHERE mT.userId = m.userId AND mT.score >= m.score AND mT.timeCol <= m.timeCol) <= 3) tmp GROUP BY userId ORDER BY totalScore DESC, MIN(timeCol) ASC; ``` I am still working on a solution to find out how to handle the scenario where the userid, score, and timestamp are all the same. In that case, you will have to find another tiebreaker. Perhaps you have a primary key column, and you can choose to take a higher/lower primary key?
Query for selecting the top three scores from the table:

```
SELECT score FROM result GROUP BY `id` ORDER BY `score` DESC LIMIT 3;
```
Select sum of top three scores for each user
[ "", "mysql", "sql", "greatest-n-per-group", "" ]
I have this table

```
ID   AGE  ACCNUM   NAME
--------------------------------
1    10    55409   Intro
2    6     55409   Chapter1
3    4     55409   Chapter2
4    3     69591   Intro
5    6     69591   Outro
6    0     40322   Intro
```

And I need a query that returns the two highest ages for each `ACCNUM`; in this case, records:

```
1, 2, 4, 5, 6
```

I have tried many queries but nothing works for me. I tried this query:

```
Select T1.accnum, T1.age 
from table1 as T1 
inner join 
   (select accnum, max(age) as max 
    from table1 
    group by accnum) as T2 
on T1.accnum = T2.accnum 
and (T1.age = T2.max or T1.age = T2.max -1)
```
TSQL Ranking Functions: `Row_Number()` <https://msdn.microsoft.com/en-us/library/ms186734.aspx> ``` select id, age, accnum, name from ( select id, age, accnum, name, ROW_NUMBER() Over (Partition By accnum order by age desc) as rn from yourtable ) a where a.rn <= 2 ```
You can use [`row_number()`](https://msdn.microsoft.com/en-us/library/ms186734.aspx): ``` select accnum , age from ( select accnum , age , row_number() over(partition by accnum order by age desc) as r from table1 as T1) t where r < 3 ```
Select Max two rows of each account SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have the below statement in my VBA: ``` DoCmd.RunSQL "UPDATE customer SET Status = 'Premier' WHERE customer_id = 41308408 AND location IN ('London','New York') AND Status = ''" ``` which is meant to update a table called "customer". I can see in my table there are about 20 entries where the customer\_id is 41308408 (i.e. if i filter the column for that value), with the location being either London or New York for each entry. The Status column is blank for each of these. I execute the above code, and it compiles OK, but it says "You are about to update 0 rows". I would be expecting that to be 20, as per above. Any ideas?
Maybe `status` isn't blank - maybe it's NULL? Try the solution below:

```
UPDATE customer 
SET Status = 'Premier' 
WHERE customer_id = 41308408 
AND location IN ('London','New York') 
AND Status is null
```
The blank is probably a `null` in the database. `Null`s are not values - they are the absence of one. You can't query them with the `=` operator; you need to treat them explicitly with the `is` operator:

```
DoCmd.RunSQL "UPDATE customer SET Status = 'Premier' WHERE customer_id = 41308408 AND location IN ('London','New York') AND Status IS NULL"
```
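If you want to confirm which case you are dealing with before running the update, these probes against the question's `customer` table (run in the Access query window, for example) tell you:

```
SELECT Count(*) FROM customer WHERE Status = '';       -- counts genuine empty strings
SELECT Count(*) FROM customer WHERE Status IS NULL;    -- counts NULLs
SELECT Count(*) FROM customer WHERE Status = NULL;     -- always 0 rows: = never matches NULL
```

Whichever probe returns your 20 rows is the predicate your `UPDATE` should use.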
sql update table - statement not working
[ "", "sql", "ms-access", "vba", "sql-update", "" ]
I have a query like this. I want to use the `PaidAmount` column to calculate another column. Is it possible? ``` SELECT B.SubProjectCode, A.StatementBreakDown, A.Description, A.ApplicableInvoices, A.AgreedAmount, A.SettlePercentage, (A.AgreedAmount * (A.SettlePercentage/100)) DueToSettle, (SELECT SUM(C.PaymentAmount) FROM VendorSOAPayments C WHERE C.VendorSOAID = A.VendorSOAID ) PaidAmount, (DueToSettle - PaidAmount ) BalanceAmt FROM VendorSOA A INNER JOIN SubProject B ON A.SubProjectID = B.SubProjectID ```
Try this ``` Select *,(DueToSettle - PaidAmount ) BalanceAmt from (SELECT B.SubProjectCode, A.StatementBreakDown, A.Description, A.ApplicableInvoices, A.AgreedAmount, A.SettlePercentage, (A.AgreedAmount * (A.SettlePercentage/100)) DueToSettle, (SELECT SUM(C.PaymentAmount) FROM VendorSOAPayments C WHERE C.VendorSOAID = A.VendorSOAID ) PaidAmount FROM VendorSOA A INNER JOIN SubProject B ON A.SubProjectID = B.SubProjectID) as Main ```
Use a common table expression (CTE); it will not harm performance. Note that `BalanceAmt` must be computed in the outer query, because a column alias cannot be referenced in the same `SELECT` list that defines it:

```
;WITH CTE as
(
    SELECT B.SubProjectCode,
        A.StatementBreakDown,
        A.Description,
        A.ApplicableInvoices,
        A.AgreedAmount,
        A.SettlePercentage,
        (A.AgreedAmount * (A.SettlePercentage/100)) DueToSettle,
        (SELECT SUM(C.PaymentAmount)
            FROM VendorSOAPayments C
            WHERE C.VendorSOAID = A.VendorSOAID
        ) PaidAmount
    FROM VendorSOA A
    INNER JOIN SubProject B ON A.SubProjectID = B.SubProjectID
)
SELECT SubProjectCode,
    StatementBreakDown,
    Description,
    ApplicableInvoices,
    AgreedAmount,
    SettlePercentage,
    DueToSettle,
    PaidAmount,
    (DueToSettle - PaidAmount ) BalanceAmt
FROM CTE
Use subquery columns in another column
[ "", "sql", "sql-server", "" ]
I have a few SQL queries with very poor runtime performance, and I want to check the execution plan for one of them. When I execute the statement below, it does not show any execution plan; it only displays the message "plan FOR succeeded". Are there any settings I have to change in Oracle SQL Developer to view the explain plan for a query? ``` EXPLAIN PLAN FOR Select SO.P_OPTION_ID FROM SIM SIM JOIN P_TYPE PT on PT.KEY=SIM.P_TYPE_KEY JOIN P_CONFIG PC ON PC.ID=PT.PRODUCT_CONFIG_ID JOIN P_OPTION PO ON PO.OPTION_KEY=PC.DEFAULT_PRODUCT_OPTIONS JOIN S_OPTION SO ON SO.SERVICE_ID=SIM.ASSIGNED_TO_SERVICE_ID JOIN AVV_NO AN ON SIM.ASSIGNED_ANUMBER_ID = AN.ID where SO.STATUS_ID IN (20,40) and SO.ID < to_char(SYSDATE - numtodsinterval ( 1,'MINUTE' ), 'YYYYMMDDHH24MISS')||'0000' and SO.ID > to_char(SYSDATE - numtodsinterval ( 1, 'HOUR' ), 'YYYYMMDDHH24MISS')||'0000' and NOT EXISTS(SELECT ID from TEMP_BPL T WHERE T.ID = SO.ID ); ```
> EXPLAIN PLAN FOR In **SQL Developer**, you don't have to use `EXPLAIN PLAN FOR` statement. Press **`F10`** or click the **Explain Plan icon**. ![enter image description here](https://i.stack.imgur.com/UeJza.jpg) It will be then displayed in the **Explain Plan window**. If you are using **SQL\*Plus** then use **DBMS\_XPLAN**. For example, ``` SQL> EXPLAIN PLAN FOR 2 SELECT * FROM DUAL; Explained. SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------- Plan hash value: 272002086 -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 | | 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 | -------------------------------------------------------------------------- 8 rows selected. SQL> ``` See [**How to create and display Explain Plan**](http://lalitkumarb.wordpress.com/2014/05/31/oracle-explain-plan/)
Explain only shows how the optimizer thinks the query will execute. To see the real plan, you need to run the SQL once and then, in the same session, run the following: ``` @yoursql select * from table(dbms_xplan.display_cursor()) ``` This shows the actual plan used during execution. There are several other ways of displaying plans with dbms\_xplan; you can Google the term "dbms\_xplan".
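The idea of asking the engine for a plan instead of running the query is not Oracle-specific. As a rough cross-engine illustration (SQLite via Python, with a hypothetical table), `EXPLAIN QUERY PLAN` returns plan rows rather than query results:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

# The query itself is not executed; the engine returns its plan as rows.
plan = con.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = 1").fetchall()
detail = plan[0][3]   # human-readable step, here a primary-key search
print(detail)
```

The exact wording of the plan text varies between SQLite versions, but a lookup on the integer primary key is always reported as a search rather than a full scan.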
How do I view the Explain Plan in Oracle Sql developer?
[ "", "sql", "oracle", "oracle-sqldeveloper", "sql-execution-plan", "" ]
I have following MySQL queries: ``` SELECT * FROM bookings WHERE record_id = 7 AND status = 'available' AND startdate >= '2015-05-02' AND startdate <= '2015-05-09' UNION ALL SELECT * FROM bookings WHERE record_id = 7 AND status = 'available' AND startdate >= '2015-05-11' AND startdate <= '2015-05-12' ``` Is it possible to combine these two queries, instead of using `UNION ALL` ?
You can use the `OR` operator instead, as below: ``` SELECT * FROM bookings WHERE record_id = 7 AND status = 'available' AND ((startdate >= '2015-05-11' AND startdate <= '2015-05-12') or (startdate >= '2015-05-02' AND startdate <= '2015-05-09')) ```
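That the `OR` form returns the same rows as the `UNION ALL` form can be checked on a small, made-up dataset (SQLite via Python; `BETWEEN` is used as shorthand for the paired `>=`/`<=` conditions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bookings (record_id INT, status TEXT, startdate TEXT)")
rows = [(7, "available", d) for d in
        ("2015-05-01", "2015-05-03", "2015-05-08",
         "2015-05-10", "2015-05-11", "2015-05-12")]
con.executemany("INSERT INTO bookings VALUES (?,?,?)", rows)

union_sql = """
SELECT * FROM bookings WHERE record_id = 7 AND status = 'available'
  AND startdate BETWEEN '2015-05-02' AND '2015-05-09'
UNION ALL
SELECT * FROM bookings WHERE record_id = 7 AND status = 'available'
  AND startdate BETWEEN '2015-05-11' AND '2015-05-12'
"""
or_sql = """
SELECT * FROM bookings WHERE record_id = 7 AND status = 'available'
  AND (startdate BETWEEN '2015-05-02' AND '2015-05-09'
       OR startdate BETWEEN '2015-05-11' AND '2015-05-12')
"""
union_rows = sorted(con.execute(union_sql).fetchall())
or_rows = sorted(con.execute(or_sql).fetchall())
print(union_rows == or_rows, len(or_rows))
```

The equivalence holds here because the two date ranges do not overlap; with overlapping ranges `UNION ALL` would duplicate rows while `OR` would not.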
This should work: ``` SELECT * FROM bookings WHERE record_id = 7 AND status = 'available' AND ((startdate >= '2015-05-02' AND startdate <= '2015-05-09') or (startdate >= '2015-05-11' AND startdate <= '2015-05-12')) ```
Union all, combining query results
[ "", "mysql", "sql", "select", "union-all", "" ]
I am attempting to get the duplicate values of produce.prod\_id using DISTINCT and HAVING clauses, but it is not giving the expected result. Here's my data: ``` PRODUCE --------- ----------- PROD_ID PROD_NAME --------- ----------- 1 APPLES 2 PEARS 3 ORANGES 4 BANANAS 5 PEACHES BUYERS ---------- ------------ BUYER_ID BUYER_NAME ---------- -------------- 1 ADAM BARR 2 SEAN CHAI 3 EVA CORETS 4 ERIN O`MELIA SALES ---------- --------- ------ BUYER_ID PROD_ID QTY ---------- --------- ------ 1 2 15 1 3 5 4 1 37 3 5 11 4 3 1005 ``` and here's my code: ``` select produce.prod_name, buyer.BUYER_NAME, SALES.PROD_ID from produce inner join SALES on produce.PROD_ID = SALES.PROD_ID inner join buyer on SALES.BUYER_ID = buyer.BUYER_ID group by produce.prod_name, buyer.BUYER_NAME, SALES.PROD_ID having count(SALES.PROD_ID) > 1; ``` Expected result: ``` PROD_Name Buyer_Name ----------- -------------- Oranges ADAM BARR Oranges ERIN O`MELIA ```
You need to use an analytic function for your requirement: ``` WITH PRODUCE(PROD_ID, PROD_NAME) AS ( SELECT 1, 'APPLES' from dual union all select 2, 'PEARS' from dual union all select 3, 'ORANGES' from dual union all select 4, 'BANANAS' from dual union all select 5, 'PEACHES' from dual), BUYERS (BUYER_ID, BUYER_NAME) as ( select 1, 'ADAM BARR' from dual union all select 2, 'SEAN CHAI' from dual union all select 3, 'EVA CORETS' from dual union all select 4, 'ERIN O`MELIA' from dual), SALES(BUYER_ID, PROD_ID, QTY) as ( select 1, 2, 15 from dual union all select 1, 3, 5 from dual union all select 4, 1, 37 from dual union all select 3, 5, 11 from dual union all select 4, 3, 1005 from dual), -- End of data preparation TABLE_ AS ( SELECT produce.prod_name, buyers.buyer_name, sales.prod_id, COUNT(1) OVER (PARTITION BY sales.prod_id) p_count FROM produce INNER JOIN sales ON produce.prod_id = sales.prod_id INNER JOIN buyers ON sales.buyer_id = buyers.buyer_id) SELECT prod_name, buyer_name, prod_id FROM table_ WHERE p_count > 1; ``` Output: ``` | PROD_NAME | BUYER_NAME | PROD_ID | |-----------|--------------|---------| | ORANGES | ERIN O`MELIA | 3 | | ORANGES | ADAM BARR | 3 | ``` Update: Your simplified query would be: ``` With TABLE_ AS ( SELECT produce.prod_name, buyers.buyer_name, sales.prod_id, COUNT(1) OVER (PARTITION BY sales.prod_id) p_count FROM produce INNER JOIN sales ON produce.prod_id = sales.prod_id INNER JOIN buyers ON sales.buyer_id = buyers.buyer_id) SELECT prod_name, buyer_name, prod_id FROM table_ WHERE p_count > 1; ```
You don't need `distinct`; you only need `group by` and `having`. But you can't group by buyer\_name and still get a count > 1 - that is where the issue lies. You need to do this as a set of nested queries: ``` select dupes.prod_name, buyers.buyer_name from ( select produce.prod_name, SALES.PROD_ID, count(SALES.PROD_ID) as cnt from produce inner join SALES on produce.PROD_ID=SALES.PROD_ID group by produce.prod_name, SALES.PROD_ID having count(*)>1 ) as dupes inner join sales on sales.prod_id = dupes.prod_id inner join buyers on buyers.buyer_id = sales.buyer_id ```
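A runnable sketch of the nested-query approach against the question's sample data (SQLite via Python; an `IN` subquery stands in for the derived-table join, which is the same idea):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE produce (prod_id INT, prod_name TEXT);
CREATE TABLE buyers  (buyer_id INT, buyer_name TEXT);
CREATE TABLE sales   (buyer_id INT, prod_id INT, qty INT);
INSERT INTO produce VALUES (1,'APPLES'),(2,'PEARS'),(3,'ORANGES'),(4,'BANANAS'),(5,'PEACHES');
INSERT INTO buyers  VALUES (1,'ADAM BARR'),(2,'SEAN CHAI'),(3,'EVA CORETS'),(4,'ERIN O`MELIA');
INSERT INTO sales   VALUES (1,2,15),(1,3,5),(4,1,37),(3,5,11),(4,3,1005);
""")

# Only products that appear more than once in sales survive the HAVING filter.
rows = con.execute("""
    SELECT p.prod_name, b.buyer_name
    FROM sales s
    JOIN produce p ON p.prod_id  = s.prod_id
    JOIN buyers  b ON b.buyer_id = s.buyer_id
    WHERE s.prod_id IN (SELECT prod_id FROM sales
                        GROUP BY prod_id HAVING COUNT(*) > 1)
    ORDER BY b.buyer_name
""").fetchall()
print(rows)
```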
HAVING COUNT() not giving out the expected result
[ "", "sql", "oracle", "count", "having-clause", "" ]
I have the following Oracle database. A certain ID can have an accepted status, but can continue into a failed status and return to an accepted status. This can happen X times, so it can be called a loop ``` ID STATUS TIME INSERT ------------------------------ 1 Accepted 01:00:00 1 Failed 02:00:00 1 Accepted 02:30:00 2 Accepted 02:33:00 ``` I want to calculate the age of the oldest item with an accepted status. This means the result of SYSDATE - TIME INSERT. > For this data I expect the 3rd record's time value, 02:30:00 I need to check for the youngest time of every ID and then the oldest lifetime among all those youngest lifetimes. This looks quite complicated to me. The workflow steps are: 1. the minimum time value of a specific id in the state Accepted 2. Step 1 for every id in the accepted state 3. the max value of all the min values I calculated in step 2. At this moment I have the following code ``` select min(TO_CHAR(TO_DATE('20000101','yyyymmdd')+(SYSDATE - dm1.time_insert),'hh24:mi:ss')) from db dm1 where dm1.status='accepted' group by dm1.id; ``` This gives me a sequence of the min values of all records. But now I need the max value of all these min values - how do I do that? Can anyone solve this case?
You can just find max(time\_insert) and group by id. Take a look at sample query ``` with src as ( select 1 as id, 'Accepted' as status, to_date('2015-05-05 01:00:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual union all select 1 as id, 'Failed' as status, to_date('2015-05-05 02:00:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual union all select 1 as id, 'Accepted' as status, to_date('2015-05-05 02:30:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual union all select 2 as id, 'Accepted' as status, to_date('2015-05-05 02:33:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual) select id, max(time_insert) as time_insert, max(sysdate - time_insert) as lifetime_max, min(sysdate - time_insert) as liftime_min from src where status ='Accepted' group by id ``` > Edit: Is this what you want to achieve: ``` with src as ( select 1 as id, 'Accepted' as status, to_date('2015-05-05 01:00:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual union all select 1 as id, 'Failed' as status, to_date('2015-05-05 02:00:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual union all select 1 as id, 'Accepted' as status, to_date('2015-05-05 02:30:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual union all select 2 as id, 'Accepted' as status, to_date('2015-05-05 02:33:00','yyyy-mm-dd hh24:mi:ss') as time_insert from dual) , src2 as ( select max(time_insert) as time_insert, max(sysdate - time_insert) as lifetime_max, min(sysdate - time_insert) as lifetime_min from src where status ='Accepted') select max(lifetime_min) from src2 ```
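The two-step logic (latest accepted time per id, then the oldest of those) can be checked against the question's sample data. This hypothetical sketch uses SQLite via Python with ISO timestamp strings, which compare correctly as text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INT, status TEXT, time_insert TEXT)")
con.executemany("INSERT INTO events VALUES (?,?,?)", [
    (1, "Accepted", "2015-05-05 01:00:00"),
    (1, "Failed",   "2015-05-05 02:00:00"),
    (1, "Accepted", "2015-05-05 02:30:00"),
    (2, "Accepted", "2015-05-05 02:33:00"),
])

# Inner query: youngest (latest) accepted time per id.
# Outer query: the oldest of those, i.e. the one with the greatest age.
oldest = con.execute("""
    SELECT MIN(latest) FROM (
        SELECT id, MAX(time_insert) AS latest
        FROM events WHERE status = 'Accepted' GROUP BY id
    )
""").fetchone()[0]
print(oldest)
```

In Oracle you would then subtract this timestamp from `SYSDATE` to get the age itself.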
Hope this will help you: ``` -- this will give you the minimum date of all id's select * from tab1 where TIME_INSERT in (select min(TIME_INSERT) from tab1 )and STATUS ='Accepted' ``` ``` -- this will give you the minimum date of a specific id select * from tab1 where TIME_INSERT in (select min(TIME_INSERT) from tab1 ) and STATUS ='Accepted' and ID =1 ``` check this ``` select id, TIME_INSERT from (select TIME_INSERT, ID , min(TIME_INSERT) over (partition by ID) maxid from tab1) where TIME_INSERT = maxid and TIME_INSERT in(select max(TIME_INSERT) from tab1) group by ID,TIME_INSERT ```
SQL: age of oldest items
[ "", "sql", "oracle", "select", "max", "" ]
A table `users` has three columns: `id, name, pass`. Another table `logins` has `user_id` column, an `isright` boolean (tinyint) column which says whether the login was successful or not and a `date` column. I need a simple `left join` to get the user's name and his password **(1)**, the last login datetime (successful or not) **(2)** and the count of the logins for the specific user since his last successful login **(3)**. (1) and (2) I can achieve using ``` SELECT name, pass, MAX(date) FROM users LEFT JOIN logins ON logins.id = users.id -- here either "GROUP BY users.id" or "WHERE users.id = 1234" ``` But (3) seems to be harder. I googled it and found many similar question but none of them was asking on exactly how to count rows after specific condition is true. (It's even more complicated - count the logins for *that* user, not everyone) I don't even know how to do it in a separate query (I'd prefer having one query for the 3 things and I suppose I'd have to use a subquery, although I prefer joins). SQL fiddle with the tables and some data: <http://sqlfiddle.com/#!9/a932b> Any ideas?
The straightforward way is to have two derived tables: one to get the last login date per user, the other to get the last successful login date per user. Then select from users, outer join the two derived tables, look whether the last login was successful, and count the (failed) logins after the last successful login. (With another DBMS you would rather use analytic functions, which MySQL lacks.) ``` select users.name, users.pass, ( select max(isright) from logins where user_id = users.id and date = last_login.date ) as last_login_successful, ( select count(*) from logins where user_id = users.id and date > last_successful_login.date ) as last_logins_failed from users left outer join ( select user_id, max(date) as date from logins group by user_id ) last_login on last_login.user_id = users.id left outer join ( select user_id, max(date) as date from logins where isright = 1 group by user_id ) last_successful_login on last_successful_login.user_id = users.id; ``` This gives you four possibilities per user: 1. The user never tried to login. last\_login\_successful is null and last\_logins\_failed is meaningless. 2. The user's logins all failed. last\_login\_successful is 0 and last\_logins\_failed is meaningless. 3. The user's last login was successful. last\_login\_successful is 1 and last\_logins\_failed is meaningless. 4. The user logged in successfully once, but failed at least the last time they tried. last\_login\_successful is 0 and last\_logins\_failed is the number of failures after the last successful login. And here is a fiddle: <http://sqlfiddle.com/#!9/57b7d/1>. EDIT: To also count failed logins when a user never logged in successfully: in that case their last\_successful\_login.date is null. In last\_logins\_failed you want to count all records for which the last successful login occurred *earlier* OR *never*: ``` ( select count(*) from logins where user_id = users.id and (date > last_successful_login.date or last_successful_login.date is null) ) as last_logins_failed ```
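A compact runnable check of the "count logins after the last success" logic on made-up data (SQLite via Python; correlated subqueries here instead of the derived-table joins, which gives the same result for one user):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INT, name TEXT, pass TEXT);
CREATE TABLE logins (user_id INT, isright INT, date TEXT);
INSERT INTO users VALUES (1, 'alice', 'x');
INSERT INTO logins VALUES
 (1, 1, '2015-05-01 10:00'),
 (1, 0, '2015-05-02 10:00'),
 (1, 1, '2015-05-03 10:00'),
 (1, 0, '2015-05-04 10:00'),
 (1, 0, '2015-05-05 10:00');
""")

row = con.execute("""
SELECT u.name,
       (SELECT MAX(date) FROM logins l WHERE l.user_id = u.id) AS last_login,
       (SELECT COUNT(*) FROM logins l
         WHERE l.user_id = u.id
           AND l.date > (SELECT MAX(date) FROM logins s
                          WHERE s.user_id = u.id AND s.isright = 1)) AS failed_since
FROM users u
""").fetchone()
print(row)
```

The last success was on 2015-05-03, so the two later attempts are counted as failures since then.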
I guess you could do something like ``` select count(*) from logins as l join users as u on l.user_id = u.id where l.`date` > (select max(`date`) from logins where user_id = u.id and isright = 1) ``` Get the `date` of the last successful login (the subquery) for the user, then count all the logins with a later `date`. This automatically gives you only the unsuccessful logins, because you took the last successful one as the reference datetime.
Count rows after specific one
[ "", "mysql", "sql", "subquery", "left-join", "" ]
I know this might sound like a stupid question, but please bear with me. In SQL-server we have ``` SELECT TOP N ... ``` now in that we can get the first n rows in *ascending order* (by default), cool. If we want records to be sorted on any other column, we just specify that in the order by clause, something like this... ``` SELECT TOP N ... ORDER BY [ColumnName] ``` Even more cool. But what if I want the `last` row? I just write something like this... ``` SELECT TOP N ... ORDER BY [ColumnName] DESC ``` But there is a slight *concern* with that. I said concern and not *issue* because it isn't actually an issue. This way, I can get the last row based on that column, but what if I want *the last row that was inserted?* I know about `SCOPE_IDENTITY`, `IDENT_CURRENT` and `@@IDENTITY`, but consider a `heap` (a table without a `clustered index`) without any identity column, and multiple accesses from many places (**please** don't go into this too much as to how and when these multiple operations are happening, this doesn't concern the main thing). So in this case there doesn't seem to be an *easy* way to find which row was actually inserted last. Some might answer this as > If you do a select \* from [table] the last row shown in the sql result window will be the last one inserted. To anyone thinking about this, **this is not actually the case** - at least not always, and it is not something you can rely on (**[msdn](http://bit.ly/1IfOb0t), please read the `Advanced Scanning` section**). So the question boils down to this, as in the title itself. Why doesn't SQL Server have a ``` SELECT LAST ``` or say ``` SELECT BOTTOM ``` or something like that, where we don't have to specify the `Order By` and then it would give the *last record inserted in the table at the time of executing the query* (again I am not going into details about how this would behave in case of uncommitted reads or phantom reads). 
If someone still argues that we can't talk about this without talking about these read levels, then, for them, we could make it behave the same way `TOP` works, just the opposite. And if your argument is that we don't need it because we can always do ``` SELECT TOP N ... ORDER BY [ColumnName] DESC ``` then I really don't know what to say. I *know* we can do that, but is there any relational reason, semantic reason, or other reason *due to which* we don't have *or* can't have this `SELECT LAST/BOTTOM`? I am not looking for a way to do it with `Order By`; I am looking for the reason why we don't have it or can't have it. *Extra* I don't know much about how NoSQL works, but I've worked (just a little bit) with `mongodb` and `elastic search`, and there doesn't seem to be anything like this there either. Is the reason they don't have it that no one ever had it before, or is it for some reason not plausible? **UPDATE** I *don't* need to be told to specify order by descending. Please read the question and understand my concern before answering or commenting. I know how to get the last row. That's not even the question; the main question boils down to why there is no `select last/bottom` like its counterpart. **UPDATE 2** After the answers from [Vladimir](https://stackoverflow.com/a/30074665/710925) and [Pieter](https://stackoverflow.com/a/30075694/710925), I just wanted to add that I know the order is not guaranteed if I do a `SELECT TOP` without `ORDER BY`. What I wrote earlier in the question might give the impression that I don't know that's the case, but a little further down I *have* given a link to [msdn](http://bit.ly/1IfOb0t) and mentioned that `SELECT TOP` without `ORDER BY` doesn't guarantee any ordering. So please don't claim in your answer that my statement is wrong, as I have already clarified that myself a couple of lines later (where I provided the link to `msdn`).
You can think of it like this. `SELECT TOP N` without `ORDER BY` returns *some* `N` rows, neither first, nor last, just *some*. Which rows it returns is not defined. You can run the same statement 10 times and get 10 different sets of rows each time. So, if the server had a syntax `SELECT LAST N`, then result of this statement without `ORDER BY` would again be undefined, which is exactly what you get with existing `SELECT TOP N` without `ORDER BY`. --- You have stressed in your question that you know and understand what I've written below, but I'll still keep it to make it clear for everyone reading this later. Your first phrase in the question > In SQL-server we have `SELECT TOP N ...` now in that we can get the > first n rows in ascending order (by default), cool. is not correct. With `SELECT TOP N` without `ORDER BY` you get N "random" rows. Well, not really random, the server doesn't jump randomly from row to row on purpose. It chooses some deterministic way to scan through the table, but there could be many different ways to scan the table and server is free to change the chosen path when it wants. This is what is meant by "undefined". The server doesn't track the order in which rows were inserted into the table, so again your assumption that results of `SELECT TOP N` without `ORDER BY` are determined by the order in which rows were inserted in the table is not correct. --- So, the answer to your final question > why no `select last/bottom` like it's counterpart. is: * without `ORDER BY` results of `SELECT LAST N` would be exactly the same as results of `SELECT TOP N` - undefined. * with `ORDER BY` result of `SELECT LAST N ... ORDER BY X ASC` is exactly the same as result of `SELECT TOP N ... ORDER BY X DESC`. So, there is no point to have two key words that do the same thing. --- There is a good point in the Pieter's answer: the word `TOP` is somewhat misleading. It really means `LIMIT` result set to some number of rows. 
By the way, since SQL Server 2012 they added support for ANSI standard [`OFFSET`](https://msdn.microsoft.com/en-us/library/ms188385.aspx): > ``` > OFFSET { integer_constant | offset_row_count_expression } { ROW | ROWS } > [ > FETCH { FIRST | NEXT } {integer_constant | fetch_row_count_expression } { ROW | ROWS } ONLY > ] > ``` Here adding another key word was justified that it is ANSI standard **AND** it adds important functionality - pagination, which didn't exist before. --- I would like to thank @Razort4x here for providing a very good link to [MSDN](https://technet.microsoft.com/en-us/library/ms191475(v=sql.105).aspx) in his question. The "Advanced Scanning" section there has an excellent example of mechanism called "merry-go-round scanning", which demonstrates why the order of the results returned from a `SELECT` statement cannot be guaranteed without an `ORDER BY` clause. This concept is often misunderstood and I've seen many question here on SO that would greatly benefit if they had a quote from that link. --- The answer to your question > Why doesn't SQL Server have a `SELECT LAST` or say `SELECT BOTTOM` or > something like that, where we don't have to specify the `ORDER BY` and > then it would give the **last record inserted in the table at the time > of executing the query** (again I am not going into details about how > would this result in case of uncommitted reads or phantom reads). is: The devil is in the details that you want to omit. To know which record was the "last inserted in the table at the time of executing the query" (and to know this in a somewhat consistent/non-random manner) the server would need to keep track of this information somehow. Even if it is possible in all scenarios of multiple simultaneously running transactions, it is most likely costly from the performance point of view. Not every `SELECT` would request this information (in fact very few or none at all), but the overhead of tracking this information would always be there. 
So, you can think of it like this: by default the server doesn't do anything specific to know/keep track of the order in which the rows were inserted, because it affects performance, but if you need to know that you can use, for example, `IDENTITY` column. Microsoft could have designed the server engine in such a way that it required an `IDENTITY` column in every table, but they made it optional, which is good in my opinion. I know better than the server which of my tables need `IDENTITY` column and which do not. ## **Summary** I'd like to summarise that you can look at `SELECT LAST` without `ORDER BY` in two different ways. 1) When you expect `SELECT LAST` to behave in line with existing `SELECT TOP`. In this case result is undefined for both `LAST` and `TOP`, i.e. result is effectively the same. In this case it boils down to (not) having another keyword. Language developers (T-SQL language in this case) are always reluctant to add keywords, unless there are good reasons for it. In this case it is clearly avoidable. 2) When you expect `SELECT LAST` to behave as `SELECT LAST INSERTED ROW`. Which should, by the way, extend the same expectations to `SELECT TOP` to behave as `SELECT FIRST INSERTED ROW` or add new keywords `LAST_INSERTED`, `FIRST_INSERTED` to keep existing keyword `TOP` intact. In this case it boils down to the performance and added overhead of such behaviour. At the moment the server allows you to avoid this performance penalty if you don't need this information. If you do need it `IDENTITY` is a pretty good solution if you use it carefully.
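The `TOP`/`OFFSET` equivalences discussed above can be illustrated with SQLite's `LIMIT`/`OFFSET` on a hypothetical five-row table: "last N by X ascending" is just "top N by X descending", or equivalently an offset from the start once an ordering is fixed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 6)])

# "Bottom 2" = top 2 in the reversed order...
top_desc = [r[0] for r in con.execute("SELECT id FROM t ORDER BY id DESC LIMIT 2")]
# ...or the same 2 rows reached by skipping from the front.
offset = [r[0] for r in con.execute("SELECT id FROM t ORDER BY id ASC LIMIT 2 OFFSET 3")]
print(top_desc, offset)
```

Without the `ORDER BY` clauses, both queries would return an arbitrary pair of rows, which is exactly the "undefined result" point made above.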
There is no select last because there is no need for it. Consider a "select top 1 \* from table" . Top 1 would get you the first row that is returned. And then the process stops. But there is no guarantees about ordering if you don't specify an order by. So it may as well be any row in the dataset you get back. Now do a "select last 1 \* from table". Now the database will have to process all the rows in order to get you the last one. And because ordering is non-deterministic, it may as well be the same result as from the select "top 1". See now where the problem comes? Without an order by top and last are actually the same, only "last" will take more time. And with an order by, there's really only a need for top. > ``` > SELECT TOP N ... > ``` > > now in that we can get the first n rows in ascending order (by > default), cool. If we want records to be sorted on any other column, > we just specify that in the order by clause, something like this... What you say here is totally wrong and absolutely NOT how it works. There is no guarantee on what order you get. Ascending order on **what** ? ``` create table mytest(id int, id2 int) insert into mytest(id,id2)values(1,5),(2,4),(3,3),(4,2),(5,1) select top 1 * from mytest select * from mytest create clustered index myindex on mytest(id2) select top 1 * from mytest select * from mytest insert into mytest(id,id2)values(6,0) select top 1 * from mytest ``` Try this code line by line and see what you get with the last "select top 1".....you get in this case the last inserted record. **update** I think you understand that "select top 1 \* from table" basically means: "Select a random row from the table". So what would last mean? "Select the last random row from the table?" Wouldn't the last random row from a table be conceptually the same as saying any 1 random row from the table? And if that's true, top and last are the same, so there is no need for last. 
**Update 2** In hindsight I am happier with the syntax MySQL uses: `LIMIT`. `TOP` doesn't say anything about ordering; it is only there to specify the number of rows to be returned. > Limits the rows returned in a query result set to a specified number of rows or percentage of rows in SQL Server 2014.
Why is there no `select last` or `select bottom` in SQL Server like there is `select top`?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2012", "" ]
I have the following dataset. I need help with a SQL statement that gives me the latest row (based on `PING_DATE`) for each unique `PING_DESTINATION` and `PING_SOURCE` pair, with an added column containing the `AVG` of `PING_AVG` over all rows within the last 10 minutes. ``` PING_DATE | PACKET_LOSS | PING_MIN | PING_AVG | PING_MAX | PING_SOURCE | PING_DESTINATION ------------------------------------------------------------------------------------------------------- 5/5/2015 12:58:18 PM | 0 | 68 | 68 | 72 | site1 | orange15 5/5/2015 12:58:43 PM | 0 | 68 | 71 | 76 | site1 | orange15 5/5/2015 12:59:11 PM | 0 | 68 | 68 | 72 | site1 | pear11 5/5/2015 1:09:47 PM | 0 | 68 | 70 | 76 | site1 | pear11 5/5/2015 1:43:59 PM | 0 | 68 | 69 | 72 | site1 | pear11 5/5/2015 1:45:41 PM | 0 | 68 | 69 | 72 | site1 | pear11 5/5/2015 2:03:43 PM | 0 | 68 | 68 | 72 | site1 | pear11 5/5/2015 3:01:53 PM | 0 | 68 | 68 | 72 | site1 | pear11 5/5/2015 3:02:05 PM | 0 | 68 | 69 | 72 | site1 | pear11 5/5/2015 3:00:59 PM | 20 | 68 | 68 | 68 | site1 | pear11 5/5/2015 3:01:07 PM | 0 | 68 | 68 | 72 | site1 | pear11 5/5/2015 3:01:14 PM | 0 | 68 | 70 | 72 | site1 | pear11 5/5/2015 12:46:55 PM | 3 | 3 | 3 | 3 | site1 | lemon1 ``` Query Result: ``` PING_DATE | PACKET_LOSS | PING_MIN | PING_AVG | PING_MAX | PING_SOURCE | PING_DESTINATION | 10minavg ------------------------------------------------------------------------------------------------------------------ 5/5/2015 12:58:43 PM | 0 | 68 | 71 | 76 | site1 | orange15 | 71 5/5/2015 3:01:14 PM | 0 | 68 | 70 | 72 | site1 | pear11 | 65 5/5/2015 12:46:55 PM | 3 | 3 | 3 | 3 | site1 | lemon1 | 3 ```
For "last 10 minutes average" being "last 10 minutes in each group" this is the query you are looking for: ``` with xyz as ( select X.*, row_number() over ( partition by ping_destination, ping_source order by ping_date desc ) as latest_row#, avg(ping_avg) over ( partition by ping_destination, ping_source order by ping_date asc range between interval '10' minute preceding and current row ) as the_10_min_avg from ping_table X ) select * from xyz where latest_row# = 1 ; ``` For "last 10 minutes average" being "from 10 minutes ago until now" this is the query you are looking for: ``` with xyz as ( select X.*, row_number() over ( partition by ping_destination, ping_source order by ping_date desc ) as latest_row#, avg(ping_avg) over ( partition by ping_destination, ping_source ) as the_10_min_avg from ping_table X where X.ping_date >= systimestamp - interval '10' minute ) select * from xyz where latest_row# = 1 ; ```
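The window-function version can be sketched outside Oracle as well. This hypothetical SQLite-via-Python variant replaces `interval '10' minute` with `600 PRECEDING` over epoch seconds (column names shortened and data abridged from the question; requires SQLite with window-frame support, 3.28+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ping (ping_date TEXT, ping_avg REAL, src TEXT, dst TEXT)")
con.executemany("INSERT INTO ping VALUES (?,?,?,?)", [
    ("2015-05-05 12:58:18", 68, "site1", "orange15"),
    ("2015-05-05 12:58:43", 71, "site1", "orange15"),
    ("2015-05-05 15:00:59", 68, "site1", "pear11"),
    ("2015-05-05 15:01:14", 70, "site1", "pear11"),
])

rows = con.execute("""
    WITH x AS (
      SELECT p.*,
             ROW_NUMBER() OVER (PARTITION BY src, dst
                                ORDER BY ping_date DESC) AS rn,
             AVG(ping_avg) OVER (
                 PARTITION BY src, dst
                 ORDER BY CAST(strftime('%s', ping_date) AS INTEGER)
                 RANGE BETWEEN 600 PRECEDING AND CURRENT ROW
             ) AS avg10                      -- 600 s = the 10-minute window
      FROM ping p
    )
    SELECT dst, ping_date, avg10 FROM x WHERE rn = 1 ORDER BY dst
""").fetchall()
print(rows)
```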
Here is a straight forward query based on the question. Edited based on the sample output. For last 10 minutes from now, use systemtimestamp instead of i.latest\_ping in the snippet "(i.latest\_ping - interval '10' minute)". Use i.latest\_ping for last 10 minutes from max\_ping\_time for that source-dest pair. ``` select o.*, (select avg(ping_avg) from ping_info a where a.ping_source = i.ping_source and a.ping_dest = i.ping_dest and a.ping_date >= (systemtimestamp - interval '10' minute) ) last_10min_avg from ping_info o, (select ping_source, ping_dest, max(ping_date) latest_ping from ping_info group by ping_source, ping_dest) i where o.ping_source = i.ping_source and o.ping_dest = i.ping_dest and o.ping_date = i.latest_ping; ```
Oracle query latest row with average for specific column?
[ "", "sql", "oracle", "average", "window-functions", "" ]
I have a table from which I am selecting the following columns ``` Work_Role Date_Invoice Bill_Amnt ``` I have multiple work roles invoiced at multiple dates. I would like to summarize this by work role and see the amount billed per year in separate columns, for 2014 and 2015 only. Something like this: ``` Work_Role Bill_2014 Bill_2015 P1 xxx,xxx xxx,xxx P3 xxx,xxx xxx,xxx E1 xxx,xxx xxx,xxx ```
Use the `case` expression to conditionally sum the bill\_amnt depending on year: ``` select Work_Role, sum(case when year(date_invoice) = 2014 then bill_amnt end) as "Bill_2014", sum(case when year(date_invoice) = 2015 then bill_amnt end) as "Bill_2015" from your_table group by Work_Role ```
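The conditional-aggregation pivot can be verified on invented sample data (SQLite via Python; `strftime('%Y', ...)` stands in for SQL Server's `YEAR()`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoices (work_role TEXT, date_invoice TEXT, bill_amnt REAL)")
con.executemany("INSERT INTO invoices VALUES (?,?,?)", [
    ("P1", "2014-03-01", 100), ("P1", "2015-06-01", 150),
    ("P3", "2014-07-01", 200), ("E1", "2015-01-15", 50),
])

# One output row per role; each CASE routes the amount into the right year column.
rows = con.execute("""
    SELECT work_role,
           SUM(CASE WHEN strftime('%Y', date_invoice) = '2014'
                    THEN bill_amnt ELSE 0 END) AS bill_2014,
           SUM(CASE WHEN strftime('%Y', date_invoice) = '2015'
                    THEN bill_amnt ELSE 0 END) AS bill_2015
    FROM invoices GROUP BY work_role ORDER BY work_role
""").fetchall()
print(rows)
```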
Use conditional aggregation with a `case` expression: ``` select role, sum(case when year(bill_date) = 2014 then amount else 0 end), sum(case when year(bill_date) = 2015 then amount else 0 end) from table group by role ```
SQL Summarize and Group
[ "", "sql", "sql-server", "t-sql", "" ]
I have the following table: ``` **Country Name Number** us John 45 us Jeff 35 fr Jean 31 it Luigi 25 fr Maxime 23 ca Justin 23 ``` This table is ordered by Number. I want a query that, for each country, gives me the name with the highest number: ``` **Country Name Number** us John 45 fr Jean 31 it Luigi 25 ca Justin 23 ``` I tried to use distinct, but I can only make the country distinct, which doesn't work if I want to print the whole row... Any ideas? EDIT: The table is obtained by a subquery
[SQL Fiddle](http://sqlfiddle.com/#!4/a3074/4) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Countries AS SELECT 'us' AS Country, 'John' AS Name, 45 AS "Number" FROM DUAL UNION ALL SELECT 'us' AS Country, 'Jeff' AS Name, 35 AS "Number" FROM DUAL UNION ALL SELECT 'fr' AS Country, 'Jean' AS Name, 31 AS "Number" FROM DUAL UNION ALL SELECT 'it' AS Country, 'Luigi' AS Name, 25 AS "Number" FROM DUAL UNION ALL SELECT 'fr' AS Country, 'Maxime' AS Name, 23 AS "Number" FROM DUAL UNION ALL SELECT 'ca' AS Country, 'Justin' AS Name, 23 AS "Number" FROM DUAL; ``` **Query 1**: ``` SELECT Country, MAX( Name ) KEEP ( DENSE_RANK FIRST ORDER BY "Number" DESC ) AS "Name", MAX( "Number" ) AS "Number" FROM Countries GROUP BY Country ``` **[Results](http://sqlfiddle.com/#!4/a3074/4/0)**: ``` | COUNTRY | Name | Number | |---------|--------|--------| | ca | Justin | 23 | | fr | Jean | 31 | | it | Luigi | 25 | | us | John | 45 | ```
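`KEEP (DENSE_RANK FIRST ...)` is Oracle-specific; as a portable illustration of the same greatest-per-group idea, here is a hypothetical SQLite-via-Python version using `ROW_NUMBER()` (SQLite 3.25+), built on the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE countries (country TEXT, name TEXT, number INT)")
con.executemany("INSERT INTO countries VALUES (?,?,?)", [
    ("us", "John", 45), ("us", "Jeff", 35), ("fr", "Jean", 31),
    ("it", "Luigi", 25), ("fr", "Maxime", 23), ("ca", "Justin", 23),
])

# Rank rows within each country by number descending; keep only rank 1.
rows = con.execute("""
    SELECT country, name, number FROM (
      SELECT c.*, ROW_NUMBER() OVER (PARTITION BY country
                                     ORDER BY number DESC) AS rn
      FROM countries c
    ) WHERE rn = 1 ORDER BY number DESC
""").fetchall()
print(rows)
```

Unlike a plain `GROUP BY country, MAX(number)`, this keeps the whole winning row, including the name.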
Try the query below: ``` SELECT Country, MAX("Number") FROM Table_Name GROUP BY Country ``` PFB the updated query to include Name: ``` SELECT t1.* FROM table1 t1 INNER JOIN (SELECT country, max("Number") as "Number" FROM table1 GROUP BY country) t2 ON t1.country=t2.country AND t1."Number"=t2."Number"; ```
Oracle Sql : distinct value in a specific field
[ "", "sql", "oracle", "distinct", "" ]
The `create table` statement below gives an error, even though everything seems OK. ``` explain create table inventory(`id` int primary key, name varchar(30), price float, category varchar(25)); ``` > ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'create table inventory(`id` int primary key, name varchar(30), price float, cate' at line 1
[`explain`](https://dev.mysql.com/doc/refman/5.6/en/explain.html) is used to get an execution plan of a `select` query (or the structure of an existing table, although it's more common to use `describe` for that). You cannot apply it to a `create table` statement. Just remove the `explain` keyword and you should be fine.
Read the manual: [EXPLAIN](https://dev.mysql.com/doc/refman/5.0/en/explain.html) and [EXPLAIN Output Format](https://dev.mysql.com/doc/refman/5.0/en/explain-output.html). When you precede a `SELECT` statement with the keyword `EXPLAIN`, MySQL displays information from the optimizer about the statement execution plan. That is, MySQL explains how it would process the statement, including information about how tables are joined and in which order. For more about using `EXPLAIN` to obtain execution plan information, see the linked documentation.
ERROR 1064 (42000) in Create table statement in MYSQL
[ "mysql", "sql", "ddl", "create-table" ]
Sorry for the title. Not really sure how to word it. Edits appreciated. I have a table, `bin_content`, which has a column, `LotId` that I'm trying to do a calculation on. The calculation is a bit complex, but I've boiled it down to the part that is failing in the sql below: ``` SELECT bc.LotId, bc.ModelId, x.datecode FROM ( SELECT LEFT(bc.LotId,4) as datecode ) x, bin_content bc WHERE bc.Quantity <> 0; ``` When I run this, I'm getting: > SQL Error (1109): Unknown table 'bc' in field list. The error is referring to the reference on the third line of code: ``` LEFT(bc.LotId,4) as datecode ``` I'm wanting to access the value of `LotId` in the current record. What's the right way to do this? EDIT: Here's the actual calculation: ``` SELECT bc.LotId, bc.ModelId, x.monthAge FROM ( SELECT @reportdate := CURDATE(), @datecode := LEFT(bc.LotId,4), @dcyear := CONCAT("20",LEFT(@datecode,2)), @dcweek := SUBSTRING(@datecode,3,2), @dcyearstart := CONCAT(@dcyear,'-01-01'), @firstWeekLength := MOD((8-DAYOFWEEK(@dcyearstart)),7), @builddate := ADDDATE(@dcyearstart,IF(@dcweek = 1, 0, @firstWeekLength + (7*(@dcweek-2)))), @partialmonth := (DAY(@reportdate) - DAY(@builddate))/31, @monthAge := ((YEAR(@reportdate) - @dcyear) * 12) + (MONTH(@reportdate) - MONTH(@builddate)) + @partialmonth as monthAge ) x, bin_content bc WHERE bc.Quantity <> 0; ``` The line below is the problem child: ``` @datecode := LEFT(bc.LotId,4), ``` If I can get that line to return a value, the rest works. The basic idea is that the `bc.LotId` has a date code in the first four digits (`YYWW`, where `YY`=year, and `WW`=week number). I'm calculating the number of months that have passed since that date code.
Edited answer after the question was updated: Since you only rely on a single row for this calculation, you can move all the columns to the inner query and select from there: ``` SELECT x.LotId, x.ModelId, x.monthAge FROM (SELECT bc.LotId, bc.ModelId, @reportdate := CURDATE(), @datecode := LEFT(bc.LotId,4), @dcyear := CONCAT("20",LEFT(@datecode,2)), @dcweek := SUBSTRING(@datecode,3,2), @dcyearstart := CONCAT(@dcyear,'-01-01'), @firstWeekLength := MOD((8-DAYOFWEEK(@dcyearstart)),7), @builddate := ADDDATE(@dcyearstart,IF(@dcweek = 1, 0, @firstWeekLength + (7*(@dcweek-2)))), @partialmonth := (DAY(@reportdate) - DAY(@builddate))/31, @monthAge := ((YEAR(@reportdate) - @dcyear) * 12) + (MONTH(@reportdate) - MONTH(@builddate)) + @partialmonth as monthAge FROM bin_content bc WHERE bc.Quantity <> 0) x ```
The issue is because `bin_content` is not in the scope of your subquery. Removing a lot of the code, you have a skeleton like this: ``` SELECT stuff FROM( SELECT stuff ) x, bin_content bc... ``` There is no `FROM` clause inside your inner select query, so `bc` cannot be referenced. The query is complex, so I'm not sure if making it like this will work: ``` SELECT stuff FROM( SELECT stuff FROM bin_content bc ) x, bin_content bc... ``` but the issue is definitely as a result of `bc` not being in the proper scope.
How to access column from other table in calculation?
[ "mysql", "sql", "select" ]
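The chain of user variables in the accepted answer encodes one computation: turn a `YYWW` lot code into an approximate age in months. The same arithmetic is easier to follow, and to test, in application code. A Python sketch that mirrors the SQL step by step (function and variable names are mine, and the sample lot code is invented):

```python
from datetime import date, timedelta

def build_date(datecode: str) -> date:
    """Translate a YYWW lot code into the week's start date, mirroring
    the SQL: week 1 begins on Jan 1, later weeks begin on the Sunday
    after the (possibly partial) first week."""
    year = 2000 + int(datecode[:2])
    week = int(datecode[2:4])
    year_start = date(year, 1, 1)
    day_of_week = (year_start.weekday() + 1) % 7 + 1   # MySQL DAYOFWEEK: 1 = Sunday
    first_week_length = (8 - day_of_week) % 7
    offset = 0 if week == 1 else first_week_length + 7 * (week - 2)
    return year_start + timedelta(days=offset)

def month_age(datecode: str, report: date) -> float:
    """Months elapsed, with the day-of-month remainder scaled by 31 as
    in the SQL's @partialmonth term."""
    built = build_date(datecode)
    partial = (report.day - built.day) / 31
    return (report.year - built.year) * 12 + (report.month - built.month) + partial

built = build_date("1510")                 # year 2015, week 10 -> Sunday 2015-03-01
age = month_age("1510", date(2015, 5, 6))  # roughly 2.16 months
```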
I've got a problem with a select query. I need to `select coid,model,km,year` for all the vehicles which have `AC` and `MP3`. I wrote this SQL: ``` select distinct vehicle.vehid, model, km, year from vehicle, models, extras, veh_extras where models.modid = vehicle.modid and vehicle.vehid = veh_extras.vehid and extras.extraid = veh_extras.extraid and (descr = 'AC' or descr = 'mp3') ``` but I think it's wrong. `Extras.descr` is the column which holds the description of the extra. [schema link](http://prntscr.com/7296v7)
``` SELECT v.vehid, m.model, v.km, v.year FROM vehicle v JOIN model m ON v.modid = m.modid WHERE EXISTS ( SELECT 'a' FROM extras e JOIN veh_extras ve ON e.id = ve.extraid WHERE ve.vehid = v.vehid AND e.descr = 'AC' ) AND EXISTS ( SELECT 'a' FROM extras e JOIN veh_extras ve ON e.id = ve.extraid WHERE ve.vehid = v.vehid AND e.descr = 'mp3' ) ``` This is probably not the best way... but if you need to search for more extras simply add another EXISTS condition
Assuming you want vehicles that have both AC and MP3, then one option would be to join to the `veh_extras` table multiple times: ``` select distinct v.vehid, m.model from vehicle v join models m on m.modid = v.modid join veh_extras ve on v.vehid = ve.vehid join extras e on ve.extraid = e.extraid and e.descr = 'AC' join veh_extras ve2 on v.vehid = ve2.vehid join extras e2 on ve2.extraid = e2.extraid and e2.descr = 'MP3' ``` * [SQL Fiddle Demo](http://www.sqlfiddle.com/#!6/7d36f/4) --- Another option would be to use case aggregation: ``` select v.vehid, m.model from vehicle v join models m on m.modid = v.modid join veh_extras ve on v.vehid = ve.vehid join extras e on ve.extraid = e.extraid group by v.vehid, m.model having sum(case when e.descr = 'AC' then 1 else 0 end) > 0 and sum(case when e.descr = 'MP3' then 1 else 0 end) > 0 ``` * [More Fiddle](http://www.sqlfiddle.com/#!6/7d36f/5) BTW -- I left out some of the remedial columns -- easy to add those back...
Select query including a link table
[ "sql", "sql-server" ]
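Both answers require one match per wanted extra; a third common formulation counts the distinct matching extras per vehicle. A sketch with Python's sqlite3 (schema trimmed to the columns the query touches, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vehicle    (vehid INTEGER, modid INTEGER);
CREATE TABLE models     (modid INTEGER, model TEXT);
CREATE TABLE extras     (extraid INTEGER, descr TEXT);
CREATE TABLE veh_extras (vehid INTEGER, extraid INTEGER);
INSERT INTO vehicle    VALUES (1, 1), (2, 1), (3, 2);
INSERT INTO models     VALUES (1, 'Astra'), (2, 'Corsa');
INSERT INTO extras     VALUES (10, 'AC'), (11, 'MP3'), (12, 'GPS');
INSERT INTO veh_extras VALUES (1, 10), (1, 11), (2, 10), (3, 11), (3, 12);
""")

# Vehicles that have BOTH extras: filter to the wanted extras, then keep
# only vehicles matching all of them.
rows = conn.execute("""
    SELECT v.vehid, m.model
    FROM vehicle v
    JOIN models m      ON m.modid = v.modid
    JOIN veh_extras ve ON ve.vehid = v.vehid
    JOIN extras e      ON e.extraid = ve.extraid
    WHERE e.descr IN ('AC', 'MP3')
    GROUP BY v.vehid, m.model
    HAVING COUNT(DISTINCT e.descr) = 2
""").fetchall()
print(rows)  # only vehicle 1 carries both AC and MP3
```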
I've been trying to write a query that, based on a given condition (whether isCurrent = 1 or not), gives me just one value/row per CurriculumId (which will be a parameter of a stored procedure). If isCurrent = 1 it should return the item with the most recent StartDate, but if isCurrent = 0 it should give me the one with the most recent EndDate. The thing is that I only want one item per CurriculumId: ideally the one with isCurrent = 1 and the most recent StartDate (ignoring the remaining rows) but, if there are no experiences with isCurrent = 1, then it should return the one with the most recent EndDate. My previous query was almost working, but I still got both the one with the most recent StartDate (isCurrent = 1) AND the one with the most recent EndDate, when I want to retrieve just one or the other. I've come to the query below: ``` SELECT table.IntProfessionalExperienceId, table.IsCurrent, table.StartDate, table.EndDate FROM table WHERE table.CurriculumId = 12 AND CASE table.IsCurrent WHEN 1 THEN table.StartDate = ( SELECT max(table.StartDate) FROM table WHERE table.IsCurrent = 1 AND table.CurriculumId = 12 GROUP BY table.CurriculumId ) ELSE table.EndDate = ( SELECT max(table.EndDate) FROM table WHERE table.CurriculumId = 12 GROUP BY table.CurriculumId ) END ``` Individually, the queries seem to be working OK and returning the expected value, although when run as a whole I get the following errors: ``` Msg 102, Level 15, State 1, Line 8 Incorrect syntax near '='. Msg 102, Level 15, State 1, Line 14 Incorrect syntax near ')'. Msg 102, Level 15, State 1, Line 21 Incorrect syntax near ')'. ``` What in my syntax is wrong? I know from reading the errors what is wrong with the query, but I just don't know how to fix it. And is it just the syntax, or am I doing the query wrong to start with?
Give each row a rank in its curriculumid group, using ROW_NUMBER with an appropriate ORDER BY clause, then only take the records ranked 1 (i.e. the best match). ``` select intprofessionalexperienceid, iscurrent, startdate, enddate from ( select mytable.*, row_number() over ( partition by curriculumid order by case when iscurrent = 1 then 1 else 2 end, case when iscurrent = 1 then startdate else enddate end desc ) as rn from mytable ) ranked where rn = 1; ``` (I know this doesn't actually answer your question, but it is the straightforward way to approach the problem in my opinion.)
Split this into multiple conditions, like this: ``` SELECT table.IntProfessionalExperienceId, table.IsCurrent, table.StartDate, table.EndDate FROM table WHERE table.CurriculumId = 12 AND ( ( Table.IsCurrent = 1 AND table.StartDate = ( SELECT max(table.StartDate) FROM table WHERE table.IsCurrent = 1 AND table.CurriculumId = 12 GROUP BY table.CurriculumId ) ) OR ( ISNULL(table.IsCurrent,0) != 1 AND table.EndDate = ( SELECT max(table.EndDate) FROM table WHERE table.CurriculumId = 12 GROUP BY table.CurriculumId ) ) ) ``` EDIT: another, arguably simpler approach would be to pre-aggregate the data you want in your WHERE clause so that you only need to call it a single time, rather than evaluate each row separately. Something like the following: ``` SELECT table.IntProfessionalExperienceId, table.IsCurrent, table.StartDate, table.EndDate FROM table INNER JOIN ( SELECT MAX(table.EndDate) AS MaxEndDate, MAX(CASE WHEN table.IsCurrent = 1 THEN table.StartDate END) AS MaxCurrentStartDate FROM table WHERE CurriculumID = 12 ) MaxDates ON (Table.IsCurrent = 1 AND Table.StartDate = MaxDates.MaxCurrentStartDate) OR (ISNULL(Table.IsCurrent, 0) != 1 AND Table.EndDate = MaxDates.MaxEndDate) WHERE table.CurriculumId = 12 ```
SQL Case on Where clause to different columns
[ "sql", "sql-server" ]
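The conditional `ORDER BY` inside `ROW_NUMBER` in the accepted answer reduces to a two-part sort key: prefer `isCurrent = 1`, then order by `StartDate` for current rows and `EndDate` otherwise. A pure-Python sketch of that key (sample rows invented):

```python
from datetime import date

rows = [
    # (curriculum_id, is_current, start_date, end_date)
    (12, 0, date(2010, 1, 1), date(2012, 6, 1)),
    (12, 1, date(2013, 2, 1), None),
    (12, 1, date(2014, 9, 1), None),
    (13, 0, date(2011, 3, 1), date(2011, 9, 1)),
    (13, 0, date(2012, 3, 1), date(2013, 9, 1)),
]

def pick(rows, curriculum_id):
    """Return the row ranked first: current rows beat non-current ones,
    then the most recent StartDate (current) / EndDate (non-current) wins."""
    candidates = [r for r in rows if r[0] == curriculum_id]
    return min(
        candidates,
        key=lambda r: (0 if r[1] == 1 else 1,
                       -(r[2] if r[1] == 1 else r[3]).toordinal()),
    )

best_12 = pick(rows, 12)   # a current row exists -> latest StartDate among them
best_13 = pick(rows, 13)   # no current rows -> latest EndDate
```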
I have the following table: ``` user_id post_streak streak_date streak first_name club_id -------- ----------- ------------ --------- ----------- -------- 18941684 1 2015-05-05 15:36:18 3 user 1000 ``` I want to change streak to 0 if it has been longer than 12 days. Current query: ``` select first_name, streak, user_id from myTable where club_id = 1000 and post_streak = 1 and streak_date between date_sub(now(),INTERVAL 12 DAY) and now() order by streak desc; ``` This doesn't show results older than 12 days. I want to show all results but change "streak" to 0 if it has been longer than 12 days. What is the best way to go about this?
``` UPDATE myTable SET streak = 0 WHERE streak_date < DATE_SUB(NOW(), INTERVAL 12 DAY); SELECT first_name, streak, user_id FROM myTable WHERE club_id = 1000 AND post_streak = 1 ORDER BY streak DESC; ``` The first query sets streak to 0 for all records whose streak_date is more than 12 days ago. The second query then gets a list of all your records that have a club_id of 1000 and a post_streak of 1.
<http://sqlfiddle.com/#!9/d8bbd/6> ``` select user_id, first_name, streak_date, IF(streak_date between date_sub(now(),INTERVAL 12 DAY) and now(),streak,0) from myTable where club_id = 1000 and post_streak = 1 order by streak desc; ```
Change MySql value based on time that has past
[ "mysql", "sql" ]
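The read-only answer avoids mutating data just to display it. The same `IF`/`CASE` idea in Python's sqlite3, where `julianday` stands in for MySQL's date arithmetic (sample rows invented):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE streaks (user_id INTEGER, first_name TEXT, "
             "streak INTEGER, streak_date TEXT)")
now = datetime(2015, 5, 7, 12, 0, 0)
conn.executemany("INSERT INTO streaks VALUES (?, ?, ?, ?)", [
    (1, "fresh", 3, (now - timedelta(days=2)).isoformat(" ")),
    (2, "stale", 9, (now - timedelta(days=20)).isoformat(" ")),
])

# Show every row, but report streak as 0 once it is older than 12 days.
rows = conn.execute("""
    SELECT first_name,
           CASE WHEN julianday(?) - julianday(streak_date) <= 12
                THEN streak ELSE 0 END AS streak
    FROM streaks
    ORDER BY streak DESC
""", (now.isoformat(" "),)).fetchall()
print(rows)  # [('fresh', 3), ('stale', 0)]
```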
The set up is a contact table, employee table, and an Employee\_contact table with a many to many relationship. I want to know how to combine these two queries into one for a combined result set. ``` SELECT FirstName, LastName, (ContactNumber) AS Home FROM Employees AS E JOIN Employees_Contacts AS EC ON E.EmployeeID = EC.EmployeeID JOIN Contacts AS C on EC.ContactID = C.ContactID WHERE ContactType = 'Home Phone' SELECT FirstName, LastName, (ContactNumber) AS Fax FROM Employees AS E JOIN Employees_Contacts AS EC ON E.EmployeeID = EC.EmployeeID JOIN Contacts AS C on EC.ContactID = C.ContactID WHERE ContactType = 'Home Fax'; ```
One option is to use conditional aggregation: ``` SELECT FirstName, LastName, MAX(CASE WHEN ContactType = 'Home Phone' THEN ContactNumber END) AS Home, MAX(CASE WHEN ContactType = 'Home Fax' THEN ContactNumber END) AS Fax FROM Employees AS E JOIN Employees_Contacts AS EC ON E.EmployeeID = EC.EmployeeID JOIN Contacts AS C on EC.ContactID = C.ContactID WHERE ContactType IN ('Home Phone','Home Fax') GROUP BY FirstName, LastName ```
Using Case Statement ``` SELECT FirstName, LastName, CASE WHEN ContactType = 'Home Phone' then ContactNumber ELSE NULL END AS Home, CASE WHEN ContactType = 'Home Fax' then ContactNumber ELSE NULL END AS Fax FROM Employees AS E JOIN Employees_Contacts AS EC ON E.EmployeeID = EC.EmployeeID JOIN Contacts AS C on EC.ContactID = C.ContactID ```
I need the results from these two queries to be combined into one statement
[ "sql", "many-to-many" ]
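The conditional-aggregation query in the accepted answer runs essentially unchanged on SQLite. A self-contained sketch (identifiers lower-cased, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees          (employee_id INTEGER, first_name TEXT, last_name TEXT);
CREATE TABLE contacts           (contact_id INTEGER, contact_type TEXT, contact_number TEXT);
CREATE TABLE employees_contacts (employee_id INTEGER, contact_id INTEGER);
INSERT INTO employees          VALUES (1, 'Ada', 'Lovelace');
INSERT INTO contacts           VALUES (10, 'Home Phone', '555-0100'), (11, 'Home Fax', '555-0101');
INSERT INTO employees_contacts VALUES (1, 10), (1, 11);
""")

# Conditional aggregation folds one row per contact type into one row
# per employee, with each type landing in its own column.
rows = conn.execute("""
    SELECT e.first_name, e.last_name,
           MAX(CASE WHEN c.contact_type = 'Home Phone' THEN c.contact_number END) AS home,
           MAX(CASE WHEN c.contact_type = 'Home Fax'   THEN c.contact_number END) AS fax
    FROM employees e
    JOIN employees_contacts ec ON e.employee_id = ec.employee_id
    JOIN contacts c            ON ec.contact_id = c.contact_id
    WHERE c.contact_type IN ('Home Phone', 'Home Fax')
    GROUP BY e.first_name, e.last_name
""").fetchall()
print(rows)  # [('Ada', 'Lovelace', '555-0100', '555-0101')]
```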
I have a table named `Inventory` with the following structure: ``` Location_ID |Item_ID |Stock 1 |A |100 1 |B |500 1 |C |300 2 |A |10 2 |B |20 ``` The fields `location_ID` and `item_ID` form a composite key. I want to produce the following data from that single table: ``` Item_ID |Stock_1 |Stock_2 A |100 |10 B |500 |20 C |300 |0 ``` I tried writing several self-join queries but it doesn't work. There is also another problem: `Item_ID` C does not exist for `location_ID` 2. How can we put the value '0' in the resulting table if it does not exist? Can someone with a brighter mind shed some light?
``` select DIS_ITEM_ID, IFNULL((select stock from inventory where location_id = 1 and item_id = DIS_ITEM_ID), 0) as stock_1, IFNULL((select stock from inventory where location_id = 2 and item_id = DIS_ITEM_ID), 0) as stock_2 from (select distinct item_ID as DIS_ITEM_ID from inventory) t ``` The derived table must be given an alias (`t` here) in MySQL.
I know it's probably too late but there is a simpler way: ``` SELECT Item_Id, SUM( CASE WHEN Location_ID = 1 THEN Stock ELSE 0 END) As Stock1, SUM( CASE WHEN Location_ID = 2 THEN Stock ELSE 0 END) As Stock2 FROM Inventory GROUP BY Item_Id ``` [sqlfiddle](http://sqlfiddle.com/#!9/bca28/24)
MySQL Self Join Table
[ "mysql", "sql" ]
I have this query: ``` select (select GETDATE()) as Date, (select ROUND(sum(QuantityInStore * AveragePrice),2) as "Photo" from inventoryinfo, InventoryStoreInfo WHERE InventoryCategoryID IN ('3','6','19','22','23','40','32','56','52','41')) ``` The results are fine, but the column name does not work for the "Photo" column. The Date column name shows, but Photo reads (No Column Name) instead.
Try this: ``` select (select GETDATE()) as Date, (select ROUND(sum(QuantityInStore * AveragePrice),2) from inventoryinfo, InventoryStoreInfo WHERE InventoryCategoryID IN ('3','6','19','22','23','40','32','56','52','41') ) as Photo ```
Try this query ``` select GETDATE() AS [Date] ,ROUND(sum(QuantityInStore * AveragePrice),2) as [Photo] ,ROUND(sum(QuantityInStore * AveragePrice),2) as COG_Facilities -- Add other outputs here from inventoryinfo, InventoryStoreInfo WHERE InventoryCategoryID IN ('3','6','19','22','23','40','32','56','52','41') ```
Column Names in SQL Query
[ "sql", "sql-server", "t-sql" ]
I have the following, which outputs `2015-05-06 10:30:00`: `SELECT DATE_FORMAT( NOW( ) , '%Y-%m-%d' ' 10:30:00' )` I would now like to increase the day so it outputs `2015-05-07 10:30:00`. Using `DATE_ADD` I have been able to increase the date, however it has now removed the time, giving `2015-05-07`: `SELECT (DATE(DATE_ADD(DATE_FORMAT(NOW(),'%Y-(%m)-%d' ' 10:30:00'), INTERVAL +1 DAY)))` How could I preserve the custom time? I would like the output to look like this: `2015-05-07 10:30:00`. Thanks!
Pretty much already there, removing the extra ( not required ) *Date* cast : ``` SELECT (DATE_ADD(DATE_FORMAT(NOW(),'%Y-(%m)-%d' ' 10:30:00'), INTERVAL +1 DAY)); ``` which gives **output** > 2015-05-07 10:30:00
You can use `concat()` instead of just putting the two strings next to each other: ``` SELECT CONCAT(DATE_FORMAT(NOW( ) , '%Y-%m-%d'), ' 10:30:00' ) ``` And for tomorrow: ``` SELECT CONCAT(DATE_FORMAT(DATE_ADD(NOW( ), INTERVAL 1 DAY), '%Y-%m-%d'), ' 10:30:00' ) ```
MYSQL DATE_FORMAT to return additional day plus custom time
[ "mysql", "sql" ]
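Both answers come down to the same split: do the date arithmetic first, format to `YYYY-MM-DD`, then append the fixed clock time as a string. The same split with sqlite3's date functions (`||` is string concatenation; the input date is fixed here so the result is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Add one day, format as YYYY-MM-DD, then append the fixed clock time.
row = conn.execute(
    "SELECT date('2015-05-06', '+1 day') || ' 10:30:00'"
).fetchone()
print(row[0])  # 2015-05-07 10:30:00
```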
I need a function to convert a timestamp to a date format. For example: ``` Date Field 2014-06-15 00:00:00.000 ``` I want to show the date as 6/15/2014 in another column called day. What function do I use in SQL Server? Thanks
If the source column is of type `Datetime` and target column can be of `varchar` then you can simply write as: ``` SELECT CONVERT(VARCHAR(10), mydatetimecolumn, 101) AS [MM/DD/YYYY] From Testtable ``` else if source column is of type `varchar` and you are sure that all dates are valid then you can write as: ``` SELECT CONVERT(VARCHAR(10), cast('2014-06-15 00:00:00.000' as datetime), 101) AS [MM/DD/YYYY] ```
## For SQL 2012 and above: ``` DECLARE @TimeStamp DATETIME = '2014-06-15 00:00:00.000' SELECT FORMAT(@TimeStamp,'d','en-US') AS Format_Date ``` Results: ``` Format_Date --------------- 6/15/2014 ```
SQL Server Convert Time Stamp in date
[ "sql", "sql-server" ]
I have a database table `foo` and I want to replace the `null` values present in specific columns with an empty string. For removing null values present in a particular column `x`, I would do: ``` update foo set x = '' where x is null; ``` Is there any way to do something like: ``` update foo set x,y,z = '' where x,y,z is null; ```
You can: ``` update foo set x = case when x is null then '' else x end, y = case when y is null then '' else y end, z = case when z is null then '' else z end; ```
This will update all rows where *any* of the columns *x*, *y* or *z* has a NULL value, and uses the [COALESCE function](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_coalesce) to modify values only in the columns where the value is NULL: ``` UPDATE foo SET x = COALESCE(x, ''), y = COALESCE(y, ''), z = COALESCE(z, '') WHERE x IS NULL OR y IS NULL OR z IS NULL ```
How to replace values present in multiple columns?
[ "mysql", "sql" ]
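The `COALESCE` approach can be checked end to end in a few lines of sqlite3. Note the `WHERE` uses explicit `IS NULL` tests: `NULL IN (x, y, z)` never evaluates to true, so it cannot be used to detect NULLs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (x TEXT, y TEXT, z TEXT)")
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)", [
    ("a", None, "c"),
    (None, None, None),
    ("d", "e", "f"),
])

# COALESCE leaves non-NULL values untouched and replaces NULLs with ''.
conn.execute("""
    UPDATE foo
    SET x = COALESCE(x, ''), y = COALESCE(y, ''), z = COALESCE(z, '')
    WHERE x IS NULL OR y IS NULL OR z IS NULL
""")
rows = conn.execute("SELECT * FROM foo").fetchall()
print(rows)  # [('a', '', 'c'), ('', '', ''), ('d', 'e', 'f')]
```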
Using PostgreSQL 9.4.1, I am trying to identify/display the occurrences of values over 3 different columns. See below (apologies for the formatting, I can't get a proper table format. Type, type1 and type2 are the column names. The table name is `documents` ``` CREATE TABLE documents AS SELECT * FROM ( VALUES ('USA','China','Africa'), ('China','USA','Chemicals'), ('Chemicals','Africa','USA') ) AS t(type,type1,type2); ``` Below is \d+ of the table: ``` Column | Type | Modifiers ----------------+--------+-------------------------------------------------------- id | bigint | not null default nextval('documents_id_seq'::regclass) title | text | description | text | source | text | url | text | emaillink | text | emailurl | text | type | text | language | text | author | text | publisheddate | date | default ('now'::text)::date comments | text | classification | text | submittedby | text | localurl | text | type1 | text | type2 | text | Indexes: "documents_pkey" PRIMARY KEY, btree (id) ``` I would like a query that returns: ``` Africa - 2 Chemicals - 2 China - 2 USA - 3 ``` This is a query likely to get run fairly liberally, so I'd like to avoid expensive queries if at all possible.
You can use `union all` to pivot the columns into rows and then do a group by to count the occurrences of each type: ``` select type, count(*) from ( select type from documents union all select type1 from documents union all select type2 from documents ) t1 group by type ```
Try this: ``` SELECT word, COUNT(1) AS occurrences FROM ( SELECT type AS word FROM documents UNION ALL SELECT type1 FROM documents UNION ALL SELECT type2 FROM documents ) t GROUP BY word; ``` The derived table must have an alias in PostgreSQL, and the first branch of the `UNION` names the combined column `word`.
PostgreSQL: Display and count distinct occurrences of values across multiple columns
[ "sql", "postgresql" ]
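The `UNION ALL` unpivot in the accepted answer is portable; here it is against the question's `documents` table in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (type TEXT, type1 TEXT, type2 TEXT)")
conn.executemany("INSERT INTO documents VALUES (?, ?, ?)", [
    ("USA", "China", "Africa"),
    ("China", "USA", "Chemicals"),
    ("Chemicals", "Africa", "USA"),
])

# UNION ALL stacks the three columns into one, then a plain GROUP BY counts.
rows = conn.execute("""
    SELECT t, COUNT(*) AS occurrences
    FROM (SELECT type AS t FROM documents
          UNION ALL SELECT type1 FROM documents
          UNION ALL SELECT type2 FROM documents)
    GROUP BY t
    ORDER BY t
""").fetchall()
print(rows)  # [('Africa', 2), ('Chemicals', 2), ('China', 2), ('USA', 3)]
```

SQLite tolerates the unaliased derived table; PostgreSQL requires an alias, as noted above.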
I want to select the balance of the last 2 days (today and yesterday), for which the code looks something like this: ``` select a.balance as "today",b.balance as "yesterday" from account a,account b where a.id='xxx' and a.id=b.id and a.dates=to_date(sysdate, 'DD-MM-YYYY') --today and b.dates=to_date(sysdate-1, 'DD-MM-YYYY') --yesterday ``` The problem comes when the data for today has not been inputted yet: this results in both balances being null, even though yesterday's data is available. Note: my current solution is to split the query into 2, but I am hoping there is a way to use only 1 query. Expected output: ``` ----------------- |today|yesterday| ----------------- |null |9000 | ----------------- ``` Data ``` -------------------------- |id |balance |dates | -------------------------- |1 |9000 |6/5/2015| -------------------------- ```
No need to join two tables ,if we are allowed to hard code two dates ``` --code with sysdate with tab as --dummy data ( select 1 id,sysdate -level+1 dat,level*1000 balance from dual connect by level <=10 ) --main query select max(decode(trunc(dat),trunc(SYSDATE),balance)) "today" ,max(decode(trunc(dat),trunc(SYSDATE-1),balance)) "yesterday" from tab t where TRUNC(t.dat) IN (TRUNC(SYSDATE),TRUNC(SYSDATE-1)); --code without sysdate with tab as ( select 1 id,sysdate -level dat,level*1000 balance from dual connect by level <=10 ) --main query select max(decode(trunc(dat),trunc(SYSDATE),balance)) "today" ,max(decode(trunc(dat),trunc(SYSDATE-1),balance)) "yesterday" from tab t where TRUNC(t.dat) IN (TRUNC(SYSDATE),TRUNC(SYSDATE-1)); ``` [sqlfiddle](http://www.sqlfiddle.com/#!4/9eecb7db59d16c/326) ``` select max(decode(trunc(dates),trunc(SYSDATE),balance)) "today" ,max(decode(trunc(dates),trunc(SYSDATE-1),balance)) "yesterday" from account a where a.id='xxx' and trunc(a.dates) IN (trunc(sysdate),trunc(sysdate-1)); ```
No need to join; use the `LAG` function to track the previous row. If you would like to know more about the `LAG` function, please visit this link: <http://www.techonthenet.com/oracle/functions/lag.php>. I have taken the below as input. ![enter image description here](https://i.stack.imgur.com/3CoAy.png) and executed the below query using `LAG`, which automatically tracks the previous row. `SELECT * FROM( SELECT ID,LAG(BALANCE) OVER (ORDER BY DATES) AS YESTERDAY_BALANCE,BALANCE AS TODAYS_BALANCE FROM ACCOUNTS) WHERE YESTERDAY_BALANCE IS NOT NULL;` The output I got is below. If you don't get data for today, it will still display the row. ![enter image description here](https://i.stack.imgur.com/0rRna.png)
oracle query balance
[ "sql", "database", "oracle", "analytics" ]
I have the below view. What I need to do is get the date difference of the field ActionDate between each 2 records having the same Vehicle AND OrderCode; the difference should be computed as the **Mode O** date minus the **Mode I** date. I need to get the list of the differences in order to get the average of that time. Thanks for helping. ![Data](https://i.stack.imgur.com/cc0Hu.png)
You could use the analytic **LAG() OVER()** function to get the difference between the dates. For example, ``` SQL> WITH t AS 2 ( 3 select 'O' as "MODE", 'V1234567890' as Vehicle, '1411196232' as OrderCode, to_date('2014-11-19 16:34:35','yyyy-mm-dd hh24:mi:ss') as ActionDate from dual 4 union all 5 select 'I' as "MODE", 'V1234567890' as Vehicle, '1411196232' as OrderCode, to_date('2014-11-19 15:27:09','yyyy-mm-dd hh24:mi:ss') as ActionDate from dual 6 union all 7 select 'O' as "MODE", 'V2987654321' as Vehicle, '1411206614' as OrderCode, to_date('2014-11-20 14:03:02','yyyy-mm-dd hh24:mi:ss') as ActionDate from dual 8 union all 9 select 'I' as "MODE", 'V2987654321' as Vehicle, '1411206614' as OrderCode, to_date('2014-11-20 13:47:02','yyyy-mm-dd hh24:mi:ss') as ActionDate from dual 10 union all 11 select 'O' as "MODE", 'V2987654321' as Vehicle, '1411185798' as OrderCode, to_date('2014-11-20 01:40:58','yyyy-mm-dd hh24:mi:ss') as ActionDate from dual 12 union all 13 SELECT 'I' AS "MODE", 'V2987654321' AS Vehicle, '1411185798' AS OrderCode, to_date('2014-11-20 00:47:02','yyyy-mm-dd hh24:mi:ss') AS ActionDate FROM dual 14 ) 15 SELECT "MODE", 16 Vehicle, 17 OrderCode, 18 TO_CHAR(ActionDate,'yyyy-mm-dd hh24:mi:ss') dt, 19 TO_CHAR(LAG(ActionDate) OVER(PARTITION BY Vehicle,OrderCode ORDER BY Vehicle, ActionDate),'yyyy-mm-dd hh24:mi:ss') lag_dt, 20 ActionDate - LAG(ActionDate) OVER(PARTITION BY Vehicle,OrderCode ORDER BY Vehicle, ActionDate) diff 21 FROM t; M VEHICLE ORDERCODE DT LAG_DT DIFF - ----------- ---------- ------------------- ------------------- ---------- I V1234567890 1411196232 2014-11-19 15:27:09 O V1234567890 1411196232 2014-11-19 16:34:35 2014-11-19 15:27:09 .046828704 I V2987654321 1411185798 2014-11-20 00:47:02 O V2987654321 1411185798 2014-11-20 01:40:58 2014-11-20 00:47:02 .037453704 I V2987654321 1411206614 2014-11-20 13:47:02 O V2987654321 1411206614 2014-11-20 14:03:02 2014-11-20 13:47:02 .011111111 6 rows selected. 
SQL> ``` **NOTE:** The **WITH clause** is to build the **sample data**, in your case you need to use your actual **table\_name**: ``` SELECT "MODE", Vehicle, OrderCode, TO_CHAR(ActionDate,'yyyy-mm-dd hh24:mi:ss') dt, TO_CHAR(LAG(ActionDate) OVER(PARTITION BY Vehicle,OrderCode ORDER BY Vehicle, ActionDate),'yyyy-mm-dd hh24:mi:ss') lag_dt, ActionDate - LAG(ActionDate) OVER(PARTITION BY Vehicle,OrderCode ORDER BY Vehicle, ActionDate) diff FROM your_table; ``` I have put the **TO\_CHAR** just for demonstration purposes; your desired output is the **DIFF** column. Regarding the **MODE**, you could add it to the filter predicate.
Use GROUP BY and subtract the dates within a sub query ``` SELECT t.IO_SEQ, t.Vehicle, t.OrderCode, (SELECT MAX(t2.ActionDate) FROM table t2 WHERE t.IO_SEQ = t2.IO_SEQ) - (SELECT MIN(t2.ActionDate) FROM table t2 WHERE t.IO_SEQ = t2.IO_SEQ) AS ActionDiff FROM table t GROUP BY t.IO_SEQ, t.Vehicle, t.OrderCode ```
Date/Time Difference in Oracle for same field in different rows
[ "sql", "oracle", "datetime", "date-arithmetic" ]
I have three tables: 1. `project: project_id, project_name` 2. `milestone: milestone_id, milestone_name` 3. `project_milestone: id, project_id, milestone_id, completed_date` I want to get the second highest completed\_date and milestone\_id from project\_milestone grouped by project\_id. That is I want to get the milestone\_id of second highest completed\_date for each project. What would be the correct query for this?
I think you can do what you want with the `project_milestone` table and `row_number()`: ``` select pm.* from (select pm.*, row_number() over (partition by project_id order by completed_date desc) as seqnum from project_milestone pm where pm.completed_date is not null ) pm where seqnum = 2; ``` If you need to include *all* projects, even those without two milestones, you can use a `left join`: ``` select p.project_id, pm.milestone_id, pm.completed_date from projects p left join (select pm.*, row_number() over (partition by project_id order by completed_date desc) as seqnum from project_milestone pm where pm.completed_date is not null ) pm on p.project_id = pm.project_id and pm.seqnum = 2; ```
Using LATERAL (PG 9.3+) can yield better performance than the window function version. ``` SELECT * FROM project; project_id | project_name ------------+-------------- 1 | Project A 2 | Project B SELECT * FROM project_milestone; id | project_id | milestone_id | completed_date ----+------------+--------------+------------------------ 1 | 1 | 1 | 2000-01-01 00:00:00+01 2 | 1 | 2 | 2000-01-02 00:00:00+01 3 | 1 | 5 | 2000-01-03 00:00:00+01 4 | 1 | 6 | 2000-01-04 00:00:00+01 5 | 2 | 3 | 2000-02-01 00:00:00+01 6 | 2 | 4 | 2000-02-02 00:00:00+01 7 | 2 | 7 | 2000-02-03 00:00:00+01 8 | 2 | 8 | 2000-02-04 00:00:00+01 SELECT * FROM project p CROSS JOIN LATERAL ( SELECT milestone_id, completed_date FROM project_milestone pm WHERE pm.project_id = p.project_id ORDER BY completed_date ASC LIMIT 1 OFFSET 1 ) second_highest; project_id | project_name | milestone_id | completed_date ------------+--------------+--------------+------------------------ 1 | Project A | 2 | 2000-01-02 00:00:00+01 2 | Project B | 4 | 2000-02-02 00:00:00+01 ```
Query to find second largest value from every group
[ "sql", "postgresql", "postgresql-9.3" ]
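SQLite has neither `LATERAL` nor, in older builds, window functions, but a correlated subquery with `LIMIT 1 OFFSET 1` expresses the same second-highest-per-group idea. A sketch with invented milestone data (one scalar subquery per wanted column; add another for `completed_date`, or use a window function on SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project           (project_id INTEGER, project_name TEXT);
CREATE TABLE project_milestone (id INTEGER, project_id INTEGER,
                                milestone_id INTEGER, completed_date TEXT);
INSERT INTO project VALUES (1, 'Project A'), (2, 'Project B');
INSERT INTO project_milestone VALUES
 (1, 1, 1, '2000-01-01'), (2, 1, 2, '2000-01-02'),
 (3, 1, 5, '2000-01-03'), (4, 1, 6, '2000-01-04'),
 (5, 2, 3, '2000-02-01'), (6, 2, 4, '2000-02-02');
""")

# For each project, skip the most recent completed_date (OFFSET 1) and
# take the next one: the second-highest.
rows = conn.execute("""
    SELECT p.project_id,
           (SELECT pm.milestone_id FROM project_milestone pm
            WHERE pm.project_id = p.project_id
            ORDER BY pm.completed_date DESC LIMIT 1 OFFSET 1) AS milestone_id
    FROM project p
    ORDER BY p.project_id
""").fetchall()
print(rows)  # [(1, 5), (2, 3)]
```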
I have the following table with two fields, namely `empID` and `department`, as shown below: ``` create table employe ( empID varchar(10), department varchar(10) ); ``` Inserting some records: ``` insert into employe values('A101','Z'),('A101','X'),('A101','Y'),('A102','Z'),('A102','X'), ('A103','Z'),('A103','Y'),('A104','X'),('A104','Y'),('A105','Z'),('A106','X'); select * from employe; ``` ``` empID department ------------------ A101 Z A101 X A101 Y A102 Z A102 X A103 Z A103 Y A104 X A104 Y A105 Z A106 X ``` **Note**: Now I want to show the employees who belong only to the departments `Z` and `Y`. So according to the condition, only employee `A103` should be displayed, because he belongs only to the departments `Z` and `Y`. But employee `A101` should not appear because he belongs to `Z`, `X`, and `Y`. **Expected Result**: If the condition is `Z` and `Y` then the result should be: ``` empID ------ A103 ``` If the condition is `Z` and `X` then the result should be: ``` empID ------ A102 ``` If the condition is `Z`, `X` and `Y` then the result should be: ``` empID ------ A101 ``` ***Note**: I want to do it in the `where` clause only (I don't want to use the `group by` and `having` clauses), because I'm going to include this one in another `where` also.*
This is a ***Relational Division with no Remainder (RDNR)*** problem. See this [**article**](https://www.simple-talk.com/sql/learn-sql-server/high-performance-relational-division-in-sql-server/) by Dwain Camps that provides many solution to this kind of problem. *First Solution* [**SQL Fiddle**](http://sqlfiddle.com/#!6/6d9cf/2/0) ``` SELECT empId FROM ( SELECT empID, cc = COUNT(DISTINCT department) FROM employe WHERE department IN('Y', 'Z') GROUP BY empID )t WHERE t.cc = 2 AND t.cc = ( SELECT COUNT(*) FROM employe WHERE empID = t.empID ) ``` *Second Solution* [**SQL Fiddle**](http://sqlfiddle.com/#!6/6d9cf/3/0) ``` SELECT e.empId FROM employe e WHERE e.department IN('Y', 'Z') GROUP BY e.empID HAVING COUNT(e.department) = 2 AND COUNT(e.department) = (SELECT COUNT(*) FROM employe WHERE empID = e.empId) ``` --- Without using `GROUP BY` and `HAVING`: ``` SELECT DISTINCT e.empID FROM employe e WHERE EXISTS( SELECT 1 FROM employe WHERE department = 'Z' AND empID = e.empID ) AND EXISTS( SELECT 1 FROM employe WHERE department = 'Y' AND empID = e.empID ) AND NOT EXISTS( SELECT 1 FROM employe WHERE department NOT IN('Y', 'Z') AND empID = e.empID ) ```
I know that this question has already been answered, but it was a fun problem and I tried to do it in a way that no one else has. A benefit of mine is that you can input any list of strings, as long as each value has a comma afterwards, and you don't have to worry about checking counts. **Note:** Values must be listed in alphabetic order. ## XML Solution with CROSS APPLY ``` select DISTINCT empID FROM employe A CROSS APPLY ( SELECT department + ',' FROM employe B WHERE A.empID = B.empID ORDER BY department FOR XML PATH ('') ) CA(Deps) WHERE deps = 'Y,Z,' ``` **Results:** ``` empID ---------- A103 ```
Select users belonging only to particular departments
[ "sql", "sql-server", "postgresql", "sql-server-2008-r2" ]
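The accepted answer's second solution (matching-department count plus total-row count check) ports directly to sqlite3. A sketch that parameterizes the wanted department set; the helper name is mine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employe (empID TEXT, department TEXT)")
conn.executemany("INSERT INTO employe VALUES (?, ?)", [
    ("A101", "Z"), ("A101", "X"), ("A101", "Y"), ("A102", "Z"), ("A102", "X"),
    ("A103", "Z"), ("A103", "Y"), ("A104", "X"), ("A104", "Y"), ("A105", "Z"),
    ("A106", "X"),
])

def exactly_in(departments):
    """Employees whose department set is exactly `departments`:
    every wanted department present, and no row outside the set."""
    placeholders = ",".join("?" * len(departments))
    sql = f"""
        SELECT empID FROM employe e
        WHERE department IN ({placeholders})
        GROUP BY empID
        HAVING COUNT(DISTINCT department) = ?
           AND COUNT(*) = (SELECT COUNT(*) FROM employe
                           WHERE empID = e.empID)
    """
    return [r[0] for r in conn.execute(sql, (*departments, len(departments)))]

print(exactly_in(["Z", "Y"]))       # ['A103']
print(exactly_in(["Z", "X"]))       # ['A102']
print(exactly_in(["Z", "X", "Y"]))  # ['A101']
```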
I have an RDD containing a timestamp named **time** of type long: ``` root |-- id: string (nullable = true) |-- value1: string (nullable = true) |-- value2: string (nullable = true) |-- time: long (nullable = true) |-- type: string (nullable = true) ``` I am trying to group by value1, value2 and time as YYYY-MM-DD. I tried to group by cast(time as Date) but then I got the following error: ``` Exception in thread "main" java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:40) at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala) Caused by: java.lang.RuntimeException: [1.21] failure: ``DECIMAL'' expected but identifier Date found ``` Does that mean there is no way to group by a date? I even tried to add another level of casting to have it as a String: ``` cast(cast(time as Date) as String) ``` which returns the same error. I've read that I could probably use aggregateByKey on the RDD, but I don't understand how to use it for a few columns and convert that long to a YYYY-MM-DD String. How should I proceed?
I solved the issue by adding this function: ``` def convert( time:Long ) : String = { val sdf = new java.text.SimpleDateFormat("yyyy-MM-dd") return sdf.format(new java.util.Date(time)) } ``` And registering it into the sqlContext like this: ``` sqlContext.registerFunction("convert", convert _) ``` Then I could finally group by date: ``` select convert(time), count(*) from table group by convert(time) ```
I'm using Spark 1.4.0 and since 1.2.0 `DATE` appears to be present in the Spark SQL API ([SPARK-2562](https://issues.apache.org/jira/browse/SPARK-2562)). `DATE` should allow you to group by the time as `YYYY-MM-DD`. I also have a similar data structure, where my `created_on` is analogous to your `time` field. ``` root |-- id: long (nullable = true) |-- value1: long (nullable = true) |-- created_on: long (nullable = true) ``` I solved it using `FROM_UNIXTIME(created_on,'YYYY-MM-dd')` and works well: ``` val countQuery = "SELECT FROM_UNIXTIME(created_on,'YYYY-MM-dd') as `date_created`, COUNT(*) AS `count` FROM user GROUP BY FROM_UNIXTIME(created_on,'YYYY-MM-dd')" ``` From here on you can do the normal operations, execute the query into a dataframe and so on. `FROM_UNIXTIME` worked probably because I have Hive included in my Spark installation and it's a [Hive UDF](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF). However it will be included as part of the Spark SQL native syntax in future releases ([SPARK-8175](https://issues.apache.org/jira/browse/SPARK-8175)).
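Outside Spark, the long-to-YYYY-MM-DD step that both answers rely on is just epoch formatting. Here is a small, hedged Python sketch of that idea (it assumes the time field holds epoch milliseconds in UTC, which may differ from your data):

```python
from datetime import datetime, timezone

def to_day(epoch_ms: int) -> str:
    """Format an epoch-milliseconds timestamp as YYYY-MM-DD (UTC)."""
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d")

# Group a handful of (value1, time) records by day, mirroring the GROUP BY.
records = [("a", 1431000000000), ("b", 1431000000000), ("c", 1431090000000)]
counts = {}
for _, ms in records:
    day = to_day(ms)
    counts[day] = counts.get(day, 0) + 1
print(counts)   # -> {'2015-05-07': 2, '2015-05-08': 1}
```

This is the same transformation the registered UDF performs; the grouping key is simply the formatted string.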
Aggregation with Group By date in Spark SQL
[ "", "sql", "group-by", "apache-spark", "aggregation", "" ]
I am getting `0` on executing the following statement: ``` SELECT DATEDIFF(mi,'1970-01-01 00:00:00','1970-01-01 00:00:01') * CONVERT(BIGINT,60)*1000 as BidTicks ``` Whereas I get `6000` on executing this: ``` SELECT DATEDIFF(mi,'1970-01-01 00:00:00','1970-01-01 00:01:01') * CONVERT(BIGINT,60)*1000 as BidTicks ``` What are my options?
You need to use s or ss (seconds) instead of mi (minutes) for your expected output: ``` select DATEDIFF(s,'1970-01-01 00:00:00','1970-01-01 00:00:01') * CONVERT(BIGINT,60)*1000 as BidTicks ``` It will produce: ``` 60000 ```
To return the difference in seconds as the first argument in your query, you'll want to use seconds as the argument, which is an s: ``` select DATEDIFF(s,'1970-01-01 00:00:00','1970-01-01 00:01:01') ``` Basically, your query is asking for the number of minutes between the two. Your first query returns 0, the second returns 1. Therefore `0 * 60 * 1000 = 0` and `1 * 60 * 1000 = 60000` Try the following: ``` select DATEDIFF(s,'1970-01-01 00:00:00','1970-01-01 00:01:01') -- * convert(BIGINT,60) wasn't sure if this was necessary still * 1000 as BidTicks ```
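The fix in both answers boils down to computing the difference in whole seconds first. As a portable illustration, here is the same arithmetic against SQLite via Python, where strftime('%s', ...) plays the role of DATEDIFF(s, ...); the timestamps are the ones from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Difference in whole seconds between the two timestamps, then in milliseconds.
row = conn.execute("""
    SELECT (strftime('%s', '1970-01-01 00:00:01')
          - strftime('%s', '1970-01-01 00:00:00'))          AS diff_s,
           (strftime('%s', '1970-01-01 00:01:01')
          - strftime('%s', '1970-01-01 00:00:00')) * 1000   AS diff_ms
""").fetchone()
print(row)   # -> (1, 61000)
```

Once the difference is in seconds, multiplying by 1000 gives milliseconds directly; the `* 60` in the question was only needed because minutes were being counted.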
SQL Server DATEDIFF keeps ignoring seconds part of date
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "" ]
How can one update the same field's value using a different ID? ``` Customer OrderType Matt 1 Jake 2 ``` For all "Matt"s I want to set the OrderType to 2 from Jake. This doesn't work. ``` declare @X nvarchar (250) set @x = (select OrderType from table where Customer = 'Matt') update table set OrderType = @x where Customer = 'Jake' ```
You can include a FROM clause in an UPDATE query. ``` UPDATE C SET OrderType = J.OrderType FROM [Table] C CROSS JOIN [Table] J WHERE C.Customer = 'Matt' AND J.Customer = 'Jake' ``` If you use a subquery instead, that subquery must return precisely one row. Note also that your quoted line: ``` set @x = (select OrderType from table where Customer = 'Matt') ``` Can also be written as: ``` SELECT @x = OrderType FROM Table WHERE Customer = 'Matt' ``` People tend to overuse subqueries.
``` update table set OrderType = (select TOP 1 OrderType from table where Customer = 'Matt') where Customer = 'Jake' ```
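A correlated-subquery variant of the same update can be sketched quickly. This is only an illustration against SQLite via Python with a made-up table name, and it assumes Jake has a single, unambiguous OrderType:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (Customer TEXT, OrderType INTEGER);
INSERT INTO orders VALUES ('Matt', 1), ('Matt', 1), ('Jake', 2);
""")

# Copy Jake's OrderType onto every Matt row via a correlated subquery.
# Assumes exactly one distinct value exists for Jake.
conn.execute("""
    UPDATE orders
    SET OrderType = (SELECT o2.OrderType FROM orders o2 WHERE o2.Customer = 'Jake')
    WHERE Customer = 'Matt'
""")

rows = conn.execute(
    "SELECT Customer, OrderType FROM orders ORDER BY Customer"
).fetchall()
print(rows)   # -> [('Jake', 2), ('Matt', 2), ('Matt', 2)]
```

If multiple Jake rows with different values could exist, prefer the join form from the accepted answer, which makes the pairing explicit.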
SQL Update same field using a different ID
[ "", "sql", "sql-server", "t-sql", "" ]
For example, I have the following table (tbl_trans): ``` transaction_id transaction_dte integer timestamp without time zone ---------------+---------------------------------- 45 | 2014-07-17 00:00:00 56 | 2014-07-17 00:00:00 78 | 2014-04-17 00:00:00 ``` How can I find the total number of transactions in the 7th month from tbl_trans? The expected output is ``` tot_tran month --------+------- 2 | July ```
``` select count(transaction_id) tot_tran ,to_char(max(transaction_dte),'Month') month from tbl_trans where extract (month from transaction_dte)=7 ``` PostgreSQL Extract function explained [here](https://stackoverflow.com/a/29839100/3682599) Reference : [Date/Time Functions and Operators](http://www.postgresql.org/docs/9.4/static/functions-datetime.html)
``` select count(transaction_id),date_part('month',transaction_dte) from tbl_trans where date_part('month',transaction_dte)=7 ```
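As a quick check of the month filter, here is the same count reproduced against SQLite via Python, where strftime('%m', ...) stands in for PostgreSQL's extract/date_part(month, ...); the sample rows are the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_trans (transaction_id INTEGER, transaction_dte TEXT);
INSERT INTO tbl_trans VALUES
  (45, '2014-07-17 00:00:00'),
  (56, '2014-07-17 00:00:00'),
  (78, '2014-04-17 00:00:00');
""")

# Count transactions whose month number is July ('07').
row = conn.execute("""
    SELECT COUNT(transaction_id) AS tot_tran
    FROM tbl_trans
    WHERE strftime('%m', transaction_dte) = '07'
""").fetchone()
print(row[0])   # -> 2
```

The month-name column ("July") would come from a formatting function on top of this, as `to_char(..., 'Month')` does in the accepted answer.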
select values of a specific month from table contain timestamp column
[ "", "sql", "postgresql", "timestamp", "" ]
I'm trying to figure out a query that will efficiently allow me to check a specific number against a list of prefixes. My table structure looks like this Table 1 ``` MobileNumber 408726172 307612535 408642517 111824374 ``` Table 2 ``` Prefix 408 3076 ``` Now as you can see, some MobileNumbers start with 408 and 307, and the Prefix table contains 408 and 3076. Is there a way that I can check for valid prefixes within a SELECT statement without using a loop? My thoughts would be a SQL query that looks like ``` SELECT MobileNumber FROM Table1 WHERE THE FIRST PART OF THE NUMBER MATCHES ANY OF THE Prefixes in Table2 ``` I just can't work out how to do it.
select all numbers will valid/invalid attribute: ``` select MobileNumber, case when exists (select 1 from Table_2 where MobileNumber like Prefix + '%') then 1 else 0 end as IsValid from Table_1 ``` select only valid numbers: ``` select MobileNumber from Table_1 where exists (select 1 from Table_2 where MobileNumber like Prefix + '%') ```
Something like ``` SELECT Table1.MobileNumber FROM Table1 INNER JOIN Table2 ON Table1.MobileNumber LIKE Table2.Prefix +'%' ```
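Both answers hinge on prefix matching via LIKE with a concatenated wildcard. Here is a runnable sketch of that idea against SQLite via Python (`||` is SQLite's string concatenation, standing in for `+` in T-SQL), using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE numbers  (MobileNumber TEXT);
CREATE TABLE prefixes (Prefix TEXT);
INSERT INTO numbers  VALUES ('408726172'), ('307612535'), ('408642517'), ('111824374');
INSERT INTO prefixes VALUES ('408'), ('3076');
""")

# Keep numbers that start with any listed prefix.
rows = conn.execute("""
    SELECT MobileNumber
    FROM numbers n
    WHERE EXISTS (SELECT 1 FROM prefixes p
                  WHERE n.MobileNumber LIKE p.Prefix || '%')
    ORDER BY MobileNumber
""").fetchall()

matched = [r[0] for r in rows]
print(matched)   # -> ['307612535', '408642517', '408726172']
```

Note that a leading-wildcard-free LIKE such as this can still not use an ordinary index on MobileNumber when the pattern comes from another table's rows, so performance on large tables deserves testing.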
How to efficiently check records based off a list of prefixes
[ "", "sql", "sql-server", "" ]
I have a data table in Microsoft SQL Server 2012 ![Datatable rows](https://i.stack.imgur.com/SessO.jpg) Is it possible to select the data values with the newest key position in one request? Or maybe not in one? The result I want is 10,20,30,50,70
Try this: ``` select data from tableName where id in (select MAX(id) from tableName group by key) ```
You could do it using the following: ``` SELECT data FROM datatable WHERE id IN ( SELECT MAX(ID) latest_id FROM datatable GROUP BY key ) ``` This picks out the latest row for each key (going by incrementing ID). Then you simply only pick these rows using the `IN` which excludes the non-latest rows.
Looking for SQL request
[ "", "sql", "sql-server", "" ]
I am using MySQL ``` SELECT m.proposalId, m.title, n.stageNumber, n.committeeId, n.modifiedDate, o.msNumber , o.description,o.ics,o.edition FROM mystands_Proposal m INNER join mystands_ProjectLifecycle n on m.proposalId = n.proposalId INNER join mystands_Project o on m.proposalId = o.proposalId WHERE n.newState=0 AND n.committeeId=79827 AND (n.assignedTo=29913 OR n.actionBy=29913) AND n.proposalId LIKE '%sdas%' AND o.projectNumber LIKE '%sdass%' AND n.stageNumber=40.92 AND o.category=1 AND o.degreeofCorrespondence=1 AND o.msNumber LIKE '%sdas%' AND (n.modifiedDate <='2015-05-15' AND n.stageNumber=40.2) AND (n.modifiedDate <='2015-05-07' AND n.stageNumber=30.99) AND (n.modifiedDate <='2015-05-27' AND n.stageNumber=55.99) ``` I am doing an inner join on three tables for search functionality, and in Java, if the user enters values, I append them to the SQL query on the fly. The above code works fine for AND operations between fields. How do I perform an "OR" operation for the fields entered by the user and display the result? I have tried this: ``` SELECT m.proposalId, m.title, n.stageNumber, n.committeeId, n.modifiedDate, o.msNumber , o.description,o.ics,o.edition FROM mystands_Proposal m INNER join mystands_ProjectLifecycle n on m.proposalId = n.proposalId INNER join mystands_Project o on m.proposalId = o.proposalId WHERE n.newState=0 OR n.committeeId=80246 OR (n.assignedTo=79977 OR n.actionBy=79977) OR n.proposalId LIKE '%ads%' OR o.projectNumber LIKE '%sds%' OR n.stageNumber=30.99 OR o.category=1 OR o.degreeofCorrespondence=1 OR o.msNumber LIKE '%sadsa%' OR (n.modifiedDate <='2015-05-22' AND n.stageNumber=40.2) OR (n.modifiedDate <='2015-05-22' AND n.stageNumber=30.99) OR (n.modifiedDate <='2015-05-29' AND n.stageNumber=55.99) ``` Now what is happening is that it gives me the results of the inner joins: because newState=0 is true, the whole WHERE condition becomes true, and I get the results of the inner joins on the three tables; the result is not getting
filtered as desired. Can you please help me see where I am going wrong? Thanks
IF you are getting everything because newstate = 0 is always true, then you just need to put this. If I am misunderstanding then I apologise, but this seems to be the problem as I understand it: > "because newState=0 is true and whole where condition is getting true" ``` n.newState=0 AND (n.committeeId=80246 OR (n.assignedTo=79977 OR n.actionBy=79977) OR n.proposalId LIKE '%ads%' OR o.projectNumber LIKE '%sds%' OR n.stageNumber=30.99 OR o.category=1 OR o.degreeofCorrespondence=1 OR o.msNumber LIKE '%sadsa%' OR (n.modifiedDate <='2015-05-22' AND n.stageNumber=40.2) OR (n.modifiedDate <='2015-05-22' AND n.stageNumber=30.99) OR (n.modifiedDate <='2015-05-29' AND n.stageNumber=55.99)) ``` Somewhere in your code, keep a record of how many WHERE conditions you have added, and if it is larger than 0 add the `AND ( ...... )`. As I don't know what your code that generates this query looks like, I can't help you any further
``` WHERE n.newState=0 AND ( n.committeeId=80246 OR (n.assignedTo=79977 OR n.actionBy=79977) OR n.proposalId LIKE '%ads%' OR o.projectNumber LIKE '%sds%' OR n.stageNumber=30.99 OR o.category=1 OR o.degreeofCorrespondence=1 OR o.msNumber LIKE '%sadsa%' OR (n.modifiedDate <='2015-05-22' AND n.stageNumber=40.2) OR (n.modifiedDate <='2015-05-22' AND n.stageNumber=30.99) OR (n.modifiedDate <='2015-05-29' AND n.stageNumber=55.99) ) ```
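The underlying point in both answers is operator precedence: AND binds tighter than OR, so without parentheses the newState=0 test gets OR-ed away. A tiny illustration via SQLite in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 1 = true, 0 = false. AND binds tighter than OR, so the unparenthesized
# form means (a AND b) OR c, not a AND (b OR c).
without, with_parens = conn.execute(
    "SELECT 0 AND 1 OR 1,  0 AND (1 OR 1)"
).fetchone()
print(without, with_parens)   # -> 1 0
```

With `a = 0` (a false mandatory condition) the unparenthesized form still returns true whenever any OR-ed condition is true, which is exactly why every row survived the second query in the question.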
OR Operation in JOINS SQL(MYSQL)
[ "", "mysql", "sql", "join", "" ]
I have a field named `Order_ID` from table `Order`. when value of `Order_ID` starts with numeric value then it should exclude those records, otherwise it should include those records in the report. For example: if the `Order_ID` starts with a value **1ABC** it should exclude that record from report . If `Order_ID` has a value **A1BC** it should not exclude those records.
Gayatri, put the following in the report's WHERE condition: `WHERE NOT REGEXP_LIKE(Order_ID, '^[0-9]');` The report will then exclude entries that start with numbers and contain only values that start with alphabetic characters. Hope this helps.
<http://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions018.htm#SQLRF00501> for your particular case it's going to be something like ``` SELECT ... WHERE REGEXP_LIKE (Order_ID, '^[a-zA-Z]+.*$'); ```
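Application-side, the same leading-digit rule is a one-line regular expression. A hedged Python sketch using the standard re module (the pattern mirrors the `'^[0-9]'` used above):

```python
import re

def keep(order_id: str) -> bool:
    """True when the Order_ID does NOT start with a digit."""
    return re.match(r"^[0-9]", order_id) is None

ids = ["1ABC", "A1BC", "9XY", "ZZ12"]
kept = [i for i in ids if keep(i)]
print(kept)   # -> ['A1BC', 'ZZ12']
```

`re.match` anchors at the start of the string, so the explicit `^` is redundant here but kept for symmetry with the SQL pattern.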
How to use Regex_Like for this case?
[ "", "sql", "regex", "oracle-apex", "regexp-like", "" ]
I have the following structure ``` ---------- presences ---------- id started ended user_id --------- breaks --------- id presence_id started ended ``` I need to create an SQL query that returns me the following information `presence_id user_id presence_time breaks_time` where presence time is (presence.ended - presence-started) - sum of (break.ended - break.started) of all the breaks related to the presence Is there an efficient way to get this information with an sql query? If you know how to do it in eloquent it would be even better :D Thank you so much!
<http://sqlfiddle.com/#!9/650a2/3> ``` SELECT p.id presence_id, p.user_id, (p.ended-p.started) presence_time , SUM(b.ended-b.started) breaks_time FROM presences p LEFT JOIN breaks b ON p.id = b.presence_id GROUP BY p.id ``` **UPDATE** Same query grouped by user\_id: <http://sqlfiddle.com/#!9/1ce21/1> ``` SELECT sub_total.user_id, SUM(sub_total.presence_time) , SUM(sub_total.breaks_time) FROM ( SELECT p.id presence_id, p.user_id, (p.ended-p.started) presence_time , SUM(b.ended-b.started) breaks_time FROM presences p LEFT JOIN breaks b ON p.id = b.presence_id GROUP BY p.id) sub_total GROUP BY sub_total.user_id ```
If your `started` and `ended` are stored as `datetime` or `timestamp`, then you can easily do the calculation and find the data in minutes. The following example will be useful when someone taking multiple short breaks through out the working hours. Later in the application level you can convert the minutes to hour. Here is how you can do in mysql ``` mysql> select * from presence ; +------+---------------------+---------------------+---------+ | id | started | ended | user_id | +------+---------------------+---------------------+---------+ | 1 | 2015-01-01 09:00:00 | 2015-01-01 18:00:00 | 10 | | 2 | 2015-01-01 09:20:00 | 2015-01-01 18:04:00 | 11 | | 3 | 2015-01-01 09:10:00 | 2015-01-01 18:30:00 | 12 | | 4 | 2015-01-02 09:23:10 | 2015-01-02 18:10:00 | 10 | | 5 | 2015-01-02 09:50:00 | 2015-01-02 19:00:00 | 11 | | 6 | 2015-01-02 09:10:00 | 2015-01-02 18:36:30 | 12 | +------+---------------------+---------------------+---------+ 6 rows in set (0.00 sec) mysql> select * from breaks ; +------+-------------+---------------------+---------------------+ | id | presence_id | started | ended | +------+-------------+---------------------+---------------------+ | 1 | 1 | 2015-01-01 12:00:00 | 2015-01-01 12:20:30 | | 2 | 1 | 2015-01-01 15:46:30 | 2015-01-01 15:54:26 | | 3 | 2 | 2015-01-01 11:26:30 | 2015-01-01 11:34:23 | | 4 | 2 | 2015-01-01 14:06:45 | 2015-01-01 14:10:20 | | 5 | 2 | 2015-01-01 16:01:10 | 2015-01-01 16:14:57 | | 6 | 3 | 2015-01-01 12:11:20 | 2015-01-01 12:40:05 | | 7 | 3 | 2015-01-01 17:01:10 | 2015-01-01 17:24:21 | | 8 | 4 | 2015-01-02 12:50:00 | 2015-01-02 13:40:00 | | 9 | 5 | 2015-01-02 12:20:00 | 2015-01-02 13:05:30 | | 10 | 5 | 2015-01-02 17:03:00 | 2015-01-02 17:20:00 | | 11 | 6 | 2015-01-02 12:16:50 | 2015-01-02 12:58:30 | +------+-------------+---------------------+---------------------+ 11 rows in set (0.00 sec) select p.id as presence_id, p.user_id, timestampdiff(minute,started,ended) - b.break_time as presence_time, b.break_time from presence p left 
join ( select presence_id, coalesce(sum( timestampdiff(minute,started,ended ) ),0) as break_time from breaks group by presence_id )b on b.presence_id = p.id +-------------+---------+---------------+------------+ | presence_id | user_id | presence_time | break_time | +-------------+---------+---------------+------------+ | 1 | 10 | 513 | 27 | | 2 | 11 | 501 | 23 | | 3 | 12 | 509 | 51 | | 4 | 10 | 476 | 50 | | 5 | 11 | 488 | 62 | | 6 | 12 | 525 | 41 | +-------------+---------+---------------+------------+ 6 rows in set (0.00 sec) ```
get time difference from childs
[ "", "mysql", "sql", "date", "laravel", "eloquent", "" ]
Given a table like that below, how can I get the first `k` rows of it **without using `LIMIT` or `TOP`** ? ``` +--------+-------+-----+ | Person | Group | Age | +--------+-------+-----+ | Bob | 1 | 32 | | Jill | 1 | 34 | | Shawn | 1 | 42 | | Jake | 2 | 29 | | Paul | 2 | 36 | | Laura | 2 | 39 | +--------+-------+-----+ ``` Desired result for `k=3`: ``` +--------+-------+-----+ | Bob | 1 | 32 | | Jill | 1 | 34 | | Shawn | 1 | 42 | +--------+-------+-----+ ``` Desired result for `k=4`: ``` +--------+-------+-----+ | Bob | 1 | 32 | | Jill | 1 | 34 | | Shawn | 1 | 42 | | Jake | 2 | 29 | +--------+-------+-----+ ``` I need to replicate MySQL's `LIMIT` function without using it.
OK. If variables weren't allowed, you could still do it this way... ``` SELECT some, stuff FROM an_instance x JOIN another_instance y ON (y.something < x.something) OR (y.something = x.something AND y.some_other_thing <= x.some_other_thing) GROUP BY x.something,x.some_other_thing HAVING COUNT(*) <= whatever; ``` ...but it would scale appallingly badly on larger data sets.
You can do this by numbering the rows in a derived table and then selecting the rows with row numbers up until your threshold. For MySQL the query could look like this: ``` SELECT * FROM (SELECT @row_number:=@row_number + 1 AS row_number, person, `group`, age FROM your_table, (SELECT @row_number:=0) AS r ORDER BY `group` , age) x WHERE row_number <= 3; ``` [Sample SQL Fiddle](http://www.sqlfiddle.com/#!9/87363/3)
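The self-join idea from the first answer can be made concrete. Below is an illustrative sketch against SQLite via Python using the question's sample data; the `group` column is renamed `grp` to dodge the reserved word, and the sort key is assumed unique within each group:

```python
import sqlite3

K = 3
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (Person TEXT, grp INTEGER, Age INTEGER);
INSERT INTO people VALUES
  ('Bob', 1, 32), ('Jill', 1, 34), ('Shawn', 1, 42),
  ('Jake', 2, 29), ('Paul', 2, 36), ('Laura', 2, 39);
""")

# A row is kept when at most K rows (itself included) sort at or before it
# under the (grp, Age) ordering -- no LIMIT involved.
rows = conn.execute("""
    SELECT x.Person, x.grp, x.Age
    FROM people x
    JOIN people y
      ON (y.grp < x.grp)
      OR (y.grp = x.grp AND y.Age <= x.Age)
    GROUP BY x.Person, x.grp, x.Age
    HAVING COUNT(*) <= ?
    ORDER BY x.grp, x.Age
""", (K,)).fetchall()
print(rows)   # -> [('Bob', 1, 32), ('Jill', 1, 34), ('Shawn', 1, 42)]
```

As the answer warns, this triangular join is quadratic in the row count, so it is a demonstration rather than a production technique.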
Get first k records / rows of a table using SQL
[ "", "mysql", "sql", "" ]
I have a function that has to convert currencies based on a conversion rate retrieved from a table with historic values. It takes four parameters (@to_curr, @from_curr, @trans_date, @gl_cmp) and returns the exchange rate. The table structure is * `currencypk int primary key` * `gl_crcnv_bdate datetime` * `gl_crcnv_edate datetime` * `gl_crcnv_rate float` * `gl_cmp_key char(30)` (we have multiple companies; this identifies each one) * `gl_CRNCY_TO char(30)` * `gl_CRNCY_FROM char(30)` Below is my code and when I insert this in a function it causes a MASSIVE increase in execution time. I have a pretty good idea where the bottleneck is in my function but I am stuck on how to rewrite it. The reason why I have an ugly where clause with a (and) or (and) setup is because not every relationship is reversed. For example there are records with a conversion rate from USD to GBP but there doesn't exist one at all with GBP to USD. That fact accounts for the case statement in the `SELECT`, to get the actual conversion that doesn't exist in the table. The `isnull` is in case it returns nothing, so it uses 1. Any help is greatly appreciated. ``` SELECT isnull((SELECT convert(decimal(5,4), case when @to_curr <> gl_CRNCY_TO then (1/gl_crcnv_rate) else gl_crcnv_rate end) FROM [TABLE] WHERE gl_cmp_key = @gl_cmp AND ((gl_CRNCY_TO = @from_curr AND gl_CRNCY_FROM = @to_curr) OR (gl_CRNCY_TO = @to_curr AND gl_CRNCY_FROM = @from_curr)) AND @trans_date BETWEEN gl_crcnv_bdate AND gl_crcnv_edate), 1) ```
You could try: ``` SELECT isnull((SELECT convert(decimal(5,4), gl_crcnv_rate) FROM (select 1/gl_crcnv_rate as gl_crcnv_rate from [TABLE] WHERE gl_cmp_key = @gl_cmp AND gl_CRNCY_TO = @from_curr AND gl_CRNCY_FROM = @to_curr AND @trans_date BETWEEN gl_crcnv_bdate AND gl_crcnv_edate UNION select gl_crcnv_rate from [TABLE] WHERE gl_cmp_key = @gl_cmp AND gl_CRNCY_TO = @to_curr AND gl_CRNCY_FROM = @from_curr AND @trans_date BETWEEN gl_crcnv_bdate AND gl_crcnv_edate ) sq), 1) ``` This assumes that if both possible conversion records exist then the conversion factor will evaluate to the same value, and that there is an index that the query can use.
This is all I can think of based on the provided information. Create a computed column so that we don't have to calculate it for every query. ``` ALTER TABLE [TABLE] ADD gl_crcnv_rate_invert AS (1/gl_crcnv_rate) PERSISTED; ``` Create an index: ``` CREATE NONCLUSTERED INDEX ix_cmpkey_crncyto_crncyfrom_bdate_edate ON [TABLE] (gl_CRNCY_TO, gl_CRNCY_FROM, gl_crcnv_bdate, gl_crcnv_edate) ```
Optimize SQL query with and or and structure in WHERE clause
[ "", "sql", "sql-server", "query-optimization", "where-clause", "sql-function", "" ]
Consider the following table ``` create table temp (id int, attribute varchar(25), value varchar(25)) ``` And values into the table ``` insert into temp select 100, 'First', 234 insert into temp select 100, 'Second', 512 insert into temp select 100, 'Third', 320 insert into temp select 101, 'Second', 512 insert into temp select 101, 'Third', 320 ``` I have to deduce a column EndResult which is dependent on 'attribute' column. For each id, I have to parse through attribute values in the order First, Second, Third and choose the very 1st value which is available i.e. for id = 100, EndResult should be 234 for the 1st three records. Expected result: ``` | id | EndResult | |-----|-----------| | 100 | 234 | | 100 | 234 | | 100 | 234 | | 101 | 512 | | 101 | 512 | ``` I tried with the following query in vain: ``` select id, case when isnull(attribute,'') = 'First' then value when isnull(attribute,'') = 'Second' then value when isnull(attribute,'') = 'Third' then value else '' end as EndResult from temp ``` Result ``` | id | EndResult | |-----|-----------| | 100 | 234 | | 100 | 512 | | 100 | 320 | | 101 | 512 | | 101 | 320 | ``` Please suggest if there's a way to get the expected result.
Here is how you can achieve this using `ROW_NUMBER()`: ``` WITH t AS ( SELECT * ,ROW_NUMBER() OVER ( PARTITION BY id ORDER BY (CASE attribute WHEN 'First' THEN 1 WHEN 'Second' THEN 2 WHEN 'Third' THEN 3 ELSE 0 END) ) rownum FROM TEMP ) SELECT id ,( SELECT value FROM t t1 WHERE t1.id = t.id AND rownum = 1 ) end_result FROM t; ``` For testing purpose, please see SQL Fiddle demo here: [SQL Fiddle Example](http://sqlfiddle.com/#!6/88ccf/10/0)
You can use analytical function like `dense_rank` to generate a numbering, and then select those rows that have the number '1': ``` select x.id, x.attribute, x.value from (select t.id, t.attribute, t.value, dense_rank() over (partition by t.id order by t.attribute) as priority from Temp t) x where x.priority = 1 ``` In your case, you can conveniently order by `t.attribute`, since their alphabetical order happens to be the right order. In other situations you could convert the attribute to a number using a case, like: ``` order by case t.attribute when 'One' then 1 when 'Two' then 2 when 'Three' then 3 end ```
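The pick-the-highest-priority-attribute logic can also be written as a correlated, priority-ordered subquery. This is only an illustrative sketch against SQLite via Python over the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp (id INTEGER, attribute TEXT, value TEXT);
INSERT INTO temp VALUES
  (100, 'First', '234'), (100, 'Second', '512'), (100, 'Third', '320'),
  (101, 'Second', '512'), (101, 'Third', '320');
""")

# For every row, look up the value of the best-ranked attribute for that id
# (First beats Second beats Third), matching the expected output.
rows = conn.execute("""
    SELECT t.id,
           (SELECT t2.value
            FROM temp t2
            WHERE t2.id = t.id
            ORDER BY CASE t2.attribute
                       WHEN 'First'  THEN 1
                       WHEN 'Second' THEN 2
                       WHEN 'Third'  THEN 3
                       ELSE 4
                     END
            LIMIT 1) AS EndResult
    FROM temp t
    ORDER BY t.id
""").fetchall()
print(rows)
```

Every row of an id carries the same EndResult, exactly as in the expected table; on SQL Server the window-function answers above avoid re-running the subquery per row.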
Using Case in a select statement
[ "", "sql", "sql-server", "" ]
I have this table: ``` id_category | id_service | amount | date ``` This table has more than one row with the same id_category and id_service. How can I get, for each id_category and id_service, only the row with the max date? Example data: ``` 1 | 1 | 0.1 | 2015-05-05 1 | 1 | 0.12 | 2015-05-06 1 | 2 | 0.2 | 2015-05-04 1 | 2 | 0.25 | 2015-05-05 1 | 2 | 0.30 | 2015-05-06 2 | 1 | 0.15 | 2015-05-05 ``` I want to get these results: ``` 1 | 1 | 0.12 | 2015-05-06 1 | 2 | 0.30 | 2015-05-06 2 | 1 | 0.15 | 2015-05-05 ``` Thanks!
<http://sqlfiddle.com/#!9/ad96b/3> ``` SELECT t1.* FROM t1 LEFT JOIN t1 t2 ON t1.id_category = t2.id_category AND t1.id_service = t2.id_service AND t1.`date` < t2.`date` WHERE t2.date IS NULL ```
Maybe your query could look like this: ``` select a.* from table a where a.date = (select max(b.date) from table b where a.id=b.id group by b.id_service, b.id_category) group by a.id_category, a.id_service; ```
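The greatest-date-per-group pattern behind both answers can be sketched with a grouped subquery join. This is only an illustration against SQLite via Python, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rates (id_category INTEGER, id_service INTEGER, amount REAL, date TEXT);
INSERT INTO rates VALUES
  (1, 1, 0.1,  '2015-05-05'), (1, 1, 0.12, '2015-05-06'),
  (1, 2, 0.2,  '2015-05-04'), (1, 2, 0.25, '2015-05-05'), (1, 2, 0.30, '2015-05-06'),
  (2, 1, 0.15, '2015-05-05');
""")

# Keep the row carrying the latest date of each (category, service) pair.
rows = conn.execute("""
    SELECT r.id_category, r.id_service, r.amount, r.date
    FROM rates r
    JOIN (SELECT id_category, id_service, MAX(date) AS max_date
          FROM rates
          GROUP BY id_category, id_service) m
      ON r.id_category = m.id_category
     AND r.id_service  = m.id_service
     AND r.date        = m.max_date
    ORDER BY r.id_category, r.id_service
""").fetchall()
print(rows)
```

This assumes at most one row per pair on the max date; ties would return multiple rows, which the anti-join in the accepted answer also does.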
How can I get only the row with the max date for each id_category and id_service
[ "", "mysql", "sql", "group-by", "max", "" ]
I have the following dynamic SQL query that I am using to filter results for a data range. If my "`createdFrom`" and "`createdTo`" date are not null, I add a where clause to my select statement. I have 2 questions: 1. How would you go about doing this? Is my logic correct? 2. I am currently getting an error message when I add the below code, why would I be getting this?: > Conversion failed when converting character string to smalldatetime > data type. ``` Declare @createdFromDate SMALLDATETIME = NULL , @createdToDate SMALLDATETIME = NULL IF @createdFromDate IS NOT NULL BEGIN SELECT @sWhere = @sWhere + 'AND g.CreatedOn > '+@createdFromDate END IF @createdToDate IS NOT NULL BEGIN SELECT @sWhere = @sWhere + ' AND g.CreatedOn <'+@createdToDate END ```
What's causing your issue is that you're trying to concatenate a `smalldatetime` with string data-types like `VARCHAR`. You can just cast your dates as `VARCHAR` to get around that. `CONCAT()` would be even better, though. It is your best friend when concatenating values of different data-types. I HIGHLY recommend using it; sadly it is only supported by SQL Server 2012 and above, and not many know it even exists. Now for your specific problem, I don't think dynamic SQL is necessary. You should generally try to avoid using dynamic SQL since it's hard to debug and make changes later. Something like this would work just fine for you. ``` Declare @createdFromDate SMALLDATETIME = NULL, @createdToDate SMALLDATETIME = NULL SELECT * FROM yourTable WHERE ( date_column > @createdFromDate OR @createdFromDate IS NULL ) AND ( date_column < @createdToDate OR @createdToDate IS NULL ) ```
For your question > I am currently getting an error message when I add the below code, why would I be getting this? The reason you are getting this error is when the query is executed your condition becomes ``` AND g.CreatedOn > 2015-05-07 ``` Which is invalid, but instead your need ``` AND g.CreatedOn > '2015-05-07' ``` Hence your SQL should be ``` SELECT @sWhere = @sWhere + 'AND g.CreatedOn > '''+@createdFromDate + '''' ``` For your question > How would you go about doing this? Is my logic correct? You should use [`sp_executesql`](https://msdn.microsoft.com/en-IN/library/ms188001.aspx) and pass the variable in your dynamic SQL like this. ``` IF @createdFromDate IS NOT NULL BEGIN SELECT @sWhere = @sWhere + 'AND g.CreatedOn > @createdFromDate' END IF @createdToDate IS NOT NULL BEGIN SELECT @sWhere = @sWhere + ' AND g.CreatedOn < @createdToDate' END ``` Instead of ``` EXEC(@SQL) ``` You would use ``` EXEC sp_executeSQL @SQL,N'@createdFromDate smalldatetime,@createdToDate smalldatetime',@createdFromDate,@createdToDate ``` Where `@SQL` is constructed from your `@Where` ***Note**: You don't need dynamic SQL if this is the only reason for using it.*
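The `(col > @p OR @p IS NULL)` pattern from the first answer carries over unchanged to parameterized queries in any driver. An illustrative sketch against SQLite via Python (each parameter appears twice in the SQL, so it is bound twice; dates are kept as ISO strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE g (id INTEGER, CreatedOn TEXT);
INSERT INTO g VALUES (1, '2015-05-01'), (2, '2015-05-10'), (3, '2015-05-20');
""")

def fetch(created_from=None, created_to=None):
    # A NULL (None) bound disables that side of the range filter.
    return [r[0] for r in conn.execute("""
        SELECT id FROM g
        WHERE (CreatedOn > ? OR ? IS NULL)
          AND (CreatedOn < ? OR ? IS NULL)
        ORDER BY id
    """, (created_from, created_from, created_to, created_to))]

print(fetch())                                                     # -> [1, 2, 3]
print(fetch(created_from='2015-05-05'))                            # -> [2, 3]
print(fetch(created_from='2015-05-05', created_to='2015-05-15'))   # -> [2]
```

Passing the bounds as parameters also removes the quoting problem that caused the original conversion error.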
TSQL - See if date falls between date
[ "", "sql", "t-sql", "datetime", "" ]
I am struggling with converting a sql query in rails. **Background** I have 3 tables named bus,stop and schedule. Bus table has fields id, and name. Stop table has fields id, and name. Schedule table has fields id,bus\_id,stop\_id,arrival,and bustag. This is the query i have in sql ``` select A.bus_id as busid, A.stop_id as source, A.arrival as atime, B.arrival as dtime from (SELECT * from schedules as S where S.stop_id = #{startStopId}) A inner join (SELECT * from schedules as S where S.stop_id = #{endStopId}) B on A.bustag = B.bustag where A.arrival < B.arrival and A.arrival > CURTIME(); ``` In rails I have done this so far ``` @possible_buses = Schedule.where(stop_id: [startStopId,endStopId]) ``` Now I want to do same as mysql query that is I want to further process this list of possible buses get list of buses at startStop who's bustag equals bustag of the buses at endStop where arrival time is less at startStop than arrival time at endStop. I would highly appreciate if someone can help me out. I am not good in rails query and this would help me a long way. **Sample Tables** ``` BusTable StopTable ScheduleTable id Name id Name id bus_id stop_id arrival bustag 1 ttc(inbound) 1 mall 1 1 1 3:00 1 2 ttc(outbound) 2 home 2 1 2 3:15 1 3 downtown 3 1 3 3:30 1 4 uptown 4 1 4 3:45 1 5 1 1 3:15 2 6 1 2 3:30 2 7 1 3 3:45 2 8 1 4 4:00 2 9 2 4 2:55 3 10 2 3 3:10 3 11 2 2 3:25 3 12 2 1 3:35 3 13 2 4 3:10 4 14 2 3 3:20 4 15 2 2 3:30 4 16 2 1 3:45 4 ``` **Sample Query and Expected Output** For instance if user wants to go from the Mall at 2:30 to uptown Then the following ids should be returned from the schedule : -> 1,5. Since Both the Ids will get you the bus\_id that take can you from the mall to uptown. I Hope this is more clear. please feel free to ask for more information. Thanks.
Other awnsers have provided the right SQL. So I only show how to execute raw SQL in Rails. Rails supports not only active record, but also it allows executing raw SQL. It returns an array, each element in the array is a hash with all your selected columns as key and data as value. The returned array is just like which active record way returns. Here is a sample: ``` # first establish connection, if not explicitly specify establish connection, it should use default configuration, which is config/database.yml ActiveRecord::Base.establish_connection( :adapter => "mysql2", :host => "localhost", :port => 3306, :username => "myuser", :password => "mypass", :database => "somedatabase", :pool => 1, :timeout => 5000 ) sql = "select name, age from users where age < 30" raw = ActiveRecord::Base.connection.execute(sql) # raw is like [{"name" => "Tom", "age" => 28}, {"name" => "Bob", "age" => 26}] raw.each(:as => :hash) do |row| puts row.inspect # row is hash, something like {"name" => "Tom", "age" => 28} end ``` You can run `rails runner 'puts ActiveRecord::Base.configurations.inspect'` to check your default DB connection info.
Sorry, this should be a comment, but since I already did the job for you <http://sqlfiddle.com/#!9/45f47/9> I expect that you will explain what is wrong with your query or result. I post this answer ahead of a future real response. So as you can see in my fiddle, your query (with a small change to select the `A.id` field) already returns the `1,5` values that you expected. So what is wrong? What other result are you looking for? ``` select A.id, A.bus_id as busid, A.stop_id as source, A.arrival as atime, B.arrival as dtime from (SELECT * from ScheduleTable as S where S.stop_id = 1) A inner join (SELECT * from ScheduleTable as S where S.stop_id = 4) B on A.bustag = B.bustag where A.arrival < B.arrival and A.arrival > '2:30'; ``` **EDIT** <https://stackoverflow.com/a/15408419/4421474> here is a solution for how to run a custom query with Rails
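The fiddle's join can also be checked locally. Here is an illustrative sketch against SQLite via Python with the question's sample schedule (arrival kept as text, which compares correctly here only because every sample hour is a single digit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedules (id INTEGER, bus_id INTEGER, stop_id INTEGER, arrival TEXT, bustag INTEGER)")
conn.executemany("INSERT INTO schedules VALUES (?,?,?,?,?)", [
    (1, 1, 1, '3:00', 1), (2, 1, 2, '3:15', 1), (3, 1, 3, '3:30', 1), (4, 1, 4, '3:45', 1),
    (5, 1, 1, '3:15', 2), (6, 1, 2, '3:30', 2), (7, 1, 3, '3:45', 2), (8, 1, 4, '4:00', 2),
    (9, 2, 4, '2:55', 3), (10, 2, 3, '3:10', 3), (11, 2, 2, '3:25', 3), (12, 2, 1, '3:35', 3),
    (13, 2, 4, '3:10', 4), (14, 2, 3, '3:20', 4), (15, 2, 2, '3:30', 4), (16, 2, 1, '3:45', 4),
])

# Departures from the Mall (stop 1) after 2:30 on runs that later reach uptown (stop 4).
rows = conn.execute("""
    SELECT A.id, A.bus_id, A.arrival AS atime, B.arrival AS dtime
    FROM schedules A
    JOIN schedules B ON A.bustag = B.bustag
    WHERE A.stop_id = 1 AND B.stop_id = 4
      AND A.arrival < B.arrival AND A.arrival > '2:30'
    ORDER BY A.id
""").fetchall()

found = [r[0] for r in rows]
print(found)   # -> [1, 5]
```

In a real schema the arrival column should be a proper time type so that comparisons work past 9:59.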
Convert a complex query in rails given a sql query
[ "", "mysql", "sql", "ruby-on-rails", "ruby", "" ]
I have a table similar to this: ``` Index Name Type -------------------------------- 1 'Apple' 'Fruit' 2 'Carrot' 'Vegetable' 3 'Orange' 'Fruit' 3 'Mango' 'Fruit' 4 'Potato' 'Vegetable' ``` and would like to change it to this: ``` Index Name Type -------------------------------- 1 'Apple' 'Fruit 1' 2 'Carrot' 'Vegetable 1' 3 'Orange' 'Fruit 2' 3 'Mango' 'Fruit 3' 4 'Potato' 'Vegetable 2' ``` Any chance to do this in a smart update query (*= without cursors*)?
You can run `update` with `join` to get [`row_number()`](https://msdn.microsoft.com/en-us/library/ms186734.aspx) within `[type]` group for each row and then concatenate this values with `[type]` using `[index]` as glue column: ``` update t1 set t1.[type] = t1.[type] + ' ' + cast(t2.[rn] as varchar(3)) from [tbl] t1 join ( select [index] , row_number() over (partition by [type] order by [index]) as [rn] from [tbl] ) t2 on t1.[index] = t2.[index] ``` [**SQLFiddle**](http://sqlfiddle.com/#!6/6fb1b/2)
Suppose that your table has a Primary Key called ID then you can run the following: ``` update fruits set Type = newType from ( select f.id ,f.[index] ,f.Name ,f.[Type] ,Type + ' '+ cast((select COUNT(*) from fruits where Type = f.Type and Fruits.id <= f.id) as varchar(10)) as newType from fruits f ) t where t.id = fruits.id ```
SQL update column depending on other values in same column
[ "", "sql", "sql-server", "sql-server-2014", "" ]
Can anyone help me format my dollars data into millions of dollars for SQL Server? ``` 3,000,000 --> $3M ``` I have this but it's not working ``` SELECT '$' + SUM(Sales_Information.Sales_in_Dollars / 1000000) AS [Sales in Millions] ``` Doing this gives me #Error ``` format(SUM(Sales_Information.Sales_in_Dollars / 1000000) ```
The `FORMAT` function has a way of trimming the thousands: each trailing comma in the format string divides the displayed value by 1000, e.g. ``` select format(3000000,'$0,,,.000B') select format(3000000,'$0,,M') select format(3000000,'$0,K') ``` (note that I had to use decimals to show 3 million in Billions) Output: > $0.003B > $3M > $3000K
Try this.... ``` SELECT '$' + CONVERT(VARCHAR(100),CAST(3000000 AS MONEY),1) RESULT: $3,000,000.00 ```
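If the formatting can live in the application layer instead, the millions label is a one-liner. A hedged Python sketch, offered as an alternative to the T-SQL above (the `:g` format drops trailing zeros):

```python
def to_millions(sales: float) -> str:
    """Render a dollar amount as a compact $<n>M label."""
    return f"${sales / 1_000_000:g}M"

print(to_millions(3_000_000))   # -> $3M
print(to_millions(4_500_000))   # -> $4.5M
```

Keeping presentation out of the query also leaves the column numeric for sorting and aggregation.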
Format Currency into millions of dollars SQL Server
[ "", "sql", "sql-server", "" ]
I've a MySQL table like this: `id | reference_id | name` And I wish to get rows `WHERE` reference_id is `IN` a certain list of values: `SELECT * FROM TABLE WHERE reference_id IN(2,3);` The problem is that the table will contain data like: ``` id | reference_id | name 1 | 1 | nameA 2 | 2 | nameB 3 | 2 | nameC 4 | 3 | nameD ``` And by running the query I'll get a result set like: ``` id | reference_id | name 2 | 2 | nameB 3 | 2 | nameC 4 | 3 | nameD ``` Is there a way I can tell MySQL to only return ONE value per IN argument? Meaning, I would like a result set like: ``` id | reference_id | name 3 | 2 | nameC 4 | 3 | nameD ```
<http://sqlfiddle.com/#!9/6412b/2> ``` SELECT `table1`.* FROM `table1` LEFT JOIN `table1` t ON `table1`.reference_id = t.reference_id AND `table1`.id<t.id WHERE `table1`.reference_id IN(2,3) AND t.id IS NULL GROUP BY `table1`.reference_id ```
``` select * from your_table where id in ( select max(id) from your_table where reference_id in (2,3) group by reference_id ) ```
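This `MAX(id)`-per-group pattern is portable; here is a runnable sketch against an in-memory SQLite database, using the sample data from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, reference_id INTEGER, name TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?)',
                 [(1, 1, 'nameA'), (2, 2, 'nameB'),
                  (3, 2, 'nameC'), (4, 3, 'nameD')])

# Keep only the row with the highest id for each requested reference_id.
rows = conn.execute('''
    SELECT id, reference_id, name FROM t
    WHERE id IN (SELECT MAX(id) FROM t
                 WHERE reference_id IN (2, 3)
                 GROUP BY reference_id)
    ORDER BY id
''').fetchall()
print(rows)  # [(3, 2, 'nameC'), (4, 3, 'nameD')]
```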
MySQL: WHERE IN() limited to one row per argument
[ "mysql", "sql", "limit", "where-in" ]
I have a JDBC database (DB2 specifically but am looking for something DB agnostic, at a minimum DB2 and Oracle) that has a table that, every 10 minutes, gets records inserted with statistics about APIs that are run by the application in question. It looks something like: ``` StatKey, StartDate, EndDate, APIName, StatName, StatValue 201505071498224437562706 2015-05-07 14:12:44.0 2015-05-07 14:22:44.0 API5 Invocations 34 201505071498161437466684 2015-05-07 14:06:14.0 2015-05-07 14:16:14.0 API4 Invocations 79 201505071498060937466556 2015-05-07 13:56:08.0 2015-05-07 14:06:08.0 API4 Average 26,264.37 201505071497263437627286 2015-05-07 14:16:33.0 2015-05-07 14:26:34.0 API2 Invocations 24 201505071497262137620812 2015-05-07 14:16:19.0 2015-05-07 14:26:20.0 API2 Invocations 24 201505071497024537466378 2015-05-07 13:52:43.0 2015-05-07 14:02:44.0 API1 Average 6,830,050 201505071497023337466368 2015-05-07 13:52:31.0 2015-05-07 14:02:32.0 API3 Average 31,523 201505071496023337466361 2015-05-07 13:52:31.0 2015-05-07 14:02:32.0 API2 Invocations 1 201505071494263837628892 2015-05-07 14:16:36.0 2015-05-07 14:26:37.0 API5 Invocations 68 201505071493124437466656 2015-05-07 14:02:44.0 2015-05-07 14:12:44.0 API1 Invocations 2 201505071492263037625304 2015-05-07 14:16:29.0 2015-05-07 14:26:30.0 API3 Average 179,223.29 ``` Every 10 minutes, any API executed during that time will have an entry similar to the above. However, multiple JVMs will write to the same database, so the start and end times are not simply every 10 minutes and there could be more than 6 entries every hour. What I'm trying to do is create a SQL that will group, per hour, all invocations of all APIs for each hour of run time. For example: ``` Date&Hour, API, Invocations 2015-05-07 12:00, API1, 100 2015-05-07 12:00, API2, 150 2015-05-07 13:00, API2, 200 etc... ``` I've tried doing a GROUP BY based on a SUBSTR of the primary key (which is always the timestamp plus some random numbers - but between the hours and minutes are 2 random digits) at the hour mark, but I'm not sure how to sum all StatName=Invocations per hour. Could someone please provide some ideas as to how I might accomplish this?
Another possible solution: ``` select to_char(StartDate,'rrrr-mm-dd HH24:')||'00' as DateHour, APIName as API, sum(StatValue) as Invocations from STATISTICS where StatName = 'Invocations' group by to_char(StartDate,'rrrr-mm-dd HH24:')||'00', APIName ``` There are different ways to do this. Good luck!
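The same truncate-to-the-hour grouping can be sketched portably; in this runnable sketch SQLite's `strftime` stands in for Oracle's `TO_CHAR`, and the table is a trimmed version of the question's data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE stats (EndDate TEXT, APIName TEXT, StatName TEXT, StatValue REAL)')
conn.executemany('INSERT INTO stats VALUES (?, ?, ?, ?)', [
    ('2015-05-07 14:22:44', 'API5', 'Invocations', 34),
    ('2015-05-07 14:16:14', 'API4', 'Invocations', 79),
    ('2015-05-07 14:26:34', 'API2', 'Invocations', 24),
    ('2015-05-07 14:06:08', 'API4', 'Average', 26264.37),
])

# Truncate the timestamp to the hour, then sum invocation counts per API.
rows = conn.execute('''
    SELECT strftime('%Y-%m-%d %H:00', EndDate) AS hour, APIName, SUM(StatValue)
    FROM stats
    WHERE StatName = 'Invocations'
    GROUP BY hour, APIName
    ORDER BY hour, APIName
''').fetchall()
print(rows)
# [('2015-05-07 14:00', 'API2', 24.0), ('2015-05-07 14:00', 'API4', 79.0),
#  ('2015-05-07 14:00', 'API5', 34.0)]
```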
[SQL Fiddle](http://sqlfiddle.com/#!4/3ad6f/2) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Data AS SELECT '201505071498224437562706' AS StatKey, TO_DATE( '2015-05-07 14:12:44', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:22:44', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API5' AS APIName, 'Invocations' AS StatName, 34 AS StatValue FROM DUAL UNION ALL SELECT '201505071498161437466684' AS StatKey, TO_DATE( '2015-05-07 14:06:14', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:16:14', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API4' AS APIName, 'Invocations' AS StatName, 79 AS StatValue FROM DUAL UNION ALL SELECT '201505071498060937466556' AS StatKey, TO_DATE( '2015-05-07 13:56:08', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:06:08', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API4' AS APIName, 'Average' AS StatName, 26264.37 AS StatValue FROM DUAL UNION ALL SELECT '201505071497263437627286' AS StatKey, TO_DATE( '2015-05-07 14:16:33', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:26:34', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API2' AS APIName, 'Invocations' AS StatName, 24 AS StatValue FROM DUAL UNION ALL SELECT '201505071497262137620812' AS StatKey, TO_DATE( '2015-05-07 14:16:19', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:26:20', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API2' AS APIName, 'Invocations' AS StatName, 24 AS StatValue FROM DUAL UNION ALL SELECT '201505071497024537466378' AS StatKey, TO_DATE( '2015-05-07 13:52:43', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:02:44', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API1' AS APIName, 'Average' AS StatName, 6830050 AS StatValue FROM DUAL UNION ALL SELECT '201505071497023337466368' AS StatKey, TO_DATE( '2015-05-07 13:52:31', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:02:32', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API3' AS APIName, 'Average' AS StatName, 31523 AS StatValue FROM DUAL UNION ALL 
SELECT '201505071496023337466361' AS StatKey, TO_DATE( '2015-05-07 13:52:31', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:02:32', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API2' AS APIName, 'Invocations' AS StatName, 1 AS StatValue FROM DUAL UNION ALL SELECT '201505071494263837628892' AS StatKey, TO_DATE( '2015-05-07 14:16:36', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:26:37', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API5' AS APIName, 'Invocations' AS StatName, 68 AS StatValue FROM DUAL UNION ALL SELECT '201505071493124437466656' AS StatKey, TO_DATE( '2015-05-07 14:02:44', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:12:44', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API1' AS APIName, 'Invocations' AS StatName, 2 AS StatValue FROM DUAL UNION ALL SELECT '201505071492263037625304' AS StatKey, TO_DATE( '2015-05-07 14:16:29', 'YYYY-MM-DD HH24:MI:SS' ) AS StartDate, TO_DATE( '2015-05-07 14:26:30', 'YYYY-MM-DD HH24:MI:SS' ) AS EndDate, 'API3' AS APIName, 'Average' AS StatName, 179223.29 AS StatValue FROM DUAL; ``` **Query 1**: ``` SELECT TRUNC( EndDate, 'HH' ) AS "Date&Hour", APIName, SUM( StatValue ) AS Invocations FROM Data WHERE StatName = 'Invocations' GROUP BY TRUNC( EndDate, 'HH' ), APIName ``` **[Results](http://sqlfiddle.com/#!4/3ad6f/2/0)**: ``` | Date&Hour | APINAME | INVOCATIONS | |-----------------------|---------|-------------| | May, 07 2015 14:00:00 | API2 | 49 | | May, 07 2015 14:00:00 | API5 | 102 | | May, 07 2015 14:00:00 | API1 | 2 | | May, 07 2015 14:00:00 | API4 | 79 | ```
SQL: Group by date and summing values in a column
[ "sql", "oracle", "jdbc", "db2" ]
**Note: I can't change the datatype of the column.** I want to store a character value in a table column that has the NUMBER datatype. The workaround I found is to convert the character values to ASCII, and when retrieving from the database, convert them back to characters. I used a couple of functions, ASCII and ASCIISTR, but their limitation is that they convert only the first character of the string. So I used the DUMP function: ``` select dump('Puneet_kushwah1') from dual; ``` Result: Typ=96 Len=15: 80,117,110,101,101,116,95,107,117,115,104,119,97,104,49 This function gives the ASCII value of every character. Then I executed the query below: ``` select replace(substr((DUMP('Puneet_kushwah1')),(instr(DUMP('Puneet_kushwah1'),':')+2 )),',',' ') from dual; ``` Result: 80 117 110 101 101 116 95 107 117 115 104 119 97 104 49 Then I used a special character to fill the space, so that I can replace it while retrieving from the database: ``` select replace(substr((DUMP('Puneet_kushwah1')),(instr(DUMP('Puneet_kushwah1'),':')+2 )),',','040') from dual; Result: 80040117040110040101040101040116040950401070401170401150401040401190409704010404049 ``` Table definition: ``` create table test (no number); ``` Then I inserted it into the table: ``` INSERT into test SELECT replace(substr((DUMP('Puneet_kushwah1')),(instr(DUMP('Puneet_kushwah1'),':')+2 )),',','040') from dual; ``` **Problem 1**: When I execute ``` select * from test; ``` I get: 8.004011704011E82 I want it stored as the exact number I inserted. **Problem 2:** And then when I run a select, I want it to return the exact character string. I have tried many functions. Thanks in advance.
While I advise against proceeding like this, as it could be rife with errors and a possible maintenance nightmare, I do like a challenge, and I have been forced to do some screwy things myself in order to make some vendor's bizarre way of doing things work for us, so I sympathize with you if that is the case. So, for the fun of it, check this out. Convert to hex, then to a decimal, and insert into the database (x\_test has one NUMBER column), then select, converting back: ``` SQL> insert into x_test 2 select to_number(rawtohex('Puneet_kushwah1'), rpad('X', length(rawtohex('Puneet_kushwah1')), 'X')) from dual; 1 row created. SQL> select * from x_test; col1 ---------- 4.1777E+35 SQL> SELECT utl_raw.cast_to_varchar2(hextoraw(trim(to_char(col1, 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')))) 2 FROM x_test; UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TRIM(TO_CHAR(col1,'XXXXXXXXXXXXXXXXXXXXXXXXXXXX -------------------------------------------------------------------------------- Puneet_kushwah1 SQL> ```
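The hex round trip itself is easy to verify outside the database; here is a Python sketch of the same idea (the bytes of the string read as one big integer and back). Note that Oracle's NUMBER keeps only 38 significant digits, so this only round-trips for short strings like the 15-byte example.

```python
def string_to_number(s):
    # Interpret the UTF-8 bytes of the string as one big base-256 integer.
    return int.from_bytes(s.encode('utf-8'), 'big')

def number_to_string(n):
    # Reverse the conversion: rebuild the byte string from the integer.
    return n.to_bytes((n.bit_length() + 7) // 8, 'big').decode('utf-8')

n = string_to_number('Puneet_kushwah1')
print(n)                    # one large integer encoding the whole string
print(number_to_string(n))  # Puneet_kushwah1
```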
You can't get the exact string back because Oracle numbers are only stored up to 38 digits of precision. So if you run this: ``` select cast(no as varchar2(100)) from test; ``` You'll get: ``` 80040117040110040101040101040116040950400000000000000000000000000000000000000000000 ```
Insert character to a number datatype column
[ "sql", "database", "oracle", "oracle11g", "type-conversion" ]
I am trying to return results in TSQL where it ***only*** displays addresses where there are multiple names. The tricky part has been that there are multiple duplicates already in this table... so the HAVING COUNT variations that I've tried do not work, because they all have a count greater than one. So I have not been able to easily distinguish unique names that have the same address. The solution illustrated below is what I would like to produce... and I have produced it, but my solution is a sad last-ditch effort within Access where I ended up using a query with three subqueries to get the results: ``` Address Name 101 1st Ave Brian Wood 101 1st Ave Amy Wood 101 1st Ave Adam Wood 555 5th St Sarah Parker 555 5th St Parker Corp. ``` Sample Data Looks Like this: ``` Address Name 101 1st Ave Brian Wood 101 1st Ave Brian Wood 101 1st Ave Brian Wood 101 1st Ave Amy Wood 101 1st Ave Adam Wood 555 5th St Sarah Parker 555 5th St Sarah Parker 555 5th St Sarah Parker 555 5th St Parker Corp. ``` I've been trying to get this for hours... I know there is a much simpler way to do this, but as it's been a 16-hour day and it's 2:00 am I just can't get my head around it. Here is an example of my best TSQL results... it does the trick but it bumps it into two different columns: ``` SELECT DISTINCT t1.Name, t2.Name, t1.Address FROM tblLeads t1 JOIN tblLeads t2 ON t1.Address = t2.Address WHERE t1.Name <> t2.Name ORDER BY t1.Address ```
You can do a `GROUP BY` with `COUNT(DISTINCT Name) > 1` to get the addresses with more than one unique name, and then do a select distinct with a filter on those grouped addresses, like this. ``` SELECT DISTINCT Address,Name From Table1 WHERE Address IN ( SELECT Address FROM Table1 GROUP BY Address HAVING COUNT(distinct Name) > 1 ) ```
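A runnable sketch of this query against SQLite, with a third address added to show that single-name addresses drop out of the result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE leads (Address TEXT, Name TEXT)')
conn.executemany('INSERT INTO leads VALUES (?, ?)', [
    ('101 1st Ave', 'Brian Wood'), ('101 1st Ave', 'Brian Wood'),
    ('101 1st Ave', 'Amy Wood'),
    ('555 5th St', 'Sarah Parker'), ('555 5th St', 'Sarah Parker'),
    ('777 7th Rd', 'Only One'),
])

# Addresses with more than one distinct name, then de-duplicated pairs.
rows = conn.execute('''
    SELECT DISTINCT Address, Name FROM leads
    WHERE Address IN (SELECT Address FROM leads
                      GROUP BY Address
                      HAVING COUNT(DISTINCT Name) > 1)
    ORDER BY Address, Name
''').fetchall()
print(rows)  # [('101 1st Ave', 'Amy Wood'), ('101 1st Ave', 'Brian Wood')]
```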
You could use multiple `CTE's` to simplify this task. You first want to clean up your data, so remove all those duplicates, therefore you can use `DISTINCT`. Then use `Count(*)OVER(Partition By Address)` to get the count of rows per `Address`: ``` WITH CleanedData AS ( SELECT DISTINCT Address, Name FROM dbo.tblLeads ), CTE AS ( SELECT Address, Name, cnt = Count(*) OVER (Partition By Address) FROM CleanedData ) SELECT Address, Name FROM CTE WHERE cnt > 1 ``` `Demo` By the way, this works also if `Address` has `null` values: `Demo` (as opposed to [this](http://sqlfiddle.com/#!6/5cd9b/3/0)).
Return Distinct Values Where One Column Is The Same But One Column Different
[ "sql", "sql-server", "t-sql" ]
There are two tables, Orders and Order Details. Orders: ``` OrderID PK Freight |....| ``` Order Details: ``` OrderID PK FK ProductID PK FK UnitPrice Quantity |....| ``` Each `OrderID` is unique in `Orders`, but Order Details can contain several details for the same `OrderID` with different `ProductId`, `UnitPrice` etc. So, in Order Details we can see two, three or more rows per `OrderID`. My task is to select the physical addresses of all Freight records which are more than the total cost of the entire order: Freight > UnitPrice \* Quantity \* (Quantity of OrderID in Order Details) ``` SELECT %%physloc%% FROM Orders WHERE Freight > (SELECT SUM(UnitPrice * Quantity) FROM [Order Details] GROUP BY OrderID); ``` And of course I've got > 'Subquery returned more than 1 value...' I tried using TOP, but in that case I get a wrong selection. All I need is to somehow compare each order's Freight with the corresponding row from that subquery with the same OrderID, but I have no idea how. Maybe someone can find a different way; that would be great. I use SQL Server 2008. Thank you all.
> My task is to select the physical addresses of all Freight records which are more than the total cost of the entire order Freight > UnitPrice \* Quantity \* (Quantity of OrderID in Order Details) You can achieve that easily by using aliases and filtering the subquery in the following way... ``` SELECT %%physloc%% FROM Orders o WHERE o.Freight > (SELECT SUM(od.UnitPrice * od.Quantity) FROM [Order Details] as od WHERE od.OrderId = o.OrderId); ``` You don't need a `group by` clause here at all; it could potentially split/group the results. By using the `sum` aggregate function you're already returning a scalar value, which can further be filtered by a `where` clause.
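A minimal runnable sketch of the correlated subquery against SQLite, with made-up freight and line-item values; `%%physloc%%` is SQL Server-specific, so the sketch just selects the order id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, Freight REAL)')
conn.execute('CREATE TABLE OrderDetails (OrderID INTEGER, UnitPrice REAL, Quantity INTEGER)')
conn.executemany('INSERT INTO Orders VALUES (?, ?)', [(1, 50.0), (2, 5.0)])
conn.executemany('INSERT INTO OrderDetails VALUES (?, ?, ?)', [
    (1, 10.0, 2), (1, 5.0, 1),   # order 1: goods cost 25 < freight 50
    (2, 10.0, 3),                # order 2: goods cost 30 > freight 5
])

# Correlating the inner SUM with the outer order yields one scalar per order.
rows = conn.execute('''
    SELECT o.OrderID FROM Orders o
    WHERE o.Freight > (SELECT SUM(d.UnitPrice * d.Quantity)
                       FROM OrderDetails d
                       WHERE d.OrderID = o.OrderID)
''').fetchall()
print(rows)  # [(1,)]
```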
The easiest way of thinking of the process is as follows: 1. The outer query processes each record within the Orders table. As this happens, the current OrderID is shared with the subquery. This is made possible by the subquery's "where" statement. This will ensure the data being summed in the subquery has the same OrderID. 2. The use of *aliases* makes referencing tables much easier than having to specify the full name of each table and its corresponding column. Format: tableName.Column e.g. Orders.OrderID. The much easier alternative alias.Column e.g. o.OrderID where "o" is defined as an alias of Orders. --- ``` SELECT %%physloc%% FROM Orders o WHERE Freight > (SELECT SUM(UnitPrice * Quantity) FROM [Order Details] od WHERE od.OrderID = o.OrderID GROUP BY OrderID); ```
How to compare row with subquery output in SQL?
[ "sql", "sql-server", "database", "t-sql" ]
I'm trying to join a table of data to another table, but one of the IDs that I'll be joining on is NULL. However, I have a specific ID I want the NULLs to link to. I might be oversimplifying this question too much with this example, but I'm hoping this will point me in the right direction. So suppose we have the two tables below. TABLE1: ``` Name ID A 1 B 2 C NULL ``` TABLE2: ``` ID Value 1 4 2 5 3 6 ``` What would I need to do in the query to get an output like this? OUTPUT: ``` Name Value A 4 B 5 C 6 ``` Thanks in advance for any assistance.
You can explicitly check it in the `on` clause: ``` SELECT name, value FROM table1 JOIN table2 ON (table1.id = table2.id) OR (table1.id IS NULL AND table2.id = 3) ```
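A runnable sketch of this join condition against SQLite, using the sample tables from the question; the fallback id 3 is hard-coded exactly as in the answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t1 (Name TEXT, ID INTEGER)')
conn.execute('CREATE TABLE t2 (ID INTEGER, Value INTEGER)')
conn.executemany('INSERT INTO t1 VALUES (?, ?)', [('A', 1), ('B', 2), ('C', None)])
conn.executemany('INSERT INTO t2 VALUES (?, ?)', [(1, 4), (2, 5), (3, 6)])

# The OR branch routes NULL ids to the fallback row with ID = 3.
rows = conn.execute('''
    SELECT t1.Name, t2.Value
    FROM t1 JOIN t2
      ON (t1.ID = t2.ID) OR (t1.ID IS NULL AND t2.ID = 3)
    ORDER BY t1.Name
''').fetchall()
print(rows)  # [('A', 4), ('B', 5), ('C', 6)]
```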
Join using COALESCE(ID,3) instead of table1.id
Joining an ID with a NULL
[ "sql", "sql-server", "select", "null", "sql-server-2012" ]
It is known that `GROUP BY` produces one row per group. I want to produce multiple rows per group. The particular use case is, for example, selecting two cheapest offerings for each item. It is trivial for two or three elements in the group: ``` select type, variety, price from fruits where price = (select min(price) from fruits as f where f.type = fruits.type) or price = (select min(price) from fruits as f where f.type = fruits.type and price > (select min(price) from fruits as f2 where f2.type = fruits.type)); ``` ([Select n rows per group in mysql](https://stackoverflow.com/questions/27463140/select-n-rows-per-group-in-mysql)) But I am looking for a query that can show `n` rows per group, where `n` is arbitrarily large. In other words, a query that displays `5` rows per group should be convertible to a query that displays `7` rows per group by just replacing some constants in it. I am not constrained to any DBMS, so I am interested in any solution that runs on any DBMS. It is fine if it uses some non-standard syntax.
For any database that supports analytic functions / window functions, this is relatively easy: ``` select * from (select type, variety, price, rank() over ([partition by something] order by price) rnk from fruits) rank_subquery where rnk <= 3 ``` If you omit the `[partition by something]`, you'll get the top three overall rows. If you want the top three for each `type`, you'd `partition by type` in your `rank()` function. Depending on how you want to handle ties, you may want to use `dense_rank()` or `row_number()` rather than `rank()`. If two rows tie for first, using `rank`, the next row would have a `rnk` of 3, while it would have a `rnk` of 2 with `dense_rank`. In both cases, both tied rows would have a `rnk` of 1. `row_number` would arbitrarily give one of the two tied rows a `rnk` of 1 and the other a `rnk` of 2.
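A runnable sketch of the ranking subquery against SQLite (window functions need SQLite 3.25+). The fruit data is invented, and the partition keeps the two cheapest rows per type.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.execute('CREATE TABLE fruits (type TEXT, variety TEXT, price REAL)')
conn.executemany('INSERT INTO fruits VALUES (?, ?, ?)', [
    ('apple', 'gala', 2.79), ('apple', 'fuji', 0.24), ('apple', 'limbertwig', 2.87),
    ('orange', 'valencia', 3.59), ('orange', 'navel', 9.36),
])

# Rank rows within each type by price, then keep the cheapest two per type.
rows = conn.execute('''
    SELECT type, variety, price FROM (
        SELECT type, variety, price,
               RANK() OVER (PARTITION BY type ORDER BY price) AS rnk
        FROM fruits) ranked
    WHERE rnk <= 2
    ORDER BY type, price
''').fetchall()
print(rows)
# [('apple', 'fuji', 0.24), ('apple', 'gala', 2.79),
#  ('orange', 'valencia', 3.59), ('orange', 'navel', 9.36)]
```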
To save anyone looking some time, at the time of this writing, apparently this won't work because <https://dev.mysql.com/doc/refman/5.7/en/subquery-restrictions.html>. I've never been a fan of correlated subqueries as most uses I saw for them could usually be written more simply, but I think this has changed by mind... a little. *(This is for MySQL.)* ``` SELECT `type`, `variety`, `price` FROM `fruits` AS f2 WHERE `price` IN ( SELECT DISTINCT `price` FROM `fruits` AS f1 WHERE f1.type = f2.type ORDER BY `price` ASC LIMIT X ) ; ``` Where X is the "arbitrary" value you wanted. If you know how you want to limit further in cases of duplicate prices, and the data permits such limiting ... ``` SELECT `type`, `variety`, `price` FROM `fruits` AS f2 WHERE (`price`, `other_identifying_criteria`) IN ( SELECT DISTINCT `price`, `other_identifying_criteria` FROM `fruits` AS f1 WHERE f1.type = f2.type ORDER BY `price` ASC, `other_identifying_criteria` [ASC|DESC] LIMIT X ) ; ```
Producing n rows per group
[ "sql", "greatest-n-per-group" ]
I need to sort a varchar2 column holding a date value in number format as `YYMM` to `MMYY`. The values range from ``` 3 6 9 12 103 . . 9909 9912 ``` A value 4 characters in length denotes a year between 1900 and 1999; fewer than 4 characters denotes a year from 2000 onwards. I tried using LPAD(fieldname, 4,'0'), which returns values like ``` 0003 this is nothing but 2000/03 0006 ------------------- 2000/06 0009 0012 0103 ------------------- 2001/03 . . 9912 ------------------- 1999/12 ``` How can I order the column based on the year value, starting from 1900, in ascending order? Can anyone please provide a solution?
You can prepend a century marker based on the length of the value: ``` select value, case when length(value) = 4 then '19' else '20' end || lpad(value, 4, '0') as dt from t order by case when length(value) = 4 then '19' else '20' end || lpad(value, 4, '0'); VALUE DT ---------- ------ 9909 199909 9912 199912 6 200006 9 200009 12 200012 103 200103 ``` Or use the same thing and convert to a date, which defaults to the first day of each month: ``` select value, to_date(case when length(value) = 4 then '19' else '20' end || lpad(value, 4, '0'), 'YYYYMM') as dt from t order by to_date(case when length(value) = 4 then '19' else '20' end || lpad(value, 4, '0'), 'YYYYMM'); VALUE DT ---------- ---------- 9909 1999-09-01 9912 1999-12-01 6 2000-06-01 9 2000-09-01 12 2000-12-01 103 2001-03-01 ``` If you're only looking at dates within the Y2K-safe range of 1950-2049, you could skip the century part and use an RR date model instead, though since this will potentially cause you problems later there isn't really any advantage over using the length to prepend the century: ``` select value, to_date(lpad(value, 4, '0'), 'RRMM') as dt from t order by to_date(lpad(value, 4, '0'), 'RRMM'); VALUE DT ---------- ---------- 9909 1999-09-01 9912 1999-12-01 6 2000-06-01 9 2000-09-01 12 2000-12-01 103 2001-03-01 ``` [SQL Fiddle](http://sqlfiddle.com/#!4/300f6/2).
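The century rule is simple enough to express as a sort key; here is a Python sketch of the same length-based logic, useful for checking the expected ordering (the helper name is made up).

```python
def year_month_key(value):
    """Expand a 1-4 digit YYMM string: 4 digits -> 19xx, shorter -> 20xx."""
    padded = value.zfill(4)                       # pad to 4 digits, like LPAD
    century = '19' if len(value) == 4 else '20'   # length decides the century
    return century + padded                       # yields a sortable 'YYYYMM'

values = ['3', '12', '103', '9909', '9912', '6']
ordered = sorted(values, key=year_month_key)
print(ordered)  # ['9909', '9912', '3', '6', '12', '103']
```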
You need to distinguish the existing four digit values first, so you know they're 20th century dates before you pad out the rest. Then convert it to a date and fiddle with the format to get the sort order you require : ``` select to_char( expanded_dt, 'MMYYYY') as switched_dt from ( select to_date( case when length(dt) = 4 then '19'||dt else '20'||lpad(dt,4,'0') end , 'YYYYMM' ) as expanded_dt from your_table ) order by 1 asc / ```
sorting date field with datatype as varchar2 in oracle 11g
[ "sql", "oracle", "oracle11g", "date-arithmetic", "date-conversion" ]
I have a table with 5 columns `ReportId, Date, Area, BuildingName, Amount`. Sample data looks like this : ``` ------------------------------------------------------- ReportId | Date | Area | BuildingName | Amount ------------------------------------------------------- 1 | 01/01/2013 | S1 | A1-01 | 5 2 | 01/01/2013 | S1 | A1-03 | 5 3 | 01/01/2013 | S2 | A1-05 | 4 4 | 02/01/2013 | S2 | A1-05 | 7 5 | 02/01/2013 | S2 | A1-03 | 9 6 | 03/01/2013 | S1 | A1-03 | 2 7 | 04/01/2013 | S2 | A1-02 | 6 8 | 05/01/2013 | S1 | A1-01 | 7 9 | 06/01/2013 | S1 | A1-02 | 5 10 | 06/01/2013 | S1 | A1-05 | 8 11 | 06/01/2013 | S1 | A1-07 | 5 ``` I need to write a query to get the result like this : ``` ----------------------------------------------------- Date | Area | BuildingName | Amount | Sum ----------------------------------------------------- 01/01/2013 | S1 | A1-01 | 5 | 12 01/01/2013 | S1 | A1-03 | 5 | 7 01/01/2013 | S2 | A1-05 | 4 | 11 ``` `Date` value passed as a parameter to the query. "Area", "BuildingName", "Amount" are records which have the same "Date". "Sum", is Sum of All "Amount" in the table where has the same "Area" And "BuildingName" in the result of query. I searched much, but I can't get anything about this ...
Try this query ``` SELECT @Date AS 'Date' ,t.Area ,t.BuildingName ,t.Amount ,temp.SumAmount FROM TABLE t INNER JOIN (SELECT Area, BuildingName, SUM(Amount) 'SumAmount' FROM TABLE t GROUP BY Area, BuildingName) temp ON temp.Area=t.Area AND temp.BuildingName=t.BuildingName Where t.Date= @Date ```
This should work: ``` ;with filter as ( select Date, Area, BuildingName, Amount from data where data.Date = @date ) select filter.Date ,filter.Area ,filter.BuildingName ,filter.Amoount ,sum(data.Amount) as [Sum] from data join filter on filter.Area = data.Area and filter.BuildingName = date.BuildingName group by filter.Date ,filter.Area ,filter.BuildingName ; ```
How to get data by a condition and sum of every row with another condition?
[ "sql", "sql-server" ]
I want to select either week days or a full week depending on a parameter. I was looking at using a case statement to do the following, but I don't know how to convert a string of numbers to values that can be passed as integers. I'm probably doing this all wrong, but any help would be appreciated. This is where I'm setting the param value: ``` set @days = (select case when FullWeek = 1 then cast('1,2,3,4,5,6,7' as Numeric(38,0)) when fullweek = 0 then cast('2, 3,4,5,6' as Numeric(38,0)) end from Reports) ``` And this is how I want to call this; it's part of a where statement: ``` where datepart(dw,date) in (@days) ```
Why not simplify it and do it this way: ``` Where (Fullweek = 1) -- Will get all days of week or (Fullweek = 0 and datepart(dw,date) in (2,3,4,5,6)) ```
This isn't even a SQL problem; it's a conceptual problem. You can't convert such a value to a numeric. What do you expect the numeric value to be in each case? You are making a very popular beginner's mistake when you try to convert a delimited string into an array. What you should do in this case is this: ``` where datepart(dw,date) in case when FullWeek = 1 then (1,2,3,4,5,6,7) else -- if fullweek is a bit. otherwise use when fullweek = 0 then (1,2,3,4,5,6,7) end ```
Sql convert string of numbers to int
[ "sql", "sql-server", "string", "int" ]
I have lots of data that is pulled from a SQL query through connections in Excel at the press of a button. Then I perform a few simple calculations to get results. I also have 4 graphs that are based on that data. I run into an issue where the code takes a few minutes to execute. I believe it is because the graphs are updated while the data is being updated; I came to that conclusion after removing the graphs, which made it significantly faster. Is there a way to speed up this process a bit? Can I pause the graphing and resume it after all the data has been updated? Thank you!
Have you considered offsetting the chart area in VBA and switching back at the end of the code? Here's how you can [Select the Chart Area](https://msdn.microsoft.com/en-us/library/office/ff841196.aspx) in VBA. For example, if you want to chart data in Range A1:A10, then you can do the following: `Charts(1).SetSourceData Source:=Sheets(1).Range("B1:B10")` ``` your logic ``` `Charts(1).SetSourceData Source:=Sheets(1).Range("A1:A10")` This "aims" the chart at a different range so that it doesn't try to recompute the graph after each cell change. Once your logic is complete, "aim" it back at the correct range.
I suggest using ``` Application.Calculation = xlCalculationManual Application.ScreenUpdating = False ``` And perhaps also ``` Application.EnableEvents = False ``` Before the query, And reversing it after the query ``` Application.Calculation = xlCalculationAutomatic Application.ScreenUpdating = True Application.EnableEvents = True ```
Pause Excel graph while data is updating
[ "sql", "excel", "vba", "graph" ]
Hello, I want to extract the month inside a trigger, but I get a syntax error near `new`. Is there another way to get the month from fdate inside the trigger? ``` SELECT EXTRACT(MONTH FROM TIMESTAMP new.fdate) into month_extr; ```
Try using [`date_part(text, timestamp)`](http://www.postgresql.org/docs/9.4/static/functions-datetime.html#FUNCTIONS-DATETIME-TABLE) instead: ``` SELECT date_part('month', NEW.fdate) INTO month_extr; ```
Have you tried NEW.fdate instead of lowercase? SQL is case-insensitive, but to the best of my knowledge case may be significant in plpgsql procedures. Another clue: the NEW and OLD variables are available only in the row context (so the trigger must be executed "for each row" and not for a statement).
date part in a trigger
[ "sql", "postgresql" ]
I have a query where each row consists of 3 columns: 1. Name 2. Distance 3. Proximity I want to sort the rows based on number of `NOT NULL` (i.e. present) values exactly as follows: 1. All values are present 2. Two values are present in this order * Name and Distance * Name and Proximity * Distance and Proximity 3. One value is present * Name * Distance * Proximity Here is sample data (insert statements are sorted in the order i expect): ``` /* CREATE TABLE #TEMP ( Type VARCHAR(100), Name VARCHAR(100), Distance VARCHAR(100), Proximity VARCHAR(100) ); */ INSERT INTO #TEMP VALUES ('AIRPORT', 'KBLI', '21mi', 'City') INSERT INTO #TEMP VALUES ('AIRPORT', 'KBLI', '21mi', NULL ) INSERT INTO #TEMP VALUES ('AIRPORT', 'KBLI', NULL , 'City') INSERT INTO #TEMP VALUES ('AIRPORT', NULL , '21mi', 'City') INSERT INTO #TEMP VALUES ('AIRPORT', 'KBLI', NULL , NULL ) INSERT INTO #TEMP VALUES ('AIRPORT', NULL , '21mi', NULL ) INSERT INTO #TEMP VALUES ('AIRPORT', NULL , NULL , 'City') ``` I have had some success with `COALESCE` statement but I am looking for something efficient and readable. Later I will change to four columns.
Assign a present value as if it were a number (a name=4, a distance=3, a proximity=2), then sum them and sort by that: ``` select ... from ... order by case when name is null then 0 else 4 end + case when distance is null then 0 else 3 end + case when proximity is null then 0 else 2 end desc ``` The trick here is that 3+2 > 4, so a distance and proximity beats a name.
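A runnable sketch of the weighting trick against SQLite, with one row per pattern from the question; the weights 4/3/2 are the ones chosen above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (Name TEXT, Distance TEXT, Proximity TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?)', [
    ('KBLI', None, None),      # name only            -> weight 4
    ('KBLI', '21mi', 'City'),  # all three present    -> weight 9
    (None, '21mi', 'City'),    # distance + proximity -> weight 5
    ('KBLI', '21mi', None),    # name + distance      -> weight 7
])

# 3 + 2 > 4, so any pair of values outranks a lone name.
rows = conn.execute('''
    SELECT Name, Distance, Proximity FROM t
    ORDER BY (CASE WHEN Name IS NULL THEN 0 ELSE 4 END)
           + (CASE WHEN Distance IS NULL THEN 0 ELSE 3 END)
           + (CASE WHEN Proximity IS NULL THEN 0 ELSE 2 END) DESC
''').fetchall()
print(rows)
# [('KBLI', '21mi', 'City'), ('KBLI', '21mi', None),
#  (None, '21mi', 'City'), ('KBLI', None, None)]
```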
This doesn't have any cool fancy math, but if you were to have multiple values for the same [type] then mine would sort those in order as well. ``` SELECT * FROM #Temp ORDER BY [Type], LEN(CONCAT(LEFT(Name,1),LEFT(Distance,1),LEFT(Proximity,1))) DESC, --counts number of non null columns --LEN(ISNULL(LEFT(Name,1),'') + ISNULL(LEFT(Distance,1),'') + ISNULL(LEFT(Proximity,1),'')) DESC, /*SQL 2008R2 and below alternative for counting non-null columns*/ ISNULL(Name,'zz'), --ISNULL then 'zz' which when ordered, goes at the end ISNULL(Distance,'zz'), ISNULL(Proximity,'zz') ```
Sort results by number of NOT NULL values
[ "sql", "sql-server", "t-sql", "sorting" ]
I need a SQL query which generates a sequence of alphabet letters between a given start and end point. Like, for `Start='C' End='G'` the output should be ``` C D E F G ```
``` select chr(ascii('C') + level - 1) from dual connect by ascii('C') + level - 1 <= ascii('G'); ``` Using `connect by` like this (no `start with` and an end condition that only depends on the level) is undocumented (and unsupported) so it might break any time (although I'm not aware of any version where this would not work). Starting with 11.2 you can also use a recursive common table expression: ``` with letters (letter, inc) as ( select 'C', 1 as inc from dual union all select chr(ascii('C') + p.inc), p.inc + 1 from letters p where p.inc < 5 ) select letter from letters; ```
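For comparison, the same inclusive letter sequence outside the database is a one-liner in Python, where `chr`/`ord` play the role of Oracle's `chr`/`ascii` (the helper name is made up).

```python
def letter_range(start, end):
    """Inclusive sequence of letters from start to end, by character code."""
    return [chr(code) for code in range(ord(start), ord(end) + 1)]

print(letter_range('C', 'G'))  # ['C', 'D', 'E', 'F', 'G']
```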
Something like this worked ``` WITH X AS (SELECT 'C' as St, 'G' as En FROM dual) SELECT CHR(ASCII(X.St)+ROWNUM-1) FROM X CONNECT BY ROWNUM<=(ASCII(X.En)-ASCII(X.St)+1) ```
SQL which generates sequence of alphabets between given start and end point
[ "sql", "oracle" ]
I have two tables (Brands and Customers) in my database. Brands ![my tables](https://i.stack.imgur.com/iq5D2.png) Customers What I want is to first look up the BRANDID for each customer, then compare whether that BRANDID matches a BRANDID from the BRANDS table. If matched, the appropriate BRANDNAME goes into the customer's BRANDNAME. If not matched, the string 'Invalid' goes into the customer's BRANDNAME. Do I need to use an INNER JOIN and a CASE statement for this?
You simply need an OUTER JOIN plus COALESCE: ``` select c.id, c.brandid, coalesce(b.brandname, 'Invalid') from customers c left join brands b on c.brandid = b.brandid; ``` This is pure Standard SQL and should run in any DBMS.
Oracle has its own outer join syntax, which is much nicer than standard SQL, but here's the ANSI SQL query: ``` select customers.id,customers.brandid, if(brands.brandname is null,'Invalid',brands.brandname) as 'Brandname' from customers left join brands on (customers.brandid = brands.brandid) ; ```
Simple SQL Statement in Oracle
[ "sql", "oracle" ]
I want to insert details of employees into a SQL table where the salary is greater than 5000. How do I write this query? Can I use a WHERE clause in an INSERT query?
First we have to create the employee table with a CHECK constraint. For example - ``` create table employee(ename varchar(45), salary numeric CHECK(salary>5000)); ``` Now we can insert into the employee table. For example - ``` insert into employee values('abc', 4000); ``` //it will not be inserted into the employee table. ``` insert into employee values('xyz', 6000); ``` //it will be inserted into the employee table.
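A runnable sketch of the same CHECK constraint using SQLite from Python; the failing insert raises `IntegrityError` rather than being silently skipped, which is how most engines report the violation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE employee (ename VARCHAR(45), salary NUMERIC CHECK (salary > 5000))')

conn.execute("INSERT INTO employee VALUES ('xyz', 6000)")      # passes the check
try:
    conn.execute("INSERT INTO employee VALUES ('abc', 4000)")  # violates the check
except sqlite3.IntegrityError as exc:
    print('rejected:', exc)

rows = conn.execute('SELECT * FROM employee').fetchall()
print(rows)  # [('xyz', 6000)]
```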
If I understand your question correctly, you want to insert values of employees with salary greater than 5000, into another sql table. Here's what you can do: ``` INSERT INTO SOME_SQL_TBL (NAME, SALARY) SELECT NAME, SALARY FROM EMPLOYEES WHERE SALARY > 5000; ```
How to insert value into sql table while checking condition?
[ "sql" ]
I have thousands of groups in a table, something like: ``` 1.. 1.. 2.. 2.. 2.. 2.. 3.. 3.. . . . 10000.. 10000.. ``` How can I make a select that gives me the top 3 groups each time? I want something like `SELECT TOP 3`, but it has to return the first three groups, not the first three rows.
You can try this: ``` ;with cte as ( select distinct groupid from mytable ) select * from mytable where groupid in (select top 3 groupid from cte order by groupid) ``` Note that the `order by` has to go on the `top 3` select; SQL Server does not allow an `order by` inside a CTE without `top`.
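The same idea can be checked in SQLite, which uses `LIMIT` instead of `TOP` (table and data invented for illustration):

```python
import sqlite3

# All rows of the first three groups come back; later groups are excluded.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (groupid INTEGER, val TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (3, 'e'), (4, 'f')])
rows = conn.execute("""
    SELECT * FROM mytable
    WHERE groupid IN (SELECT DISTINCT groupid FROM mytable
                      ORDER BY groupid LIMIT 3)
    ORDER BY groupid, val
""").fetchall()
print(rows)  # group 4 is filtered out
```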
You can use `DENSE_RANK` to assign a number to each group. All members of the same group will have the same number. Then in an outer query, select top 3 groups: ``` SELECT * FROM (SELECT *, DENSE_RANK() OVER (ORDER BY id) AS rnk FROM mytable ) t WHERE t.rnk <= 3 ``` The above query assumes that `id` is the column used to group records together. [**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/eb577/1)
Select Top 100 Groups
[ "", "sql", "sql-server", "t-sql", "" ]
I have a sample query like this: ``` ;WITH view1 AS ( SELECT 1 AS id, 1 AS pId, 'a' AS pName, 1 AS kind UNION ALL SELECT 2, 1, 'a', 2 UNION ALL SELECT 3, 1, 'a', 1 UNION ALL SELECT 4, 2, 'b', 2 UNION ALL SELECT 5, 2, 'b', 2 UNION ALL SELECT 6, 2, 'b', 1 ), view2 AS ( SELECT 1 AS id, 1 AS pId, 'a' AS pName, 5 AS price UNION ALL SELECT 3, 3, 'c', 3 UNION ALL SELECT 6, 2, 'b', 3 ) SELECT ISNULL(v2.pId, v1.pId) AS pId, ISNULL(v2.pName, v1.pName) AS pName, SUM(CASE v1.kind WHEN 1 THEN 1 ELSE 0 END) AS countKind1, SUM(v2.price) AS sumPrice FROM view1 v1 LEFT OUTER JOIN view2 v2 ON v1.id = v2.id GROUP BY ISNULL(v2.pId, v1.pId), ISNULL(v2.pName, v1.pName) ``` Its result is: ``` pId | pName | countKind1 | sumPrice ----+-------+------------+----------- 1 | a | 1 | 5 2 | b | 1 | 3 3 | c | 1 | 3 ``` But I wanted this result after adding `ISNULL`s and `sumPrice`: ``` pId | pName | countKind1 | sumPrice ----+-------+------------+----------- 1 | a | 2 | 5 2 | b | 1 | 3 3 | c | 0 | 3 ```
I found a way using an inline select: ``` SELECT ISNULL(v2.pId, v1.pId) AS pId, ISNULL(v2.pName, v1.pName) AS pName, (SELECT COUNT(*) FROM view1 vi WHERE vi.kind = 1 AND vi.pId = ISNULL(v2.pId, v1.pId)) AS countKind1, SUM(v2.price) AS sumPrice FROM view1 v1 LEFT OUTER JOIN view2 v2 ON v1.id = v2.id GROUP BY ISNULL(v2.pId, v1.pId), ISNULL(v2.pName, v1.pName) ``` --- And this one: ``` SELECT ISNULL(v2.pId, v1.pId) AS pId, ISNULL(v2.pName, v1.pName) AS pName, ISNULL(tc.Counts, 0) AS countKind1, SUM(v2.price) AS sumPrice FROM view1 v1 LEFT OUTER JOIN view2 v2 ON v1.id = v2.id LEFT OUTER JOIN (SELECT vi.pId, COUNT(*) Counts FROM view1 vi WHERE vi.kind = 1 GROUP BY vi.pId) AS tc ON tc.pId = ISNULL(v2.pId, v1.pId) GROUP BY ISNULL(v2.pId, v1.pId), ISNULL(v2.pName, v1.pName), tc.Counts ```
Your query looks a bit weird. I can think of a few ways of getting the desired result, but I'm not sure what the meaning behind your data is. In general, I'd always advise grouping your data as early as you can, so you could probably group `view1` and then join by `pId`. Here's a query which gives your results, though: ``` ;WITH view1 AS ( SELECT 1 AS id, 1 AS pId, 'a' AS pName, 1 AS kind UNION ALL SELECT 2, 1, 'a', 2 UNION ALL SELECT 3, 1, 'a', 1 UNION ALL SELECT 4, 2, 'b', 2 UNION ALL SELECT 5, 2, 'b', 2 UNION ALL SELECT 6, 2, 'b', 1 ), view2 AS ( SELECT 1 AS id, 1 AS pId, 'a' AS pName, 5 AS price UNION ALL SELECT 3, 3, 'c', 3 UNION ALL SELECT 6, 2, 'b', 3 ), cte1 as ( SELECT ISNULL(v2.pId, v1.pId) AS pId, ISNULL(v2.pName, v1.pName) AS pName, SUM(v2.price) AS sumPrice FROM view1 v1 LEFT OUTER JOIN view2 v2 ON v1.id = v2.id GROUP BY ISNULL(v2.pId, v1.pId), ISNULL(v2.pName, v1.pName) ), cte2 as ( select pName, sum(case when kind = 1 then 1 else 0 end) as countKind1 from view1 group by pName ) select c1.pId, c1.pName, isnull(c2.countKind1, 0) as countKind1, c1.sumPrice from cte1 as c1 left outer join cte2 as c2 on c2.pName = c1.pName ``` `sql fiddle demo`
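The "group early, then join" approach can be run end-to-end in SQLite against the question's sample data; this sketch substitutes `COALESCE` for `ISNULL` and `VALUES` CTEs for the `UNION ALL` blocks, and reproduces the desired output exactly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
WITH view1(id, pId, pName, kind) AS (
    VALUES (1,1,'a',1),(2,1,'a',2),(3,1,'a',1),(4,2,'b',2),(5,2,'b',2),(6,2,'b',1)
), view2(id, pId, pName, price) AS (
    VALUES (1,1,'a',5),(3,3,'c',3),(6,2,'b',3)
), cte1 AS (
    -- sum prices per (pId, pName), taking view2's keys when they exist
    SELECT COALESCE(v2.pId, v1.pId) AS pId,
           COALESCE(v2.pName, v1.pName) AS pName,
           SUM(v2.price) AS sumPrice
    FROM view1 v1
    LEFT JOIN view2 v2 ON v1.id = v2.id
    GROUP BY COALESCE(v2.pId, v1.pId), COALESCE(v2.pName, v1.pName)
), cte2 AS (
    -- count kind = 1 rows per pName, grouped early
    SELECT pName, SUM(CASE WHEN kind = 1 THEN 1 ELSE 0 END) AS countKind1
    FROM view1 GROUP BY pName
)
SELECT c1.pId, c1.pName, COALESCE(c2.countKind1, 0) AS countKind1, c1.sumPrice
FROM cte1 c1
LEFT JOIN cte2 c2 ON c2.pName = c1.pName
ORDER BY c1.pId
""").fetchall()
print(rows)  # [(1, 'a', 2, 5), (2, 'b', 1, 3), (3, 'c', 0, 3)]
```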
SQL Server : Sum over a field changed after adding a LEFT JOIN
[ "", "sql", "sql-server", "left-join", "" ]
I see many examples on how to find records that are not in another table, but I'm having a lot of trouble finding records that are either not in table 2, or are in table 2 but the freq column value is less than 10%. I'm first joining a list of variants with ensembl gene names for BRCA1, BRCA2 and any genes that start with BRC, where a variant falls between the start and stop position. From those results, I would like to check kaviar allele frequencies (k) and return results that either do not have an entry in the kaviar table, or results that are in the kaviar table with an `alle_freq` of < .10. The results from the first join need to be matched with kaviar by chr, pos, ref and alt. I've tried: ``` SELECT DISTINCT * FROM puzz p, ensembl ens, kaviar k WHERE (ens.gene_name IN ('BRCA1', 'BRCA2') OR ens.gene_name LIKE 'RAS%') AND p.chr = ens.chromosome AND p.pos >= ens.start AND p.pos <= ens.stop AND NOT EXISTS (SELECT k.chromosome, k.pos, k.ref, k.alt, k.alle_freq, k.alle_cnt FROM public_hg19.kaviar k WHERE p.chr = k.chromosome AND p.pos = k.pos AND p.ref = k.ref AND p.alt = k.alt ) AND p.pos = k.pos AND p.ref = k.ref AND p.alt = k.alt AND k.alle_freq < .10 ``` And I've also tried: ``` WITH puzz AS ( SELECT * FROM puzz p WHERE p.gt IS NOT NULL ) SELECT DISTINCT t1.*, kav.* FROM (SELECT puzz.*, ens.* FROM puzz, public_hg19.ensembl_genes AS ens WHERE (ens.gene_name IN ('BRCA1', 'BRCA2') OR ens.gene_name LIKE 'RAS%') AND puzz.chr = ens.chromosome AND puzz.pos BETWEEN ens.start AND ens.stop AND ens.chromosome NOT LIKE "H%") t1 LEFT JOIN public_hg19.kaviar as kav ON kav.chromosome = t1.chr AND kav.pos = t1.pos AND kav.ref = t1.ref AND kav.alt = t1.alt AND (kav.alle_freq < .10 OR kav.alle_freq IS NULL) ``` SOLUTION: Thanks to @John Bollinger for providing the framework for the solution.
Because Impala does not index, the quickest solution involved creating a temporary table that narrows down the number of rows passed to string operations, as shown in the ens temp table. ``` WITH ens AS ( SELECT DISTINCT chromosome as chr, start, stop, gene_name FROM public_hg19.ensembl_genes WHERE (gene_name IN ( 'BRCA1', 'BRCA2') OR gene_name LIKE 'RAS%') AND chromosome NOT LIKE "H%" ) SELECT p.*, k.chromosome, k.pos, k.id, k.ref, k.alt, k.qual, (k.alle_freq * 100) as kav_freqPct, k.alle_cnt as kav_count FROM (SELECT DISTINCT p.sample_id, p.chr, p.pos, p.id, p.ref, p.alt, p.qual, p.filter, ens.gene_name FROM ens, p7_ptb.itmi_102_puzzle p WHERE p.chr = ens.chr AND p.gt IS NOT NULL AND p.pos >= ens.start AND p.pos <= ens.stop ) AS p LEFT JOIN public_hg19.kaviar k ON p.chr = k.chromosome AND p.pos = k.pos AND p.ref = k.ref AND p.alt = k.alt WHERE COALESCE(k.alle_freq, 0.0) < .10 ``` The following line, as pointed out by @Gordon Linoff could also be ``` WHERE (k.alle_freq IS NULL OR k.alle_freq < 0.10) ``` Both final clauses return the same results, but on impala, the coalesce function is somehow faster.
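The core of the final filter, a LEFT JOIN with `COALESCE(freq, 0.0) < 0.10`, keeps rows that are either absent from the second table or present with a low frequency. A minimal SQLite sketch with invented tables and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE variants (chr TEXT, pos INTEGER);
    CREATE TABLE kaviar (chr TEXT, pos INTEGER, alle_freq REAL);
    INSERT INTO variants VALUES ('1', 100), ('1', 200), ('1', 300);
    INSERT INTO kaviar VALUES ('1', 100, 0.05), ('1', 200, 0.50);
""")
rows = conn.execute("""
    SELECT v.chr, v.pos, k.alle_freq
    FROM variants v
    LEFT JOIN kaviar k ON v.chr = k.chr AND v.pos = k.pos
    WHERE COALESCE(k.alle_freq, 0.0) < 0.10
    ORDER BY v.pos
""").fetchall()
# pos 100 passes (rare), pos 200 is dropped (common), pos 300 passes (absent).
print(rows)  # [('1', 100, 0.05), ('1', 300, None)]
```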
The two queries you present don't seem to match up. Table names differ, and some of the filter conditions simply don't correlate. In particular, from whence came the condition `AND ens.chromosome NOT LIKE "H%"` (with its incorrect quotes)? I do think your outer join approach is promising, but I don't understand why you need a CTE or an inline view. Also, "any gene that starts with 'BRC'" includes 'BRCA1' and 'BRCA2', so you don't need to test those separately. Removing redundant conditions may improve performance a little. Furthermore, if happens to be the case that the structure of your data will preclude duplicate rows anyway, then explicitly selecting `DISTINCT` rows cannot help you, but might harm you. (Nevertheless, I follow your lead by including it in my suggested query.) If there are many results then `SELECT DISTINCT` is expensive; especially so if you are selecting a lot of columns. This seems like it accurately expresses the query you describe: ``` SELECT DISTINCT p.sample_id, p.chr, p.pos, p.ref, p.alt, p.gt, p.qual, p.filter FROM p7_ptb.itmi_102_puzzle p join public_hg19.ensembl_genes ens ON p.chr = ens.chromosome left join public_hg19.kaviar k ON p.chr = k.chromosome AND p.pos = k.pos AND p.ref = k.ref AND p.alt = k.alt WHERE ens.gene_name LIKE 'BRC%' AND ens.chromosome NOT LIKE 'H%' AND p.pos BETWEEN ens.start AND ens.stop AND COALESCE(k.alle_freq, 0.0) < .10 ``` If it's not fast enough for you then you'll want to examine your query plan to determine what the bottleneck is rather than trying to guess.
Start by fixing your base query to use proper `join` syntax and simplify the logic (the `IN` is unnecessary, for instance). ``` SELECT DISTINCT * FROM puzz p JOIN ensembl ens ON p.chr = ens.chromosome AND p.pos >= ens.start AND p.pos <= ens.stop WHERE ens.gene_name LIKE 'BRC%' ``` I suspect the `DISTINCT` is not needed. Remove it if it is not. Then, use `LEFT JOIN` and additional logic for what you want: ``` SELECT DISTINCT * FROM puzz p JOIN ensembl ens ON p.chr = ens.chromosome AND p.pos >= ens.start AND p.pos <= ens.stop LEFT JOIN kaviar k ON p.chr = k.chromosome AND p.pos = k.pos AND p.ref = k.ref AND p.alt = k.alt WHERE ens.gene_name LIKE 'BRC%' AND (k.chr IS NULL OR k.alle_freq < 0.10) ``` Note: This is a case where you use `left join`, but the condition on the second table goes in the `WHERE` clause rather than the `ON` clause.
SQL find records from table 1 that are not in table 2 OR in table 2 with condition
[ "", "mysql", "sql", "left-join", "exists", "impala", "" ]
When I try to execute this query I get the error: > Must declare the scalar variable What am I doing wrong? ``` @username varchar(50), @routenaam varchar(50), @van varchar(50), @naar varchar(50), @bezoekadres varchar(50), @geredenroute varchar(50), @karakterrit varchar(50), @toelichting varchar(50) as begin DECLARE @sql NVARCHAR(MAX) = 'insert into '+@username+'routeinformatie'+' values( @routenaam,@van,@naar,@bezoekadres,@geredenroute,@karakterrit,@toelichting)' EXEC sp_executeSQL @sql,N', @username varchar(50), @routenaam varchar(50), @van varchar(50), @naar varchar(50), @bezoekadres varchar(50), @geredenroute varchar(50), @karakterrit varchar(50), @toelichting varchar(50) ',@username@routenaam,@van,@naar,@bezoekadres,@geredenroute,@karakterrit,@toelichting end ```
You forgot the DECLARE clause at the start, and in the last line you omitted the comma between the first two variables: ``` DECLARE @username varchar(50), @routenaam varchar(50), @van varchar(50), @naar varchar(50), @bezoekadres varchar(50), @geredenroute varchar(50), @karakterrit varchar(50), @toelichting varchar(50) as begin DECLARE @sql NVARCHAR(MAX) = 'insert into '+@username+'routeinformatie'+' values( @routenaam,@van,@naar,@bezoekadres,@geredenroute,@karakterrit,@toelichting)' EXEC sp_executeSQL @sql,N', @username varchar(50), @routenaam varchar(50), @van varchar(50), @naar varchar(50), @bezoekadres varchar(50), @geredenroute varchar(50), @karakterrit varchar(50), @toelichting varchar(50) ',@username, @routenaam,@van,@naar,@bezoekadres,@geredenroute,@karakterrit,@toelichting end ```
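The same split applies outside T-SQL: identifiers (like the per-user table name here) must be concatenated into the SQL string, while values should be bound as parameters, which is what `sp_executesql`'s parameter list does. A hedged Python/SQLite sketch of that idea (names and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
username = "jan"
# Identifiers cannot be bound as parameters, so the table name is spliced in,
# mirroring 'insert into ' + @username + 'routeinformatie' above.
table = f"{username}routeinformatie"
conn.execute(f"CREATE TABLE {table} (routenaam TEXT, van TEXT, naar TEXT)")
# Values, on the other hand, are bound as parameters (the sp_executesql part).
conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?)",
             ("rondje", "Utrecht", "Arnhem"))
rows = conn.execute(f"SELECT * FROM {table}").fetchall()
print(rows)  # [('rondje', 'Utrecht', 'Arnhem')]
```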
You need to `set` the value of the variable like this: ``` DECLARE @sql NVARCHAR(MAX) SET @sql = N'insert into '+@username+ N'routeinformatie'+ N' values( @routenaam,@van,@naar,@bezoekadres,@geredenroute,@karakterrit,@toelichting)' ``` Also, if this is the complete query, then you need to add DECLARE at the start: ``` DECLARE @username varchar(50), @routenaam varchar(50), @van varchar(50), @naar varchar(50), @bezoekadres varchar(50), @geredenroute varchar(50), @karakterrit varchar(50), @toelichting varchar(50) ```
Must declare the scalar variable
[ "", "sql", "sql-server", "" ]
I have two tables (created here for example) like: ``` X1 Y1 a 1 b 2 c 3 d 4 ``` and ``` X2 Y2 a 5 m 6 n 7 b 4 ``` And I want the output table: ``` X Y1 Y2 a 1 5 b 2 4 c 3 0 d 4 0 m 0 6 n 0 7 ``` What I tried is: ``` SELECT (A.X1 UNION B.X1) AS X, A.Y1,B.Y2 FROM A FULL OUTER JOIN B ON A.X1 = B.X2 ``` (the query mentioned above is just a sample). After executing this query I am getting the error message: > Syntax error: near UNION in select clause Can someone tell me what is wrong here? Is there any other option to get the output table in the mentioned format?
`union` is used to join results one after another. You're attempting to `join` results side by side (which you already did!). The only thing you're missing is a `coalesce` call to handle the missing values: ``` SELECT COALESCE(a.x1, b.x2) AS x, COALESCE(a.y1, 0) AS y1, COALESCE(b.y2, 0) AS y2 FROM a FULL OUTER JOIN b on a.x1 = b.x2 ```
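A quick check of the result in SQLite. SQLite only gained `FULL OUTER JOIN` in 3.39, so this sketch emulates it with a `LEFT JOIN` plus an anti-join; the `COALESCE` handling is the same as in the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x1 TEXT, y1 INTEGER);
    CREATE TABLE b (x2 TEXT, y2 INTEGER);
    INSERT INTO a VALUES ('a',1),('b',2),('c',3),('d',4);
    INSERT INTO b VALUES ('a',5),('m',6),('n',7),('b',4);
""")
rows = conn.execute("""
    -- left side plus matches...
    SELECT COALESCE(a.x1, b.x2) AS x, COALESCE(a.y1, 0) AS y1, COALESCE(b.y2, 0) AS y2
    FROM a LEFT JOIN b ON a.x1 = b.x2
    UNION
    -- ...plus rows only in b (the emulated "full outer" part)
    SELECT b.x2, 0, b.y2
    FROM b LEFT JOIN a ON a.x1 = b.x2
    WHERE a.x1 IS NULL
    ORDER BY x
""").fetchall()
print(rows)  # matches the desired output table
```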
You can try [COALESCE](http://www.postgresql.org/docs/8.1/static/functions-conditional.html) > The `COALESCE` function returns the first of its arguments that is not > null. Null is returned only if all arguments are null. ``` SELECT COALESCE(A.X1,B.X2) AS X, COALESCE(A.Y1, 0) AS Y1, COALESCE(B.Y2, 0) AS Y2 FROM A FULL OUTER JOIN B ON A.X1 = B.X2 ```
How to use union in select clause?
[ "", "sql", "postgresql", "select", "union", "" ]
I have two tables: ***company*** and ***users*** the users table: ``` id | name | cpf | phone_number | company_id 1 | Jonh | 111.11.11 | 1111-1111 | 1 2 | Marie | 222.22.22 | 2222-2222 | 3 | Paul | 333.33.33 | 3333-3333 | 3 4 | Luna | 444.44.44 | 4444-4444 | 1 5 | Leo | 555.55.55 | 5555-5555 | ``` the company table: ``` id | name | cnpj | phone_number | company_data | consumer 1 | companyA | 111.1111.11 | 1111-1111 | data1 | true 2 | companyB | 222.2222.22 | 2222-2222 | data2 | true 3 | companyC | 333.3333.33 | 3333-3333 | data3 | false ``` I want to select all the users where `company_id IS NULL` and all the companies where `consumer is true` What I'm trying to do is something like this: ``` Select u.name as name, u.cpf as document, u.phone_number as phoneNumber, 'false' as company FROM users u WHERE company_id is NULL UNION Select c.name as name, c.cnpj as document, c.phone_number as phoneNumber, c.company_data as companyData 'true' as company FROM company c WHERE c.consumer = 'true' ORDER BY id ``` And the answer I want is: ``` id | name | document | phone_number | companyData | company 1 | companyA | 111.1111.11 | 1111-1111 | data 1 | true 2 | companyB | 222.2222.22 | 2222-2222 | data 2 | true 2 | Marie | 222.22.22 | 2222-2222 | | false 5 | Leo | 555.55.55 | 5555-5555 | | false ``` I can accept an answer with the columns `cpf` and `cnpj` separated, and the results null if it doesn't apply to the selected entity. In this way I would not need the `company` column
You need the same number of columns in both selects; simply add a NULL column (you might have to cast it to a datatype): ``` Select u.name as name, u.cpf as document, u.phone_number as phoneNumber, CAST(NULL AS VARCHAR(20)) as companyData, 'false' as company FROM users u WHERE company_id is NULL UNION Select c.name as name, c.cnpj as document, c.phone_number as phoneNumber, c.company_data as companyData, 'true' as company FROM company c WHERE c.consumer = 'true' ORDER BY name ``` Note the `ORDER BY` of a UNION can only reference columns of the combined select list, so `ORDER BY id` would need `id` added to both selects.
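A runnable SQLite sketch of the UNION with a NULL placeholder column keeping both selects the same width; data comes from the question, with `consumer` stored as text for simplicity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT, cpf TEXT, company_id INTEGER);
    CREATE TABLE company (id INTEGER, name TEXT, cnpj TEXT, company_data TEXT, consumer TEXT);
    INSERT INTO users VALUES (1,'Jonh','111.11.11',1),(2,'Marie','222.22.22',NULL),
                             (3,'Paul','333.33.33',3),(4,'Luna','444.44.44',1),
                             (5,'Leo','555.55.55',NULL);
    INSERT INTO company VALUES (1,'companyA','111.1111.11','data1','true'),
                               (2,'companyB','222.2222.22','data2','true'),
                               (3,'companyC','333.3333.33','data3','false');
""")
rows = conn.execute("""
    SELECT u.id, u.name, u.cpf AS document, NULL AS companyData, 'false' AS company
    FROM users u WHERE u.company_id IS NULL
    UNION ALL
    SELECT c.id, c.name, c.cnpj, c.company_data, 'true'
    FROM company c WHERE c.consumer = 'true'
    ORDER BY id, company
""").fetchall()
print(rows)  # Marie and Leo plus the two consumer companies
```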
If `users` and `company` have the same columns you can use a union query ``` select u.name, null from users where company_id is null union select c.name, c.some_column from users u join company c on u.company_id = c.id ``` If they don't have the same columns, you'll have to specify the column names you want to select manually instead of selecting `*`
PostgreSQL - Merge two selects queries
[ "", "sql", "postgresql", "" ]
I have two tables in sql. One is a table of test cases and the other is a table of test runs with a foreign key that links back to a test case. I want to get the most recent 10 test runs for each test case. I don't want to loop through if I don't have to, but I don't see any other way to solve this problem. What is the most effective way to handle this sort of thing in sql server?
The idea: ``` select ... from <test cases> as tc outer apply ( select top 10 * from <test runs> as tr where tr.<test case id> = tc.<id> order by tr.<date time> desc ) as tr ``` or, if you just need to get data from the table: ``` ;with cte_test_runs as ( select *, row_number() over(partition by <test case id> order by <date time> desc) as rn from <test runs> ) select * from cte_test_runs where rn <= 10 ```
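The second query (top-N-per-group via `row_number`) can be tried in SQLite 3.25+, which supports window functions; this sketch keeps the top 2 runs per test case so the invented data stays small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_runs (test_case_id INTEGER, run_at TEXT)")
conn.executemany("INSERT INTO test_runs VALUES (?, ?)",
                 [(1, '2015-01-01'), (1, '2015-01-02'), (1, '2015-01-03'),
                  (2, '2015-02-01'), (2, '2015-02-02')])
rows = conn.execute("""
    SELECT test_case_id, run_at FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY test_case_id
                                     ORDER BY run_at DESC) AS rn
        FROM test_runs
    ) WHERE rn <= 2
    ORDER BY test_case_id, run_at DESC
""").fetchall()
# Only the two most recent runs of each test case survive.
print(rows)
```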
You can use `ROW_NUMBER`. Use an inner or left join as the case may be: ``` Select * from testCase a left outer join (Select Row_number() over (partition by testcase order by RecentDate desc) RowNo, * from TestRuns) b on a.pk = b.fk where b.RowNo <= 10 ```
How do I select a given number of rows for one table for each parent primary key in another table in sql server 2012?
[ "", "sql", "sql-server", "sql-server-2012", "iteration", "" ]
Here's my current dataset: ``` rname ename Advises Grad_student Advises Faculty Chairs Department Chairs Faculty ``` I'm trying to get it into this format: ``` rname ename1 ename2 advises grad_student faculty chairs department faculty ``` Here's what I've tried so far: ``` select distinct r1.rname, r1.ENAME as ename1,r2.ENAME as ename2 from [dbo].[RELATIONSHIPS] r1 inner join( select distinct RNAME, ENAME from [dbo].[RELATIONSHIPS]) r2 on r1.RNAME = r2.RNAME where r1.ENAME <> r2.ENAME order by r1.rname ``` Here's what I'm getting back: ``` rname ename1 ename2 Advises Grad_student Faculty Advises Faculty Grad_student Chairs Department Faculty Chairs Faculty Department ``` How would I fix my code in order to get only one row per group back?
You can use `row_number()` with `conditional aggregation`: ``` with cte as ( select rname, ename, row_number() over (partition by rname order by ename) rn from relationships ) select rname, max(case when rn = 1 then ename end) ename1, max(case when rn = 2 then ename end) ename2 from cte group by rname ``` * [SQL Fiddle Demo](http://www.sqlfiddle.com/#!6/3333f/3)
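The `row_number` + conditional-aggregation pivot runs unchanged in SQLite 3.25+ against the question's data. Note that `ename1`/`ename2` follow the `ORDER BY ename` inside the window, i.e. alphabetical within each `rname`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE relationships (rname TEXT, ename TEXT);
    INSERT INTO relationships VALUES
        ('Advises','Grad_student'),('Advises','Faculty'),
        ('Chairs','Department'),('Chairs','Faculty');
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT rname, ename,
               ROW_NUMBER() OVER (PARTITION BY rname ORDER BY ename) AS rn
        FROM relationships
    )
    SELECT rname,
           MAX(CASE WHEN rn = 1 THEN ename END) AS ename1,
           MAX(CASE WHEN rn = 2 THEN ename END) AS ename2
    FROM cte
    GROUP BY rname
    ORDER BY rname
""").fetchall()
print(rows)  # one pivoted row per rname
```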
In Oracle: ``` select * from (select rname, ename, RANK() over (partition by rname order by ename) Id FROM relationships) PIVOT(MAX(ename) for Id in (1 as ename1, 2 as ename2)) ```
How to pivot second column in tsql
[ "", "sql", "t-sql", "" ]
I'm using SUM to add up one of the fields but for the rest of the fields I would like those coming from the latest entry. **Current Query:** ``` SELECT cu.*, SUM(cu.use_count) AS total_use_count FROM coupons_used cu GROUP BY cu.coupon_id; ``` **Current result:** ``` Array ( [0] => stdClass Object ( [id] => 1 [timestamp] => 2015-05-08 04:52:31 [coupon_id] => 133 [user_id] => 104 [use_count] => 1 [redemption_code] => 123412341234 [total_use_count] => 2 ) ) ``` That timestamp is from the first entry and I'd like it to be from the last. I've tried adding `ORDER BY cu.timestamp` both `ASC` and `DESC` but I get the same result. `total_use_count` needs to stay as is but how do I get the rest of the fields to be from the last entry (per grouping of course)?
If the `(coupon_id,timestamp)` tuple is unique (that is, if we are guaranteed that there are not two (or more) rows with the same `timestamp` value for a given `coupon_id`, at least for the "latest" timestamp).... ``` SELECT c.id , c.timestamp , c.coupon_id , c.user_id , c.use_count , c.redemption_code , m.total_use_count FROM ( SELECT MAX(mt.timestamp) AS latest_timestamp , SUM(mt.use_count) AS total_use_count , mt.coupon_id FROM coupons_used mt GROUP BY mt.coupon_id ) m JOIN coupons_used c ON c.coupon_id = m.coupon_id AND c.timestamp = m.latest_timestamp ``` --- If the `(coupon_id,timestamp)` tuple isn't unique, that is, if there could be two rows with the same timestamp for a given `coupon_id`, and we want to return only a single row for each `coupon_id`... We can make use of a MySQL extension to the `GROUP BY` and add to the end of the query: ``` GROUP BY c.coupon_id ```
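The aggregate-then-join-back pattern above can be sketched in SQLite (column names simplified, data invented): group once to get the per-coupon total and latest timestamp, then join back to pick up the rest of that row's columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE coupons_used (id INTEGER, ts TEXT, coupon_id INTEGER, use_count INTEGER);
    INSERT INTO coupons_used VALUES
        (1, '2015-05-08 04:52:31', 133, 1),
        (2, '2015-05-09 10:00:00', 133, 1);
""")
rows = conn.execute("""
    SELECT c.id, c.ts, c.coupon_id, m.total_use_count
    FROM (
        SELECT coupon_id, MAX(ts) AS latest_ts, SUM(use_count) AS total_use_count
        FROM coupons_used GROUP BY coupon_id
    ) m
    JOIN coupons_used c
      ON c.coupon_id = m.coupon_id AND c.ts = m.latest_ts
""").fetchall()
# The latest row (id 2) carries the summed count for its coupon.
print(rows)  # [(2, '2015-05-09 10:00:00', 133, 2)]
```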
When you use `GROUP BY` the columns that you return should be one of two categories: * A column computed with an aggregate function, or * A column that is part of `GROUP BY` list. MySql relaxes this restriction by letting you put any columns in a select list, but this comes with understanding that *any* row in a group may be chosen by the engine at random to supply the value for your column. If you want the last time stamp, put a `MAX` function on it: ``` SELECT coupon_id , SUM(cu.use_count) AS total_use_count , MAX(cu.timestamp) AS timestamp FROM coupons_used cu GROUP BY cu.coupon_id; ```
mysql SUM aggregate with ORDER BY
[ "", "mysql", "sql", "" ]