I created an index called `abc` on a table called `table` with a `PRIMARY KEY` called `id` and three other `INT` columns `a`, `b` and `c` that can be `NULL`. Now I need this index to be `UNIQUE`, so I tried: ``` ALTER TABLE table DROP INDEX abc, ADD UNIQUE abc (a, b, c); ``` But I have duplicates, so MySQL answers: ``` #1062 - Duplicate entry '1-2-3' for key 'abc' ``` I probably have a lot of those duplicates, so I'm looking for an easy way to search & destroy them all. My first guess was: ``` SELECT * FROM table GROUP BY abc ``` But sadly it seems that you can't group by indexes. Is there an easy way to find those duplicates, keep one line of each duplicate and delete the others? EDIT: * `table` has an `id` field that is a `PRIMARY KEY` * `a`, `b` and `c` are all `INT` and can be `NULL`
No need to eliminate duplicates first, just use the [`IGNORE` option for `ALTER TABLE`](http://dev.mysql.com/doc/refman/5.1/en/alter-table.html) to do exactly what you want: ``` ALTER IGNORE TABLE table DROP INDEX abc, ADD UNIQUE abc (a, b, c); ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/3a98f/1). > If IGNORE is not specified, the copy is aborted and rolled back if duplicate-key errors occur. If IGNORE is specified, only one row is used of rows with duplicates on a unique key. The other conflicting rows are deleted. ...and always remember to back up your data before running potentially destructive SQL from random people on the Internet.
``` SELECT a,b,c FROM table GROUP BY a,b,c HAVING count(*)>1 ```
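A runnable sketch of the find-and-clean approach, using Python's built-in sqlite3 with a toy version of the question's schema (names and data are illustrative, not from the original post):

```python
import sqlite3

# Toy copy of the question's table: surrogate id plus three value columns
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a INT, b INT, c INT)")
conn.executemany("INSERT INTO t (a, b, c) VALUES (?, ?, ?)",
                 [(1, 2, 3), (1, 2, 3), (4, 5, 6)])

# Find groups that occur more than once
dupes = conn.execute("""
    SELECT a, b, c, COUNT(*) AS n
    FROM t GROUP BY a, b, c HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # [(1, 2, 3, 2)]

# Keep one row per group (the lowest id) and delete the rest
conn.execute("""
    DELETE FROM t
    WHERE id NOT IN (SELECT MIN(id) FROM t GROUP BY a, b, c)
""")
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2
```

One caveat: `GROUP BY` treats `NULL`s as equal to each other, while a MySQL `UNIQUE` index allows multiple rows with `NULL` in the indexed columns, so rows involving `NULL` may need separate handling.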
Find duplicates on multiple columns in a SQL table in order to create UNIQUE index
[ "", "mysql", "sql", "indexing", "duplicates", "" ]
The background of this question is that I have a column with the following definition: ``` FirstName VARCHAR(100) ``` I can insert a Thai/Chinese/European value if I change the column datatype to NVARCHAR, and when inserting a value I need to prefix it with N, as in: ``` INSERT INTO table ([FirstName]) VALUES (N'Thai/Chinese/European value') ``` Question: there are a lot of applications that update this particular column, and to support this change I would need to make a lot of changes to the procedures and various other application-level changes. Is there a way I can make a change at the database level that accommodates this?
> Is there a way I can make a change at the database level where I can accommodate this change. I don't believe there is any way to force SQL Server to handle all varchars as unicode nvarchars. They are simply different datatypes. If you are using literals in your SQL code, you will have to use `N''`. Any columns, parameters, or variables that hold the data will have to be nchar/nvarchar. Your apps will all have to send unicode values to the DB. I would search for "sql server migrate to unicode" for additional reading before you take this on.
While I agree with @TimLehner that I do not know of a way to force SQL Server to handle all `varchar` columns as `nvarchar` columns, there are a few things that could make *your transition* to Unicode strings in the column easier: 1. To support Unicode values in the column one-off or in an upgrade script, use `ALTER TABLE [table] ALTER COLUMN FirstName nvarchar(100)`. (Of course, be sure to update your create script for `[table]` if applicable too - i.e. `CREATE TABLE [table] (FirstName nvarchar(100)...)`.) 2. Use Unicode (i.e. `N'SomeFirstName'`) literals where you expect to insert or set strings with Unicode characters; *but* continue to use non-Unicode (i.e. `'SomeFirstName'`) literals where you do not *in transition*. 3. Work your way up to altering procedures' parameters (i.e. from `varchar` to `nvarchar`) as needed. Basically, *ideally* you would change the column and everything related to it to support Unicode at once; but you may be able to limit *initial changes* to application(s), procedure(s) etcetera that initially need to leverage the column's underlying Unicode support.
Inserting a Unicode string without the N'...' prefix
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Hello, I am using Microsoft SQL Server Management Studio with Visual Studio and I'm stuck with a SELECT command I am trying to achieve. ``` SELECT ... FROM ... WHERE (t.companyID = @companyId) AND (DATEDIFF(DateInterval.Day, @start, r.date) >= 0) AND (DATEDIFF(DateInterval.Day, @end, r.date) <= 0) // last row to add AND (CASE WHEN @status != 'show all' THEN r.poStatus = @status END) ``` The command works well, but now I need to add the last row as well (the user can choose a status, and only rows with the chosen status will be shown; if nothing is chosen it should show all the rows - I have VB code that passes the value 'show all' in that case). So I want an extra condition that skips the check if the value is 'show all' and otherwise shows only the rows with the selected value, but I can't find the right syntax to do this. Is it possible? P.S. I see it wasn't so clear what I meant: 'show all' means show the whole table, and only if the value is something other than 'show all' should the WHERE part select the matching rows.
I'd suggest not doing it this way at all, and instead using an `IF/ELSE` block: ``` IF @Status = 'show all' BEGIN SELECT ... FROM ... WHERE t.companyID = @companyId AND DATEDIFF(DateInterval.Day, @start, r.date) >= 0 AND DATEDIFF(DateInterval.Day, @end, r.date) <= 0 END ELSE BEGIN SELECT ... FROM ... WHERE t.companyID = @companyId AND DATEDIFF(DateInterval.Day, @start, r.date) >= 0 AND DATEDIFF(DateInterval.Day, @end, r.date) <= 0 AND r.poStatus = @status END ``` The reason for this is that unless you run your query with `OPTION (RECOMPILE)`, the query will use a full table scan even if you pass a value for `@status` and `poStatus` is indexed, which will not be optimal. The reason for choosing a table scan as opposed to an index seek is that at compilation time it doesn't know whether it should be returning all values for poStatus or just one. If this is a stored procedure, or is passed through `sp_executesql`, then the results could be worse still: at compile time the value will be known, so the query plan will be created based on the parameter value. However, each subsequent time it is run it will use the cached plan from this first run, and you could end up with a suboptimal query plan. For what it's worth, this predicate is not very good either; it is not [sargable](http://en.wikipedia.org/wiki/Sargable): ``` AND DATEDIFF(DateInterval.Day, @start, r.date) >= 0 AND DATEDIFF(DateInterval.Day, @end, r.date) <= 0 ``` Because you are evaluating a function of `r.date`, you cannot take advantage of any index on the column. It would be better written as: ``` AND r.Date >= @Start AND r.Date < DATEADD(DAY, 1, @End); ``` If you must do it all in one query for whatever reason, then as mentioned, you should use the query hint `OPTION (RECOMPILE)` to ensure the query is compiled at runtime, and an appropriate plan can be chosen for the given value of `@Status`: ``` SELECT ... FROM ... WHERE t.companyID = @companyId AND r.Date >= @Start AND r.Date < DATEADD(DAY, 1, @End) AND (r.poStatus = @status OR @Status = 'show all') OPTION (RECOMPILE); ```
``` (CASE WHEN @status != 'show all' THEN r.poStatus = @status END) ``` is equivalent to ``` IF @status = 'show all' THEN ;// no additional conditions ELSE // so @status != 'show all' r.poStatus = @status END IF; ``` So you can put it in standard logical condition: ``` AND (@status = 'show all' OR (@status != 'show all' AND r.poStatus = @status)) ```
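Whichever form you pick, the optional-filter predicate is easy to try out; a sketch with Python's sqlite3, where bound `?` parameters stand in for `@status` (table and values are invented):

```python
import sqlite3

# Hypothetical data to exercise (? = 'show all' OR poStatus = ?)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE r (id INTEGER PRIMARY KEY, poStatus TEXT)")
conn.executemany("INSERT INTO r (poStatus) VALUES (?)",
                 [("open",), ("closed",), ("open",)])

def rows_for(status):
    # The same parameter is bound twice, mirroring @status appearing twice
    return conn.execute(
        "SELECT id FROM r WHERE (? = 'show all' OR poStatus = ?)",
        (status, status)).fetchall()

print(len(rows_for("show all")))  # 3 - filter skipped
print(len(rows_for("open")))      # 2 - only matching rows
```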
SQL select statement - putting CASE inside the WHERE part
[ "", "sql", "case", "" ]
I need to select two fields but with different conditions. I tried this: ``` SELECT (select my_field from my_table where another_field=1) as column_one, (select my_field from my_table where another_field=2) as column_two; ``` but I got this error: `Subquery returns more than 1 row`. Is there a way to make this work?
In its current form, you can make it work by adding `limit` clause like this: ``` SELECT (select my_field from my_table where another_field=1 limit 1) as column_one, (select my_field from my_table where another_field=2 limit 1) as column_two; ``` However, logically it would not make any sense because `limit` clause without an accompanying `ORDER BY` does not guarantee specific results every time the query runs.
It can return more than one row, so you should use `limit 1`: ``` SELECT (select my_field from my_table where another_field=1 limit 1) as column_one,(select my_field from my_table where another_field=2 limit 1) as column_two; ``` A better query would join the table to itself instead: ``` select t1.my_field as column_one,t2.my_field as column_two from my_table as t1 left join (select my_field from my_table where another_field=2) t2 on t1.key_field=t2.key_field where t1.another_field=1; ```
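For what it's worth, the scalar-subquery form is simple to reproduce; a sketch with Python's sqlite3 (data invented; note that without ORDER BY the LIMIT 1 pick is arbitrary, which is exactly the caveat raised above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (my_field TEXT, another_field INT)")
# Two rows match another_field = 1, so LIMIT 1 is needed to get a scalar
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [("foo", 1), ("bar", 2), ("baz", 1)])

row = conn.execute("""
    SELECT
      (SELECT my_field FROM my_table WHERE another_field = 1 LIMIT 1) AS column_one,
      (SELECT my_field FROM my_table WHERE another_field = 2 LIMIT 1) AS column_two
""").fetchone()
print(row)  # column_one is whichever matching row the engine scans first
```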
Select a field twice with different conditions
[ "", "mysql", "sql", "field", "" ]
I have this query that shows how many hours, minutes and seconds a worker has worked in a day: ``` SELECT Employee_Number, CAST([DateTime] as Date) as 'DateTime', MIN([DateTime]) as 'MIN', MAX([DateTime]) as 'MAX', [Hours Worked] = CAST((DATEDIFF(HOUR , min([DateTime]), max([DateTime])) % 24 ) AS VARCHAR) +':'+ CAST((DATEDIFF(MINUTE, min([DateTime]), max([DateTime])) % 60) AS VARCHAR) + ':' + CAST((DATEDIFF(SECOND, min([DateTime]), max([DateTime])) % 60) AS VARCHAR) + '0' from tblExtract group by Employee_Number, Cast([DateTime] as Date) ``` My problem is that I want to format the Hours Worked column as hh:mm:ss. How can I do this?
``` CONVERT(NVARCHAR, MAX([DateTime]) - MIN([DateTime]), 108) AS [Hours Worked] ```
Use it like this: ``` [Hours Worked] = CONVERT(VARCHAR(15),CAST((DATEDIFF(HOUR , min([DateTime]),108) ``` or simply ``` [Hours Worked] = CONVERT(VARCHAR(15),min([DateTime]),108) ``` [Here](http://www.sqlusa.com/bestpractices/datetimeconversion/) are the different formats.
How to change the string using convert in ms sql?
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am trying to solve this "issue", however still without success. What I'd like to achieve is a query that will select all friends of a specific actor. Let's say I want to get a list of the first name, last name and age of Jason Statham's friends. Below is an image of the tables. PS: Are those tables correctly organized? (especially those foreign keys) Thanks in advance ![enter image description here](https://i.stack.imgur.com/aykNp.png)
Does this do what you're looking for? ``` SELECT actors.first_name, actors.last_name FROM actors WHERE actors.login IN ( SELECT friendslist.loginf FROM friendslist WHERE friendslist.logina = 'xstad' ) ```
You'll need to include the Actors table twice - once for the focus person (Jason Statham) and once for his friends. ``` SELECT CONCAT(A.first_name," ",A.last_name) AS Actor, CONCAT(B.first_name," ",B.last_name) AS Friend FROM Actors AS A JOIN [Friends List] AS F on A.login=F.loginA JOIN Actors AS B on F.loginB=B.login ORDER BY A.last_name, B.last_name ```
Mysql query for selecting friends
[ "", "mysql", "sql", "database", "select", "foreign-keys", "" ]
We have a log table where user processes log entries (`success`/`failure`/`timeout`) each time they run. For e.g. ``` +----+----------+---------+ | id | username | status | +----+----------+---------+ | 1 | saqib | success | | 2 | mary | timeout | | 3 | scott | timeout | | 4 | saqib | success | | 5 | mary | timeout | | 6 | scott | timeout | | 7 | saqib | timeout | | 8 | mary | timeout | | 9 | scott | timeout | +----+----------+---------+ ``` We would like to get the usernames which have had a success in the past but whose latest entry was a timeout (saqib in the above example). Is there a single query that can do this? Right now we are doing this using a PHP script, but we would like to use a MySQL query for this. Thanks
[**SQL Fiddle**](http://sqlfiddle.com/#!2/cddc6/2/0) ``` SELECT DISTINCT m1.username FROM ( SELECT s1.username, s1.ids FROM ( SELECT username, MAX(id) as ids FROM MyTable GROUP BY username ) AS s1 INNER JOIN MyTable AS s2 ON s1.ids = s2.id WHERE s2.status = 'timeout' ) AS m1 INNER JOIN MyTable m2 ON m1.username = m2.username AND m2.status = 'success' ```
You can retrieve the latest `id` for each `username` and then `JOIN` it with the original table, checking if there were entries for each user with `status` `success` and an `id` less than the maximum. ``` SELECT t.* FROM ( SELECT username , MAX(id) as ind FROM tbl GROUP BY username ) x JOIN tbl t ON t.username = x.username AND t.id = x.ind AND t.status IN ('timeout', 'failure') AND EXISTS ( SELECT * FROM tbl WHERE username = x.username AND id < x.ind AND status = 'success' ) ``` [**Example**](http://www.sqlfiddle.com/#!2/82836/5)
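Both answers hinge on joining each user's MAX(id) row back to the log; here is that shape run against the question's sample rows via Python's sqlite3 (schema guessed from the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, username TEXT, status TEXT)")
# The nine rows from the question, ids assigned 1..9 in order
rows = [("saqib", "success"), ("mary", "timeout"), ("scott", "timeout"),
        ("saqib", "success"), ("mary", "timeout"), ("scott", "timeout"),
        ("saqib", "timeout"), ("mary", "timeout"), ("scott", "timeout")]
conn.executemany("INSERT INTO log (username, status) VALUES (?, ?)", rows)

# Latest row per user must be a timeout/failure AND an earlier success must exist
result = conn.execute("""
    SELECT t.username
    FROM (SELECT username, MAX(id) AS ind FROM log GROUP BY username) x
    JOIN log t ON t.username = x.username AND t.id = x.ind
    WHERE t.status IN ('timeout', 'failure')
      AND EXISTS (SELECT 1 FROM log
                  WHERE username = x.username AND id < x.ind
                    AND status = 'success')
""").fetchall()
print(result)  # [('saqib',)]
```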
SQL query to compare earlier values in the Table
[ "", "mysql", "sql", "join", "subquery", "" ]
I have a Rails application and it's using MySQL. I have a `piles` table with two columns that I care about. The columns are `name_he_il` and `name_en_us`. I don't have a problem doing these ``` select name_he_il from piles; select name_en_us from piles; ``` I need to insert data into the `name_he_il` column in the `piles` table where `name_en_us` = "a specific value". I tried something like this ``` insert into piles (name_he_il) values 'לא מאפיין כלל' where name_en_us = "Extremely Uncharacteristic"; ``` I am getting a syntax error. I was googling and I figured the SQL should be `insert into table (column 1) values (blah) where conditions;` but it's not working. Basically that Hebrew text means "extremely uncharacteristic".
Do UPDATE and not INSERT: ``` UPDATE piles SET name_he_il = 'לא מאפיין כלל' WHERE name_en_us = "Extremely Uncharacteristic"; ```
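A minimal sketch of the UPDATE-vs-INSERT point, run via Python's sqlite3 (the Hebrew literal is the question's value, reproduced here without the encoding damage):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE piles (name_he_il TEXT, name_en_us TEXT)")
conn.execute("INSERT INTO piles (name_en_us) VALUES ('Extremely Uncharacteristic')")

# UPDATE modifies the rows matched by WHERE; INSERT would have added a new row
conn.execute("""UPDATE piles SET name_he_il = 'לא מאפיין כלל'
                WHERE name_en_us = 'Extremely Uncharacteristic'""")
val = conn.execute("SELECT name_he_il FROM piles").fetchone()[0]
print(val)
```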
You want to use `UPDATE ... WHERE`. `INSERT` is for creating new records only.
insert statement sql inserting values into a table in mysql with a where clause condition
[ "", "mysql", "sql", "" ]
I am very much a beginner and I completely get what `NOT IN` does, but don't really get `EXISTS` or `NOT EXISTS`. Even more, I don't understand what this does: ``` SELECT TOP 1 1 FROM tblSomeTable ``` What does this query actually do? For reference, I have been working with something like this: ``` SELECT COUNT(E_ID) FROM tblEmployee e INNER JOIN tblManager m ON e.tbl_ID = m.tbl_ID WHERE NOT EXISTS(SELECT TOP 1 1 FROM tblEmployee e2 WHERE e2.E_ID = e.E_ID AND isFired = 'N' ) ``` I suppose I haven't read/seen a layman's explanation yet that makes sense to me. Even after reading [Diff between Top 1 1 and Select 1 in SQL Select Query](https://stackoverflow.com/questions/19359691/diff-between-top-1-1-and-select-1-in-sql-select-query) I still don't get it
Your first query will get you only the top-most record (the very first record) out of the total rows in the result set. So, if your query returns 10 rows, you will get the first row. Read more about [TOP](http://msdn.microsoft.com/en-us/library/ms189463.aspx) ``` SELECT TOP 1 1 FROM tblSomeTable ``` In your second query the part inside `()` is a subquery; in your case it's a correlated subquery, which will be evaluated once for each row processed by the outer query. `NOT EXISTS` will actually check for the existence of the rows produced by the subquery ``` WHERE NOT EXISTS ( SELECT TOP 1 1 FROM tblEmployee e2 WHERE e2.E_ID = e.E_ID AND isFired = 'N' ) ``` Read more about [Correlated subquery](http://en.wikipedia.org/wiki/Correlated_subquery) as well as [Subqueries with EXISTS](http://technet.microsoft.com/en-us/library/ms189259%28v=sql.105%29.aspx)
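A small demonstration of the EXISTS point using Python's sqlite3 (SQLite has no TOP, so LIMIT 1 stands in; table and data are made up). The select list inside EXISTS is irrelevant: the engine only checks whether any row comes back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (e_id INT, isFired TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 'N'), (2, 'Y')])

# NOT EXISTS keeps rows whose correlated subquery finds nothing;
# the constant 1 in the subquery's select list is never materialized.
q = """SELECT e_id FROM emp e
       WHERE NOT EXISTS (SELECT 1 FROM emp e2
                         WHERE e2.e_id = e.e_id AND e2.isFired = 'N'
                         LIMIT 1)"""
rows = conn.execute(q).fetchall()
print(rows)  # [(2,)] - only e_id 2 has no 'N' row
```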
The question that I think would actually need answering is whether `EXISTS (SELECT TOP 1 1 FROM MyTable)` is actually necessary. `Top 1 1` is telling the query to pick the constant "1" for any answer. The `Top 1` part is telling it to stop as soon as it finds a match and returns "1". Wouldn't `EXISTS (SELECT TOP 1 FROM MyTable)` be sufficient?
NOT IN vs NOT EXISTS and select 1 1?
[ "", "sql", "sql-server", "exists", "not-exists", "" ]
I have a custom crystal report which retrieves invoices from a database. There is a formula in the report that has the following code: ``` V6AttachmentsGetAttachment ({Command.AttachmentID},{?ReportAttachmentChannel} ) ``` From my understanding, the formula has a function called 'V6AttachmentsGetAttachment' which takes two parameters (the first is a report field and the second is a report parameter). This calculates a dynamic hyperlink. How can I determine how exactly this is calculated? I am trying to figure out if I can replicate this calculation in SQL. Is `V6AttachmentsGetAttachment` something that is stored in the SQL database? I can not find any references to it in Crystal Reports.
It could either be a custom function or one contained in a user-function library (UFL). **Custom Function** * Edit a formula (any formula will do) * Assuming that it is a custom function, it will be listed below the Report Custom Functions node: ![enter image description here](https://i.stack.imgur.com/f7vb2.png) **UFL** If this function is contained in a user-function library (UFL), you should see it listed in the Function tree: ![enter image description here](https://i.stack.imgur.com/pFfYQ.png) u252000.dll contains a single function `DateTimeto2000()`. If this is the case, then you will need to locate the source code for the UFL.
This is a custom function created by Viewpoint Construction Software (V6). It is used to connect with Construction Imaging, a document management system, to return images of things like invoices. I believe the custom function is available if you have the Viewpoint client installed.
How do I determine what a Crystal Reports function does?
[ "", "sql", "function", "crystal-reports", "formula", "" ]
I have the following sample data in an Oracle table (`tab1`) and I am trying to convert rows to columns. I know how to use Oracle pivot on one column. But is it possible to apply it to multiple columns? Sample data: ``` Type weight height A 50 10 A 60 12 B 40 8 C 30 15 ``` My intended output: ``` A-count B-count C-count A-weight B-weight C-weight A-height B-height C-height 2 1 1 110 40 30 22 8 15 ``` What I can do: ``` with T AS (select type, weight from tab1 ) select * from T PIVOT ( count(type) for type in (A, B, C, D,E,F) ) ``` The above query gives me the below result ``` A B C 2 1 1 ``` I can replace `count(*)` with `sum(weight)` or `sum(height)` to pivot height or weight. What I am looking to do, but I can't do, is pivot on all three (count, weight and height) in one query. Can it be done using pivot?
As [the documentation shows](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10002.htm#SQLRF55129), you can have multiple aggregate function clauses. So you can do this: ``` select * from ( select * from tab1 ) pivot ( count(type) as ct, sum(weight) as wt, sum(height) as ht for type in ('A' as A, 'B' as B, 'C' as C) ); A_CT A_WT A_HT B_CT B_WT B_HT C_CT C_WT C_HT ---- ---- ---- ---- ---- ---- ---- ---- ---- 2 110 22 1 40 8 1 30 15 ``` If you want the columns in the order you showed then add another level of subquery: ``` select a_ct, b_ct, c_ct, a_wt, b_wt, c_wt, a_ht, b_ht, c_ht from ( select * from ( select * from tab1 ) pivot ( count(type) as ct, sum(weight) as wt, sum(height) as ht for type in ('A' as A, 'B' as B, 'C' as C) ) ); A_CT B_CT C_CT A_WT B_WT C_WT A_HT B_HT C_HT ---- ---- ---- ---- ---- ---- ---- ---- ---- 2 1 1 110 40 30 22 8 15 ``` [SQL Fiddle](http://sqlfiddle.com/#!4/42497/1).
The second approach to name the columns is even better and solves more problems. I had a requirement where I wanted to sum up the data returned from PIVOT so having column names I could simply add 2 and get the required result in third one - ``` select a_ct, b_ct, c_ct, a_wt, b_wt, c_wt, a_ht, b_ht, c_ht, a_wt + b_wt + c_wt tot_wt from ( select * from ( select * from tab1 ) pivot ( count(type) as ct, sum(weight) as wt, sum(height) as ht for type in ('A' as A, 'B' as B, 'C' as C) ) ); A_CT B_CT C_CT A_WT B_WT C_WT A_HT B_HT C_HT TOT_WT ---- ---- ---- ---- ---- ---- ---- ---- ---- ------ 2 1 1 110 40 30 22 8 15 180 ``` Just beware that aggregate functions (like sum) won't behave as expected if one of the PIVOT column used returns null, in that case I have used CASE statement to get around it. Hope it helps someone.
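For engines without a PIVOT operator, the same one-row result can be produced with conditional aggregation - a portable sketch (run here through Python's sqlite3 with the question's sample data; this is an equivalent technique, not Oracle's PIVOT syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (type TEXT, weight INT, height INT)")
conn.executemany("INSERT INTO tab1 VALUES (?, ?, ?)",
                 [("A", 50, 10), ("A", 60, 12), ("B", 40, 8), ("C", 30, 15)])

# One aggregate per (type, measure) pair; SUM of a boolean counts matches
row = conn.execute("""
    SELECT
      SUM(type = 'A')                           AS a_ct,
      SUM(type = 'B')                           AS b_ct,
      SUM(type = 'C')                           AS c_ct,
      SUM(CASE WHEN type = 'A' THEN weight END) AS a_wt,
      SUM(CASE WHEN type = 'B' THEN weight END) AS b_wt,
      SUM(CASE WHEN type = 'C' THEN weight END) AS c_wt
    FROM tab1
""").fetchone()
print(row)  # (2, 1, 1, 110, 40, 30)
```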
Using pivot on multiple columns of an Oracle row
[ "", "sql", "oracle", "oracle11g", "pivot", "" ]
For example, I have a table like this: ``` ---------------------- | id | Name | Parent | ...................... | 1 | Joe | '' | | 2 | Alice| '' | | 3 | Manny| '' | | 4 | kid1 | 1 | | 5 | kid2 | 1 | | 6 | kid3 | 3 | ``` and I want to display it in a hierarchical manner like this: ``` | id | Name | Parent | ...................... | 1 | Joe | '' | | 4 | kid1 | 1 | | 5 | kid2 | 1 | | 2 | Alice| '' | | 3 | Manny| '' | | 6 | kid3 | 3 | ``` Can I do it using only SQL commands? --- Thank you so much guys. I'm new to Stack Overflow, yet I'm already amazed by how fast you have answered my question. @amar duplantier, thanks for the link, it solved my problem!! I couldn't find that thread when I searched before. I'm sorry I didn't provide enough information in my question. Here's the code I use based on amar's link: ``` select * from Table a order by case when Parent = '' then id else ( select id from Table parent where parent.id = a.Parent ) end DESC ```
If the `''` in the Parent column means an empty string, then ``` SELECT * FROM table1 ORDER BY Concat(Parent, ID) , ID ``` will return a result set in the order the OP wants (if there is only one level of hierarchy).
You didn't mention a database in particular, so allow me to provide a solution using Oracle. In Oracle you can use START WITH and CONNECT BY: `select id, Name, Parent from your_table start with parent is null connect by prior id = parent;` Not all databases have CONNECT BY, such as MySQL, but you can simulate it using functions: <http://explainextended.com/2009/03/17/hierarchical-queries-in-mysql/>
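CONNECT BY is Oracle-specific, but engines with recursive CTEs (SQLite, PostgreSQL, MySQL 8+) can build the same parent-then-children ordering by carrying a sort path; a sketch via Python's sqlite3 (NULL stands in for the question's `''` parent marker):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INT, name TEXT, parent INT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
    [(1, "Joe", None), (2, "Alice", None), (3, "Manny", None),
     (4, "kid1", 1), (5, "kid2", 1), (6, "kid3", 3)])

# Each row carries a zero-padded path like '0001/0004'; sorting by the
# path places every child directly under its parent.
rows = conn.execute("""
    WITH RECURSIVE tree(id, name, path) AS (
      SELECT id, name, printf('%04d', id) FROM people WHERE parent IS NULL
      UNION ALL
      SELECT p.id, p.name, t.path || '/' || printf('%04d', p.id)
      FROM people p JOIN tree t ON p.parent = t.id
    )
    SELECT id, name FROM tree ORDER BY path
""").fetchall()
print([r[1] for r in rows])  # ['Joe', 'kid1', 'kid2', 'Alice', 'Manny', 'kid3']
```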
SELECT the table in hierarchy manner
[ "", "mysql", "sql", "" ]
I have two databases with the same number of tables and the same table structure. I want to copy data from one table into another with a WHERE condition. I have tried the query below. Is it correct? ``` INSERT INTO db2.table (SELECT * FROM db1.table t where t.restaurant_id=12); ``` Please help. **Update**: I am looking for a single query similar to the above.
The `v_db` and `f_db` databases are located on the same server and it works for me: `INSERT INTO v_db.app_user (SELECT * FROM f_db.app_user AS t WHERE t.user_id = 100003083401232)`
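The same single-statement, cross-database INSERT ... SELECT can be demonstrated with SQLite's ATTACH; a sketch in Python (database and table names are illustrative):

```python
import sqlite3

# Two separate databases on one connection, mirroring db1/db2 on one server
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS db2")
conn.execute("CREATE TABLE main.t (restaurant_id INT, name TEXT)")
conn.execute("CREATE TABLE db2.t (restaurant_id INT, name TEXT)")
conn.executemany("INSERT INTO main.t VALUES (?, ?)",
                 [(12, "a"), (12, "b"), (99, "c")])

# Copy only the rows matching the WHERE condition across databases
conn.execute("""INSERT INTO db2.t
                SELECT * FROM main.t WHERE restaurant_id = 12""")
n = conn.execute("SELECT COUNT(*) FROM db2.t").fetchone()[0]
print(n)  # 2
```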
Please try this query ``` SELECT * INTO destination_database.dbo.destination_table FROM source_database.dbo.source_table WHERE 1 = 2 ```
copy table data from one db to another with where condition
[ "", "mysql", "sql", "" ]
Suppose I have 2 years of data, from January 2010 to December 2011. I want to classify each of the months as periods. So January 2010 will be my 1, February 2010 my 2, and so on until December 2011, my period 24. I know I could do it with something like: ``` select year,mn, case when year=2010 and mn=01 then 1 when year=2010 and mn=02 then 2 when year=2010 and mn=03 then 3 //and so on until // when year=2011 and mn=12 then 24 end from mytable; ``` The result would be something like: ``` year mn period 2010 1 1 2010 2 2 2010 3 3 2010 4 4 2010 5 5 2010 6 6 2010 7 7 2010 8 8 2010 9 9 2010 10 10 2010 11 11 2010 12 12 2011 1 13 2011 2 14 2011 3 15 2011 4 16 2011 5 17 2011 6 18 2011 7 19 2011 8 20 2011 9 21 2011 10 22 2011 11 23 2011 12 24 ``` I want to avoid this kind of long and unwieldy method.
``` select year, mn, row_number() over (order by year, mn) as period from t ```
A cheap version for this particular case: ``` SELECT year, mn, (year - 2010) * 12 + mn AS period FROM tbl; ``` This would also account for months that may be missing in your data. And it would give you consistent numbers even when only selecting *some* rows.
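The arithmetic version is easy to spot-check outside the database; a tiny Python sketch (function name is my own):

```python
def period(year, month, base_year=2010):
    """Month index counting January of base_year as period 1."""
    return (year - base_year) * 12 + month

# Spot-check against the question's expected mapping
print(period(2010, 1), period(2010, 12), period(2011, 1), period(2011, 12))
# 1 12 13 24
```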
Classifying months in periods
[ "", "sql", "postgresql", "row-number", "" ]
Probably this will be really easy, but I can't figure out how to get the necessary values from my DB with one query. I'm going to make this query inside the CodeIgniter system. Table 'information' structure: ``` CREATE TABLE information ( planid int(11) NOT NULL, production_nr int(11) NOT NULL, status int(11) NOT NULL ); ``` Table 'information' content: ![enter image description here](https://i.stack.imgur.com/EmvMh.jpg) Necessary output: I would like to get (ideally with only one query, but if that's not possible, then with multiple) all planids where ALL of the planid's production_nrs have status >= 3. In this case, I would need to get these planids: 2 and 5, because for each of these planids ALL production_nrs have a status greater than or equal to 3.
``` select planid, production_nr from information inf1 where not exists (select 1 from information inf2 where inf1.planid = inf2.planid and status < 3) ``` You might consider amending the select clause (first row) according to your needs: * Add distinct (if the table PK includes status column) * Change the column list
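The NOT EXISTS approach is easy to check; since the screenshot's rows aren't reproduced in the text, the data below is hypothetical but arranged so that plans 2 and 5 qualify (run via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE information (planid INT, production_nr INT, status INT)")
# Hypothetical rows standing in for the screenshot: every row of plans
# 2 and 5 has status >= 3; plan 1 has one row below 3.
conn.executemany("INSERT INTO information VALUES (?, ?, ?)",
    [(1, 10, 4), (1, 11, 2),
     (2, 20, 3), (2, 21, 5),
     (5, 50, 3)])

# Keep a plan only if no row of that plan has status < 3
plans = conn.execute("""
    SELECT DISTINCT planid FROM information i1
    WHERE NOT EXISTS (SELECT 1 FROM information i2
                      WHERE i2.planid = i1.planid AND i2.status < 3)
    ORDER BY planid
""").fetchall()
print(plans)  # [(2,), (5,)]
```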
Try this, ``` SELECT planid , production_nr FROM information WHERE production_nr IN(SELECT production_nr FROM information) AND STATUS >=3 ```
SQL algorithm with three identifiers from one table
[ "", "sql", "database", "algorithm", "codeigniter", "" ]
# Setup Rails' `where` method can take a range in a hash to generate a query that will search for a value that is within the range. For example: ``` User.where(cash_money: 10..1000) #=> SELECT `users`.* FROM `users` WHERE (`users`.`cash_money` BETWEEN 10 AND 1000) ``` This can also be used with timestamps like ``` User.where(last_deposit: 10.days.ago..1000.days.ago) #=> SELECT `users`.* FROM `users` WHERE (`users`.`last_deposit` BETWEEN '2014-05-19 14:42:36' AND '2011-09-02 14:42:36') ``` I've found that you can do a simple less than or greater than with numbers using the hash syntax like this ``` User.where(cash_money: 10..Float::INFINITY) #=> SELECT `users`.* FROM `users` WHERE (`users`.`cash_money` >= 10) ``` and the same can be done with `-Float::INFINITY` for less than queries. # Question Is there a way to do this with timestamps so I can get a query like the following? ``` SELECT `users`.* FROM `users` WHERE (`users`.`last_deposit` >= '2014-05-19 14:42:36') ``` I cannot use `Float::INFINITY` or `Date::Infinity` with a range as they both error with `ArgumentError: bad value for range`. # Current Simple Solution ``` User.where('`users`.`last_deposit` >= ?', 10.days.ago) ``` will generate the same SQL but if this can be done with objects other than strings, I'd like to do so. # Potential (Meh) Answer This is kind of lousy but it could be done with ranges using `Time.at(0)` and `Time.at(Float::MAX)`. I have a feeling these could result in equally lousy SQL queries though.
# Edit 2 5/9/20 If you are using Ruby 2.6 you can do this with endless ranges and in Ruby 2.7 you can use begin-less ranges. E.g.: ``` # Ruby >= 2.6 User.where(last_deposit: 10.days.ago..) ``` generates ``` SELECT "users".* FROM "users" WHERE "user"."last_deposit" >= '2020-04-29 21:58:39.109419'" ``` and ``` # Ruby >= 2.7 User.where(last_deposit: ..10.days.ago) ``` generates ``` SELECT "users".* FROM "users" WHERE "users"."last_deposit" <= '2020-04-29 22:01:05.582055' ``` # Edit This is now possible in Rails 5! ``` User.where(last_deposit: 10.days.ago..DateTime::Infinity.new) ``` will generate the SQL ``` SELECT `users`.* FROM `users` WHERE (`users`.`last_deposit` >= '2018-06-30 17:08:54.130085'). ``` # Original (and Rails < 5) Answer It does not appear as if there is a way to use basic `where` hash syntax to generate a greater than or less than query for timestamps. The simplest and most readable way is outlined in my question under `Current Simple Solution`. Another way to do it makes use of ARel but you have to make some less commonly seen calls. First you can get a handle to the AR class' ARel table, access the column, pass the result of the greater than `gt`, greater than or equal to `gteq`, less than `lt`, and/or less than or equal to `lteq` method with an argument to `where`. In the situation above this would be done like: ``` last_deposit_column = User.arel_table[:last_deposit] last_deposit_over_ten_days_ago = last_deposit_column.gteq(10.days.ago) User.where(last_deposit_over_ten_days_ago) ```
Did you try this?: ``` User.where(last_deposit: Time.at(0)...10.days.ago) ``` SQL: ``` SELECT `users`.* FROM `users` WHERE (`users`.`last_deposit` >= '1970-01-01 00:00:00' AND `users`.`last_deposit` < '2015-01-10 17:15:19') ```
Rails `where` for time less than queries
[ "", "mysql", "sql", "ruby-on-rails", "date", "arel", "" ]
There seem to be a few blog posts on this topic, but the solutions really are not so intuitive. Surely there's a "canonical" way? I'm using Teradata SQL. How would I select 1. A range of numbers 2. A date range E.g. ``` SELECT 1:10 AS Nums SELECT 1-1-2010:5-1-2014 AS Dates1 ``` The result would be 10 rows (1 - 10) in the first SELECT query and ~(365 * 3.5) rows in the second?
The "canonical" way to do this in SQL is using recursive CTEs, which the more recent versions of Teradata support. For your first example: ``` with recursive nums(n) as ( select 1 as n union all select n + 1 from nums where n < 10 ) select * from nums; ``` You can do something similar for dates. EDIT: You can also do this by using `row_number()` and an existing table: ``` with nums(n) as ( select n from (select row_number() over (order by col) as n from ExistingTable t ) t where n <= 10 ) select * from nums; ``` `ExistingTable` is just any table with enough rows. The best choice of `col` is the primary key. Another option is to enumerate the values explicitly in a CTE: ``` with digits(n) as ( select 1 as n union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 union all select 10 ) select * from digits; ``` If your version of Teradata supports multiple CTEs, you can build on the above: ``` with digits(n) as ( select 1 as n union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 union all select 10 ), nums(n) as ( select d1.n*100 + d2.n*10 + d3.n from digits d1 cross join digits d2 cross join digits d3 ) select * from nums; ```
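The recursive-CTE technique is portable; the same number and date generators run unchanged on SQLite, for example (sketch via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Numbers 1..10
nums = conn.execute("""
    WITH RECURSIVE nums(n) AS (
      SELECT 1 UNION ALL SELECT n + 1 FROM nums WHERE n < 10
    )
    SELECT n FROM nums
""").fetchall()
print(nums[0], nums[-1], len(nums))  # (1,) (10,) 10

# Dates 2010-01-01 .. 2014-05-01, one row per day
days = conn.execute("""
    WITH RECURSIVE dates(d) AS (
      SELECT DATE('2010-01-01')
      UNION ALL
      SELECT DATE(d, '+1 day') FROM dates WHERE d < '2014-05-01'
    )
    SELECT COUNT(*) FROM dates
""").fetchone()[0]
print(days)  # 1582 rows, both endpoints included
```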
In Teradata you can use the existing sys\_calendar to get those dates: ``` SELECT calendar_date FROM sys_calendar.CALENDAR WHERE calendar_date BETWEEN DATE '2010-01-01' AND DATE '2014-05-01'; ``` Note: `DATE '2010-01-01'` is the only recommended way to write a date in Teradata There's probably another custom calendar for the specific business needs of your company, too. Everyone will have access rights to it. You might also use this for the range of numbers: ``` SELECT day_of_calendar FROM sys_calendar.CALENDAR WHERE day_of_calendar BETWEEN 1 AND 10; ``` But you should check Explain to see if the estimated number of rows is correct. sys\_calendar is a kind of template and day\_of\_calendar is a calculated column, so no statistics exists on that and Explain will return an estimated number of 14683 (20 percent of the number of rows in that table) instead of 10. If you use it in additional joins the optimizer might do a bad plan based on that totally wrong number. Note: If you use sys\_calendar you are limited to a maximum of 73414 rows, dates between 1900-01-01 and 2100-12-31 and numbers between 1 and 73414, your business calendar might vary. Gordon Linoff's recursive query is not really efficient in Teradata, as it's a sequential row-by-row processing in a parallel database (each loop is an "all-AMPs step" in Explain) and the optimizer doesn't know how many rows will be returned. If you need those ranges regularly you might consider creating a numbers table, I usually got one with a million rows or I use my calendar with the full range of 10000 years :-) ``` --DROP TABLE nums; CREATE TABLE nums(n INT NOT NULL PRIMARY KEY CHECK (n BETWEEN 0 AND 999999)); INSERT INTO Nums WITH cte(n) AS ( SELECT day_of_calendar - 1 FROM sys_calendar.CALENDAR WHERE day_of_calendar BETWEEN 1 AND 1000 ) SELECT t1.n + t2.n * 1000 FROM cte t1 CROSS JOIN cte t2; COLLECT STATISTICS COLUMN(n) ON Nums; ``` The COLLECT STATS is the most important step to get correct estimates. 
Now it's a simple ``` SELECT n FROM nums WHERE n BETWEEN 1 AND 10; ``` There's also a nice UDF on [GitHub](https://github.com/akuroda/teradata-udf-gen-sequence) for creating sequences which is easy to use: ``` SELECT DATE '2010-01-01' + SEQUENCE FROM TABLE(gen_sequence(0,DATE '2014-05-01' - DATE '2010-01-01')) AS t; SELECT SEQUENCE FROM TABLE(gen_sequence(1,10)) AS t; ``` But it's usually hard to convince your DBA to install any C-UDFs and the number of rows returned is unknown again.
Selecting a sequence in SQL
[ "", "sql", "teradata", "" ]
Anyone got any idea why this doesn't work? I'm at a loss. ![enter image description here](https://i.stack.imgur.com/MOqgp.jpg) The following ``` SELECT * FROM tblCustomerDetails WHERE AccountNo='STO00900' ``` returns nothing, however if I run the same query with any other account number it works, and this account will show when I run ``` SELECT TOP 10 * FROM tblCustomerDetails ORDER BY ID desc ``` The picture explains it better. Thanks
Try ``` SELECT * FROM tblCustomerDetails WHERE AccountNo LIKE '%STO00900%' ``` As there can be hidden characters.
Try as Notulysses suggested, but I would recommend it a bit differently: ``` SELECT * FROM tblCustomerDetails WHERE LTRIM(RTRIM(AccountNo)) = 'STO00900' ``` The `LIKE` operator will likely match more rows than you need (if the `AccountNo` column is not unique), so I'd go with trimming the whitespaces and then checking for a specific account.
SIMPLE SQL Select Where Query
[ "", "sql", "sql-server-2008", "" ]
I have a table of game records, call it "game". It has an id and a timestamp. What I need to know is unrelated to the table specifically. In order to know the average number of games played per hour, I need to know: * Total games played for each hour over the date range * Number of hourly periods between the date range. Finding the first is a matter of extracting the hour from the timestamp and grouping by it. For the second, if the date range was rounded to the nearest day, finding this value would be easy (totalgames/numdays). Unfortunately I can't assume this. What I need help with is finding the number of **specific** hour periods existing within a time range. Example: If the range is 5 PM today to 8 PM tomorrow, there is one "00" hour (midnight to 1 AM), but **two** 17, 18, 19 hours (5-6, 6-7, 7-8). Thanks for the help. Edit: for clarity, consider the following query: I have table game: id, daytime ``` select EXTRACT(hour from daytime) as hour_period, count (*) from game where daytime > dateFrom and daytime < dayTo group by hour_period ``` This will give me the number of games played broken down into hourly chunks for the time period. In order to find the average games played per hour, I need to know exactly how many **specific** hour durations are between two timestamps. Simply dividing by the number of days is not accurate. Edit: The ideal output will look something like this: ``` 00 275 01 300 02 255 ... ``` Consider the following: How many times does midnight occur between date 1 and date 2? If you have 1.5 days, that doesn't guarantee that midnight will occur twice. 6 AM today to 6 PM tomorrow night, for example, has 1 midnight, but 9 PM tonight to 9 AM two days from now has 2 midnights. What I'm trying to find is how many times the EXACT HOUR occurs between two timestamps, so I can use it to average the number of games played at THAT HOUR over a time period.
**EDIT**: The following query gets the days, hours, and # of games, giving an output as below: ``` 29 23 100 29 00 130 30 22 140 30 23 150 ``` Then, the outer query adds up the number of games for each distinct hour and divides by the number of hours, as follows ``` 22 140 23 125 00 130 ``` The modified query is below: ``` SELECT hour_period, sum(hourly_no_of_games) / count(hour_period) FROM ( SELECT EXTRACT(DAY from daytime) as day_period, EXTRACT(HOUR from daytime) as hour_period, count (*) hourly_no_of_games from game where daytime > dateFrom and daytime < dayTo group by EXTRACT(DAY from daytime), EXTRACT(HOUR from daytime) ) hourly_data GROUP BY hour_period ORDER BY hour_period; ``` `SQL Fiddle demo`
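The same two-level aggregation can be sketched in SQLite via Python; `strftime` stands in for `EXTRACT`, and the sample rows are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE game (daytime TEXT);
    -- day 29, hour 23: two games; day 30, hour 23: one game
    INSERT INTO game VALUES
      ('2014-05-29 23:10:00'),
      ('2014-05-29 23:20:00'),
      ('2014-05-30 23:15:00');
""")

# Inner query: games per (day, hour); outer query: average per hour-of-day.
rows = conn.execute("""
    SELECT hour_period, SUM(n) * 1.0 / COUNT(*) AS avg_games
    FROM (SELECT strftime('%d', daytime) AS day_period,
                 strftime('%H', daytime) AS hour_period,
                 COUNT(*) AS n
          FROM game
          GROUP BY day_period, hour_period)
    GROUP BY hour_period
""").fetchall()
print(rows)  # [('23', 1.5)] — (2 + 1) games over 2 distinct day-hours
```

Note that, as in the original answer, hours with zero games never appear in the inner query, so they don't pull the average down.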
If you need something to GROUP BY, you can truncate the timestamp to the level of hour, as in the following: ``` DECLARE @Date DATETIME SET @Date = GETDATE() SELECT @Date, DATEADD(Hour, DATEDIFF(Hour, 0, @Date), 0) AS RoundedDate ``` If you just need to find the total hours, you can just select the DATEDIFF in hours, such as with ``` SELECT DATEDIFF(Hour, '5/29/2014 20:01:32.999', GETDATE()) ```
Number of specific one-hour periods between two date/times
[ "", "sql", "date", "" ]
I have voting sites where each site is a row in the table. Now on my web server, I need to load all sites, and check for each site if it's been voted on (voted meaning the site id exists in a row of the other table). So if the site id = 5 and a row with `site_id` 5 exists in `callback_votes`, then the query will add that id as `'voted'`; if not, it will be null. **example:** ``` SELECT sites.*, callback_votes.site_id AS voted FROM sites INNER JOIN callback_votes ON callback_votes.site_id = sites.id; ``` This query works; however, if there are no matching rows in `callback_votes`, the query will return no data. What I want is to still return `sites.*`, just with `voted` being null in that case. Is that possible, or are there other ways to do this?
You're using `INNER JOIN` which will return rows where there is a matching row in each table, what you need is a `LEFT JOIN`. Using `LEFT JOIN` in simplistic terms means "select all rows from the left table, where there are no matching rows in the right table then return null". Here's how your query may look: ``` SELECT sites.*, callback_votes.site_id AS voted FROM sites LEFT JOIN callback_votes ON callback_votes.site_id = sites.id; ```
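The difference is easy to see with a toy dataset; a sketch in Python's built-in SQLite (table names from the question, sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sites (id INTEGER, name TEXT);
    CREATE TABLE callback_votes (site_id INTEGER);
    INSERT INTO sites VALUES (1, 'site-a'), (2, 'site-b');
    INSERT INTO callback_votes VALUES (1);  -- only site 1 has a vote
""")

rows = conn.execute("""
    SELECT sites.id, callback_votes.site_id AS voted
    FROM sites
    LEFT JOIN callback_votes ON callback_votes.site_id = sites.id
    ORDER BY sites.id
""").fetchall()
print(rows)  # [(1, 1), (2, None)] — site 2 still appears, with voted = NULL
```

With an `INNER JOIN` the second row would simply be dropped.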
Just use a left join instead of inner to acquire all data from sites, with null values if not in callback\_votes ``` SELECT sites.*, callback_votes.site_id AS voted FROM sites LEFT JOIN callback_votes ON callback_votes.site_id = sites.id; ```
Join two tables in SQL and return result set which includes null values
[ "", "mysql", "sql", "sql-server", "" ]
I am having an issue with the following PostgreSQL query it takes more than 10 seconds to run is there any way to speed this query up to a rational speed, I am simply looking for the most relevant search terms associated with videos on a very large database. ``` SELECT count(*), videoid FROM term_search where word = 'tester' OR word = 'question' OR word = 'one' group by videoid order by count(*) desc limit 1800; ``` When the query is run with analyze the resultant query plan is as follows (<http://explain.depesz.com/s/yDJ>): ``` Limit (cost=389625.50..389630.00 rows=1800 width=4) (actual time=11766.693..11770.001 rows=1800 loops=1) Output: (count(*)), videoid -> Sort (cost=389625.50..389692.68 rows=26873 width=4) (actual time=11766.689..11767.818 rows=1800 loops=1) Output: (count(*)), videoid Sort Key: (count(*)) Sort Method: top-N heapsort Memory: 181kB -> HashAggregate (cost=387769.41..388038.14 rows=26873 width=4) (actual time=9215.653..10641.993 rows=1632578 loops=1) Output: count(*), videoid -> Bitmap Heap Scan on public.term_search (cost=44915.83..378163.38 rows=1921207 width=4) (actual time=312.449..7026.036 rows=2047691 loops=1) Output: id, videoid, word, termindex, weight Recheck Cond: (((term_search.word)::text = 'tester'::text) OR ((term_search.word)::text = 'question'::text) OR ((term_search.word)::text = 'one'::text)) Rows Removed by Index Recheck: 25512434 -> BitmapOr (cost=44915.83..44915.83 rows=1950031 width=0) (actual time=288.937..288.937 rows=0 loops=1) -> Bitmap Index Scan on terms_word_idx (cost=0.00..8552.83 rows=383502 width=0) (actual time=89.266..89.266 rows=419750 loops=1) Index Cond: ((term_search.word)::text = 'tester'::text) -> Bitmap Index Scan on terms_word_idx (cost=0.00..13171.84 rows=590836 width=0) (actual time=89.700..89.700 rows=604348 loops=1) Index Cond: ((term_search.word)::text = 'question'::text) -> Bitmap Index Scan on terms_word_idx (cost=0.00..21750.26 rows=975693 width=0) (actual time=109.964..109.964 rows=1023593 
loops=1) Index Cond: ((term_search.word)::text = 'one'::text) ``` The schema for the table is as follows: ``` Column | Type | Modifiers | Storage | Description -----------+------------------------+----------------------------------------------------------+----------+------------- id | integer | not null default nextval('term_search_id_seq'::regclass) | plain | videoid | integer | | plain | word | character varying(100) | | extended | termindex | character varying(15) | | extended | weight | smallint | | plain | Indexes: "term_search_pkey" PRIMARY KEY, btree (id) "search_term_exists_idx" btree (videoid, word) "terms_caverphone_idx" btree (termindex) "terms_video_idx" btree (videoid) "terms_word_idx" btree (word, videoid) Foreign-key constraints: "term_search_videoid_fkey" FOREIGN KEY (videoid) REFERENCES videos(id) ON DELETE CASCADE Has OIDs: no ``` I have managed to get it down to 7 seconds with Index Only scans but it was still not low enough. I am running PostgreSQL 9.3 on Ubuntu 14.04 on an aws r3.xlarge instance, with approx 50 million rows in the table. Any advice is greatly appreciated! EDIT: Attached is the result of SELECT schemaname,tablename,attname,null\_frac,avg\_width,n\_distinct FROM pg\_stats WHERE schemaname='public' and tablename='term\_search'; ``` schemaname | tablename | attname | null_frac | avg_width | n_distinct ------------+-------------+-----------+-----------+-----------+------------ public | term_search | id | 0 | 4 | -1 public | term_search | videoid | 0 | 4 | 568632 public | term_search | word | 0 | 6 | 5054 public | term_search | termindex | 0 | 11 | 2485 public | term_search | weight | 0 | 2 | 3 ```
If I have a chance to disconnect users for a night I would: * create a new table with `words` from `term_search`, * create reference to the new table, * drop column `word`, something like this: ``` create table words ( word_id serial primary key, word text); insert into words (word) select distinct word from term_search; alter table term_search add column word_id integer; update term_search t set word_id = w.word_id from words w where t.word = w.word; alter table term_search add constraint term_search_word_fkey foreign key (word_id) references words (word_id); ``` Test: ``` SELECT count(*), videoid FROM term_search t JOIN words w on t.word_id = w.word_id WHERE w.word = 'tester' OR w.word = 'question' OR w.word = 'one' GROUP BY videoid ORDER BY count(*) desc LIMIT 1800; -- if was faster then alter table term_search drop column word; -- and on the fly... alter table term_search alter termindex type text; ``` After the revolution I'd have to take care of inserts and updates on `term_search`. I'd probably create a view with rules for insert and update.
Let's start by rephrasing the query to explain what it's really trying to do. The query: ``` SELECT count(*), videoid FROM term_search where word = 'tester' OR word = 'question' OR word = 'one' group by videoid order by count(*) desc limit 1800; ``` seems to mean: "In a table of search terms, find me videos with the search terms `tester`, `question` or `one`. Count the matches for each video and return the 1800 videos with the most matches". or, more generally: "Find me the videos that best match my search terms and show me the top n best matches". Correct? If so, why aren't you using [PostgreSQL's built-in full-text search and full-text indexing](http://www.postgresql.org/docs/current/static/textsearch.html)? An indexed `tsquery` match against a `tsvector` per video is likely to be a win here. Full-text search has fuzzy matching, ranking, and pretty much everything else you're going to want - and unlike your current approach it won't require the whole data set to be materialized and sorted only to discard most of it. You haven't provided sample data, so I can't really do a demo. --- How PostgreSQL currently executes your query could be explained like this: * Create a map with one bit for every disk page (8kb) in the table, where true indicates that the page *might* contain *one or more* matching rows. * For each search term, scan the index `terms_word_idx` and update the bitmap to set the bit where a match is found * Scan the table, skipping over pages where the bitmap says there can't be any matches, looking for rows that have any of the words. This is like a fast, skip-over-blanks seqscan. It's actually not tons faster than a plain seqscan if the percentage of matches is high. * For each matching row, sort it into a series of "buckets" based on the video id. Then at the end, count how many rows are in each bucket and return the count + the video ID for that bucket. (It's not that simple, but close enough). 
* As you count each bucket, put the result in between the results with next-highest and next-lowest counts. + Take the top 1800 results, and throw away all the rest of your hard work. That doesn't sound like much fun, but it doesn't have any choice. A b-tree index can't be descended to search simultaneously for multiple terms, so it has to do multiple index scans. The rest kind of follows from that. So: to make this more efficient, you need to fundamentally change how you tackle the problem. Adding an index or tuning some parameters isn't going to suddenly make this take 0.5s.
Issues optimizing postgres search query
[ "", "sql", "postgresql", "query-optimization", "top-n", "" ]
I have a table like ``` ID UserID rupees time 1 1 200 2014-01-05 --------------------------------- 2 1 500 2014-04-06 ---------------------------------- 3 2 10 2014-05-05 ---------------------------------- 4 2 20 2014-05-06 ---------------------------------- ``` I want the output like ``` ID UserID Rupees time CumulativeSum 1 1 200 2014-01-05 200 ------------------------------------------------- 2 1 500 2014-04-06 700 ------------------------------------------------- 3 2 10 2014-05-06 10 ------------------------------------------------- 4 2 20 2014-05-06 30 --------------------------------------------------- ``` How can I get this output?
Please try using `CTE`: ``` ;With T as( select *, ROW_NUMBER() over(partition by UserId order by [time]) RN from tbl ) select UserID, rupees, [time], (select SUM(rupees) from T b where b.UserID=a.UserID and b.RN<=a.RN) CumulativeSum from T a ``` For records with column value time increasing, try the below query: ``` select UserID, rupees, [time], (select SUM(rupees) from tbl b where b.UserID=a.UserID and b.[time]<=a.[time]) CumulativeSum from tbl a ```
For SQL Server 2012 or later, you can use `SUM()` with an [`OVER` clause](http://msdn.microsoft.com/en-us/library/ms189461.aspx) that specifies a `ROW` clause: ``` declare @t table (ID int,UserID int,rupees int,[time] date) insert into @t(ID,UserID,rupees,[time]) values (1,1,200,'20140105'), (2,1,500,'20140406'), (3,2, 10,'20140505'), (4,2, 20,'20140506') select *, SUM(rupees) OVER ( PARTITION BY UserID ORDER BY id /* or time? */ ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as total from @t ``` Result: ``` ID UserID rupees time total ----------- ----------- ----------- ---------- ----------- 1 1 200 2014-01-05 200 2 1 500 2014-04-06 700 3 2 10 2014-05-05 10 4 2 20 2014-05-06 30 ```
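The windowed running total behaves the same way in any engine with window-function support; an illustrative check in Python's SQLite (requires SQLite 3.25+, which recent Python builds ship):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INT, userid INT, rupees INT, tm TEXT);
    INSERT INTO t VALUES
      (1, 1, 200, '2014-01-05'),
      (2, 1, 500, '2014-04-06'),
      (3, 2,  10, '2014-05-05'),
      (4, 2,  20, '2014-05-06');
""")

# Running total per user, ordered by id within each partition.
rows = conn.execute("""
    SELECT id, userid, rupees,
           SUM(rupees) OVER (PARTITION BY userid ORDER BY id
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS total
    FROM t
    ORDER BY id
""").fetchall()
print([r[3] for r in rows])  # [200, 700, 10, 30]
```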
How to maintain cumulative sum for each User in SQL server
[ "", "sql", "sql-server", "" ]
I need to have a column with data taken from a count() function with a subquery. I am working with a temporary table at the moment and it is created by: ``` CREATE TABLE tempfact1 AS ( SELECT coursesloc.courseID, preferences.appid, campuses.campusid FROM coursesloc, preferences, campuses WHERE coursesloc.courseID = preferences.courseID AND coursesloc.campusid = campuses.campusid AND preferences.prefaccepted = 'Y' ) ; ALTER TABLE tempfact1 ADD TNA NUMERIC; ``` Then I run the following to try to generate the TNA (total number of applicants per campus): ``` UPDATE tempfact1 SET TNA = (SELECT COUNT(appid) FROM tempfact1 GROUP BY campusid) ``` This fails because the subquery returns more than one row. I'm unsure how to get it to only output one row at a time in the UPDATE, or if there is an easier and better way to just do this in the CREATE TABLE command.
I have created one small demo for your question... ``` /*Demo table....*/ create table mtp (rno integer,name varchar(max)) /*Demo Data....*/ insert into mtp values (1,'a') insert into mtp values (1,'b') insert into mtp values (2,'c') insert into mtp values (2,'d') insert into mtp values (2,'e') /* Select Result, use this countid column for your new table...*/ select COUNT(*) OVER (PARTITION BY rno)as countid ,rno ,name From mtp ```
You may try removing the `GROUP BY` clause. It will return multiple rows, i.e. the count for each campus\_id.
Update column based on aggregate function with sub query
[ "", "sql", "oracle", "" ]
So, I'm aware this isn't exactly a programming question but I felt it was still appropriate. What is the difference between Oracle's proprietary RDBMS, which they license, vs MySQL, an open source DBMS they bought? Performance? Support? Security? Features? Also, I read MySQL isn't SQL compliant, yet it is compatible with the SQL language. What am I missing?
MySQL supports SQL, but doesn't support all features of SQL. Most databases implement a part of the SQL standard and a bunch of extra features. If a database implements the entire standard, then queries following that standard are usable in all those databases (provided the table structure is the same). But both Oracle and MySQL implement most of those features. More info about MySQL's SQL compliance: <http://dev.mysql.com/doc/refman/5.0/en/compatibility.html> Oracle also provides a comparison between MySQL and Oracle here, which is mainly a summary of technical differences and doesn't compare the big picture: <http://docs.oracle.com/cd/E12151_01/doc.150/e12155/oracle_mysql_compared.htm> A more high-level comparison can be found here: <http://www.rapidprogramming.com/questions-answers/What-is-the-difference-between-MySQL-and-Oracle--617> The main conclusion there seems to be that Oracle has more features on an enterprise level, like better tools, better support for stored procedures, better analytical features, and better user management. All in all, Oracle is more of a large data warehouse/enterprise level database, while MySQL is good for hosting websites and can scale quite well, but wouldn't be a primary choice for building a data warehouse in. Those features of Oracle come at a price, of course.
One big difference is that MySQL does not support CHECK constraints, which is an important part of maintaining database integrity. See here for an example that you cannot use under MySQL: <https://stackoverflow.com/a/23877154/8454> Also, please don't think that Oracle and MySQL are your only two choices. PostgreSQL is also open source like MySQL but far more featureful (and supports CHECK constraints). <http://www.postgresql.org/>
Oracle vs MySQL in context
[ "", "mysql", "sql", "oracle", "" ]
I have a table that contains this field: `doses_given decimal(9,2)` that I want to multiply against this field: `drug_units_per_dose varchar(255)` So I did something like this: ``` CAST(ppr.drug_units_per_dose as decimal(9,2)) * doses_given dosesGiven, ``` However, looking at the data, I notice some odd characters: ``` select distinct(drug_units_per_dose) from patient_prescr NULL 1 1-2 1-4 1.5 1/2 1/4 10 12 15 1Â½ 2 2-3 2.5 20 2Â½ 3 3-4 30 4 5 6 7 8 Â½ ``` As you can see, I am getting some characters that cannot be `CAST` to decimal. On the web page these fields are interpreted as a small `1/2` symbol: ![enter image description here](https://i.stack.imgur.com/xnT61.png) Is there any way to replace the `Â½` field with a .5 to accurately complete the multiplication?
You have a rather nasty problem. You have a field `drug_units_per_dose` that a normal human being would consider to be an integer or floating point number. Clearly, the designers of your database are super-normal people, and they understand a much wider range of options for this concept. I say that partly tongue in cheek, but to make an important point. The column in the database does not represent a number, at least not in all cases. I would suggest that you have a translation table for `drug_units_per_dose`. It would have columns such as: ``` 1 1 1/2 0.5 3-4 ?? ``` I realize that you will have hundreds of rows, and a lot of them will look redundant because they will be "50,50" and "100,100". However, if you want to keep control of the business logic for turning these strings into a number, then a lookup table seems like the sanest approach.
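A sketch of that translation layer in Python (the mapping entries are illustrative, not a complete clinical table; ranges like '3-4' have no obvious single number, so they map to None here pending a business rule):

```python
# Hypothetical translation table: dose string -> numeric value.
# Fractions and the '½' glyph get explicit entries; plain numbers fall
# through to float(); anything else (e.g. '3-4') returns None.
DOSE_MAP = {"1/2": 0.5, "1/4": 0.25, "\u00bd": 0.5, "1\u00bd": 1.5, "2\u00bd": 2.5}

def dose_to_number(s):
    if s is None:
        return None
    s = s.strip()
    if s in DOSE_MAP:
        return DOSE_MAP[s]
    try:
        return float(s)
    except ValueError:
        return None  # e.g. '3-4' needs a business decision

print(dose_to_number("1/2"), dose_to_number("10"), dose_to_number("3-4"))
```

The same mapping could of course live in a database lookup table, as suggested above, rather than in application code.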
The ½ symbol is character 189 in the Latin-1 code page, so to replace: ``` CAST(REPLACE(ppr.drug_units_per_dose,char(189),'.5') as decimal(9,2)) * doses_given dosesGiven ```
Multiplying a varchar and a decimal field together
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
Here is some dummy data: ``` | ID | CREATED AT | | 367 | 2014-05-28 22:55:36 | | 367 | 2014-05-28 22:57:06 | | 369 | 2014-05-28 23:06:02 | | 369 | 2014-05-28 23:08:05 | | 369 | 2014-05-28 23:18:07 | | 350 | 2014-05-28 23:12:56 | | 261 | 2014-05-28 21:17:11 | | 261 | 2014-05-29 22:27:43 | ``` What I'd like to select from this are the IDs (obviously not a primary key in this case) whose `created_at` dates have a difference of 24hrs or more. So in the case with the above data, ID `261` has two records in there, which were created over 24hrs apart. So in the collection that is returned I'd want to see ID `261` in there. What would be an effective way to structure this kind of query?
slower option ``` SELECT id, TIME_TO_SEC(TIMEDIFF(MAX(created_at),MIN(created_at))) as seconds_difference FROM table GROUP BY id HAVING seconds_difference > 3600*24 ``` faster option ``` SELECT t1.id, TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) as seconds_difference FROM table t1 INNER JOIN table t2 ON (t2.id = t1.id AND t2.created_at > t1.created_at) WHERE TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) > 3600*24 ```
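To see the min/max-spread idea with the question's data, here is a sketch in Python's SQLite, where `julianday()` differences stand in for MySQL's `TIMEDIFF`/`TIME_TO_SEC` (a subset of the sample rows is used):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE votes (id INT, created_at TEXT);
    INSERT INTO votes VALUES
      (367, '2014-05-28 22:55:36'),
      (367, '2014-05-28 22:57:06'),
      (261, '2014-05-28 21:17:11'),
      (261, '2014-05-29 22:27:43');
""")

# julianday() returns fractional days, so a spread >= 1 means >= 24 hours.
rows = conn.execute("""
    SELECT id
    FROM votes
    GROUP BY id
    HAVING MAX(julianday(created_at)) - MIN(julianday(created_at)) >= 1
""").fetchall()
ids = sorted(r[0] for r in rows)
print(ids)  # [261] — 367's two votes are only ~90 seconds apart
```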
This should work: ``` SELECT DISTINCT t1.`ID` FROM `tbl` t1 INNER JOIN `tbl` t2 ON t1.`ID`=t2.`ID` WHERE t2.`CREATED_AT` <= t1.`CREATED_AT` - INTERVAL 24 HOUR; ``` Edit: A better query (won't return a result if its min and max are 24 hours apart but there is something in the middle that is less than 24 hours apart): ``` SELECT DISTINCT t1.`ID` FROM `test` t1 INNER JOIN `test` t2 ON t1.`ID`=t2.`ID` LEFT JOIN `test` t3 ON t3.`ID`=t1.`ID` AND t3.`CREATED_AT` != t1.`CREATED_AT` AND TIME_TO_SEC(TIMEDIFF(t3.`CREATED_AT`, t1.`CREATED_AT`)) <= 3600 * 24 WHERE TIME_TO_SEC(TIMEDIFF(t2.`CREATED_AT`, t1.`CREATED_AT`)) >= 3600 * 24 AND t3.ID IS NULL ```
How would you select records from a table based on the difference between 'created' dates with MySQL?
[ "", "mysql", "sql", "" ]
I have two tables, Parent P and Child C **Parent** ``` Id Name 1 AAA 2 BBB 3 CCC ``` **Child** ``` Id ParId Name Value 11 1 XXX 1 12 1 YYY 7 19 1 ZZZ 9 13 2 XXX 1 14 2 YYY 2 20 1 ZZZ 7 15 3 XXX 1 16 3 YYY 2 18 3 ZZZ 8 ``` I want to fetch the parent records for which the XXX value is 1 and YYY is not 2 or ZZZ value is not 7. In this case, I should get 1 and 3 as the result. Please suggest.
The rules can be checked in the `HAVING` clause of a query with a `CASE` statement. If all the names have to be checked, i.e. if a child that doesn't have all three names should not be in the result set, the check for the three rules is static ``` SELECT c.ParId, p.Name FROM Child c INNER JOIN Parent p ON c.ParID = p.Id GROUP BY c.ParId, p.Name HAVING SUM(CASE WHEN c.Name = 'XXX' AND c.Value = 1 Then 1 WHEN c.Name = 'YYY' AND c.Value <> 2 Then 1 WHEN c.Name = 'ZZZ' AND c.Value <> 7 Then 1 ELSE 0 END) = 3; ``` If only the names that are in the data have to be checked, i.e. if a child that has only 'XXX' with value 1 should still put its parent in the result set, the check is dynamic ``` SELECT c.ParId, p.Name FROM Child c INNER JOIN Parent p ON c.ParID = p.Id WHERE c.NAME IN ('XXX', 'YYY', 'ZZZ') GROUP BY c.ParId, p.Name HAVING SUM(CASE WHEN c.Name = 'XXX' AND c.Value = 1 Then 1 WHEN c.Name = 'YYY' AND c.Value <> 2 Then 1 WHEN c.Name = 'ZZZ' AND c.Value <> 7 Then 1 ELSE 0 END) = COUNT(DISTINCT c.NAME) ```
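A quick check of the static variant against the question's data, using SQLite from Python; note that under the strict all-three-rules reading only parent 1 qualifies (parent 3 fails the YYY rule), so the OR in the question's requirement would need the rules grouped differently:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE child (parid INT, name TEXT, value INT);
    INSERT INTO child VALUES
      (1, 'XXX', 1), (1, 'YYY', 7), (1, 'ZZZ', 9),
      (2, 'XXX', 1), (2, 'YYY', 2), (2, 'ZZZ', 7),
      (3, 'XXX', 1), (3, 'YYY', 2), (3, 'ZZZ', 8);
""")

# Count how many of the three rules each parent satisfies; keep those with 3.
rows = conn.execute("""
    SELECT parid
    FROM child
    GROUP BY parid
    HAVING SUM(CASE WHEN name = 'XXX' AND value = 1  THEN 1
                    WHEN name = 'YYY' AND value <> 2 THEN 1
                    WHEN name = 'ZZZ' AND value <> 7 THEN 1
                    ELSE 0 END) = 3
    ORDER BY parid
""").fetchall()
print(rows)  # [(1,)] — all three rules hold only for parent 1
```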
``` select distinct p.id from parent p join child c on p.id=c.parid where (c.name='XXX' and c.value = 1) or (c.name='YYY' and c.value = 2) or ... ```
SQL Query to fetch parent record based on multiple conditions on child record
[ "", "sql", "database", "oracle", "" ]
How can I find, in an MS-SQL table, values that are max on 3 columns, plus the max on a single column? I know how to look at [the three](https://stackoverflow.com/questions/11032805/get-result-with-maximum-values-in-multiple-columns-at-once-in-sql-server) columns to get them, and I know how to do a self join to get the single value - but how can I combine the two? Normally when I add a new value to this I have all the other data, and as I add new ones I have all the values handy. But for this special case I don't have this. I have the survey\_id values and that is it. I need to find the next question ID for that survey, then find the last position in the survey. They may not be the same thing. I'll be doing this in vb.net, not that it makes any difference. I need to find for each survey\_id the highest question\_id AND the highest chapter, subchapter, question\_number - that is, chapter 2, sub 1, question 1 is greater than chapter 1, sub 99, question 99. Given a table that looks like this (survey\_id and question\_id form a unique pair): ``` survey_id | question_id | chapter | subchapter | question_number ================================================================ 505 | 1 | 1 | 1 | 1 505 | 2 | 1 | 1 | 3 505 | 3 | 1 | 1 | 2 5858 | 1 | 1 | 1 | 1 5858 | 2 | 1 | 1 | 2 5858 | 3 | 1 | 1 | 2 5858 | 47 | 1 | 1 | 4 5858 | 45 | 2 | 1 | 1 5858 | 46 | 2 | 1 | 2 6060 | 1 | 1 | 1 | 1 6060 | 2 | 1 | 1 | 2 6060 | 3 | 1 | 1 | 2 6060 | 47 | 1 | 1 | 4 6060 | 45 | 2 | 1 | 1 6060 | 46 | 2 | 1 | 2 ``` My result should be ``` survey_id | surveyMAXquestion_id | Maxchapter | Maxsubchapter | Maxquestion_number ================================================================================= 505 | 2 | 1 | 1 | 3 5858 | 47 | 2 | 1 | 2 6060 | 47 | 2 | 1 | 2 ``` What I will end up doing is putting a new value into the table with survey\_id, question\_id + 1 and chapter, subchapter, question\_number + 1. The data that will be inserted into the table (after updating the other columns in the table that I have not shown) will be:
``` survey_id | question_id | chapter | subchapter | question_number ================================================================= 505 | 3 | 1 | 1 | 4 5858 | 48 | 2 | 1 | 3 6060 | 48 | 2 | 1 | 3 ```
This is a little bit complicated, but do-able. Since you mentioned that you have the survey\_id, I included that as a parameter. There are two separate thoughts in what you are trying to do. First, you want the max question\_id for the survey\_id. The other thought is that you want the greatest question number for the highest chapter and subchapter listed. This must be found in order. First, we need the greatest chapter, then the greatest subchapter, and lastly the max question\_number. ``` Select q.survey_id, (Select max(question_id) from tblQuestion where survey_id=q.survey_id), q.chapter, q.subchapter, max(q.question_number) From tblQuestion q Where q.survey_id = @survey_id and q.chapter = (Select max(chapter) From tblQuestion qq Where qq.survey_id=q.survey_id) and q.subchapter = (Select max(subchapter) From tblQuestion qq Where qq.survey_id=q.survey_id and qq.chapter = q.chapter) Group by q.survey_id, q.chapter, q.subchapter ``` [SQLFiddle](http://sqlfiddle.com/#!2/73a9c/4)
You can get both using Windowed Aggregate Functions without additional join: ``` SELECT survey_id, max_question_id, chapter, subchapter, question_number FROM ( SELECT survey_id, chapter, subchapter, question_number, MAX(question_id) OVER (PARTITION BY survey_id) AS max_question_id, ROW_NUMBER() OVER (PARTITION BY survey_id ORDER BY chapter DESC, subchapter DESC, question_number DESC) AS rnk FROM tblquestion ) AS dt WHERE rnk = 1 ```
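The window-function approach can be sanity-checked in SQLite 3.25+ through Python (using a subset of the question's rows; column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE q (survey_id INT, question_id INT,
                    chapter INT, subchapter INT, question_number INT);
    INSERT INTO q VALUES
      (505,  1, 1, 1, 1), (505,  2, 1, 1, 3), (505,  3, 1, 1, 2),
      (5858, 1, 1, 1, 1), (5858, 47, 1, 1, 4),
      (5858, 45, 2, 1, 1), (5858, 46, 2, 1, 2);
""")

# MAX() OVER gives the per-survey max question_id on every row;
# ROW_NUMBER() ranks rows by (chapter, subchapter, question_number) descending.
rows = conn.execute("""
    SELECT survey_id, max_question_id, chapter, subchapter, question_number
    FROM (SELECT survey_id, chapter, subchapter, question_number,
                 MAX(question_id) OVER (PARTITION BY survey_id) AS max_question_id,
                 ROW_NUMBER() OVER (PARTITION BY survey_id
                                    ORDER BY chapter DESC, subchapter DESC,
                                             question_number DESC) AS rnk
          FROM q)
    WHERE rnk = 1
    ORDER BY survey_id
""").fetchall()
print(rows)  # [(505, 3, 1, 1, 3), (5858, 47, 2, 1, 2)]
```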
Find in SQL table values that is max on 3 columns, plus max on a single column
[ "", "sql", "sql-server", "vb.net", "" ]
This is the 3rd edit. Based on all your feedback I was able to generate the following query with multiple search criteria. Please note that this is an existing system and their budget is an issue, so I am trying to do all I can to improve existing queries. The search you see was manually done based on arrays and there were no joins. The same search was taking 2-3 minutes to process whereas thanks to all of you rocking gurus it now takes 7-8 seconds to process :) ``` SELECT SQL_CALC_FOUND_ROWS fname, lname, desig, company, region, state, country, add_uid, contacts.`id` as id FROM contacts INNER JOIN contact_to_categories ON contact_to_categories.contactid = contacts.id AND ( contact_to_categories.catid = '2' ) INNER JOIN contact_professional_details ON contact_professional_details.contact_id = contacts.id AND ( FIND_IN_SET('1', contact_professional_details.pd_insid) OR FIND_IN_SET(' 8', contact_professional_details.pd_insid) OR FIND_IN_SET(' 33', contact_professional_details.pd_insid) ) AND ( FIND_IN_SET('4', contact_professional_details.pd_secid) OR FIND_IN_SET('3', contact_professional_details.pd_secid) OR FIND_IN_SET('5', contact_professional_details.pd_secid) OR FIND_IN_SET('7', contact_professional_details.pd_secid) OR FIND_IN_SET('12', contact_professional_details.pd_secid) OR FIND_IN_SET('11', contact_professional_details.pd_secid) OR FIND_IN_SET('9', contact_professional_details.pd_secid) OR FIND_IN_SET('38', contact_professional_details.pd_secid) OR FIND_IN_SET('35', contact_professional_details.pd_secid) OR FIND_IN_SET('115', contact_professional_details.pd_secid) ) INNER JOIN contact_address ON contact_address.contact_id = contacts.id AND ( contact_address.hmregion IN ('AF', 'EU', 'OC', 'SA') OR contact_address.hmcountry IN ('Algeria', 'Angola', 'Benin', 'Comoros', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'American Samoa', 'Australia', 'French Polynesia', 'Guam', 'Kiribati', 'Marshall Islands', 'Colombia', 'Ecuador', 'Falkland Islands', 'Guyana',
'Paraguay', 'Peru', 'Laos', 'Malaysia', 'Myanmar', 'Singapore', 'Vietnam') OR contact_address.hmcity = 'singapore' ) INNER JOIN contact_offices ON contact_offices.contact_id = contacts.id AND ( contact_offices.off_region IN ('AF', 'EU', 'OC', 'SA') OR contact_offices.off_country IN ('Algeria', 'Angola', 'Benin', 'Comoros', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'American Samoa', 'Australia', 'French Polynesia', 'Guam', 'Kiribati', 'Marshall Islands', 'Colombia', 'Ecuador', 'Falkland Islands', 'Guyana', 'Paraguay', 'Peru', 'Laos', 'Malaysia', 'Myanmar', 'Singapore', 'Vietnam') OR contact_offices.off_city = 'singapore' ) WHERE 1 AND ( FIND_IN_SET('1', contacts.ins_id) OR FIND_IN_SET(' 8', contacts.ins_id) OR FIND_IN_SET(' 33', contacts.ins_id) ) AND ( FIND_IN_SET('4', contacts.sec_id) OR FIND_IN_SET('3', contacts.sec_id) OR FIND_IN_SET('5', contacts.sec_id) OR FIND_IN_SET('7', contacts.sec_id) OR FIND_IN_SET('12', contacts.sec_id) OR FIND_IN_SET('11', contacts.sec_id) OR FIND_IN_SET('9', contacts.sec_id) OR FIND_IN_SET('38', contacts.sec_id) OR FIND_IN_SET('35', contacts.sec_id) OR FIND_IN_SET('115', contacts.sec_id) ) AND ( FIND_IN_SET('Tier 1', `vip_tier`) OR FIND_IN_SET('Tier 3', `vip_tier`) ) AND ( FIND_IN_SET('Tier A', `vip_coll_tier`) ) AND ( FIND_IN_SET('Yes', `vip_influencer`) ) AND ( FIND_IN_SET('Contemporary', `vip_class_art_coll`) OR FIND_IN_SET('Modern', `vip_class_art_coll`) OR FIND_IN_SET('Geographic', `vip_class_art_coll`) ) AND ( FIND_IN_SET('Sculpture', `vip_med_art_coll`) OR FIND_IN_SET('Photography', `vip_med_art_coll`) OR FIND_IN_SET('Video', `vip_med_art_coll`) OR FIND_IN_SET('Installation', `vip_med_art_coll`) ) AND ( FIND_IN_SET('Japan', `vip_geo_int`) OR FIND_IN_SET('Korea', `vip_geo_int`) OR FIND_IN_SET('Southeast Asia', `vip_geo_int`) OR FIND_IN_SET('Oceania', `vip_geo_int`) ) AND ( FIND_IN_SET('HNWI', `vip_seniority`) OR FIND_IN_SET('Top Social Leaders', `vip_seniority`) OR FIND_IN_SET('Other Executives', `vip_seniority`) ) AND ( 
`status` = 'a' ) ORDER BY fname asc LIMIT 0, 50 ``` I know this can be further improved by moving the FIND_IN_SET values to separate tables and creating relational tables between the contacts master table and the values master table. But as I said, budget is a big issue for these guys, so I guess this is more than efficient for them. However, any further improvement ideas are most welcome.
Making a major assumption that you are interested in contacts in a specified region OR a specified country OR a specified city, and cleaning your code up a bit:- ``` SELECT SQL_CALC_FOUND_ROWS fname, lname, desig, company, region, state, country, add_uid, contacts.`id` as id FROM contacts INNER JOIN contact_to_categories ON contact_to_categories.contactid = contacts.id AND contact_to_categories.catid = '2' INNER JOIN contact_professional_details ON contact_professional_details.contact_id = contacts.id AND ( FIND_IN_SET('4', contact_professional_details.pd_secid) OR FIND_IN_SET('3', contact_professional_details.pd_secid) OR FIND_IN_SET('5', contact_professional_details.pd_secid) OR FIND_IN_SET('7', contact_professional_details.pd_secid) OR FIND_IN_SET('12', contact_professional_details.pd_secid) OR FIND_IN_SET('11', contact_professional_details.pd_secid) OR FIND_IN_SET('9', contact_professional_details.pd_secid) OR FIND_IN_SET('38', contact_professional_details.pd_secid) OR FIND_IN_SET('35', contact_professional_details.pd_secid) OR FIND_IN_SET('115', contact_professional_details.pd_secid) ) INNER JOIN contact_address ON contact_address.contact_id = contacts.id INNER JOIN contact_offices ON contact_offices.contact_id = contacts.id WHERE 1 AND (( contact_address.hmregion IN ('AF', 'EU', 'OC', 'SA') OR contact_address.hmcountry IN ('Algeria', 'Angola', 'Benin', 'Comoros', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'American Samoa', 'Australia', 'French Polynesia', 'Guam', 'Kiribati', 'Marshall Islands', 'Colombia', 'Ecuador', 'Falkland Islands', 'Guyana', 'Paraguay', 'Peru', 'Laos', 'Malaysia', 'Myanmar', 'Singapore', 'Vietnam') OR contact_address.hmcity='singapore' ) OR ( contact_offices.off_region IN ('AF', 'EU', 'OC', 'SA') OR contact_offices.off_country IN ('Algeria', 'Angola', 'Benin', 'Comoros', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'American Samoa', 'Australia', 'French Polynesia', 'Guam', 'Kiribati', 'Marshall Islands', 'Colombia', 'Ecuador', 'Falkland 
Islands', 'Guyana', 'Paraguay', 'Peru', 'Laos', 'Malaysia', 'Myanmar', 'Singapore', 'Vietnam') OR contact_offices.off_city='singapore' ) ) AND ( FIND_IN_SET('1', contacts.ins_id) OR FIND_IN_SET(' 8', contacts.ins_id) OR FIND_IN_SET(' 33', contacts.ins_id) ) AND ( FIND_IN_SET('4', contacts.sec_id) OR FIND_IN_SET('3', contacts.sec_id) OR FIND_IN_SET('5', contacts.sec_id) OR FIND_IN_SET('7', contacts.sec_id) OR FIND_IN_SET('12', contacts.sec_id) OR FIND_IN_SET('11', contacts.sec_id) OR FIND_IN_SET('9', contacts.sec_id) OR FIND_IN_SET('38', contacts.sec_id) OR FIND_IN_SET('35', contacts.sec_id) OR FIND_IN_SET('115', contacts.sec_id) ) AND ( FIND_IN_SET('Tier 1', `vip_tier`) OR FIND_IN_SET('Tier 3', `vip_tier`) ) AND (FIND_IN_SET('Tier A', `vip_coll_tier`)) AND (FIND_IN_SET('Yes', `vip_influencer`)) AND (FIND_IN_SET('Contemporary', `vip_class_art_coll`) OR FIND_IN_SET('Modern', `vip_class_art_coll`) OR FIND_IN_SET('Geographic', `vip_class_art_coll`)) AND (FIND_IN_SET('Sculpture', `vip_med_art_coll`) OR FIND_IN_SET('Photography', `vip_med_art_coll`) OR FIND_IN_SET('Video', `vip_med_art_coll`) OR FIND_IN_SET('Installation', `vip_med_art_coll`)) AND (FIND_IN_SET('Japan', `vip_geo_int`) OR FIND_IN_SET('Korea', `vip_geo_int`) OR FIND_IN_SET('Southeast Asia', `vip_geo_int`) OR FIND_IN_SET('Oceania', `vip_geo_int`)) AND (FIND_IN_SET('HNWI', `vip_seniority`) OR FIND_IN_SET('Top Social Leaders', `vip_seniority`) OR FIND_IN_SET('Other Executives', `vip_seniority`)) AND (`status`='a') ORDER BY fname asc LIMIT 0, 50 ``` Note that the use of FIND\_IN\_SET suggests a poorly normalised database with fields containing comma separated lists of values.
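The FIND_IN_SET calls above all scan comma-separated ID lists stored in single columns. As a rough sketch of the normalization hinted at in the closing note, splitting such a column into a junction table turns the OR chain into a plain, indexable IN; SQLite stands in for MySQL here, and all table and column names are simplified for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A denormalized table: sec_id holds a comma-separated list, as in the question.
cur.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, fname TEXT, sec_id TEXT)")
cur.executemany("INSERT INTO contacts VALUES (?, ?, ?)",
                [(1, "Alice", "4,3,99"), (2, "Bob", "7,12"), (3, "Cara", "99")])

# Junction table: one row per (contact, sector) pair.
cur.execute("CREATE TABLE contact_sectors (contact_id INTEGER, sec_id INTEGER)")
for cid, secs in cur.execute("SELECT id, sec_id FROM contacts").fetchall():
    cur.executemany("INSERT INTO contact_sectors VALUES (?, ?)",
                    [(cid, int(s)) for s in secs.split(",")])

# The FIND_IN_SET(...) OR FIND_IN_SET(...) chain becomes a plain IN over the join.
rows = cur.execute("""
    SELECT DISTINCT c.fname
    FROM contacts c
    JOIN contact_sectors s ON s.contact_id = c.id
    WHERE s.sec_id IN (4, 3, 5, 7, 12)
    ORDER BY c.fname
""").fetchall()
names = [r[0] for r in rows]
```

With a real schema you would also index contact_sectors(sec_id, contact_id) so the lookup no longer has to scan every row.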
This is the part which giving error in your query ``` INNER JOIN contact_professional_details ON contact_professional_details.contact_id = contacts.id AND ( <-- Here INNER JOIN contact_to_categories ON contact_to_categories.contactid = contacts.id AND ( contact_to_categories.catid = '2' ) ``` change this to ``` INNER JOIN contact_professional_details ON contact_professional_details.contact_id = contacts.id INNER JOIN contact_to_categories ON contact_to_categories.contactid = contacts.id AND contact_to_categories.catid = '2' ``` **EDIT:** Your posted query is total messy, you did join the same table(s) multiple times and did use chained `OR` condition instead of `IN` clause. So, below is your modified query. ``` SELECT SQL_CALC_FOUND_ROWS fname, lname, desig, company, region, state, country, add_uid, contacts.`id` as id FROM contacts INNER JOIN contact_to_categories ON contact_to_categories.contactid = contacts.id AND contact_to_categories.catid = '2' INNER JOIN contact_professional_details ON contact_professional_details.contact_id = contacts.id AND ( FIND_IN_SET('4', contact_professional_details.pd_secid) OR FIND_IN_SET('3', contact_professional_details.pd_secid) OR FIND_IN_SET('5', contact_professional_details.pd_secid) OR FIND_IN_SET('7', contact_professional_details.pd_secid) OR FIND_IN_SET('12', contact_professional_details.pd_secid) OR FIND_IN_SET('11', contact_professional_details.pd_secid) OR FIND_IN_SET('9', contact_professional_details.pd_secid) OR FIND_IN_SET('38', contact_professional_details.pd_secid) OR FIND_IN_SET('35', contact_professional_details.pd_secid) OR FIND_IN_SET('115', contact_professional_details.pd_secid) ) INNER JOIN contact_address ON contact_address.contact_id = contacts.id AND contact_address.hmregion IN ('AF','EU','OC','SA') AND contact_address.hmcountry IN ('Algeria', 'Angola', 'Benin', 'Comoros', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'American Samoa', 'Australia', 'French Polynesia', 'Guam', 'Kiribati', 'Marshall Islands', 
'Colombia', 'Ecuador', 'Falkland Islands', 'Guyana', 'Paraguay', 'Peru', 'Laos', 'Malaysia', 'Myanmar', 'Singapore', 'Vietnam' ) AND contact_address.hmcity='singapore' INNER JOIN contact_offices ON contact_offices.contact_id = contacts.id AND contact_offices.off_region IN ('AF','EU','OC','SA') AND contact_offices.off_country IN ('Algeria', 'Angola', 'Benin', 'Comoros', 'Andorra', 'Austria', 'Belarus', 'Belgium', 'American Samoa', 'Australia', 'French Polynesia', 'Guam', 'Kiribati', 'Marshall Islands', 'Colombia', 'Ecuador', 'Falkland Islands', 'Guyana', 'Paraguay', 'Peru', 'Laos', 'Malaysia', 'Myanmar', 'Singapore', 'Vietnam' ) AND contact_offices.off_city='singapore' WHERE 1 AND ( FIND_IN_SET('1', contacts.ins_id) OR FIND_IN_SET(' 8', contacts.ins_id) OR FIND_IN_SET(' 33', contacts.ins_id) ) AND ( FIND_IN_SET('4', contacts.sec_id) OR FIND_IN_SET('3', contacts.sec_id) OR FIND_IN_SET('5', contacts.sec_id) OR FIND_IN_SET('7', contacts.sec_id) OR FIND_IN_SET('12', contacts.sec_id) OR FIND_IN_SET('11', contacts.sec_id) OR FIND_IN_SET('9', contacts.sec_id) OR FIND_IN_SET('38', contacts.sec_id) OR FIND_IN_SET('35', contacts.sec_id) OR FIND_IN_SET('115', contacts.sec_id) ) AND (FIND_IN_SET('Tier 1', `vip_tier`) OR FIND_IN_SET('Tier 3', `vip_tier`)) AND (FIND_IN_SET('Tier A', `vip_coll_tier`)) AND (FIND_IN_SET('Yes', `vip_influencer`)) AND (FIND_IN_SET('Contemporary', `vip_class_art_coll`) OR FIND_IN_SET('Modern', `vip_class_art_coll`) OR FIND_IN_SET('Geographic', `vip_class_art_coll`)) AND (FIND_IN_SET('Sculpture', `vip_med_art_coll`) OR FIND_IN_SET('Photography', `vip_med_art_coll`) OR FIND_IN_SET('Video', `vip_med_art_coll`) OR FIND_IN_SET('Installation', `vip_med_art_coll`)) AND (FIND_IN_SET('Japan', `vip_geo_int`) OR FIND_IN_SET('Korea', `vip_geo_int`) OR FIND_IN_SET('Southeast Asia', `vip_geo_int`) OR FIND_IN_SET('Oceania', `vip_geo_int`)) AND (FIND_IN_SET('HNWI', `vip_seniority`) OR FIND_IN_SET('Top Social Leaders', `vip_seniority`) OR FIND_IN_SET('Other 
Executives', `vip_seniority`)) AND (`status`='a') ORDER BY fname asc LIMIT 0,50 ```
MySQL multiple JOIN clause with multiple WHERE
[ "", "mysql", "sql", "join", "inner-join", "where-clause", "" ]
**I have:** Two schemas, `scheme1` and `scheme2`. **I want:** To delete something from `scheme1`. **Problem:** `scheme2` has dependencies on `scheme1`, so I can't delete what I want. **Question:** Is there a way to print all dependencies in `scheme2` on `scheme1`? How can I do that? **Question2:** It would be perfect if you knew how to get ALL dependencies on `some_table` from all schemas. Do you know how?
Answer for the second question, if it's tables that are dependent on "some\_table": ``` select TABLE_NAME from information_schema.REFERENTIAL_CONSTRAINTS where REFERENCED_TABLE_NAME = <some_table> ``` or the other way around: ``` select REFERENCED_TABLE_NAME from information_schema.REFERENTIAL_CONSTRAINTS where TABLE_NAME = <some_table> ```
If by dependency you mean a foreign key, you can check that in: ``` select * from information_schema.REFERENTIAL_CONSTRAINTS; ```
MySQL: get all references to another scheme
[ "", "mysql", "sql", "relational-database", "database-schema", "" ]
I have data with the following structure: ``` _ID _BEGIN _END 7003 99210 99217 7003 10225 10324 7003 111111 ``` I want to look through every \_BEGIN and \_END and return all rows where the input value is between the range of values, including the values themselves (i.e. if 10324 is the input, row 2 would be returned). I have tried this filter but it does not work: ``` where @theInput between a._BEGIN and a._END --THIS WORKS where convert(char(7),'10400') >= convert(char(7),a._BEGIN) --BUT ADDING THIS BREAKS AND RETURNS NOTHING AND convert(char(7),'10400') < convert(char(7),a._END) ```
This would be the obvious answer... ``` SELECT * FROM <YOUR_TABLE_NAME> a WHERE @theInput between a._BEGIN and a._END ``` If the data is a string (assuming so here, as we don't know the DB), you could add this: ``` Declare @searchArg VARCHAR(30) = CAST(@theInput as VARCHAR(30)); SELECT * FROM <YOUR_TABLE_NAME> a WHERE @searchArg between a._BEGIN and a._END ``` If you care about performance and you've got a lot of data and indexes, you won't want to include function calls on the column values. You could in-line this conversion, but this assures that your predicates are [Sargable](http://en.wikipedia.org/wiki/Sargable).
*Less than* `<` and *greater than* `>` operators work on xCHAR data types without any syntactical error, but it may go semantically wrong. Look at examples: **1** - `SELECT 'ab' BETWEEN 'aa' AND 'ac' # returns TRUE` **2** - `SELECT '2' BETWEEN '1' AND '10' # returns FALSE` Character `2` as being stored in a xCHAR type has greater value than `1xxxxx` So you should `CAST` types here. [*Exampled on MySQL - For standard compatibility change `UNSIGNED` to `INTEGER`*] ``` WHERE CAST(@theInput as UNSIGNED) BETWEEN CAST(a._BEGIN as UNSIGNED) AND CAST(a._END as UNSIGNED) ``` You'd better change the types of columns to avoid ambiguity for later use.
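The lexicographic pitfall described above is easy to reproduce. A small sketch, with SQLite standing in for the asker's unspecified database (the text-vs-numeric comparison semantics are the same):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# As text, '2' sorts after '10' (first character wins), so BETWEEN fails.
text_result = cur.execute("SELECT '2' BETWEEN '1' AND '10'").fetchone()[0]

# Cast to integers and the comparison behaves numerically.
cast_result = cur.execute(
    "SELECT CAST('2' AS INTEGER) BETWEEN CAST('1' AS INTEGER) AND CAST('10' AS INTEGER)"
).fetchone()[0]
```

text_result comes back as 0 and cast_result as 1, which is exactly the difference the answer warns about.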
Search Through All Between Values SQL
[ "", "sql", "" ]
I would like to double the value of column C in a table. Is there a built-in function to achieve this? Current table ``` A B C X 1 2 3 Y 4 5 6 Z 7 8 9 ``` After running the T-SQL or any query table values should be as below ``` A B C X 1 2 6 Y 4 5 12 Z 7 8 18 ```
1. To **select** the data: ``` SELECT A,B,C*2 as C FROM TableName ``` Result: ``` A B C 1 2 6 4 5 12 7 8 18 ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/be42e/1). 2. If you want to **update** the table: ``` UPDATE TableName SET C=(C * 2) ```
Does just multiplying C by 2 (I mean `c*2`) cause any problem here? ``` select A, B, C * 2 as c from your_table ``` (OR) If you want, create your own scalar UDF which will return the doubled value, like: ``` create function doubleval(@val int) returns int as begin return @val * 2; end ``` Then, in your select, use it as: ``` select A, B, doubleval(C) as c from your_table ```
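Both forms from the answers, selecting C * 2 and updating the column in place, can be sketched end to end. SQLite stands in for SQL Server here and the table name follows the accepted answer:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE TableName (A INTEGER, B INTEGER, C INTEGER)")
cur.executemany("INSERT INTO TableName VALUES (?, ?, ?)",
                [(1, 2, 3), (4, 5, 6), (7, 8, 9)])

# Read-only form: double C in the result set only.
selected = cur.execute("SELECT A, B, C * 2 AS C FROM TableName ORDER BY A").fetchall()

# Destructive form: double C in the table itself.
cur.execute("UPDATE TableName SET C = C * 2")
updated = cur.execute("SELECT C FROM TableName ORDER BY A").fetchall()
```

The first query leaves the stored data untouched; the UPDATE changes it permanently, which is the distinction the question is really about.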
How to multiply column values in SQL Server?
[ "", "sql", "sql-server", "" ]
I have a column named **BillsOfMonth** (nvarchar) in which I have only month and year values in the format Jan-2014, Feb-2014, Mar-2014. I want to convert it to a date format and order by date. I have tried: ``` select rid , name , billsofmonth from reports order by CONVERT (datetime, "1-" & billsofmonth) ``` Since "1-" & billsOfMonth, which is Jan-2014, will become '1-Jan-2014'. But convert will work only on columns. How can I concatenate a constant string to the column BillsOfMonth and convert it to a date? **I cannot make a function or T-SQL statement. I can only use a query.**
I found what I was doing wrong. In SQL Server we use a single inverted comma (') for a constant expression, not a double inverted comma ("). So it will be: ``` SELECT [rid], [name], [billsofmonth] FROM [reports] ORDER BY CAST(('1-' + billsofmonth) AS datetime) ```
You should use single quotes instead of double quotes, and also use `+` not `&` to concatenate strings: ``` ORDER BY CONVERT(DATETIME, N'1-' + billsofmonth); ``` For what it's worth, there is no need to store this as NVARCHAR, VARCHAR would do just fine as there are no non-ascii characters in the format you are using. This article on [choosing the wrong data type](https://sqlblog.org/2009/10/12/bad-habits-to-kick-choosing-the-wrong-data-type) is a useful read. Although better still would be to store it as a date, making it the first of month, then put in a check constraint to ensure only the first day of the month is used. e.g ``` CREATE TABLE Reports ( BillsOfMonth DATE NOT NULL, CONSTRAINT CHK_Reports_BillsOfMonth CHECK (DATEPART(DAY, BillsOfMonth) = 1) ); ``` This provides much more flexibility with comparison, and ensuring data integrity i.e. I with your column I could enter 'xxxxxxx', which would go into the column fine, but when you came to run the convert to datetime an error would be thrown. The check constraint would stop any invalid entries, so if I ran: ``` INSERT Reports (BillsOfMonth) VALUES ('20140502'); ``` I would get the error: > The INSERT statement conflicted with the CHECK constraint "CHK\_Reports\_BillsOfMonth". The conflict occurred in database "TestDB", table "dbo.Reports", column 'BillsOfMonth'. Then if you will often need your date in the format `MMM-yyyy` you could add a computed column: ``` CREATE TABLE Reports ( BillsOfMonth DATE NOT NULL, BillsOfMonthText AS LEFT(DATENAME(MONTH, BillsOfMonth), 3) + '-' + DATENAME(YEAR, BillsOfMonth), CONSTRAINT CHK_Reports_BillsOfMonth CHECK (DATEPART(DAY, BillsOfMonth) = 1) ); ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/14585/1)**
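The same trick, prefixing a day so that 'Jan-2014' becomes a full parseable date, also works in application code if you ever need to sort such values outside the database. A sketch in Python; the format string is an assumption based on the sample values:

```python
from datetime import datetime

bills = ["Mar-2014", "Jan-2014", "Feb-2014"]

# Prefix '1-' so each value becomes a complete date, then parse and sort.
ordered = sorted(bills, key=lambda b: datetime.strptime("1-" + b, "%d-%b-%Y"))
```

This mirrors the SQL `'1-' + billsofmonth` concatenation: once every value is anchored to the first of the month, ordinary date ordering applies.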
Concatenate Constant value in string in where or Order by clause in sql server?
[ "", "sql", ".net", "sql-server", "database", "" ]
A simplified sample of my database is a table such as: ``` CN REG# CAPACITY ---------- ---------- ---------- 0 LUCY02 43000 6 QRT834 5550 3 SYF777 3333 12 PKR768 1234 5 SST005 12000 6 KKK007 10000 3 PKR008 22000 ``` This has been created by joining two tables. Here is the join statement: ``` select count(tp.reg#) as cn,t.reg#, CAPACITY from truck t join trip tp on t.reg# = tp.reg# group by tp.reg#, CAPACITY, t.reg# ``` Now, how can I write a statement to print only the lines which have the minimum nonzero number of occurrences? In our example it is 3. The expected output is: ``` 3 SYF777 3333 3 PKR008 22000 ``` What I have done so far, which is not working, is: ``` select count(tp.reg#) as cn,t.reg#, CAPACITY from truck t left join trip tp on t.reg# = tp.reg# left join ( select dd, min(d) as ds from( select count(*)as d, ti.CAPACITY, ti.reg# as dd from truck ti, trip tp where tp.reg# = ti.reg# group by ti.reg#, ti.CAPACITY ) group by dd ) on t.reg# = dd group by tp.reg#, CAPACITY, t.reg# ```
Hi, you could try this: ``` select * from (select count(tp.reg#) as cn,t.reg#, CAPACITY from truck t join trip tp on t.reg# = tp.reg# group by tp.reg#, CAPACITY, t.reg#) where cn = (select min(cn) from (select count(tp.reg#) as cn,t.reg#, CAPACITY from truck t join trip tp on t.reg# = tp.reg# where cn > 0 group by tp.reg#, CAPACITY, t.reg#)) ``` I hope this is useful for you... Thanks
If you want to filter your query based on an aggregate function like "count", simply add the filter in a "having" statement: ``` select count(tp.reg#) as cn,t.reg#, CAPACITY from truck t join trip tp on t.reg# = tp.reg# group by tp.reg#, CAPACITY, t.reg# having count(tp.reg#) >= 3 ``` and if you want to query records with the minimum occurrence dynamically, you can use the code below: ``` select count(tp.reg#) as cn,t.reg#, CAPACITY from truck t join trip tp on t.reg# = tp.reg# group by tp.reg#, CAPACITY, t.reg# having count(tp.reg#) = (select distinct top 1 count(tp.reg#) cn from truck t join trip tp on t.reg# = tp.reg# group by tp.reg#, CAPACITY, t.reg# having count(tp.reg#) >0 order by cn) ```
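The dynamic variant, keeping only the groups whose count equals the smallest group count, can also be written without any vendor-specific syntax by comparing against MIN over the grouped counts. A sketch with SQLite standing in for Oracle and a simplified single-table schema:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE trip (reg TEXT)")
# Three trips each for SYF777 and PKR008, six for QRT834.
cur.executemany("INSERT INTO trip VALUES (?)",
                [("SYF777",)] * 3 + [("PKR008",)] * 3 + [("QRT834",)] * 6)

# Keep only groups whose count equals the smallest group count.
rows = cur.execute("""
    SELECT reg, COUNT(*) AS cn
    FROM trip
    GROUP BY reg
    HAVING COUNT(*) = (SELECT MIN(c)
                       FROM (SELECT COUNT(*) AS c FROM trip GROUP BY reg))
    ORDER BY reg
""").fetchall()
```

Since GROUP BY only produces groups for registrations that actually appear in trip, the "nonzero" requirement from the question is satisfied automatically.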
How to select only records that have minimum number of occurrence in a table?
[ "", "sql", "oracle", "join", "oracle11g", "" ]
I'm going to implement a bookstore database. I have created the tables `book`, `author`, and `publisher`. I'd like to make the following two relationships. ``` Book is written by Author. Book is published by Publisher. ``` In order to implement these relationships, I write some SQL statements like: ``` create table book( ISBN varchar(30) NOT NULL, title varchar(30) not null, author varchar(30) not null, stock Int, price Int, category varchar(30), PRIMARY KEY ( ISBN ) ); create table author( author_id int not null auto_increment, author_name varchar(15) NOT NULL, address varchar(50) not null, ISBN varchar(30) not null, primary key (author_id) ); alter table author add constraint ISBN foreign key (ISBN) references book (ISBN); create table publisher( publisher_id int not null auto_increment, publisher_name varchar(15) NOT NULL, address varchar(50) not null, ISBN varchar(30) not null, primary key (publisher_id) ); alter table publisher add constraint ISBN foreign key (ISBN) references book (ISBN); ``` When the MySQL shell executes the last `alter` statement, I get this error. ``` ERROR 1022 (23000): Can't write; duplicate key in table '#sql-2b8_2' ``` Can't a foreign key constraint be designated twice? What's wrong here? Thank you in advance.
You are getting the `duplicate key error` cause there is already a constraint named `ISBN` present in database per your first `alter` statement to `author` table ``` alter table author add constraint ISBN foreign key (ISBN) references book (ISBN); ``` Try using a different name for the constraint in `Publisher` table ``` alter table publisher add constraint ISBN1 foreign key (ISBN) references book (ISBN); ```
Your data structure is strange. You should have entity tables for `Books`, `Authors`, and `Publishers`. These would have auto-incremented ids as primary keys and additional information. For instance, books have "titles" and "isbn" numbers. Authors have names. Publishers have names and addresses. Then you want junction tables. So, books have one or more authors (ignoring "editors" that compile chapters from other authors), and authors can write one or more books. This suggests a `BookAuthors` table, with one row per book and per author in the book. Books would generally have one publisher, so this is a one-to-many relationship. You can implement this by having `PublisherId` in the `Books` table.
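A minimal sketch of the entity/junction design described above, with Books, Authors, and a BookAuthors junction table; SQLite is used for brevity and all names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
cur = conn.cursor()

cur.execute("CREATE TABLE Books (BookId INTEGER PRIMARY KEY, Title TEXT, ISBN TEXT UNIQUE)")
cur.execute("CREATE TABLE Authors (AuthorId INTEGER PRIMARY KEY, Name TEXT)")
# Junction table: one row per (book, author) pair, so a book may have many
# authors and an author may have written many books.
cur.execute("""CREATE TABLE BookAuthors (
    BookId INTEGER REFERENCES Books(BookId),
    AuthorId INTEGER REFERENCES Authors(AuthorId),
    PRIMARY KEY (BookId, AuthorId))""")

cur.execute("INSERT INTO Books VALUES (1, 'SICP', '978-0262510875')")
cur.executemany("INSERT INTO Authors VALUES (?, ?)",
                [(1, "Abelson"), (2, "Sussman")])
cur.executemany("INSERT INTO BookAuthors VALUES (1, ?)", [(1,), (2,)])

authors = [r[0] for r in cur.execute("""
    SELECT a.Name FROM Authors a
    JOIN BookAuthors ba ON ba.AuthorId = a.AuthorId
    WHERE ba.BookId = 1 ORDER BY a.Name""").fetchall()]
```

Compared with the question's schema, the foreign keys now point the right way: the junction table references both entities instead of each entity carrying an ISBN.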
MySQL: ERROR 1022 (23000): Can't write; duplicate key in table '#sql-2b8_2'
[ "", "mysql", "sql", "" ]
Consider the following ``` -- Get all objects from database A use database_a; select o.objectnumber into #temp from EDDSDBO.objects o group by o.objectnumber; -- #temp holds 0001, 0002 -- Get all objects from database B use database_b; select o.objectnumber, case when o.objectnumber in #temp then 1 else 0 end as Match from EDDSDBO.objects o group by o.objectnumber; -- Expected output objectnumber Match 0001 1 0002 1 0003 0 ``` But I get an error: `incorrect syntax near objectnumber`. I cannot seem to get this query right. What is the right syntax here? Any help is greatly appreciated :-) P.S. I'm on SQL Server 2008
If objectnumber is unique in each table, then you can just use `LEFT JOIN` with a 3 part object name: ``` SELECT b.objectnumber, Match = CASE WHEN a.objectnumber IS NOT NULL THEN 1 ELSE 0 END FROM database_b.EDDSDBO.objects b LEFT JOIN database_a.EDDSDBO.objects a ON a.objectnumber = b.objectnumber ``` If it is not unique, you can still do this, but you will need to use group by and an aggregate: ``` SELECT b.objectnumber, Match = MAX(CASE WHEN a.objectnumber IS NOT NULL THEN 1 ELSE 0 END) FROM database_b.EDDSDBO.objects b LEFT JOIN database_a.EDDSDBO.objects a ON a.objectnumber = b.objectnumber GROUP BY b.objectnumber; ``` The important part is there is no need to use a temporary table, this is unnecessary overhead on tempdb, and you also lose the use of any index on `objectnumber`.
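The existence-flag pattern from the first query above can be demonstrated in a single database. SQLite has no cross-database three-part names, so two ordinary tables stand in for the two databases:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE objects_a (objectnumber TEXT)")
cur.execute("CREATE TABLE objects_b (objectnumber TEXT)")
cur.executemany("INSERT INTO objects_a VALUES (?)", [("0001",), ("0002",)])
cur.executemany("INSERT INTO objects_b VALUES (?)", [("0001",), ("0002",), ("0003",)])

# A NULL on the left-joined side means "no match in the other table".
rows = cur.execute("""
    SELECT b.objectnumber,
           CASE WHEN a.objectnumber IS NOT NULL THEN 1 ELSE 0 END AS Match
    FROM objects_b b
    LEFT JOIN objects_a a ON a.objectnumber = b.objectnumber
    ORDER BY b.objectnumber
""").fetchall()
```

This reproduces the expected output from the question (0001 and 0002 flagged 1, 0003 flagged 0) without a temp table.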
Change the second part of your query to ``` use database_b; select o.objectnumber, case when o.objectnumer in (select distinct objectnumber from #temp) then 1 else 0 end as Match from EDDSDBO.objects o group by o.objectnumber; ```
Check if an account# exist in another database
[ "", "sql", "sql-server", "sql-server-2008", "" ]
``` ALTER PROCEDURE [dbo].[SP_GetLatestMmdData] AS DECLARE @vReturnVar VARCHAR(10) ,@vRCount VARCHAR(10) BEGIN SET NOCOUNT ON; SET @vRCount =(SELECT Distinct Shift From [dbo].[tblMMDLogEntry] Where Shift Like 'Night ' And Date=(SELECT MAX(Date) FROM [dbo].[tblMMDLogEntry])) IF(@vRCount>0) BEGIN SET @vReturnVar='Night ' END Else BEGIN SET @vReturnVar='Evening ' END SELECT convert(varchar,[Date],103) Date ,Shift ,SlNo ,TotalManpower ,JobDescription ,PermitNo ,StartTime ,EndTime ,AllotedManpower ,Supervisior ,AShift ,GShift ,BShift ,CShift ,Remarks FROM [dbo].[tblMMDLogEntry] WHERE Date=(SELECT MAX(Date) FROM [dbo].[tblMMDLogEntry]) AND Shift= @vReturnVar END ``` I have the Following Code in My stored Procedure. i have already declared that @vReturnVar but still showing 'Must Declare the scalar variable "@vReturnVar".
Shouldn't it include a "COUNT()"? ``` SET @vRCount =(SELECT COUNT(Distinct Shift) From [dbo].[tblMMDLogEntry] Where Shift Like 'Night ' And Date=(SELECT MAX(Date) FROM [dbo].[tblMMDLogEntry])) ```
Put ``` DECLARE @vReturnVar VARCHAR(10) ,@vRCount VARCHAR(10) ``` After the `BEGIN` of the stored procedure
Must Declare the scalar variable error in Stored Procedure
[ "", "sql", "sql-server-2008", "" ]
Just wondering if anyone can spot anything obviously wrong here. I'm getting strange results with a SELECT in MySQL. I can't see anything wrong with the query so I wondered if some data is throwing it out, but the data does look ok.. So I guess my sql has gone a bit wrong! I have 2 tables, CITY and WEATHER: ``` CITY: CITY_ID INT(3) CITY_NAME VARCHAR(100) WEATHER: CITY_ID INT(3) SDATE DATETIME TEMP INT(3) RAIN INT(3) ``` The CITY table is very small; one row per defined city. The WEATHER table has many rows; it has forecasts (several times per day) for each city. Also holds historical forecasts, so there's data in there going back a few months. I want to see a summary for each city for each month, on each line. I want all cities info for a month, then the previous month etc. Within each month I want the results order by temperature (descending) and then rain (descending). So, my query: ``` SELECT a.city_name as CITY , DATE_FORMAT(b.sdate, '%M %Y') as MONTH , ROUND(avg(b.temp)) as AVG_TEMP_C , SUM(b.rain) as RAIN FROM CITY a , WEATHER b WHERE a.city_id = b.city_id GROUP BY a.city_name , DATE_FORMAT(b.sdate, '%M %Y') ORDER BY sdate desc, ROUND(avg(b.temp)) desc , SUM(b.rain) desc; ``` Results I would expect: ``` CITY MONTH TEMP RAIN ISTANBUL June 20 0 MUNICH June 15 9 PARIS June 15 7 MILAN June 14 8 ISTANBUL May 19 22 etc. ``` Results I get: ``` CITY MONTH TEMP RAIN MUNICH June 15 9 MILAN June 14 8 ISTANBUL June 20 0 PARIS June 15 7 ISTANBUL May 19 22 etc. ``` Like I say, the data *looks* ok, so I guess I've missed something amongst all the formatting, the join, group and order by..
This is your query with an explicit join and more sensible table aliases (table abbreviations rather than arbitrary letters): ``` SELECT c.city_name as CITY, DATE_FORMAT(w.sdate, '%M %Y') as MONTH, ROUND(avg(w.temp)) as AVG_TEMP_C, SUM(w.rain) as RAIN FROM CITY c JOIN WEATHER w ON c.city_id = w.city_id GROUP BY c.city_name , DATE_FORMAT(w.sdate, '%M %Y') ORDER BY sdate desc, ROUND(avg(w.temp)) desc , SUM(w.rain) desc; ``` The first column in the `order by` is `sdate`. However, that is not included in your `group by` clause. So, a value of `sdate` from an indeterminate row is chosen for each city by MySQL (in other databases, this would produce an error). Instead, change your `order by` to use the month and year: ``` ORDER BY year(sdate) desc, month(sdate) desc, ROUND(avg(w.temp)) desc , SUM(w.rain) desc; ```
You are sorting by sdate, which is in the aggregated result table but is not the subject of an aggregate function. Therefore MySQL takes any sdate value which matches the other conditions (probably the first one?) and sorts by this value.
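The deterministic fix, ordering by values derived from the grouped month rather than a bare sdate, can be sketched as follows. SQLite stands in for MySQL, so strftime replaces DATE_FORMAT/year()/month(), and the data set is a tiny synthetic one:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE weather (city TEXT, sdate TEXT, temp INTEGER)")
cur.executemany("INSERT INTO weather VALUES (?, ?, ?)", [
    ("ISTANBUL", "2014-06-03", 20), ("MUNICH", "2014-06-10", 15),
    ("PARIS",    "2014-06-05", 15), ("ISTANBUL", "2014-05-20", 19),
])

# Order by the grouped month (newest first), then average temperature descending.
rows = cur.execute("""
    SELECT city, strftime('%Y-%m', sdate) AS month, ROUND(AVG(temp)) AS t
    FROM weather
    GROUP BY city, strftime('%Y-%m', sdate)
    ORDER BY month DESC, t DESC, city
""").fetchall()
order = [(r[0], r[1]) for r in rows]
```

Every sort key is now a function of the grouped columns, so the result order no longer depends on which underlying row the engine happens to pick.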
Strange results of mysql query with GROUP BY and ORDER BY
[ "", "mysql", "sql", "group-by", "sql-order-by", "" ]
How do I get the entire query executed in sqldeveloper, which would give me the time taken for execution? As of now, it just fetches 50-100 rows, and fetches the rest as and when I scrolled down. It must be a fairly simple solution, but googling it didn't help me. It would be great if someone could point me in the right direction.
Click in your result grid and press `Ctrl-End`.
You will get everything discussed here: <https://community.oracle.com/thread/449344>
Fetching the complete results in sqldeveloper
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
What's the difference between `is null` and `<=> NULL`? ``` mysql> SELECT * FROM param WHERE num is null; +-----+------+ | id | num | +-----+------+ | 8 | NULL | | 225 | NULL | +-----+------+ 2 rows in set (0.00 sec) mysql> SELECT * FROM param WHERE num<>NULL; Empty set (0.00 sec) mysql> SELECT * FROM param WHERE num<=>NULL; +-----+------+ | id | num | +-----+------+ | 8 | NULL | | 225 | NULL | +-----+------+ ``` Is this a difference between standard versions? I do not see the practical use of the `<=>` operator if `is null` exists.
**NEVER** check for nulls using `foo = null` or `foo <> null` or `foo != null` ``` mysql> SELECT 1 <> NULL; -> NULL ``` **Not even NULL is equal to NULL!** ``` mysql> SELECT NULL = NULL; -> NULL ``` Instead use one of the following operators --- The `<=>` is the [Null-Safe Operator](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_equal-to) > **NULL**-safe equal. This operator performs an equality comparison like the **=** operator, but returns `1` rather than **NULL** if both operands are **NULL**, and `0` rather than **NULL** if one operand is **NULL**. ``` mysql> SELECT 1 <=> 1, NULL <=> NULL, 1 <=> NULL; -> 1, 1, 0 mysql> SELECT 1 = 1, NULL = NULL, 1 = NULL; -> 1, NULL, NULL ``` --- On the other hand, [IS NULL](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_is-null) is a little more straight forward > Tests whether a value is **NULL**. ``` mysql> SELECT 1 IS NULL, 0 IS NULL, NULL IS NULL; -> 0, 0, 1 ``` **Important:** Read the [IS NULL documentation](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_is-null) to see how the `sql_auto_is_null` setting affects this operator. **See also:** [IS NOT NULL](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_is-not-null) to test for values *not* equal to NULL. --- You might be interested in [COALESCE](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_coalesce) too.
# Theoretical difference **`<=>` Operator** [`<=>`](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_equal-to) is a safe null-comparison operator. That means you may use it and do not worry if you'll compare with `NULL` - it will behave properly. To illustrate, here is a simple query: ``` mysql> SELECT v, v<=>NULL, v<=>1, v<=>0 FROM test; +------+----------+-------+-------+ | v | v<=>NULL | v<=>1 | v<=>0 | +------+----------+-------+-------+ | 1 | 0 | 1 | 0 | | NULL | 1 | 0 | 0 | +------+----------+-------+-------+ 2 rows in set (0.00 sec) ``` So what `<=>` does - is normal comparison, with paying attention is one or two compared operands are `NULL`. **`IS NULL`** On the other hand, [`IS NULL`](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_is-null) is very similar. It will check if checked argument is `NULL` or not. But - no, it's not exactly same as using `<=>` - at least, because `IS NULL` will return boolean value: ``` mysql> SELECT v, v IS NULL FROM test; +------+-----------+ | v | v IS NULL | +------+-----------+ | 1 | 0 | | NULL | 1 | +------+-----------+ 2 rows in set (0.00 sec) ``` **How they are equivalent** But - **yes**, we can replace `<=>` with `IS NULL`, using [`IF`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_if). That will be done with: ``` mysql> SELECT v, IF(v IS NULL, 1, 0) AS `v<=>NULL`, IF(v IS NULL, 0, v=1) AS `v<=>1`, IF(v IS NULL, 0, v=0) AS `v<=>0` FROM test; +------+----------+-------+-------+ | v | v<=>NULL | v<=>1 | v<=>0 | +------+----------+-------+-------+ | 1 | 0 | 1 | 0 | | NULL | 1 | 0 | 0 | +------+----------+-------+-------+ 2 rows in set (0.00 sec) ``` Thus, `<=>` is equivalent for combination of `IF`, `IS NULL` and plain comparison. # Practical difference I already said, that `<=>` can be replaced with `IS NULL` and `IF` - but `<=>`, actually, have one great benefit. 
It may be used safely in [**prepared statements**](http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html). Imagine that we want to check some condition with incoming value. Using `<=>` we can do this, using prepared statement: ``` mysql> PREPARE stmt FROM 'SELECT * FROM test WHERE v<=>?'; Query OK, 0 rows affected (0.00 sec) Statement prepared ``` And now we can just do not care if we'll pass `NULL` or not - it will work properly: ``` mysql> SET @x:=1; Query OK, 0 rows affected (0.03 sec) mysql> EXECUTE stmt USING @x; +------+ | v | +------+ | 1 | +------+ 1 row in set (0.00 sec) ``` Or with `NULL`: ``` mysql> SET @x:=NULL; Query OK, 0 rows affected (0.00 sec) mysql> EXECUTE stmt USING @x; +------+ | v | +------+ | NULL | +------+ 1 row in set (0.00 sec) ``` That, of course, will be same for all drivers, which are relying on prepared statements (such as [PDO for PHP](https://www.php.net/pdo_mysql), for example). Well. we *can* replace that with `IS NULL` - but `<=>` is a great thing to use in this case, so we can have our prepared statement short and clean.
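The behavioral difference is easy to verify. SQLite has no `<=>`, but its IS operator plays the same null-safe role, so the following sketch mirrors the comparisons above:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# Plain '=' with NULL yields NULL (returned to Python as None), not true or false.
plain = cur.execute("SELECT NULL = NULL").fetchone()[0]

# The null-safe comparison (IS in SQLite, <=> in MySQL) yields a real boolean.
safe = cur.execute("SELECT NULL IS NULL, 1 IS NULL, 1 IS 1").fetchone()
```

The plain comparison comes back as None, while the null-safe form returns 1/0 in every case, which is exactly why `<=>` is convenient in prepared statements.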
What's the difference between "is null " AND "<=> NULL"
[ "", "mysql", "sql", "null", "isnull", "" ]
Getting this error with the following query in SQL Server 2012. **An expression of non-boolean type specified in a context where a condition is expected, near 'RETURN'.** ``` CREATE FUNCTION [dbo].[GetPMResources](@UserResourceNo nvarchar(250)) RETURNS @Resources TABLE ( ResourceNo nvarchar(250) COLLATE Latin1_General_CS_AS not null, Name nvarchar(250) COLLATE Latin1_General_CS_AS not null ) AS BEGIN Declare @RoleID int, @UserDivision nvarchar(20) SELECT TOP(1) @RoleID = r.ReportingRole, @UserDivision = r.DivisionCode FROM Resource r WHERE r.ResourceNo = @UserResourceNo INSERT @Resources SELECT r.ResourceNo,Name = r.ResourceNo + ' ' + r.Name FROM Resource r WHERE r.ResourceNo IN ( SELECT m.ResourceNo FROM JobMember m JOIN Job j ON j.JobNo = m.JobNo WHERE j.ProjectManagerNo = @UserResourceNo OR j.AlternateProjectManagerNo = @UserResourceNo ) OR ( SELECT m.ResourceNo FROM JobMember m JOIN JobTask t ON t.JobTaskNo = m.JobTaskNo WHERE t.TaskManagerNo = @UserResourceNo OR t.AlternateTaskManagerNo = @UserResourceNo ) RETURN END ```
That is invalid syntax. You are mixing relational expressions with scalar operators (`OR`). Specifically you cannot combine `expr IN (select ...) OR (select ...)`. You probably want `expr IN (select ...) OR expr IN (select ...)`. Using union would also work: `expr IN (select... UNION select...)`
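The IN-with-UNION rewrite suggested above can be sketched generically. SQLite stands in for SQL Server and the tables are made up for illustration:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE members_a (resource_no TEXT)")
cur.execute("CREATE TABLE members_b (resource_no TEXT)")
cur.execute("CREATE TABLE resources (resource_no TEXT, name TEXT)")
cur.executemany("INSERT INTO resources VALUES (?, ?)",
                [("R1", "Ann"), ("R2", "Ben"), ("R3", "Cid")])
cur.execute("INSERT INTO members_a VALUES ('R1')")
cur.execute("INSERT INTO members_b VALUES ('R3')")

# expr IN (select ... UNION select ...) is valid SQL;
# expr IN (select ...) OR (select ...) is not, which caused the error.
rows = cur.execute("""
    SELECT name FROM resources
    WHERE resource_no IN (SELECT resource_no FROM members_a
                          UNION
                          SELECT resource_no FROM members_b)
    ORDER BY name
""").fetchall()
names = [r[0] for r in rows]
```

The UNION collapses both subqueries into one set before IN is applied, which is the repair the answer describes.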
*An expression of non-boolean type specified in a context where a condition is expected* I also got this error when I forgot to add ON condition when specifying my join clause.
SQL Server - An expression of non-boolean type specified in a context where a condition is expected, near 'RETURN'
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Why is '`null * 0 = null`'? There cannot be any other result, so why is `null*0` not evaluated to `0`? Is it possible that this has to do with some higher-level math which I am unfamiliar with, or would it maybe have some implications for the DBMS?
`NULL` in SQL represents "unknown". When a math operation is performed with an unknown, you could not possibly get anything known, even when you multiply it by zero. Similar situation happens in math: although multiplying any number by zero yields zero, [multiplying an infinity by zero doesn't](https://math.stackexchange.com/q/28940).
It's a sane design choice to maintain `NULL`ness through all manners of operation. Making an exception for multiplication by 0 would raise inconsistencies, e.g. ``` 0 * NULL; # 0 1 * NULL; # NULL .5 * NULL; # HALFNULL? ;-) ```
Why is null * 0 = null in sql when it is not in math or other languages
[ "", "sql", "math", "null", "" ]
I'm trying to write SQL for MS SQL 2000 with a row\_number function... SQL 2000 doesn't support row\_number with OVER, so I tried this code.... When I wrote this: ``` SELECT P1.*, (SELECT COUNT(*) FROM Persons P2 WHERE P2.Value<= P1.Value ) AS NewValue FROM Persons P1 WHERE ... ``` everything is OK, and I get a new column 'NewValue' with row\_numbers... But when I try to update one column in the table with this new column, I always get an error: "Derived table is not updatable because a column of the derived table is derived or constant..."???? What is wrong?? Here is the complete syntax: ``` UPDATE t SET t.Value= t.NewValue FROM (SELECT P1.*, (SELECT COUNT(*) FROM Persons P2 WHERE P2.Value<= P1.Value) AS NewValue FROM Persons P1) t WHERE .... ``` 'Value' is a column in the table which I can't update with the values from the 'NewValue' column... Thank you very much!!! :)
You should shift things around and do your counting directly as a subquery in the `SET` clause: ``` UPDATE P1 SET Value= (SELECT COUNT(*) FROM Persons P2 WHERE P2.Value<= P1.Value) FROM Persons P1 WHERE .... ```
It means that SQL Server is unable to determine how to update the actual data. This error may appear in two cases: 1. You're trying to update a constant field. Example: ``` update T set Title = N'New title goes here' from (select 'Old title' as Title) as T ``` 2. You're trying to update a derived value. Example: ``` update T set MaxPrice = 512 from (select max(Price) as MaxPrice) as T ``` In order to avoid this issue, you may consider adding a primary key to your table, or base your update on an unique index. There are few cases where you would need a table with no primary key or an unique index. If you're completely sure that any primary key or unique index will harm the schema, you may want to simulate the `row_number`, for example [like this](http://sqlserverplanet.com/sql-2000/simulate-row_number-in-sql-2000): ``` select RowNumber = identity(int, 1, 1), c.LastName, c.FirstName into #Customer_RowID from SalesLT.Customer c order by c.LastName asc ``` Given the lack of unique constraint, make sure you do the select-update within a transaction to avoid updating a different row.
ms sql 2000 row_number code
[ "", "sql", "sql-server-2000", "row-number", "" ]
A quick question. I have a query that brings back 2 columns, 'Description' and 'Amount'. In the Description we have 3 outcomes: 'Gold - owned', 'Bronze - no land' and 'Silver - identified / offered'. I would like the result to show in the order Gold, Silver, Bronze. Order By Asc or Desc does not achieve this. Is there a way to customize an Order By clause? Any help on this would be appreciated, thanks. Rusty
Inside of a `CASE`, you may ascribe a numeric value to each and order those ascending. If you will need to query a large table, consider adding an index on `Description` to improve sorting performance. ``` ORDER BY CASE WHEN Description = 'Gold - owned' THEN 0 WHEN Description = 'Silver - identified / offered' THEN 1 WHEN Description = 'Bronze - no land' THEN 2 ELSE 99 /* Any other value (which you should not have) sorts after all */ END ASC /* And don't forget to be explicit about ASC order though it's the default */ ``` Since this works like a normal column in the `ORDER BY`, if you needed to then sort by the `Amount` or other column, it can be appended with a comma. ``` ORDER BY CASE WHEN Description = 'Gold '... END ASC, Amount DESC, AnotherColumn ASC ```
Try to use [CASE](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#operator_case) statement: ``` ORDER BY ( CASE Description WHEN 'Gold - owned' THEN 1 WHEN 'Silver - identified / offered' THEN 2 WHEN 'Bronze - no land' THEN 3 ELSE 0 END) ```
SQL Custom Order By Clause
[ "", "sql", "sql-order-by", "" ]
This query always returns the same amount of rows but, in a different order, every time. Why does this happen? I have more filters to add but I can't get past this step. ``` BEGIN DECLARE @lastStatus Varchar(10) SELECT [Job].[Job], [Job].[Part_Number], [Job].[Rev], [Job_Operation].[Description], [Job].[Customer_PO], [Job].[Customer_PO_LN], [Delivery].[Promised_Date], [Job_Operation].[Operation_Service], [Job].[Note_Text], [Job_Operation].[Status], [Job_Operation].[Sequence] INTO [#tmpTbl] FROM [PRODUCTION].[dbo].[Job_Operation] INNER JOIN [Job] ON [Job_Operation].[Job]=[Job].[Job] INNER JOIN [Delivery] ON [Job_Operation].[Job]=[Delivery].[Job] WHERE [Job].[Status]='Complete' ORDER BY [Job_Operation].[Job],[Job_Operation].[Sequence] SELECT * FROM [#tmpTbl] DROP TABLE [#tmpTbl] END ```
Put the `Order By` on the `Select * From #tmpTbl`, not on the `insert`.
You can do initials on your table and you can remove your bracket for non spaces so you can make your code shorter. ``` SELECT j.Job, ,j.[Part_Number] ,j.Rev ,j_O.Description ,j.Customer_PO ,j.[Customer_PO_LN] ,d.[Promised_Date] ,j_o.[Operation_Service] ,j.[Note_Text], ,j_o.Status, ,j_o.Sequence ,j.[Customer_PO], ,j.[Customer_PO_LN], ,d.[Promised_Date], ,j_o.[Operation_Service], ,j.[Note_Text], ,j_o.[Status], [Job_Operation].[Sequence] INTO [#tmpTbl] FROM [PRODUCTION].[dbo].[Job_Operation] j_o INNER JOIN Job j ON j_o.Job = j.Job INNER JOIN Delivery d ON j_o.Job= d.Job WHERE j.Status='Complete' ORDER BY j_o.Job,j_o.Sequence SELECT * FROM [#tmpTbl] DROP TABLE [#tmpTbl] END ```
Query returns a different result every time it is run
[ "", "sql", "sql-server", "t-sql", "sql-order-by", "" ]
I have a table called 'loan' with the constraint `Loan_Date < Return_Date`. I'm unable to insert null values into Return_Date. Constraint: `CONSTRAINT LOAN_DATE_CK CHECK (Loan_Date < Return_Date)` My SQL statement: ``` INSERT INTO Loan (Borrower_Id, Copy_Id, Loan_Date ,Return_Date) VALUES ('00001','00001','2014-04-01','2014-04-10'), ('00001','00007','2014-04-10',' '); ``` Error: ``` ****Msg 547, Level 16, State 0, Line 1 The INSERT statement conflicted with the CHECK constraint "LOAN_DATE_CK". The conflict occurred in database "lib_database", table "dbo.Loan". The statement has been terminated.**** ``` I can enter records with both values. Can anyone please help me to correct this error?
Either remove the constraint, or add another criterion to the constraint and OR it, such as: ``` CONSTRAINT LOAN_DATE_CK CHECK (Loan_Date < Return_Date OR Return_Date is null) ``` and do the insert like this: ``` INSERT INTO Loan (Borrower_Id, Copy_Id, Loan_Date ,Return_Date) VALUES ('00001','00001','2014-04-01','2014-04-10'), ('00001','00007','2014-04-10', null); -- use null instead of ' ' or '' ```
I'm assuming the Loan\_Date ,Return\_Date fields are of type Date/DateTime - You cannot successfully evaluation a Date data type against a null value (2014-01-01 > or < null) this value will always return false I Would suggest add an ISNULL check condition to return a static date that you can check against ``` CONSTRAINT LOAN_DATE_CK CHECK (Loan_Date < ISNULL(Return_Date, CAST('CustomDate' as DATE))) ```
SQL query - Unable to insert values with date check constraint
[ "", "sql", "sql-server", "datetime", "constraints", "sql-insert", "" ]
Consider the following table: ``` MyValue -------------------- 123 122_DELETED 45670 42425 43_NO_VIEW 4365463_CORRUPT 53 4335_NO_VIEW_ALLOWED ``` I'm trying to get only the numbers returned. In other words: strip everything after the first underscore (`_`): ``` select left(MyValue, charindex('_', MyValue)-1) from DB.Table ``` However, this returns the error `Invalid length parameter passed to the LEFT or SUBSTRING function.` I believe this is because the value is `NULL` in case the current value has no underscore (for instance, `123`). How can I account for this exception? Any help is greatly appreciated. I am on SQL Server 2008.
try this! ``` select myval,case when myval like '%[_]%' then substring(myval,1,patindex('%[_]%',myval)-1) else myval end from t ``` `##DEMO USING PATINDEX` `##DEMO USING CHARINDEX`
``` select SUBSTRING(MyValue,CHARINDEX('_',MyValue)+1,LEN(MyValue)) from DB.Table ```
Select substring of retrieved value
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Here is a table I created to explain what I want to do: ``` create table #test ( PlaceID int, ItemID int, ItemCount int, Amount dec(11,2) ) ``` I would like to get 3 things: 1. sum by Place 2. sum by Place and Item 3. sum by Place and Non-item The first two are simple: ``` sum(Amount) over (partition by PlaceID) as PlaceAmount sum(Amount) over (partition by PlaceID, ItemID) as PlaceItemAmount ``` But how do I get the sum for all items in the place that are NOT the current item? Here is a [SQL Fiddle](http://sqlfiddle.com/#!6/297ad/1) with the data and query set up:
``` select t1.PlaceID, t1.ItemID, t1.ItemCount , t1.Amount as 'AmtMe' , SumPlace.sum as 'AmtPlace' , SumPlace.sum - t1.Amount as 'AmtPlaceNoMe' from #test as t1 join (select PlaceID, sum(Amount) as 'sum' from #test group by PlaceID) as SumPlace on t1.PlaceID = SumPlace.PlaceID ```
``` SELECT PlaceID, ItemID, ItemCount, Amount, sum(ItemCount) over (partition BY PlaceID) AS PlaceItemCount, sum(Amount) over (partition BY PlaceID) AS PlaceAmount , sum(Amount) over (partition BY PlaceID, ItemID) AS PlaceItemAmount , sum(Amount) over (partition BY PlaceID) - sum(Amount) over (partition BY PlaceID, ItemID) AS PlaceItemAmountMinusGroup , sum(Amount) over (partition BY PlaceID) - Amount PlaceItemAmountMinusThis FROM tblTest ``` `PlaceItemAmountMinusGroup` is the total amount by place without the total amount of `ItemID` `PlaceItemAmountMinusThis` is the total amount by place without the amount of the row. [SQLFiddle demo](http://sqlfiddle.com/#!3/297ad/2)
Is it possible to filter within a windowing function's partition
[ "", "sql", "t-sql", "" ]
I have a table named `Locations` that has a column named `effective_date` with many dates from many years, and I want to retrieve only those that are ***not*** on the first day of the month or the last day of that month.
If your `effective_date` column is of type `date`, then this SQL query will return all rows with a non-null `effective_date` value that is other than the 1st or last day of the month: ``` select t.effective_date , count(*) from dbo.foo t where 1 = 1 -- just for clarity -- after the 1st day of the month and t.effective_date > dateadd(day , 1-day( t.effective_date ) , t.effective_date ) -- and prior to the last day of the month and t.effective_date < dateadd( day , -day( dateadd(month,1,t.effective_date) ) , dateadd(month,1,t.effective_date) ) ``` If your column carries a time component with it, that is, any of: * `datetime` * `smalldatetime` * `datetime2` * `datetimeoffset` You'll want to cover your bases and modify the query, something like ``` select * from dbo.foo t where 1=1 -- added for clarity -- effective date on or after the 2nd of the month and t.effective_date >= convert(date, dateadd(day , 2-day( t.effective_date ) , t.effective_date ) ) -- and prior to the last day of the month and t.effective_date < convert(date, dateadd(day, -day( dateadd(month,1,t.effective_date) ) , dateadd(month,1,t.effective_date) ) ) ```
Here is a [SQL Fiddle Demo](http://sqlfiddle.com/#!6/ae56d/3/0) with the detail below. Generate a table and some sample test data: ``` CREATE TABLE Locations( effective_date DATETIME ) INSERT INTO Locations VALUES('2014-01-01') -- First day so we would expect this NOT to be returned INSERT INTO Locations VALUES('2014-01-02') -- This should be returned INSERT INTO Locations VALUES('2014-01-31') -- Last day of January so this should NOT be returned ``` Then the query below works out the last day of the month for each date in the table, records are only returned is if the `effective_date` is not the first or last day of the month as calculated. ``` SELECT effective_date FROM Locations WHERE -- not the first day (the easy bit!) DATEPART(day, effective_date) <> 1 -- not the last day (slightly more complex) AND DATEPART(day, effective_date) <> DATEPART(day, DATEADD(second,-1,DATEADD(month, DATEDIFF(month,0,effective_date)+1,0))) ``` When executed only `January, 02 2014 00:00:00+0000` is returned as expected. The clever bit here is the function to calculate the last day of the current month when given a date, lets examine that and break it down: ``` DECLARE @sampleDate DATETIME SET @sampleDate = '2014-01-02' -- Calculate the number of months between '1900-01-01' and the @sampleDate -- +1 as we want to shift into the following month so we can work back: SELECT DATEDIFF(month,0,@sampleDate) + 1 -- Result --> 1369 -- Create a new date by adding the result of the previous step in -- months to '1900-01-01' SELECT DATEADD(month, DATEDIFF(month,0,@sampleDate)+1,0) -- Result --> '2014-02-01' (giving first day of the following month) -- Subtract one second from this SELECT DATEADD(second,-1,DATEADD(month, DATEDIFF(month,0,@sampleDate)+1,0)) -- Result --> '2014-01-31 23:59:59' (giving the very end of the original month) -- Finally extract the day of the month SELECT DATEPART(day, DATEADD(second,-1,DATEADD(month, DATEDIFF(month,0,@sampleDate)+1,0))) -- Result --> 31 ```
Find dates that are not on the first or last day of the month
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm quite a beginner at SQL and I've been trying to work out for hours now what the problem is with this: ``` select to_char(date_Created, 'MON DD YYYY') as jours, action, count(ID) from Logs group by action, to_char(date_Created, 'MON DD YYYY') union select distinct to_char(date_Created, 'MON DD YYYY'), action, 0 from Logs WHERE jours BETWEEN 'AVR. 14 2014' AND 'AVR. 15 2014' ``` When I try it, it returns an error: ``` ORA-00904: "JOURS" : identificateur non valide 00904. 00000 - "%s: invalid identifier" *Cause: *Action: Erreur à la ligne 7, colonne 6 ``` Thanks!
How about: ``` select to_char(date_Created, 'MON DD YYYY') as jours, action, count(ID) from Logs group by action, to_char(date_Created, 'MON DD YYYY') union select distinct to_char(date_Created, 'MON DD YYYY') as jours, action, 0 from Logs WHERE to_char(date_Created, 'MON DD YYYY') BETWEEN 'AVR 14 2014' AND 'AVR 15 2014' ```
Your problem is caused because Standard SQL doesn't allow you to refer to a column alias in a WHERE clause. This restriction is imposed because when the WHERE code is executed, the column value may not yet be determined.You cannnot USE the 'JOURS' label because the where code is not aware of it yet.
SQL - Invalid Identifier, but where?
[ "", "sql", "oracle", "union", "" ]
This is my table: ``` CREATE TABLE `tab_adasf` ( `adasf_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `adasf_shopId` int(10) unsigned NOT NULL, `adasf_localId` bigint(20) unsigned NOT NULL, `adasf_shopState` varchar(255) DEFAULT NULL, `adasf_shopCity` varchar(255) DEFAULT NULL, `adasf_shopName` varchar(255) DEFAULT NULL, `adasf_shopDoor` varchar(255) DEFAULT NULL, `adasf_computerName` varchar(255) DEFAULT NULL, `adasf_channel` bigint(20) NOT NULL, `adasf_totalInside` bigint(20) NOT NULL, `adasf_totalOutside` bigint(20) NOT NULL, `adasf_createdAt` datetime NOT NULL, PRIMARY KEY (`adasf_id`), KEY `adasf_shopId` (`adasf_shopId`), KEY `adasf_localId` (`adasf_localId`), KEY `adasf_shopState` (`adasf_shopState`,`adasf_shopCity`,`adasf_shopName`,`adasf_shopDoor`), KEY `adasf_computerName` (`adasf_computerName`,`adasf_channel`,`adasf_createdAt`), CONSTRAINT `tab_adasf_ibfk_1` FOREIGN KEY (`adasf_shopId`) REFERENCES `tab_shop` (`shop_id`) ON UPDATE CASCADE ) ENGINE=InnoDB AUTO_INCREMENT=1453500 DEFAULT CHARSET=utf8 ``` As AUTO\_INCREMENT's value says: it has 1453500 rows. 
In order to generate a XML file, I need the resultset as follows: ``` SELECT UPPER(adasf_shopState) AS adasf_shopState, UPPER(adasf_shopCity) AS adasf_shopCity, UPPER(adasf_shopName) AS adasf_shopName, UPPER(adasf_shopDoor) AS adasf_shopDoor, adasf_computerName, adasf_channel, SUM(adasf_totalInside) AS adasf_totalInside, SUM(adasf_totalOutside) AS adasf_totalOutside, YEAR(adasf_createdAt) AS year, MONTH(adasf_createdAt) AS month, DAY(adasf_createdAt) AS day, HOUR(adasf_createdAt) AS hour FROM tab_adasf WHERE 1=1 AND adasf_shopId = '1' AND HOUR(adasf_createdAt) BETWEEN '10:00' AND '21:00' GROUP BY UPPER(adasf_shopState), UPPER(adasf_shopCity), UPPER(adasf_shopName), UPPER(adasf_shopDoor), adasf_computerName, adasf_channel, YEAR(adasf_createdAt), MONTH(adasf_createdAt), DAY(adasf_createdAt), HOUR(adasf_createdAt) ORDER BY UPPER(adasf_shopState), UPPER(adasf_shopCity), UPPER(adasf_shopName), UPPER(adasf_shopDoor), UPPER(adasf_computerName), adasf_channel, adasf_createdAt ``` It takes 3 minutes to run and fetch. My question is: what am I doing wrong? How can I speed up this query or table? Thanks in advance!
As others have said, a query like this that reports on the whole table is inherently time-consuming. That being said, here's a suggestion or two. First, eliminate the `UPPER()` function calls in the `GROUP BY` clause. MySQL's collations are case-insensitive anyhow. Second, instead of `GROUP BY` year, month, day, hour, try using this expression in the GROUP BY. ``` DATE_FORMAT(adasf_createdAt, '%Y-%m-%d %H:00:00') ``` This will basically round off your `createdAt` value to the immediately preceding hour. Third, let's refactor the `WHERE` item that reads ``` HOUR(adasf_createdAt) BETWEEN '10:00' AND '21:00' ``` It should say ``` HOUR(adasf_createdAt) BETWEEN 10 AND 21 ``` And if you remove that from your main query it will speed up. You can then wrap your query in another query like so: ``` SELECT * FROM ( /*your whole query without the WHERE HOUR() BETWEEN clause */ ) AS q WHERE q.hour BETWEEN 10 AND 21 ``` Finally, try creating a compound covering index on ``` adasf_shopId, adasf_shopState, adasf_shopCity, adasf_shopName, adasf_shopDoor, adasf_computerName, adasf_channel, adasf_CreatedAt, adasf_totalInside, adasf_totalOutside ``` This index has all the information required to satisfy your query arranged in sequential order. It's possible this will speed up your query. 
So, your ultimate query looks like this: ``` SELECT * FROM ( SELECT UPPER(adasf_shopState) AS adasf_shopState, UPPER(adasf_shopCity) AS adasf_shopCity, UPPER(adasf_shopName) AS adasf_shopName, UPPER(adasf_shopDoor) AS adasf_shopDoor, adasf_computerName, adasf_channel, SUM(adasf_totalInside) AS adasf_totalInside, SUM(adasf_totalOutside) AS adasf_totalOutside, YEAR(adasf_createdAt) AS year, MONTH(adasf_createdAt) AS month, DAY(adasf_createdAt) AS day, HOUR(adasf_createdAt) AS hour FROM tab_adasf WHERE 1=1 AND adasf_shopId = '1' GROUP BY adasf_shopState, adasf_shopCity, adasf_shopName, adasf_shopDoor, adasf_computerName, adasf_channel, DATE_FORMAT(adasf_createdAt, '%Y-%m-%d %H:00:00') ORDER BY adasf_shopState, adasf_shopCity, adasf_shopName, adasf_shopDoor, adasf_computerName, adasf_channel, DATE_FORMAT(adasf_createdAt, '%Y-%m-%d %H:00:00') ) AS q WHERE q.hour BETWEEN 10 AND 21 ``` It's possible this simplification of your query, combined with the covering index, will make the query faster. Please note that I haven't debugged this query and don't have the test data to do so.
To speed up the query, you can create an index on `tab_adasf(adasf_shopId)`. This should help performance a lot if you have many shops. If you need to do a lot of queries of this type, then consider splitting the `adasf_createdAt` column into a date component and a time component. Then you can create an index on `tab_adasf(adasf_shopId, adasf_createdAt_time)`, further helping the query. In general splitting the time from the datetime is not recommended unless you have a good reason. Increasing performance of this type of query constitutes a "good reason".
MySQL query optimization: how can I speed up this query?
[ "", "mysql", "sql", "query-optimization", "" ]
Someone please help me with this query. I have 2 tables: **Employee** ``` EmployeeID LanguageID 1 1 1 2 1 3 2 1 2 3 3 1 3 2 4 1 4 2 4 3 ``` **Task** ``` TaskID LanguageID LangaugeRequired 1 1 1 1 2 0 2 1 1 2 2 1 2 3 1 3 2 0 3 3 1 ``` LangaugeID is connected to table langauge (this table is for explanation only) ``` LangaugeID LanguageName 1 English 2 French 3 Italian ``` Is there a possible way to make a query which gets the employees who can speak all the languages required for each task? For example: 1. Task ID 1 requires only LanguageID = 1, so the result should be EmployeeID 1,2,3,4 2. Task ID 2 requires all 3 languages, so the result should be EmployeeID 1,4 3. Task ID 3 requires only LanguageID = 3, so the result should be EmployeeID 1,2,4
here is another variant to do this: ``` select t1.taskid, t2.employeeid from ( select a.taskid, count(distinct a.languageid) as lang_cnt from task as a where a.LangaugeRequired=1 group by a.taskid ) as t1 left outer join ( select a.taskid, b.employeeid, count(distinct b.languageid) as lang_cnt from task as a inner join employee as b on (a.LangaugeRequired=1 and a.languageid=b.languageid) group by a.taskid, b.employeeid ) as t2 on (t1.taskid=t2.taskid and t1.lang_cnt=t2.lang_cnt) ### here you can insert where statement, like: where t1.taskid=1 and t2.employeeid=1 if such query returns row - this employee can work with this task, if no rows - no ### order by t1.taskid, t2.employeeid ``` as you see, this query creates two temporary tables and then joins them. first table (t1) calculates how many languages are required for each task second table (t2) finds all employees who has at least 1 language required for task, groups by task/employee to find how many languages can be taken by this employee the main query performs LEFT JOIN, as there can be situations when no employees can perform task here is the output: ``` task employee 1 1 1 2 1 3 1 4 2 1 2 4 3 1 3 2 3 4 ``` update: simpler, but less correct variant, because it will not return tasks without possible employees ``` select a.taskid, b.employeeid, count(distinct b.languageid) as lang_cnt from task as a inner join employee as b on (a.LangaugeRequired=1 and a.languageid=b.languageid) group by a.taskid, b.employeeid having count(distinct b.languageid) = (select count(distinct c.languageid) from task as c where c.LangaugeRequired=1 and c.taskid=a.taskid) ```
Another version using `NOT EXISTS` Retrieve all task-employee combinations where a missing language does not exist ``` SELECT t1.EmployeeId, t2.TaskId FROM ( SELECT DISTINCT EmployeeID FROM Employee ) t1 , ( SELECT DISTINCT TaskID FROM Task ) t2 WHERE NOT EXISTS ( SELECT 1 FROM Task t LEFT JOIN Employee e ON e.EmployeeID = t1.EmployeeID AND e.LanguageID = t.LanguageID WHERE t.TaskID = t2.TaskID AND LanguageRequired = 1 AND e.EmployeeID IS NULL ) ``` <http://www.sqlfiddle.com/#!6/e3c78/1>
SQL Server matching all rows from Table1 with all rows from Table2
[ "", "sql", "sql-server", "database", "" ]
I have this `users` table: ![users](https://i.stack.imgur.com/GnUCa.png) and this `relationships` table: ![enter image description here](https://i.stack.imgur.com/FfqRN.png) So each user is paired with another one in the `relationships` table. Now I want to get a list of `users` which are not in the `relationships` table, in either of the two columns (`user_id` or `pair_id`). **How could I write that query?** First try: ``` SELECT users.id FROM users LEFT OUTER JOIN relationships ON users.id = relationships.user_id WHERE relationships.user_id IS NULL; ``` Output: ![enter image description here](https://i.stack.imgur.com/ujZhL.png) This should display only 2 results: 5 and 6. The result 8 is not correct, as it already exists in `relationships`. Of course I'm aware that the query is not correct; how can I fix it? * I'm using PostgreSQL.
You need to compare to both values in the `on` statement: ``` SELECT u.id FROM users u LEFT OUTER JOIN relationships r ON u.id = r.user_id or u.id = r.pair_id WHERE r.user_id IS NULL; ``` In general, `or` in an `on` clause can be inefficient. I would recommend replacing this with two `not exists` statements: ``` SELECT u.id FROM users u WHERE NOT EXISTS (SELECT 1 FROM relationships r WHERE u.id = r.user_id) AND NOT EXISTS (SELECT 1 FROM relationships r WHERE u.id = r.pair_id); ```
This is a special case of: [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694) I suppose this will be simplest and fastest: ``` SELECT u.id FROM users u WHERE NOT EXISTS ( SELECT 1 FROM relationships r WHERE u.id IN (r.user_id, r.pair_id) ); ``` In Postgres, `u.id IN (r.user_id, r.pair_id)` is just short for:`(u.id = r.user_id OR u.id = r.pair_id)`. The expression is transformed that way internally, which can be observed from `EXPLAIN ANALYZE`. To clear up speculations in the comments: Modern versions of Postgres are going to use matching indexes on `user_id`, and / or `pair_id` with this sort of query.
How can I get records from one table which do not exist in a related table?
[ "", "sql", "postgresql", "" ]
A misconfigured manual import imported our entire AD into our help desk user database, creating a bunch of extraneous/duplicate accounts. Of course, no backup to restore from. To facilitate the cleanup, I want to run a query that will find users not currently linked to any current or archived tickets. I have three tables, `USER`, `HD_TICKET`, and `HD_ARCHIVE_TICKET`. I want to compare the `ID` field in `USER` to the `OWNER_ID` and `SUBMITTER_ID` fields in the other two tables, returning only the values in `USER.ID` that do not exist in any of the other four columns. How can this be accomplished?
Do a left join for each relationship where the right table id is null: ``` select user.* from user left join hd_ticket on user.id = hd_ticket.owner_id left join hd_ticket as hd_ticket2 on user.id = hd_ticket2.submitter_id left join hd_archive_ticket on user.id = hd_archive_ticket.owner_id left join hd_archive_ticket as hd_archive_ticket2 on user.id = hd_archive_ticket2.submitter_id where hd_ticket.owner_id is null and hd_ticket2.submitter_id is null and hd_archive_ticket.owner_id is null and hd_archive_ticket2.submitter_id is null ```
How about something like: ``` SELECT id FROM user WHERE id NOT IN ( SELECT owner_id FROM hd_ticket UNION ALL SELECT submitter_id FROM hd_ticket UNION ALL SELECT owner_id FROM hd_archive_ticket UNION ALL SELECT submitter_id FROM hd_archive_ticket ) ```
Find unique values that do not exist in multiple columns and tables
[ "", "mysql", "sql", "" ]
I am trying to fetch data from remote db by using dblink through function but getting an error `query has no destination for result data`. I am using plpgsql language to do the same. **Function**: ``` CREATE OR REPLACE FUNCTION fun() RETURNS text AS $$ begin select dblink_connect( 'port=5432 dbname=test user=postgres password=****'); WITH a AS ( SELECT * FROM dblink( 'SELECT slno,fname,mname,lname FROM remote_tbl' ) AS t (slno int, fname text, mname text, lname text) ) , b AS ( INSERT INTO temptab1 SELECT slno, name FROM a ) , c AS ( INSERT INTO temptab2 SELECT slno, name FROM a ) INSERT INTO temptab3 SELECT slno, name FROM a; select dblink_disconnect(); end; $$ LANGUAGE plpgsql; ``` **Calling Function**: ``` SELECT fun(); ``` > ERROR: query has no destination for result data
The stored procedure won't just return the result of the last SELECT. You need to actually return the value: ``` CREATE OR REPLACE FUNCTION fun() RETURNS text AS $$ BEGIN --- .... RETURN(SELECT dblink_disconnect()); END $$ LANGUAGE plpgsql; ``` You're getting the error because Postgres expects the function to return something of type text, but your function doesn't return anything.
Use a **plain SQL function instead of PL/PgSQL**, or use `SELECT INTO` and ordinary `RETURN`.
PostgreSQL: Query has no destination for result data
[ "", "sql", "database", "postgresql", "plpgsql", "" ]
I have the following query, which is a join of two tables, as you can see. From the result set below I want to exclude the 'Other' row: ``` IncidentCategory No_Incident Near Miss 25 Road Events 129 Service Strike 150 Other Null ``` The query: ``` SELECT BT.Name AS IncidentCategory, SUM(IR.ID) AS No_Incident From BBTypes BT LEFT JOIN IncidentReports IR ON BT.ID = IR.BbType AND IR.WbsElementId = 23 GROUP BY BT.Name Order By BT.Name DESC ```
Try adding ``` where BT.Name <> 'Other' ```
In your query you use GROUP BY so it is better to use HAVING COUNT to boost your query performance. HAVING COUNT(IR.ID)>0 is avoid null row in No\_Incident column. ``` SELECT BT.Name AS IncidentCategory, SUM(IR.ID) AS No_Incident From BBTypes BT LEFT JOIN IncidentReports IR ON BT.ID = IR.BbType AND IR.WbsElementId = 23 GROUP BY BT.Name HAVING COUNT(IR.ID)>0 ORDER BY BT.Name DESC ```
how to avoid data from one column in SQL Server
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
Failed to find the answer in the specs. So, I wonder: Can I do something like that in hive? ``` insert into table my_table with a as ( select * from ... where ... ), b as ( select * from ... where ... ) select a.a, a.b, a.c, b.a, b.b, b.c from a join b on (a.a=b.a); ```
With is available in Hive as of version `0.13.0`. [Usage documented here](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select).
I guess you could always use subqueries: ``` insert into table my_table select a.a, a.b, a.c, b.a, b.b, b.c from ( select * from ... where ... ) a join ( select * from ... where ... ) b on a.a = b.a; ```
Is there sql WITH clause equivalent in hive?
[ "", "sql", "hadoop", "hive", "" ]
I am learning SQL with GalaXQL and can't figure out the following question: > Build a query which returns starids from planets. The starids should be selected so that for each starid (x) in the list: > - there should exist a planet with a starid that's three times x but > - there should not exist a planet with starid two times x. > Only use starid:s from the planets table. What would be a query that would accomplish this using UNION, UNION ALL, INTERSECT, EXCEPT? Please digest this in your answer as I am a beginner. Thank you in advance! Here is the database schema: ``` CREATE TABLE stars (starid INTEGER PRIMARY KEY, name TEXT, x DOUBLE NOT NULL, y DOUBLE NOT NULL, z DOUBLE NOT NULL, class INTEGER NOT NULL, intensity DOUBLE NOT NULL); CREATE TABLE hilight (starid INTEGER UNIQUE); CREATE TABLE planets (planetid INTEGER PRIMARY KEY, starid INTEGER NOT NULL, orbitdistance DOUBLE NOT NULL, name TEXT, color INTEGER NOT NULL, radius DOUBLE NOT NULL); CREATE TABLE moons (moonid INTEGER PRIMARY KEY, planetid INTEGER NOT NULL, orbitdistance DOUBLE NOT NULL, name TEXT, color INTEGER NOT NULL, radius DOUBLE NOT NULL); CREATE INDEX planets_starid ON planets (starid); CREATE INDEX moons_planetid ON moons (planetid); ``` Here is my query: ``` SELECT planets.starid FROM planets UNION SELECT starid*3 FROM planets EXCEPT SELECT starid*2 FROM planets ```
Easiest is to start with exists/not exists: ``` SELECT starid FROM planets p1 WHERE EXISTS ( SELECT 1 FROM planets p2 WHERE p2.starid = 3*p1.starid ) AND NOT EXISTS ( SELECT 1 FROM planets p3 WHERE p3.starid = 2*p1.starid ) ``` If you want to express this in terms of union/intersect, let A be all starids from planets, and let B be the starids \* 3. Since both of these must exist we are talking intersection ``` SELECT starid FROM planets INTERSECT SELECT 3*starid FROM planets ``` From that set we want to exclude starids \* 2. That is all elements but starids \* 2. This is known as EXCEPT or MINUS: ``` SELECT starid FROM planets EXCEPT SELECT 2*starid FROM planets ``` Since the result should belong to both of these sets we once again apply intersection: ``` (SELECT starid FROM planets INTERSECT SELECT 3*starid FROM planets) INTERSECT (SELECT starid FROM planets EXCEPT SELECT 2*starid FROM planets) ``` Did that help?
![nice explanation](https://i.stack.imgur.com/wAA8T.jpg) For more details, check this link, which helped me: [Learn to Use Union, Intersect, and Except Clauses](http://www.essentialsql.com/learn-to-use-union-intersect-and-except-clauses/)
SQL - UNION, UNION ALL, INTERSECT, EXCEPT
[ "", "sql", "" ]
I am running a query against **MS SQL Server 2008** and am selecting an accountnumber and the max of the column mydate grouped by accountnumber: ``` select AccountNumber, max(mydate) from #SampleData group by AccountNumber ``` I want to add a column to the result that contains the second highest mydate that is associated with the AccountNumber group. I know it would have to be something like: ``` select max(mydate) from #SampleData where mydate < (select max(mydate) from #SampleData) ``` But how do I get both the max and 2nd max in one select query?
Something like this should select the second highest: ``` select AccountNumber, max(mydate), (select max(SD2.mydate) from #SampleData SD2 where SD2.AccountNumber=#SampleData.AccountNumber AND SD2.mydate<max(#SampleData.mydate)) from #SampleData group by AccountNumber ```
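The same idea can be restated so each value comes from its own scalar subquery rather than referencing the outer aggregate (which some engines reject). A runnable sketch with Python's sqlite3 on made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sample (account TEXT, mydate TEXT)")
con.executemany("INSERT INTO sample VALUES (?, ?)", [
    ("A", "2020-01-01"), ("A", "2020-02-01"), ("A", "2020-03-01"),
    ("B", "2020-05-01"), ("B", "2020-04-01"),
])

# Per account: the overall max, and the max of the dates strictly below that max.
rows = con.execute("""
    SELECT account,
           (SELECT MAX(mydate) FROM sample s2 WHERE s2.account = s.account) AS max_date,
           (SELECT MAX(mydate) FROM sample s3
             WHERE s3.account = s.account
               AND mydate < (SELECT MAX(mydate) FROM sample s4
                              WHERE s4.account = s.account)) AS second_max
    FROM (SELECT DISTINCT account FROM sample) s
    ORDER BY account
""").fetchall()
print(rows)  # [('A', '2020-03-01', '2020-02-01'), ('B', '2020-05-01', '2020-04-01')]
```

This is only an illustration of the correlated-subquery pattern; the temp-table syntax (`#SampleData`) from the question is SQL Server specific.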
You didn't specify your DBMS so this is ANSI SQL: ``` select accountnumber, rn, mydate from ( select accountnumber, mydate, row_number() over (partition by accountnumber order by mydate desc) as rn from #SampleData ) t where rn <= 2; ```
SQL Select MAX and 2nd MAX
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have come across some sql that has an inner join right before a left join. ``` From Table t INNER JOIN vw_User i ON t.UserID = i.UserID LEFT JOIN UserTypeTwo it ON t.UserTypeTwoID = it.UserTypeTwoID ``` Can someone please help me visualize how this works. I can visualize the two joins separately, but since they are one after the other, how do they work together?
See the following image (I don't have enough rep to embed it): <https://i.stack.imgur.com/1S8fF.png>
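In case the linked diagram is unavailable, a small runnable sketch (Python's sqlite3; all table names and contents are made up) shows the key point: joins are applied left to right, so the LEFT JOIN operates on the rows that survive the INNER JOIN.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (id INTEGER, user_id INTEGER, type_two_id INTEGER);
    CREATE TABLE vw_user (user_id INTEGER, name TEXT);
    CREATE TABLE user_type_two (type_two_id INTEGER, label TEXT);
    INSERT INTO t VALUES (1, 10, 100), (2, 10, NULL), (3, 99, 100);
    INSERT INTO vw_user VALUES (10, 'alice');          -- user 99 is missing on purpose
    INSERT INTO user_type_two VALUES (100, 'premium');
""")
rows = con.execute("""
    SELECT t.id, u.name, tt.label
    FROM t
    INNER JOIN vw_user u ON t.user_id = u.user_id      -- drops row 3 (no user 99)
    LEFT JOIN user_type_two tt ON t.type_two_id = tt.type_two_id
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'alice', 'premium'), (2, 'alice', None)]
```

Row 3 is eliminated by the INNER JOIN before the LEFT JOIN ever sees it, while row 2 survives with a NULL label because the LEFT JOIN keeps unmatched rows.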
What happens is that you are just appending more columns to `Table T`. Basically, `Table T` has no relationship to `UserTypeTwo`, but it does to `vw_User`. You need some data from `UserTypeTwo`, but since `Table T` can't join to `UserTypeTwo` directly, you need a table to connect them both, which is where `vw_User` comes in, since it has a relation to `UserTypeTwo`. Basically this query is being used because you need columns from `Table T` and `UserTypeTwo` (and probably `vw_User` as well), so you need to join all three of them. * `Table T` has an `FK` to `vw_User` * `vw_User` has an `FK` to `UserTypeTwo` * `UserTypeTwo` has a column that you need (it doesn't necessarily need to be a foreign key, but I'm assuming you know this since you have join knowledge). I hope I was able to clarify what is happening.
SQL - inner join before left join
[ "", "sql", "" ]
I've searched, but I can't find quite what I'm looking for. I have two tables: Table1 ``` | d | text | | 101 | 'description 101' | | 102 | 'description 102' | | 103 | 'description 103' | ``` Table2 ``` | id | d1 | d2 | d3 | d4 | | 01 | 104 | 242 | 102 | 222 | | 02 | 423 | 553 | | | | 03 | 832 | 142 | 102 | | ``` etc. I want a count of how many times each "d" from Table1 is used as the d1, d2, d3, and d4 in Table2. Output would look like this: ``` | d | count_d1 | count_d2 | count_d3 | count_d4 | | 101 | 30032 | 108 | 5002 | 392 | | 102 | 440 | 5330 | 24 | 5 | | 103 | 0 | 309 | 2220 | 4 | ``` etc. I'm sure there's something obvious that I'm just not thinking of, but I've been looking at this for over an hour now, and I got lost in a mess of joins and subqueries.
``` SELECT t1.d, ( SELECT COUNT(*) FROM Table2 s2a WHERE s2a.d1 = t1.d ) AS count_d1, ( SELECT COUNT(*) FROM Table2 s2b WHERE s2b.d2 = t1.d ) AS count_d2, ( SELECT COUNT(*) FROM Table2 s2c WHERE s2c.d3 = t1.d ) AS count_d3, ( SELECT COUNT(*) FROM Table2 s2d WHERE s2d.d4 = t1.d ) AS count_d4 FROM Table1 as t1 ```
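Trimmed down to two "d" columns and run through Python's sqlite3 with made-up data, the scalar-subquery pattern above behaves like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (d INTEGER, txt TEXT);
    CREATE TABLE t2 (id INTEGER, d1 INTEGER, d2 INTEGER);
    INSERT INTO t1 VALUES (101, 'a'), (102, 'b');
    INSERT INTO t2 VALUES (1, 101, 102), (2, 101, NULL), (3, 102, 102);
""")
# One correlated COUNT(*) subquery per column of t2.
rows = con.execute("""
    SELECT t1.d,
           (SELECT COUNT(*) FROM t2 WHERE t2.d1 = t1.d) AS count_d1,
           (SELECT COUNT(*) FROM t2 WHERE t2.d2 = t1.d) AS count_d2
    FROM t1 ORDER BY t1.d
""").fetchall()
print(rows)  # [(101, 2, 0), (102, 1, 2)]
```

Each row of `t1` gets one count per searched column, and a value that never appears simply counts as 0.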
You'll have to split this up into 4 queries (one per column) and combine them. Note each branch needs its own `GROUP BY`, the derived table needs an alias, and counting `Table2.dN` (rather than `count(1)`) makes unmatched rows count as 0: ``` SELECT d, sum(Count_d1), sum(Count_d2), sum(Count_d3), sum(Count_d4) FROM ( SELECT d, count(Table2.d1) AS Count_d1, null AS Count_d2, null AS Count_d3, null AS Count_d4 FROM Table1 LEFT JOIN Table2 on Table1.d=Table2.d1 GROUP BY d UNION ALL SELECT d, null, count(Table2.d2), null, null FROM Table1 LEFT JOIN Table2 on Table1.d=Table2.d2 GROUP BY d UNION ALL SELECT d, null, null, count(Table2.d3), null FROM Table1 LEFT JOIN Table2 on Table1.d=Table2.d3 GROUP BY d UNION ALL SELECT d, null, null, null, count(Table2.d4) FROM Table1 LEFT JOIN Table2 on Table1.d=Table2.d4 GROUP BY d ) t GROUP BY d ```
Count for one value in multiple columns SQL
[ "", "sql", "count", "subquery", "" ]
I want to count unique BOOKING\_NUMBERS, to display the total next to each STAFF\_ID; I'm using the following statement: ``` SELECT c.Staff_Id , c.(COUNT Booking_Number) AS "Number of Lessons" FROM AM_Lessons c JOIN AM_Staff m ON (c.Staff_Id = m.Staff_Id) ORDER BY c.Staff_Id ``` I am getting the following error, despite all Column names and tables being correct > ORA-01747: invalid user.table.column, table.column, or column specification What is the correct syntax?
You have `c.COUNT(Booking_Number)`, but should be using `COUNT(c.Booking_Number)` as you need to count an element, not look at a "count" property of an element. The following statement should solve your problem: ``` SELECT c.Staff_Id, COUNT(c.Booking_Number) AS "Number of Lessons" FROM AM_Lessons c JOIN AM_Staff m ON (c.Staff_Id = m.Staff_Id) GROUP BY c.Staff_ID ORDER BY c.Staff_Id ``` Please note that you then need to group by the items you are not counting, as shown. This is because all columns in the SELECT must also be in the GROUP BY, [unless they are an aggregate](https://stackoverflow.com/a/1520641/458741).
In your question you have specified that you are looking for UNIQUE Booking Number. In that case here is the query - ``` SELECT c.Staff_Id, COUNT(DISTINCT c.Booking_Number) AS "Number of Lessons" FROM AM_Lessons c JOIN AM_Staff m ON (c.Staff_Id = m.Staff_Id) GROUP BY c.Staff_ID ORDER BY c.Staff_Id ```
SQL count and join statement
[ "", "sql", "oracle", "" ]
Hi guys I have a pl/sql cursor that takes too long to execute. I want to know how can I make the same process but with better performance and probably better code. I am new to PL/SQL. ``` Declare Cursor Cursor1 is select * from table1 where field1 IS NULL or field2 IS NULL or field3 IS NULL or field4 is null or field5 IS NULL or field6 IS NULL; Begin For i in Cursor1 loop if i.field1 IS NULL then update table1 set field1=0 where recordId=i.recordId; end if; if i.field2 IS NULL then update table1 set field2=0 where recordId=i.recordId; end if; if i.field3 IS NULL then update table1 set field3=0 where recordId=i.recordId; end if; if i.field4 IS NULL then update table1 set field4=0 where recordId=i.recordId; end if; if i.field5 IS NULL then update table1 set field5=0 where recordId=i.recordId; end if; if i.field6 IS NULL then update table1 set field6=0 where recordId=i.recordId; end if; End loop; End; ``` The question basically is how can I update a field of one specific record, taking into account the conditions of the field. The thing is that the update can occur in the same record many times if the condition apply for many fields in the record. Thanks...
It's possible to do the same with one `UPDATE` ``` UPDATE table1 SET field1 = COALESCE(field1, 0) , field2 = COALESCE(field2, 0) , field3 = COALESCE(field3, 0) , field4 = COALESCE(field4, 0) , field5 = COALESCE(field5, 0) , field6 = COALESCE(field6, 0) WHERE field1 IS NULL OR field2 IS NULL OR field3 IS NULL OR field4 IS NULL OR field5 IS NULL OR field6 IS NULL ```
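A minimal runnable check of the single-UPDATE approach (Python's sqlite3, two columns instead of six, made-up rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (rid INTEGER PRIMARY KEY, f1 INTEGER, f2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [(1, None, 5), (2, 7, None), (3, 1, 2)])

# COALESCE leaves non-NULL values untouched and replaces NULLs with 0;
# the WHERE clause restricts the update to rows that actually have a NULL.
con.execute("""
    UPDATE t SET f1 = COALESCE(f1, 0), f2 = COALESCE(f2, 0)
    WHERE f1 IS NULL OR f2 IS NULL
""")
rows = con.execute("SELECT * FROM t ORDER BY rid").fetchall()
print(rows)  # [(1, 0, 5), (2, 7, 0), (3, 1, 2)]
```

Row 3 had no NULLs and is left exactly as it was, which is what makes the one-statement version equivalent to the cursor loop.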
Here's another take on this: ``` UPDATE TABLE1 SET FIELD1 = NVL(FIELD1, 0), FIELD2 = NVL(FIELD2, 0), FIELD3 = NVL(FIELD3, 0), FIELD4 = NVL(FIELD4, 0), FIELD5 = NVL(FIELD5, 0), FIELD6 = NVL(FIELD6, 0); ``` Rationale: any query which performs this update is going to do a full table scan anyways because it's looking for NULLs, which won't be indexed in the usual case, and even if they ARE indexed there's a fair chance the optimizer will choose a full table scan anyways. Why waste time checking six different fields for NULLs? Share and enjoy.
How can I make this pl/sql cursor more efficient?
[ "", "sql", "oracle", "plsql", "" ]
I have a SQL query that only consists of sub queries, But I cannot run it I get an SQL error, What is the correct Syntax to run query consisting of sub queries only...? Query: ``` SELECT COUNT(id) AS 'totalNumOfCustomers' FROM customers WHERE customers.isDemo =0 AND customers.isLead=0 AND regTime < '2012-01-01' SELECT COUNT(id) AS 'totalNumOfCustomersPerMonth' FROM customers WHERE customers.isDemo =0 AND customers.isLead=0 AND regTime >='2012-01-01' AND regTime < '2012-02-01' SELECT COUNT(positions.id) AS 'totalPositions', ROUND(SUM(amount),2) - ROUND(SUM(payout),2) AS 'grossIncome' FROM positions LEFT JOIN customers ON positions.customerId = customers.id WHERE date >= '2012-01-01' AND date < '2012-02-01' AND customers.isDemo=0 AND customers.isLead=0 AND status != 'canceled' ``` I am running this in WorkBench, The error message is this: ``` Error Code: 1064. You Have an error in your SQL syntax ```
How about this ``` select (SELECT COUNT(id) FROM customers WHERE customers.isDemo =0 AND customers.isLead=0 AND regTime < '2012-01-01') as totalNumOfCustomers, (SELECT COUNT(id) FROM customers WHERE customers.isDemo =0 AND customers.isLead=0 AND regTime >='2012-01-01' AND regTime < '2012-02-01') as totalNumOfCustomersPerMonth, (SELECT COUNT(positions.id) FROM positions LEFT JOIN customers ON positions.customerId = customers.id WHERE date >= '2012-01-01' AND date < '2012-02-01' AND customers.isDemo=0 AND customers.isLead=0 AND status != 'canceled') as totalPositions, (SELECT ROUND(SUM(amount),2) - ROUND(SUM(payout),2) FROM positions LEFT JOIN customers ON positions.customerId = customers.id WHERE date >= '2012-01-01' AND date < '2012-02-01' AND customers.isDemo=0 AND customers.isLead=0 AND status != 'canceled') as grossIncome; ```
I think you get the error because there is no `;` at the end of each query. The full error message wasn't posted, so this is just a guess.
Run a query consisting of subqueries only
[ "", "sql", "" ]
I'm looking for a way to exclude some records before the left join executes. My SQL statement looks as follows: ``` SELECT * FROM users LEFT JOIN (SELECT * FROM premissions WHERE post_id = 1) AS p ON p.user_id = users.id WHERE p.id IS NULL ``` How can I exclude the records with ids 1 and 2 in the user table?
You could add criteria to your `WHERE` clause: ``` SELECT * FROM users LEFT JOIN ( SELECT * FROM premissions WHERE post_id = 1 ) AS p ON p.user_id = users.id WHERE p.id IS NULL AND users.id NOT IN (1,2) ```
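The combined filter can be exercised end to end with Python's sqlite3 (made-up rows; `permissions` spelled out, since the table name in the question looks like a typo for it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE permissions (id INTEGER PRIMARY KEY, user_id INTEGER, post_id INTEGER);
    INSERT INTO users VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');
    INSERT INTO permissions VALUES (1, 3, 1);   -- user 3 already has a permission for post 1
""")
rows = con.execute("""
    SELECT users.id FROM users
    LEFT JOIN (SELECT * FROM permissions WHERE post_id = 1) p ON p.user_id = users.id
    WHERE p.id IS NULL            -- anti-join: keep users with no permission row
      AND users.id NOT IN (1, 2)  -- the extra exclusion
    ORDER BY users.id
""").fetchall()
print([r[0] for r in rows])  # [4]
```

Users 1 and 2 are removed by the `NOT IN`, user 3 is removed by the anti-join, and only user 4 remains.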
Use a subselect ``` SELECT * FROM (select * from users where id not in(1,2)) u LEFT JOIN (SELECT * FROM premissions WHERE post_id = 1) AS p ON p.user_id = u.id WHERE p.id IS NULL ```
How to exclude some records before a left join executes
[ "", "mysql", "sql", "left-join", "" ]
I have multiple computers that have the task of sending out emails found in a table on a common SQL Server. Each computer polls the email table to look for messages it can send by looking at a status flag set to 0. If a computer does a ``` SELECT * FROM tblEmailQueue where StatusFlag=0 ``` if it returns a record it immediately sets the `StatusFlag` to `1` which should cause the other computer polling the same table not to find this record. My fear is that if two computer find the record at the same time before either can update the `StatusFlag`, the email will be sent twice. Does anyone have ideas on how to ensure only one computer will get the record? I know I might be able to do a table lock but I would rather now have to do this.
Instead of using two queries which may cause a race condition, you can update the values and output the updated rows at once using the [`OUTPUT` clause](http://msdn.microsoft.com/en-us/library/ms177564.aspx). This will update the rows with statusflag=0 and output all of the updated ones; ``` UPDATE tblEmailQueue SET statusflag=1 OUTPUT DELETED.* WHERE statusflag=0; ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!3/e61cd/1). EDIT: If you're picking one row, you may want some ordering. Since the update itself can't order, you can use a common table expression to do the update; ``` WITH cte AS ( SELECT TOP 1 id, statusflag FROM tblEmailQueue WHERE statusflag = 0 ORDER BY id ) UPDATE cte SET statusflag=1 OUTPUT DELETED.*; ``` [Another SQLfiddle](http://sqlfiddle.com/#!3/d524c/1).
You can perform the select and send the email in the same transaction. You can also use the `ROWLOCK` hint and not commit the transaction until you have sent the email or set a new value for `StatusFlag`. That means nobody (except a transaction using the `NOLOCK` hint or the `READ UNCOMMITTED` isolation level) can read this row until you commit the transaction. ``` SELECT * FROM tblEmailQueue WITH(ROWLOCK) where StatusFlag=0 ``` In addition, you should check the isolation level. For your case the isolation level should be `READ COMMITTED` or `REPEATABLE READ`. See information about isolation levels [here](http://msdn.microsoft.com/ru-ru/library/ms173763.aspx)
How to guarantee only one process picks up a processing task
[ "", "sql", "sql-server", "" ]
Hey, I am using a query like this: ``` INSERT INTO likes( likes_memory_id, likes_comment_id, likes_owner_id, likes_like ) VALUES ( :likes_memory_id, :likes_comment_id, :likes_owner_id, :likes_like) ``` Whenever a user clicks the like button, this query adds a new row, so it allows liking multiple times. To prevent this I could use a SELECT statement first and might succeed with two queries, but I assume there is a better way to do it. (I researched the IF NOT EXISTS statement but didn't understand it well.) How do I avoid multiple likes?
The simplest is to create a unique index on your columns which you want unique; ``` CREATE UNIQUE INDEX uq_mem_own ON likes( likes_memory_id, likes_owner_id ); ``` ...and insert likes using INSERT IGNORE, which will insert the value if it's not prevented by the index, otherwise just ignore it; ``` INSERT IGNORE INTO likes( likes_memory_id, likes_owner_id, likes_like ) VALUES ( :likes_memory_id, :likes_owner_id, :likes_like) ```
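The same idea can be sketched with Python's sqlite3, where `INSERT OR IGNORE` plays the role of MySQL's `INSERT IGNORE` (column list trimmed down):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE likes (memory_id INTEGER, owner_id INTEGER, val INTEGER)")
con.execute("CREATE UNIQUE INDEX uq_mem_own ON likes (memory_id, owner_id)")

# Simulate the same user clicking "like" three times on the same memory.
for _ in range(3):
    con.execute("INSERT OR IGNORE INTO likes VALUES (7, 42, 1)")

count = con.execute("SELECT COUNT(*) FROM likes").fetchone()[0]
print(count)  # 1
```

The unique index is what enforces the rule; the ignore variant of INSERT just turns the duplicate-key error into a silent no-op.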
``` INSERT INTO likes ( likes_memory_id, likes_comment_id, likes_owner_id, likes_like ) SELECT :likes_memory_id, :likes_comment_id, :likes_owner_id, :likes_like FROM DUAL WHERE NOT EXISTS ( SELECT * FROM `likes` WHERE `likes_memory_id` = :likes_memory_id AND `likes_comment_id` = :likes_comment_id AND `likes_owner_id` = :likes_owner_id AND `likes_like` = :likes_like ) LIMIT 1; ```
duplicate rows MySQL
[ "", "mysql", "sql", "" ]
I can get it to work in two separate queries but not in one. Can someone help me out please? I need the output something like this: ``` +-----------------------------------------------------------------------------------------------------+ | PARENT_AK | PARENT_RK |PARENT_RESOURCE_NAME| C_RESOURCE_NAME | C_AK | C_RK | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SYSTEMROLE | DEV |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SYSTEMROLE | PRODUCTION |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SYSTEMROLE | TEST |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SYSTEMROLE | UAT |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SERVER_FUNCTION | APPLICATION SERVER |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SERVER_FUNCTION | DATABASE SERVER |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ | CONTAINER | LOB |SERVER_FUNCTION | WEB SERVER |CONTAINER |LOB Options | +-----------------------------------------------------------------------------------------------------+ ``` Query 1: ``` select 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, 'SYSTEMROLE' as PARENT_RESOURCE_NAME, SYSTEMROLE as C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM CMDB GROUP by C_RESOURCE_NAME; ``` Query 2: ``` select 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, 'SERVER_FUNCTION' as PARENT_RESOURCE_NAME, SERVER_FUNCTION as
C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM CMDB GROUP by C_RESOURCE_NAME; ``` Table (CMDB): ``` +-------------------------------------------------+ | NAME | SYSTEMROLE | SERVER_FUNCTION | +-------------------------------------------------+ | Server1 | Test |APPLICATION SERVER | +-------------------------------------------------+ | Server2 | PRODUCTION |APPLICATION SERVER | +-------------------------------------------------+ | Server3 | UAT |DATABASE SERVER | +-------------------------------------------------+ | Server4 | DEV |WEB SERVER | +-------------------------------------------------+ | Server5 | DEV |WEB SERVER | +-------------------------------------------------+ ``` SQLFiddle: <http://www.sqlfiddle.com/#!2/08e6a/12>
You are trying to UNPIVOT your table, but as far as I know MySQL does not have built-in UNPIVOT functionality. Therefore you will have to resort to what you are doing. Note though you can use a union to make a single request to the server as opposed to two (and note there must be no semicolon before the `UNION ALL`, or the statement is cut short) ([SQL Fiddle demo](http://www.sqlfiddle.com/#!2/08e6a/14)): ``` select 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, 'SYSTEMROLE' as PARENT_RESOURCE_NAME, SYSTEMROLE as C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM CMDB GROUP by C_RESOURCE_NAME UNION ALL select 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, 'SERVER_FUNCTION' as PARENT_RESOURCE_NAME, SERVER_FUNCTION as C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM CMDB GROUP by C_RESOURCE_NAME; ``` --- If instead you are using SQL Server 2008, as you note in your comments, you can use UNPIVOT to get your results: ``` select 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, PARENT_RESOURCE_NAME, C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM ( SELECT CI_NAME, SYSTEMROLE, SERVER_FUNCTION FROM CMDB ) x UNPIVOT ( C_RESOURCE_NAME FOR PARENT_RESOURCE_NAME IN (SYSTEMROLE, SERVER_FUNCTION) ) p ``` [SQL Fiddle example](http://www.sqlfiddle.com/#!3/08e6a/6)
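The UNION ALL pattern can be exercised with Python's sqlite3 on the sample rows from the question (values abbreviated; sqlite3 is used here only as a convenient stand-in):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cmdb (name TEXT, systemrole TEXT, server_function TEXT)")
con.executemany("INSERT INTO cmdb VALUES (?, ?, ?)", [
    ("Server1", "TEST", "APPLICATION SERVER"),
    ("Server2", "DEV", "WEB SERVER"),
    ("Server3", "DEV", "WEB SERVER"),
])

# One branch per source column, stitched together with UNION ALL; sorted in
# Python since compound SELECTs don't guarantee per-branch ordering.
rows = sorted(con.execute("""
    SELECT 'SYSTEMROLE' AS parent, systemrole AS child FROM cmdb GROUP BY systemrole
    UNION ALL
    SELECT 'SERVER_FUNCTION', server_function FROM cmdb GROUP BY server_function
""").fetchall())
print(rows)
```

Each grouped branch contributes one row per distinct value, tagged with which column it came from — the manual-unpivot shape the MySQL version relies on.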
You can use a UNION statement in MySQL: <http://dev.mysql.com/doc/refman/5.0/en/union.html> ``` SELECT 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, 'SYSTEMROLE' as PARENT_RESOURCE_NAME, SYSTEMROLE as C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM CMDB GROUP by C_RESOURCE_NAME UNION SELECT 'CONTAINER' as PARENT_AK, 'LOB' as PARENT_RK, 'SERVER_FUNCTION' as PARENT_RESOURCE_NAME, SERVER_FUNCTION as C_RESOURCE_NAME, 'CONTAINER' as C_AK, 'LOB Options' as C_RK FROM CMDB GROUP by C_RESOURCE_NAME; ```
How to get two queries joined. Subqueries?
[ "", "sql", "sql-server", "sql-server-2008", "join", "union", "" ]
Using SQL and Looking at a list provided by w3schools on Date conversion (<http://www.w3schools.com/sql/func_convert.asp>) it looks like there isn't really away to get Hours/Minutes AND AM/PM without getting a whole bunch of stuff in the front of the time. I'm currently using a 24 hour time in my query ``` CONVERT(VARCHAR(5),sa.StartDateTime,108) ApptStartTime ``` This spits out something like this ``` 14:55 ``` And I'm looking for something like this ``` 02:55 PM ```
BTW - so this post could have a definitive answer - from @Goat CO ``` SELECT CONVERT(VARCHAR(15), CAST(sa.StartDateTime AS TIME), 100) ```
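The `CONVERT` style codes are T-SQL specific. If the formatting can be pushed to the application layer instead, the same `02:55 PM` output is a single format call in most languages; a Python illustration:

```python
from datetime import datetime

start = datetime(2014, 4, 23, 14, 55)   # 24-hour input, as in the question
formatted = start.strftime("%I:%M %p")  # %I = zero-padded 12-hour, %p = AM/PM
print(formatted)  # 02:55 PM
```

(`%p` is locale-dependent; under the default C locale it yields `AM`/`PM`.)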
Here's a method that adds a leading 0 and inserts a space before the AM/PM: ``` SELECT STUFF(REPLACE(SUBSTRING(CONVERT(VARCHAR(20),sa.StartDateTime,100),13,7),' ','0'),6,0,' '); ``` or just ``` SELECT STUFF(SUBSTRING(CONVERT(VARCHAR(20),sa.StartDateTime,100),13,7),6,0,' '); ``` if you don't need the leading 0 Note that this would be easier/cleaner in the UI/Report layer if that's a possibility.
Change Time to include Hours/Minutes AND AM/PM
[ "", "sql", "time", "" ]
I have the following query: ``` DECLARE @BeginDateQ1 date = '01-01-2010', @EndDateQ1 date = '12-31-2010' BEGIN SELECT H.Type, H.SN, Sum(H.Hours), H.Date as 'Report Date', H.OName, R.ReceivedDate, R.Confirmed, R.Part AS 'Part Number' FROM Hours H INNER JOIN Repair R ON H.SN = R.SN WHERE (R.Confirmed NOT LIKE 'O%' AND R.Confirmed NOT LIKE 'Y%') AND (H.Date BETWEEN @BeginDateQ1 AND @EndDateQ1) GROUP BY H.Type, H.SN, H.Date, H.OName, R.ReceivedDate, R.Confirmed, R.Part ORDER BY Date ASC END ``` Which gives me my desired result. I am having trouble with the following IF statement: ``` IF (SELECT R.ReceivedDate FROM Repair BETWEEN @BeginDate1 AND @EndDate1) THEN COUNT(R.Confirmed) END IF ``` ALSO, how would I properly include this IF statement in the original query that produced the original results I want? NOTE that I need the count result from the IF statement to be used as a denominator for an equation and the numerator is the SUM(Hours) from line 3 of original query in the post. This is a first time for me writing IF statements in SQL (only have done in C#). Any help would be appreciated. Thanks.
IF is a control flow statement, not to be used within a query. ``` IF (somecondition) THEN <sql statements> ELSE <sql statements> END ``` You can do that within a stored procedure. To do conditionals within a query, you need to use CASE... ``` SELECT col1, col2, CASE when col1='x' then 'HIT' else 'MISS' END from myTable ``` which would give you a result with three columns, in the last column you would have the text 'HIT' or 'MISS' based on the value in col1.
A case construct might be what you are after. Something like this. ``` , case when R.ReceivedDate BETWEEN @BeginDate1 AND @EndDate1 then count(r.confirmed) else sum(0) end confirmations ``` **Edit Starts Here** Note that this answer only shows the general idea. Since the OP mentioned later that the result is to be a denominator, the alias has to go away. Also, sum(0) has to be replaced by something representing sum(hours), whatever that might be
SQL: How Do I Correctly Use Keyword 'BETWEEN' and Aggregate Function in IF, THEN, Else SQL Statements
[ "", "sql", "sql-server", "t-sql", "if-statement", "aggregate", "" ]
I want to format my date column in SQL Server like this **Wed, 23** from given date format **4/23/2014**. ![Exactly like this in image below](https://i.stack.imgur.com/ZjGPU.jpg) Is there any way to do this...? SQL Server version is 2008
Try like this ``` SELECT LEFT(DATENAME(dw, GETDATE()), 3) + ' , ' + CAST(Day(GetDate()) AS Varchar(10)) ``` **[Fiddle Demo](http://sqlfiddle.com/#!6/d41d8/18011)** Query would be like this ``` SELECT mydate,LEFT(DATENAME(dw, mydate), 3) + ' , ' + CAST(Day(mydate) AS Varchar(10)) As Date From tbl ``` [**SQL FIDDLE**](http://sqlfiddle.com/#!6/a56c2/2) **O/P** ``` MYDATE DATE 2014-04-21 Mon ,21 2014-04-22 Tue ,22 2014-04-23 Wed ,23 2014-04-24 Thu ,24 ```
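For comparison, if the formatting is done in application code rather than in T-SQL, the same `Wed, 23` output is one format call; a Python illustration (`%a` yields the abbreviated weekday name under the default C locale):

```python
from datetime import date

d = date(2014, 4, 23)
formatted = f"{d.strftime('%a')}, {d.day}"  # abbreviated weekday + day of month
print(formatted)  # Wed, 23
```

Doing this in the UI/report layer keeps the database query returning a plain date, which is usually easier to sort and reuse.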
Try this! ``` declare @a table(a date) insert into @a values('4/21/2014'),('5/21/2014'),('6/21/2014') select left(DATENAME(dw,a),3)+','+convert(varchar(10),datepart(day,a)) from @a ``` # [DEMO](http://www.sqlfiddle.com/#!6/d41d8/18012)
SQL Server - Change Date Format
[ "", "sql", "sql-server-2008", "date", "datetime-format", "" ]
Hello I'm trying to migrate from MySQL to Oracle. So far so good until I came across this query: ``` INSERT INTO waiting ( dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn, dbcolumn ) SELECT (case NVL(max(dbcolumn)) when 0 then max(dbcolumn)+1 when 1 then 3 end), (case NVL(max(dbcolumn)) when 0 then max(dbcolumn)+2 when 1 then 4 end), "value" , "value" , "" , "value" , "value" , "value" , "" , "" , "" , "" , "" , "" , "" , "" , "" , "value" , "" , "" , "value" , "value" FROM waiting WHERE dbcolumn < value; ``` The values who are empty can be null. When I try to run the query it gives me: `"missing right parenthesis"` But that doesn't seem right because I'm closing all parenthesis. Can someone give me the magic answer why I get this error message? Thanks in advance.
It should be something like this: `(CASE WHEN NVL(MAX(dbcolumn), 0) = 0 THEN MAX(dbcolumn) + 1 WHEN NVL(MAX(dbcolumn), 0) = 1 THEN 3 END)` In Oracle, `NVL` takes two arguments, and in a searched `CASE` like this the condition you want to match comes after each `WHEN`.
Do `NVL(max(dbcolumn),0)` instead of `NVL(max(dbcolumn))` — `NVL` requires two arguments. You also don't need the parentheses around `CASE`: ``` SELECT case NVL(max(dbcolumn),0) when 0 then max(dbcolumn)+1 when 1 then 3 end, ``` Also, use single quotes (`'`) for string literals instead of double quotes (`"`); in Oracle, double quotes denote identifiers, not strings.
Oracle insert into query with 2 select cases
[ "", "mysql", "sql", "oracle", "" ]
I have a table like: ``` product_id | attribute_id | text -------------------------------- 52 | 16 | 1.0 Inch - 2.9 Inches 52 | 15 | Radio 52 | 14 | Simple Phones 134 | 16 | 1.0 Inch - 2.9 Inches 134 | 15 | Wifi 134 | 14 | Dual SIM Phones ``` I am writing the query for getting products that are Simple Phones with 1.0 Inch - 2.9 Inches Screen. I want to place two conditions for one column. When I am writing the query: ``` select * from product_attribute where (text = '1.0 Inch - 2.9 Inches') and (text = 'Simple Phones') ``` so I am getting "0 Results". While I am running the following Query: ``` select * from product_attribute where text IN('1.0 Inch - 2.9 Inches','Simple Phones') ``` then I am getting the following result: ``` product_id | attribute_id | text -------------------------------- 52 | 16 | 1.0 Inch - 2.9 Inches 52 | 14 | Simple Phones 134 | 16 | 1.0 Inch - 2.9 Inches ``` But I need only product\_id = 52 because this product has both the filters either 1.0 Inch - 2.9 Inches and Simple Phones while product\_id = 134 only has 1.0 Inch - 2.9 Inches Please help me out to sort out this problem. \*\* Sorry for the English :)
Use `HAVING` clause: ``` select * from product_attribute where text IN('1.0 Inch - 2.9 Inches','Simple Phones') group by product_id having count(product_id)=2 ``` See example in [**SQL Fiddle**](http://www.sqlfiddle.com/#!2/8d57fd/5). **EDIT:** For getting all records: ``` select * from product_attribute T1 LEFT JOIN (select product_id from product_attribute where text IN('1.0 Inch - 2.9 Inches','Simple Phones') group by product_id having count(product_id)=2) T2 on T1.product_id=T2.product_id WHERE T2.product_id IS NOT NULL AND T1.text IN('1.0 Inch - 2.9 Inches','Simple Phones') ``` Result: ``` PRODUCT_ID ATTRIBUTE_ID TEXT 52 16 1.0 Inch - 2.9 Inches 52 14 Simple Phones ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!2/8d57fd/10).
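A runnable check of the `HAVING COUNT` trick with Python's sqlite3 on the rows from the question (note this assumes each attribute text appears at most once per product; otherwise count distinct values instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pa (product_id INTEGER, attribute_id INTEGER, txt TEXT)")
con.executemany("INSERT INTO pa VALUES (?, ?, ?)", [
    (52, 16, "1.0 Inch - 2.9 Inches"), (52, 15, "Radio"), (52, 14, "Simple Phones"),
    (134, 16, "1.0 Inch - 2.9 Inches"), (134, 14, "Dual SIM Phones"),
])
# A product must match as many rows as there are wanted values (2 here).
rows = con.execute("""
    SELECT product_id FROM pa
    WHERE txt IN ('1.0 Inch - 2.9 Inches', 'Simple Phones')
    GROUP BY product_id
    HAVING COUNT(*) = 2
""").fetchall()
print([r[0] for r in rows])  # [52]
```

Product 134 matches only one of the two wanted values, so the `HAVING` clause filters it out.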
Here each row is considered a separate entity, so even though they have the same product_id, the two matching entries for product_id 52 are evaluated independently. It is better to group the rows on product_id and then apply the conditions, e.g.: ``` SELECT id, com_string FROM (SELECT id, GROUP_CONCAT(string SEPARATOR ' ') AS com_string FROM table GROUP BY id) temp WHERE com_string LIKE '%1.0 Inch - 2.9 Inches%' AND com_string LIKE '%Simple Phones%'; ```
Specifying "AND" Condition with more than 1 time for same column in MYSQL
[ "", "mysql", "sql", "" ]
``` UPDATE edw.dbo.load_control SET ROW_COUNT=?, end_time=getdate() WHERE package_name=? AND load_control_id=( SELECT MAX(load_control_id) FROM edw.dbo.load_control ) ``` I'm more concerned about the WHERE clause. Would it just select the max id and return that or would it evaluate the max id with the and package\_name. So for example if the max(id) was 6 but package\_names were different and the next max(id) is 5 and the package names are the same it would update id 5 right?
The conditions in your WHERE clause are not evaluated conditionally; SQL Server interprets exactly what you tell it. Everything not equal to your package_name is filtered out. Then, everything with a load_control_id not equal to the max is also filtered out. SQL Server has no problem returning an empty set. If you want the max load_control_id for the specified package_name, that is a condition you can add to your subselect: ``` (select max(load_control_id) from edw.dbo.load_control Where package_name = ?) ```
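The difference between the uncorrelated subquery and one restricted by `package_name` can be seen with a two-row toy table (Python's sqlite3; names shortened, data made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE load_control (load_control_id INTEGER, package_name TEXT)")
con.executemany("INSERT INTO load_control VALUES (?, ?)", [(5, "pkgA"), (6, "pkgB")])

# The subquery is uncorrelated: it returns 6 regardless of which package_name
# the outer WHERE filters on, so filtering on 'pkgA' matches nothing.
rows = con.execute("""
    SELECT * FROM load_control
    WHERE package_name = 'pkgA'
      AND load_control_id = (SELECT MAX(load_control_id) FROM load_control)
""").fetchall()
print(rows)  # []

# Moving the package_name condition inside the subquery gives the per-package max.
rows2 = con.execute("""
    SELECT * FROM load_control
    WHERE package_name = 'pkgA'
      AND load_control_id = (SELECT MAX(load_control_id) FROM load_control
                             WHERE package_name = 'pkgA')
""").fetchall()
print(rows2)  # [(5, 'pkgA')]
```

The first query returns an empty set, confirming that the two WHERE conditions are simply ANDed together rather than one narrowing the other.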
The `UPDATE` statement will update a row that * has a `package_name` that matches the parameter passed in * has `load_control_id` equal to the `max(id)` **without** taking into account that `package_name` (after all, there's **no condition** for `package_name` in that subquery!) Since in your case, there is no row with the given `package_name` and the value of 6 - no row will be updated.
How would this query be evaluated (nested)
[ "", "sql", "sql-server", "" ]
Need some best-practice wisdom as I'm new to database architecture. This is for a marketplace type app. The main issue I'd like advice on is handling users. What I'd like to achieve is buyer and seller accounts share login/signup functionality and some characteristics, but seller accounts have the ability to sell, receive payments, etc. Buyers can send requests, sellers can complete them. Basically a seller can do everything a buyer can do, but additionally view and fulfill requests, and sell. I'd go with simple roles, except sellers have more complex relationships than buyers in terms of products and payment info. Also sellers need to be able to be publicly listed, and have public profiles. I'm not sure that having 1 big table with both users types would be ideal for this. My current idea is to use polymorphic associations between a base User table and a seller table (potentially a buyer specific table as well): ## User (buyer) * name * email * encrypted\_password * (other authentication fields) * location * occupation * meta\_id * meta\_type ## Seller * name * location * occupation * etc ## requests (belongs to buyer and seller) * type * description * complete * buyer\_id * seller\_id ## products (belongs to seller) * description * category\_id * seller\_id As you can see, a big issue is that fact that buyers and sellers have duplicate data. The reason for this is because when I'm displaying sellers I don't want to have to perform multi-table queries, but maybe that's not an issue? Another option would be to have the User base table and then buyer and seller tables, but they'd still contain duplicate info. Open to all possibilities.. What's the best way to go?
You can represent this kind of relationship using database supertypes and subtypes. For your example, I would split the data model into two groups: *users* and *roles*. A role can be either a buyer or a seller, and a user can have zero or more roles. I would then create the following logical entities to represent the *role* relationship: **Supertype** * UserRole (this name is likely to be too generic; I would suggest a name that better reflects the role of both the buyer and the seller within your application). **Subtypes** * Buyer * Seller For your physical design, I would suggest one of the following designs: 1. A single table that contains the columns for the supertype entity as well as columns for each of the subtype entities. Check constraints can be used to enforce not-null constraints for the subtype columns. 2. One table for the supertype, with a separate table for each subtype entity. Columns that are common to each subtype are stored in the supertype table, with the other columns stored in the appropriate subtype table. A type column is added to the supertype table to indicate the type of the entity. Each subtype table includes a foreign key relationship to the supertype table. 3. Hybrid approach that combines aspects from each of the above designs. **Access Patterns** One factor to consider when deciding how to model a subtype and supertype relationship is whether your queries will need to access columns from both the supertype and the subtype tables. If most of your queries will access columns from the supertype and subtype tables, then a single table may be a better design. **Edit -** I would suggest using the first design, unless there is a compelling reason to create separate tables for the subtypes. Foreign keys that include the type column can be used to restrict relationships to a particular subtype. 
**Mapping a user to a role** To assign a user to a role, you can simply create a many-to-many relationship between the User table and the supertype (UserRole) table.
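Design 1 above (one supertype table with a type column plus subtype columns guarded by check constraints) can be sketched in portable SQL. The sketch below runs SQLite through Python purely for illustration; the table and column names (`user_roles`, `store_name`, `payout_account`) are hypothetical stand-ins, not taken from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);

-- Supertype table with a type discriminator; seller-only columns are
-- forced to NULL for buyers by the table-level CHECK constraint.
CREATE TABLE user_roles (
    id             INTEGER PRIMARY KEY,
    user_id        INTEGER NOT NULL REFERENCES users(id),
    role_type      TEXT NOT NULL CHECK (role_type IN ('buyer', 'seller')),
    store_name     TEXT,   -- seller-only
    payout_account TEXT,   -- seller-only
    CHECK (role_type = 'seller'
           OR (store_name IS NULL AND payout_account IS NULL))
);
""")

conn.execute("INSERT INTO users (id, name, email) VALUES (1, 'Ann', 'ann@example.com')")
conn.execute("INSERT INTO user_roles (user_id, role_type) VALUES (1, 'buyer')")
conn.execute("INSERT INTO user_roles (user_id, role_type, store_name) "
             "VALUES (1, 'seller', 'Ann''s Shop')")

# A buyer row carrying seller-only data is rejected by the CHECK constraint.
try:
    conn.execute("INSERT INTO user_roles (user_id, role_type, store_name) "
                 "VALUES (1, 'buyer', 'oops')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

roles = [r[0] for r in conn.execute(
    "SELECT role_type FROM user_roles WHERE user_id = 1 ORDER BY role_type")]
print(roles, rejected)
```

The same user holds both roles through ordinary rows in the role table, which is exactly the many-to-many mapping described above.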
Have you heard about **polymorphism**? You can create a 'mother' class `User` (with [devise](https://github.com/plataformatec/devise) for example) and create two 'children' classes `Buyer` and `Seller`! Good tutorial [here](http://robots.thoughtbot.com/back-to-basics-polymorphism-and-ruby)
User database design for marketplace-style app
[ "", "sql", "ruby-on-rails", "database", "database-design", "" ]
Here is how my data looks:

```
CompanyName  | CompanyCode
-----------    -----------
               B190
ARBOT          B213
INFRA          A946
MichalesTest   B207
MyCompany      Alerf
MyCompany      lerf
SNELSY         A857
SNELSY         A954
```

I want to sort the list so that rows with `CompanyName = 'MyCompany'` come first, with their associated `CompanyCode` sorted in ASC, and all other items listed after `'MyCompany'` with their `CompanyName` sorted in ASC. Entries with a null `CompanyName` can be at the end of the list.

I've tried the query below so far, but I didn't get the expected result. I was following the link [here](http://msdn.microsoft.com/en-us/library/ms188385.aspx#Case)

```
SELECT [CompanyName],
       [CompanyCode]
FROM [dbo].COND_ORDERBY_TEST
ORDER BY CASE CompanyName
             WHEN 'MyCompany' THEN CompanyName
             ELSE CompanyName
         END, CompanyCode
```

I expect my final result set to look like this:

```
CompanyName  | CompanyCode
-----------    -----------
MyCompany      Alerf
MyCompany      lerf
ARBOT          B213
INFRA          A946
MichalesTest   B207
SNELSY         A857
SNELSY         A954
               B190
```

I'm sure there should be an elegant way to do this, but I'm not getting it. Any help would be appreciated.
When I face this kind of situation, I generally do this:

```
SELECT [CompanyName]
     , [CompanyCode]
     , SortOrder = case when CompanyName is null then 3
                        when CompanyName = 'MyCompany' then 1
                        else 2
                   end
FROM [dbo].COND_ORDERBY_TEST
ORDER BY 3 asc, CompanyName asc, CompanyCode asc
```

As you are always ordering by the same fields, SQL Server does not have to guess which order you want... Try it :)
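For a quick sanity check of the CASE-based sort key outside SQL Server, here is the same ordering exercised with SQLite via Python, using the sample rows from the question (the row with code `B190` has a NULL name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cond_orderby_test (CompanyName TEXT, CompanyCode TEXT)")
conn.executemany(
    "INSERT INTO cond_orderby_test VALUES (?, ?)",
    [(None, 'B190'), ('ARBOT', 'B213'), ('INFRA', 'A946'),
     ('MichalesTest', 'B207'), ('MyCompany', 'Alerf'), ('MyCompany', 'lerf'),
     ('SNELSY', 'A857'), ('SNELSY', 'A954')])

rows = conn.execute("""
    SELECT CompanyName, CompanyCode
    FROM cond_orderby_test
    ORDER BY CASE WHEN CompanyName IS NULL       THEN 3
                  WHEN CompanyName = 'MyCompany' THEN 1
                  ELSE 2
             END,
             CompanyName, CompanyCode
""").fetchall()
for r in rows:
    print(r)
```

The synthetic sort key pushes `MyCompany` to bucket 1, everything else to bucket 2, and NULL names to bucket 3; the remaining columns break ties within each bucket.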
I would add a column in the select to solve this problem and then sort using that column ``` SELECT [CompanyName], [CompanyCode], (CASE WHEN COMPANYNAME='MyCompany' THEN 0 ELSE 1 END) AS SortCol FROM [dbo].COND_ORDERBY_TEST ORDER BY SortCol, CompanyName, CompanyCode ```
SQL Query with conditional order by for a specific condition
[ "", "sql", "sql-server", "" ]
I'm working with SQL Server 2008. There are 3 tables: table1, table2, table3.

table1:

```
Id Name group
1  ddd  a
2  aaa  b
3  sss  a
```

table2:

```
Id Name group
1  fff  c
2  gg   a
3  saa  b
```

table3:

```
Id group
1  a
2  b
3  c
```

I want to get the following result:

```
group count(table1) count(table2)
a     2             1
b     1             1
c     0             1
```

What query can I write to get the appropriate result?
Try this:

```
SELECT T3.[group],
       COUNT(DISTINCT T1.Id) as Count1,
       COUNT(DISTINCT T2.Id) as Count2
FROM Table3 T3
LEFT JOIN Table1 T1 on T3.[group]=T1.[group]
LEFT JOIN Table2 T2 on T3.[group]=T2.[group]
GROUP BY T3.[group]
```

Result:

```
GROUP  COUNT1  COUNT2
a      2       1
b      1       1
c      0       1
```

A similar setup can be explored in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/92ca5/11).

**Explanation:** Joining both tables in one statement multiplies rows within each group (group `a` matches 2 rows in `Table1` and 1 row in `Table2`, producing 2 joined rows), so counting the column directly would inflate the totals. `COUNT(DISTINCT ...)` counts each underlying row only once, and because `COUNT` ignores `NULL`s, groups with no match report 0.
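Because each LEFT JOIN multiplies matching rows within a group, counting distinct ids is the safe pattern here. A quick check with SQLite via Python, using the question's data (`group` is quoted since it is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (Id INTEGER, Name TEXT, "group" TEXT);
CREATE TABLE table2 (Id INTEGER, Name TEXT, "group" TEXT);
CREATE TABLE table3 (Id INTEGER, "group" TEXT);
INSERT INTO table1 VALUES (1,'ddd','a'), (2,'aaa','b'), (3,'sss','a');
INSERT INTO table2 VALUES (1,'fff','c'), (2,'gg','a'), (3,'saa','b');
INSERT INTO table3 VALUES (1,'a'), (2,'b'), (3,'c');
""")

rows = conn.execute("""
    SELECT t3."group",
           COUNT(DISTINCT t1.Id) AS count1,   -- distinct ids, not joined rows
           COUNT(DISTINCT t2.Id) AS count2
    FROM table3 t3
    LEFT JOIN table1 t1 ON t1."group" = t3."group"
    LEFT JOIN table2 t2 ON t2."group" = t3."group"
    GROUP BY t3."group"
    ORDER BY t3."group"
""").fetchall()
print(rows)
```

Group `a` joins to 2 × 1 = 2 combined rows, yet the distinct counts still report 2 and 1, matching the expected output.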
You can try like this ``` SELECT group, (SELECT COUNT(*) FROM table1 WHERE group=G.group) AS table1count, (SELECT COUNT(*) FROM table2 WHERE group=G.group) AS table2count FROM table3 G ```
Get Count values from 2 different table
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2008-r2", "" ]
Currently working with a Firebird 1.5 database and attempting to pull the data in the correct format natively with SQL. Consider the following database: ``` ID | Full Name 1 Jon Doe 2 Sarah Lee ``` What I am trying to achieve is a simple split on the full name field (space) within a query. ``` ID | First Name | Last Name 1 Jon Doe 2 Sarah Lee ``` The issue faced is Firebird POSITION() was introduced in v2.0. Is there any known workaround to split on a space that anyone has come across? Much appreciate your assistance!
For Firebird 1.5, a solution is to find a UDF that either combines both functions, or provides the position (I don't use UDFs, so I am not sure if one already exists). If none is available, you might have to write one. The other solution is to write a stored procedure for this functionality, see for example: [Position of substring function in SP](https://web.archive.org/web/20180322180542/http://mujweb.cz/iprenosil/interbase/ip_ib_code_string.htm#_code_strpos) ``` CREATE PROCEDURE Pos (SubStr VARCHAR(100), Str VARCHAR(100)) RETURNS (Pos INTEGER) AS DECLARE VARIABLE SubStr2 VARCHAR(201); /* 1 + SubStr-lenght + Str-length */ DECLARE VARIABLE Tmp VARCHAR(100); BEGIN IF (SubStr IS NULL OR Str IS NULL) THEN BEGIN Pos = NULL; EXIT; END SubStr2 = SubStr || '%'; Tmp = ''; Pos = 1; WHILE (Str NOT LIKE SubStr2 AND Str NOT LIKE Tmp) DO BEGIN SubStr2 = '_' || SubStr2; Tmp = Tmp || '_'; Pos = Pos + 1; END IF (Str LIKE Tmp) THEN Pos = 0; END ``` This example (taken from the link) can be extended to then use [`SUBSTRING`](https://www.firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-functions-scalarfuncs-substring) to split on the space. For searching on a single character like a space, a simpler solution can probably be devised than above stored procedure. For your exact needs you might need to write a selectable stored procedure specifically for this purpose. However, upgrading your database to Firebird 2.5 will give you much more [powerful internal functions](https://www.firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-functions) that simplify this query (and your life)!
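To see why the stored procedure's LIKE trick works, its loop can be mirrored in Python with `fnmatch` (`%` ≈ `*`, `_` ≈ `?`). This is only a model of the algorithm for plain-text substrings, not Firebird code; fnmatch wildcard characters inside the substring would need escaping.

```python
from fnmatch import fnmatchcase

def pos(substr, s):
    """1-based position of substr in s, 0 if absent -- mirrors the SP's loop."""
    pattern = substr + "*"   # SubStr2 = SubStr || '%'
    tmp = ""                 # Tmp grows one '_' (here '?') per iteration
    p = 1
    while not fnmatchcase(s, pattern) and not fnmatchcase(s, tmp):
        pattern = "?" + pattern   # SubStr2 = '_' || SubStr2
        tmp += "?"                # Tmp = Tmp || '_'
        p += 1
    # If Tmp of underscores matched, the whole string was consumed: not found.
    return 0 if fnmatchcase(s, tmp) else p

full_name = "Jon Doe"
p = pos(" ", full_name)
first, last = full_name[:p - 1], full_name[p:]   # SUBSTRING-style split
print(p, first, last)
```

Once the position of the first space is known, two `SUBSTRING` calls (or the slices above) give the first and last name.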
I also wanted to split a full name string to first and last name and I used the following SQL statements in firebird 2.1 Database: **Patients** is the table name. The **Name** field holds the full name string e.g.: "Jon Doe". The **FIRST\_NAME** field will store the first name and the **LAST\_NAME** field the last name First get the first name (string part before the first space) and execute a TRIM UPDATE statement to remove any spaces. ``` UPDATE "Patients" SET "Patients".FIRST_NAME = (SUBSTRING("Patients"."Name" FROM 1 FOR (POSITION(' ' IN "Patients"."Name")))) UPDATE "Patients" SET "Patients".FIRST_NAME = TRIM(BOTH ' ' FROM "Patients".FIRST_NAME) ``` Then get the last name (the string after the first space) and execute a TRIM UPDATE statement to remove any spaces ``` UPDATE "Patients" SET "Patients"."LAST_NAME" = (SUBSTRING("Patients"."Name" FROM (POSITION(' ' IN "Patients"."Name")+1))) UPDATE "Patients" SET "Patients".LAST_NAME = TRIM(BOTH ' ' FROM "Patients".LAST_NAME) ``` The result will be: ``` ID | NAME | FIRST_NAME | LAST_NAME 1 Jon Doe Jon Doe 2 Sarah Lee Sarah Lee ```
Firebird Database Split String on Field
[ "", "sql", "firebird", "" ]
I feel like this is a common problem, but it seems that none of the answers that I have found on SO or other sites seem to address the issue of a while loop with a counter. Let's say that I am trying to write a stored procedure in SQL that will populate a user's timesheet by inserting a row for each day for the remainder of the month. If the `@endMonth` variable holds the last day of the month, then I know that I could easily write a while loop and do something along these lines: ``` WHILE @date <= @endMonth BEGIN //Do some action with the date, like an insert SET @date = DATEADD(d, 1, @date) //increment the date by one day END ``` However, looking at answers [here](https://stackoverflow.com/questions/4487546/do-while-loop-in-sql-server-2008?rq=1) and on other sites leads me to believe that it would be best to avoid using a while loop if at all possible. So my question is this: is there a way I can implement a loop with a counter in SQL without using the `WHILE` structure? What technique would I use to go about converting a loop similar to the one I posted? Or with something like this, do I have to bite the bullet and just use a while loop? As an aside, some of the following questions come close, but none of them seem to quite address the issue of needing a counter as a loop condition. Most of the answers seem to condemn using `WHILE` loops, but I can't seem to find a general purpose solution as to an alternative. [sql while loop with date counter](https://stackoverflow.com/questions/1725379/sql-while-loop-with-date-counter) [SQL Server 2008 Insert with WHILE LOOP](https://stackoverflow.com/questions/15182468/sql-server-2008-insert-with-while-loop) (this one was close, but unfortunately for me it only works with an auto increment column)
There are many examples of populating a date range without a loop: first generate the dates from the start date to the end date in a recursive CTE, then insert them into the table. One of them:

```
DECLARE @StartDate DateTime = '2014-06-01'
DECLARE @EndDate DateTime = '2014-06-29'

;WITH populateDates (dates) AS (
    SELECT @StartDate as dates
    UNION ALL
    SELECT DATEADD(d, 1, dates)
    FROM populateDates
    WHERE DATEADD(d, 1, dates)<=@EndDate
)
SELECT *
INTO dbo.SomeTable
FROM populateDates
```

(For ranges longer than 100 days, append `OPTION (MAXRECURSION 0)` to the statement, since SQL Server's default recursion limit is 100.) Searching for `how to populate a date table in SQL` will turn up more variations of this approach.
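The same set-based idea ports to other engines; for instance SQLite's `WITH RECURSIVE` plus its `date()` function is a compact equivalent, exercised here from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE dates(d) AS (
        SELECT '2014-06-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2014-06-29'
    )
    SELECT d FROM dates
""").fetchall()

days = [r[0] for r in rows]
print(len(days), days[0], days[-1])
```

Each recursive step adds one day until the end date is reached, producing every date in the range with no procedural loop.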
As a general case, you can increment values without using cursors by assigning values and incrementing the variable in the same select, like this: ``` DECLARE @i INT = 0 DECLARE @table TABLE ( ID INT , testfield VARCHAR(5) ) INSERT INTO @table ( testfield ) VALUES ( 'abcd'), ( 'efgh' ), ( 'ijkl' ), ( 'mnop' ) UPDATE @table SET @I = ID = @i + 1 SELECT * FROM @table ```
Avoiding while loops in SQL when a counter is required
[ "", "sql", "sql-server", "loops", "while-loop", "" ]
I have the following regex that checks for a list of valid characters: ``` ^([a-zA-Z0-9+?/:().,' -]){1,35}$ ``` What I now need to do now is search for any existing columns in our DB that invalidates the above regex. I'm using the oracle SQL `REGEXP_LIKE` command. The problem I have is I can't seem to negate the above expression and return a value when it finds a character not in the expression e.g. `"a-valid-filename.xml"` => this shouldn't be returned as it's valid. `"an_invalid-filename.xml"` => I need to find these i.e. anything with an invalid character. The obvious answer to me is to define a list of invalid characters... but that could be a long list.
You can match it against the following regex which uses the `[^...]` negation character class: ``` ([^a-zA-Z0-9+?/:().,' -]) ``` This will match any single character that is not part of the list of characters that are allowed.
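The negated class behaves the same way in most regex engines, so it is easy to verify outside the database first. A quick Python check with the two filenames from the question:

```python
import re

# Any single character NOT in the allowed set from the question's regex.
invalid_char = re.compile(r"[^a-zA-Z0-9+?/:().,' -]")

def first_invalid(value):
    """Return the first disallowed character, or None if the value is clean."""
    m = invalid_char.search(value)
    return m.group(0) if m else None

print(first_invalid("a-valid-filename.xml"))     # None
print(first_invalid("an_invalid-filename.xml"))  # '_'
```

In Oracle the equivalent filter is `WHERE REGEXP_LIKE(col, q'{[^a-zA-Z0-9+?/:().,' -]}')`, which returns exactly the rows containing at least one character outside the allowed set.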
Try this: ``` where not regexp_like(col, '^([a-zA-Z0-9+?/:().,'' -]){1,35}$') ``` or ``` where regexp_like(col, '[^a-zA-Z0-9+?/:().,'' -]') ```
Regular Expression to return when invalid character found
[ "", "sql", "regex", "" ]
Environment: Ruby 2.0.0, Rails 4.0.3, Windows 8.1, PostgreSQL, Datatable 1.12.2, Will\_Paginate 3.0.5

I've successfully implemented the Railscast 340 solution, thanks to help provided here. However, my table has columns that are not native to the displayed table. These columns are polymorphic relationships using has\_many through. The table being displayed is:

```
class Product < ActiveRecord::Base
  has_one :location, dependent: :destroy
  has_one :patron, through: :location, source: :locator, source_type: 'Patron'
  has_one :shelf, through: :location, source: :locator, source_type: 'Shelf'
```

EDIT to add the location (through) table:

```
class Location < ActiveRecord::Base
  belongs_to :product
  belongs_to :locator, polymorphic: true
```

One polymorphic column that needs to be used for sorting is:

```
class Shelf < ActiveRecord::Base
  has_many :locations, as: :locator, dependent: :nullify
  has_many :products, through: :locations, dependent: :nullify

class Patron < ActiveRecord::Base
  has_many :locations, as: :locator, dependent: :nullify
  has_many :products, through: :locations, dependent: :nullify
```

The column is being displayed normally. However, since the column doesn't exist in the Product table, it is not available for sorting using the basic statement:

```
products = Product.order("#{sort_column} #{sort_direction}")
```

I know that I could read the table, sort it and make it available for display, but that is what I am trying to avoid by implementing Railscast 340, due to the performance hit. I assume there is some kind of query, join or sort sequence that would allow me to do this, but I am at a loss as to where to start. Reading the ActiveRecord Query Guide, though it is relatively good as far as guides go, leaves me more confused than when I started.

Any direction as to how to attack this would be appreciated. Thanks.
EDIT Stack Trace Follows Sort was: ``` products = Product.all.joins(:location).order("location.product.readable_loc #{sort_direction}") ``` Stack trace was: ``` Started GET "/products.json?sEcho=3&iColumns=8&sColumns=&iDisplayStart=0&iDisplayLength=10&mDataProp_0=0&mDataProp_1=1&mDataProp_2=2&mDataProp_3=3&mDataProp_4=4&mDataProp_5=5&mDataProp_6=6&mDataProp_7=7&sSearch=&bRegex=false&sSearch_0=&bRegex_0=false&bSearchable_0=true&sSearch_1=&bRegex_1=false&bSearchable_1=true&sSearch_2=&bRegex_2=false&bSearchable_2=true&sSearch_3=&bRegex_3=false&bSearchable_3=true&sSearch_4=&bRegex_4=false&bSearchable_4=true&sSearch_5=&bRegex_5=false&bSearchable_5=true&sSearch_6=&bRegex_6=false&bSearchable_6=true&sSearch_7=&bRegex_7=false&bSearchable_7=true&iSortCol_0=6&sSortDir_0=desc&iSortingCols=1&bSortable_0=true&bSortable_1=true&bSortable_2=true&bSortable_3=true&bSortable_4=true&bSortable_5=true&bSortable_6=true&bSortable_7=true&_=1401813038880" for 127.0.0.1 at 2014-06-03 14:50:55 -0400 Processing by ProductsController#index as JSON Parameters: {"sEcho"=>"3", "iColumns"=>"8", "sColumns"=>"", "iDisplayStart"=>"0", "iDisplayLength"=>"10", "mDataProp_0"=>"0", "mDataProp_1"=>"1", "mDataProp_2"=>"2", "mDataProp_3"=>"3", "mDataProp_4"=>"4", "mDataProp_5"=>"5", "mDataProp_6"=>"6", "mDataProp_7"=>"7", "sSearch"=>"", "bRegex"=>"false", "sSearch_0"=>"", "bRegex_0"=>"false", "bSearchable_0"=>"true", "sSearch_1"=>"", "bRegex_1"=>"false", "bSearchable_1"=>"true", "sSearch_2"=>"", "bRegex_2"=>"false", "bSearchable_2"=>"true", "sSearch_3"=>"", "bRegex_3"=>"false", "bSearchable_3"=>"true", "sSearch_4"=>"", "bRegex_4"=>"false", "bSearchable_4"=>"true", "sSearch_5"=>"", "bRegex_5"=>"false", "bSearchable_5"=>"true", "sSearch_6"=>"", "bRegex_6"=>"false", "bSearchable_6"=>"true", "sSearch_7"=>"", "bRegex_7"=>"false", "bSearchable_7"=>"true", "iSortCol_0"=>"6", "sSortDir_0"=>"desc", "iSortingCols"=>"1", "bSortable_0"=>"true", "bSortable_1"=>"true", "bSortable_2"=>"true", "bSortable_3"=>"true", 
"bSortable_4"=>"true", "bSortable_5"=>"true", "bSortable_6"=>"true", "bSortable_7"=>"true", "_"=>"1401813038880"} Company Load (1.0ms) SELECT "companies".* FROM "companies" WHERE "companies"."prefix" = 'ucf' ORDER BY "companies"."id" ASC LIMIT 1 Device Load (0.0ms) SELECT "devices".* FROM "devices" WHERE "devices"."company_id" = 54 AND "devices"."id" = 601 ORDER BY "devices"."id" ASC LIMIT 1 (1.0ms) SELECT COUNT(*) FROM "roles" INNER JOIN "devices_roles" ON "roles"."id" = "devices_roles"."role_id" WHERE "devices_roles"."device_id" = $1 AND (((roles.name = 'admin') AND (roles.resource_type IS NULL) AND (roles.resource_id IS NULL))) [["device_id", 601]] (0.0ms) SELECT COUNT(*) FROM "products" WHERE "products"."company_id" = 54 (1.0ms) SELECT COUNT(*) FROM "products" INNER JOIN "locations" ON "locations"."product_id" = "products"."id" AND "locations"."company_id" = 54 WHERE "products"."company_id" = 54 Product Load (2.0ms) SELECT "products".* FROM "products" INNER JOIN "locations" ON "locations"."product_id" = "products"."id" AND "locations"."company_id" = 54 WHERE "products"."company_id" = 54 ORDER BY location.product.readable_loc desc LIMIT 10 OFFSET 0 PG::UndefinedTable: ERROR: missing FROM-clause entry for table "product" LINE 1: ...id" = 54 WHERE "products"."company_id" = 54 ORDER BY product_locati... ^ : SELECT "products".* FROM "products" INNER JOIN "locations" ON "locations"."product_id" = "products"."id" AND "locations"."company_id" = 54 WHERE "products"."company_id" = 54 ORDER BY location.product.readable_loc desc LIMIT 10 OFFSET 0 Completed 500 Internal Server Error in 22ms PG::UndefinedTable - ERROR: missing FROM-clause entry for table "product" LINE 1: ...id" = 54 WHERE "products"."company_id" = 54 ORDER BY product_locati... 
^ : activerecord (4.0.3) lib/active_record/connection_adapters/postgresql_adapter.rb:774:in `exec_no_cache' activerecord (4.0.3) lib/active_record/connection_adapters/postgresql/database_statements.rb:138:in `block in exec_query' activerecord (4.0.3) lib/active_record/connection_adapters/abstract_adapter.rb:435:in `block in log' activesupport (4.0.3) lib/active_support/notifications/instrumenter.rb:20:in `instrument' activerecord (4.0.3) lib/active_record/connection_adapters/abstract_adapter.rb:430:in `log' activerecord (4.0.3) lib/active_record/connection_adapters/postgresql/database_statements.rb:137:in `exec_query' activerecord (4.0.3) lib/active_record/connection_adapters/postgresql_adapter.rb:891:in `select' activerecord (4.0.3) lib/active_record/connection_adapters/abstract/database_statements.rb:24:in `select_all' activerecord (4.0.3) lib/active_record/connection_adapters/abstract/query_cache.rb:61:in `block in select_all' activerecord (4.0.3) lib/active_record/connection_adapters/abstract/query_cache.rb:76:in `cache_sql' activerecord (4.0.3) lib/active_record/connection_adapters/abstract/query_cache.rb:61:in `select_all' activerecord (4.0.3) lib/active_record/querying.rb:36:in `find_by_sql' activerecord (4.0.3) lib/active_record/relation.rb:585:in `exec_queries' activerecord (4.0.3) lib/active_record/relation.rb:471:in `load' activerecord (4.0.3) lib/active_record/relation.rb:220:in `to_a' will_paginate (3.0.5) lib/will_paginate/active_record.rb:124:in `to_a' activerecord (4.0.3) lib/active_record/relation.rb:598:in `exec_queries' activerecord (4.0.3) lib/active_record/relation.rb:471:in `load' activerecord (4.0.3) lib/active_record/relation.rb:220:in `to_a' will_paginate (3.0.5) lib/will_paginate/active_record.rb:127:in `block in to_a' will_paginate (3.0.5) lib/will_paginate/collection.rb:96:in `create' will_paginate (3.0.5) lib/will_paginate/active_record.rb:126:in `to_a' D:65535:in `map' app/datatables/products_datatable.rb:20:in `data' 
app/datatables/products_datatable.rb:13:in `as_json' activesupport (4.0.3) lib/active_support/json/encoding.rb:50:in `block in encode' activesupport (4.0.3) lib/active_support/json/encoding.rb:81:in `check_for_circular_references' activesupport (4.0.3) lib/active_support/json/encoding.rb:49:in `encode' activesupport (4.0.3) lib/active_support/json/encoding.rb:34:in `encode' activesupport (4.0.3) lib/active_support/core_ext/object/to_json.rb:16:in `to_json' actionpack (4.0.3) lib/action_controller/metal/renderers.rb:90:in `block in <module:Renderers>' actionpack (4.0.3) lib/action_controller/metal/renderers.rb:33:in `block in _handle_render_options' D:/BitNami/rubystack-2.0.0-11/ruby/lib/ruby/2.0.0/set.rb:232:in `each' actionpack (4.0.3) lib/action_controller/metal/renderers.rb:30:in `_handle_render_options' actionpack (4.0.3) lib/action_controller/metal/renderers.rb:26:in `render_to_body' actionpack (4.0.3) lib/abstract_controller/rendering.rb:97:in `render' actionpack (4.0.3) lib/action_controller/metal/rendering.rb:16:in `render' actionpack (4.0.3) lib/action_controller/metal/instrumentation.rb:41:in `block (2 levels) in render' activesupport (4.0.3) lib/active_support/core_ext/benchmark.rb:12:in `block in ms' D:/BitNami/rubystack-2.0.0-11/ruby/lib/ruby/2.0.0/benchmark.rb:296:in `realtime' activesupport (4.0.3) lib/active_support/core_ext/benchmark.rb:12:in `ms' actionpack (4.0.3) lib/action_controller/metal/instrumentation.rb:41:in `block in render' actionpack (4.0.3) lib/action_controller/metal/instrumentation.rb:84:in `cleanup_view_runtime' activerecord (4.0.3) lib/active_record/railties/controller_runtime.rb:25:in `cleanup_view_runtime' actionpack (4.0.3) lib/action_controller/metal/instrumentation.rb:40:in `render' app/controllers/products_controller.rb:8:in `block (2 levels) in index' actionpack (4.0.3) lib/action_controller/metal/mime_responds.rb:191:in `respond_to' app/controllers/products_controller.rb:6:in `index' actionpack (4.0.3) 
lib/action_controller/metal/implicit_render.rb:4:in `send_action' actionpack (4.0.3) lib/abstract_controller/base.rb:189:in `process_action' actionpack (4.0.3) lib/action_controller/metal/rendering.rb:10:in `process_action' actionpack (4.0.3) lib/abstract_controller/callbacks.rb:18:in `block in process_action' activesupport (4.0.3) lib/active_support/callbacks.rb:453:in `_run__936629966__process_action__callbacks' activesupport (4.0.3) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.3) lib/abstract_controller/callbacks.rb:17:in `process_action' actionpack (4.0.3) lib/action_controller/metal/rescue.rb:29:in `process_action' actionpack (4.0.3) lib/action_controller/metal/instrumentation.rb:31:in `block in process_action' activesupport (4.0.3) lib/active_support/notifications.rb:159:in `block in instrument' activesupport (4.0.3) lib/active_support/notifications/instrumenter.rb:20:in `instrument' activesupport (4.0.3) lib/active_support/notifications.rb:159:in `instrument' actionpack (4.0.3) lib/action_controller/metal/instrumentation.rb:30:in `process_action' actionpack (4.0.3) lib/action_controller/metal/params_wrapper.rb:245:in `process_action' activerecord (4.0.3) lib/active_record/railties/controller_runtime.rb:18:in `process_action' actionpack (4.0.3) lib/abstract_controller/base.rb:136:in `process' actionpack (4.0.3) lib/abstract_controller/rendering.rb:44:in `process' actionpack (4.0.3) lib/action_controller/metal.rb:195:in `dispatch' actionpack (4.0.3) lib/action_controller/metal/rack_delegation.rb:13:in `dispatch' actionpack (4.0.3) lib/action_controller/metal.rb:231:in `block in action' actionpack (4.0.3) lib/action_dispatch/routing/route_set.rb:80:in `dispatch' actionpack (4.0.3) lib/action_dispatch/routing/route_set.rb:48:in `call' actionpack (4.0.3) lib/action_dispatch/journey/router.rb:71:in `block in call' actionpack (4.0.3) lib/action_dispatch/journey/router.rb:59:in `call' actionpack (4.0.3) 
lib/action_dispatch/routing/route_set.rb:680:in `call' request_store (1.0.5) lib/request_store/middleware.rb:9:in `call' warden (1.2.3) lib/warden/manager.rb:35:in `block in call' warden (1.2.3) lib/warden/manager.rb:34:in `call' rack (1.5.2) lib/rack/etag.rb:23:in `call' rack (1.5.2) lib/rack/conditionalget.rb:25:in `call' rack (1.5.2) lib/rack/head.rb:11:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/params_parser.rb:27:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/flash.rb:241:in `call' rack (1.5.2) lib/rack/session/abstract/id.rb:225:in `context' rack (1.5.2) lib/rack/session/abstract/id.rb:220:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/cookies.rb:486:in `call' activerecord (4.0.3) lib/active_record/query_cache.rb:36:in `call' activerecord (4.0.3) lib/active_record/connection_adapters/abstract/connection_pool.rb:626:in `call' activerecord (4.0.3) lib/active_record/migration.rb:369:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call' activesupport (4.0.3) lib/active_support/callbacks.rb:373:in `_run__103024161__call__callbacks' activesupport (4.0.3) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.3) lib/action_dispatch/middleware/callbacks.rb:27:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/reloader.rb:64:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/remote_ip.rb:76:in `call' better_errors (1.1.0) lib/better_errors/middleware.rb:84:in `protected_app_call' better_errors (1.1.0) lib/better_errors/middleware.rb:79:in `better_errors_call' better_errors (1.1.0) lib/better_errors/middleware.rb:56:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call' railties (4.0.3) lib/rails/rack/logger.rb:38:in `call_app' railties (4.0.3) lib/rails/rack/logger.rb:20:in `block in call' activesupport (4.0.3) 
lib/active_support/tagged_logging.rb:67:in `block in tagged' activesupport (4.0.3) lib/active_support/tagged_logging.rb:25:in `tagged' activesupport (4.0.3) lib/active_support/tagged_logging.rb:67:in `tagged' railties (4.0.3) lib/rails/rack/logger.rb:20:in `call' quiet_assets (1.0.2) lib/quiet_assets.rb:18:in `call_with_quiet_assets' actionpack (4.0.3) lib/action_dispatch/middleware/request_id.rb:21:in `call' rack (1.5.2) lib/rack/methodoverride.rb:21:in `call' rack (1.5.2) lib/rack/runtime.rb:17:in `call' activesupport (4.0.3) lib/active_support/cache/strategy/local_cache.rb:83:in `call' rack (1.5.2) lib/rack/lock.rb:17:in `call' actionpack (4.0.3) lib/action_dispatch/middleware/static.rb:64:in `call' rack (1.5.2) lib/rack/sendfile.rb:112:in `call' railties (4.0.3) lib/rails/engine.rb:511:in `call' railties (4.0.3) lib/rails/application.rb:97:in `call' rack (1.5.2) lib/rack/content_length.rb:14:in `call' thin (1.6.2) lib/thin/connection.rb:86:in `block in pre_process' thin (1.6.2) lib/thin/connection.rb:84:in `pre_process' thin (1.6.2) lib/thin/connection.rb:53:in `process' thin (1.6.2) lib/thin/connection.rb:39:in `receive_data' eventmachine-1.0.3-x86 (mingw32) lib/eventmachine.rb:187:in `run' thin (1.6.2) lib/thin/backends/base.rb:73:in `start' thin (1.6.2) lib/thin/server.rb:162:in `start' rack (1.5.2) lib/rack/handler/thin.rb:16:in `run' rack (1.5.2) lib/rack/server.rb:264:in `start' railties (4.0.3) lib/rails/commands/server.rb:84:in `start' railties (4.0.3) lib/rails/commands.rb:76:in `block in <top (required)>' railties (4.0.3) lib/rails/commands.rb:71:in `<top (required)>' bin/rails:4:in `<top (required)>' ruby-debug-ide (0.4.23.beta1) lib/ruby-debug-ide.rb:86:in `debug_program' ruby-debug-ide (0.4.23.beta1) bin/rdebug-ide:110:in `<top (required)>' -e:1:in `<main>' ``` EDIT Adding Product method readable\_loc ``` def readable_loc if self.location.locator.class == Patron self.location.locator.name # Yields patron's name else self.location.locator.row.name + 
" " + self.location.locator.name # Yields row name and shelf name. end end ```
I ended up using a bit of a hack to solve this problem. Database gurus may want to look away... The problem is more complex than originally stated. Any non-native column was not sorting correctly, whether or not it was polymorphic. It was sorting by the foreign key, not by the value of the foreign column being displayed in the table. For every foreign column, I created a local shadow column within the original table that holds the desired value. The sort\_column uses that shadow column to sort the table, but that is the only use for the shadow column. The datatable continues to reference the foreign table through the foreign key for all values even for display. This did create the quandary as to how to make sure the shadow column was maintained in synchronization with the foreign table. I am using the after\_update callback to do this. If any column is out of synchronization, all shadow columns are updated and the record is saved. For me, this occurs even when the record is created due to its associations. I had initially tried to do it directly with before\_save, but the associations were causing problems because they could not be accessed before the record was saved without the record being saved. You could see that would be an issue. In any case, this works for me. YMMV, so be careful out there. ``` after_update :shadow_update ... def shadow_update # After record is updated, update shadow columns if needed and force an update to write them # This actually triggers after create as well, but with correct timing regarding associations # Since all columns are updated to resolve differences, one update should resolve all if self.locshadow != self.location or self.year != self.yrshadow or ... self.locshadow = self.location self.yrshadow = self.year ... self.save! end end ```
```
products = Product.all.joins(:location).order("location.product.readable_loc #{sort_direction}")
```

This won't work, as you discovered. The argument to `order` must be a SQL snippet; you can't call instance methods, associations, etc. from the model. So you need to use the value of a column, or write custom SQL to compute the order clause. This is made even more complicated in your example because of the polymorphic joins; we don't know how to order the result set without first digging through several associations. I don't see a way to do this without explicitly writing the joins and the order clause; see below for a possible solution:

```
scope :by_readable_loc, -> { joins(join_clause).order("#{ order_clause } #{sort_direction}") }

def self.join_clause
  <<-EOS
    JOIN "locations" ON "locations"."product_id" = "products"."id"
    LEFT JOIN "patrons" ON "locations"."locator_id" = "patrons"."id" AND "locations"."locator_type" = 'Patron'
    LEFT JOIN "shelves" ON "locations"."locator_id" = "shelves"."id" AND "locations"."locator_type" = 'Shelf'
    LEFT JOIN "rows" ON "shelves"."row_id" = "rows"."id"
  EOS
end

def self.order_clause
  <<-EOS
    CASE WHEN locations.locator_type = 'Patron' THEN "patrons"."name"
         ELSE "rows"."name" || ' ' || "shelves"."name"
    END
  EOS
end
```

This implements `readable_loc` as a class method called `order_clause`, which can be passed into a scope, e.g. `Product.by_readable_loc`. You'd need to add LIMIT and OFFSET as appropriate for pagination.
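The CASE-over-LEFT-JOINs ordering itself is plain SQL and can be exercised on any engine before wiring it into the scope. A minimal sketch with SQLite via Python — `shelf_rows` stands in for the `rows` table (keyword clash), and the sample names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products   (id INTEGER PRIMARY KEY);
CREATE TABLE shelf_rows (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE shelves    (id INTEGER PRIMARY KEY, name TEXT, row_id INTEGER);
CREATE TABLE patrons    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE locations  (product_id INTEGER, locator_id INTEGER, locator_type TEXT);

INSERT INTO products   VALUES (1), (2);
INSERT INTO patrons    VALUES (1, 'Zed');
INSERT INTO shelf_rows VALUES (1, 'A');
INSERT INTO shelves    VALUES (1, 'S1', 1);
-- Product 1 is located with a patron, product 2 on a shelf.
INSERT INTO locations  VALUES (1, 1, 'Patron'), (2, 1, 'Shelf');
""")

rows = conn.execute("""
    SELECT p.id,
           CASE WHEN l.locator_type = 'Patron' THEN pa.name
                ELSE r.name || ' ' || s.name
           END AS readable_loc
    FROM products p
    JOIN locations l       ON l.product_id = p.id
    LEFT JOIN patrons pa   ON pa.id = l.locator_id AND l.locator_type = 'Patron'
    LEFT JOIN shelves s    ON s.id  = l.locator_id AND l.locator_type = 'Shelf'
    LEFT JOIN shelf_rows r ON r.id  = s.row_id
    ORDER BY readable_loc
""").fetchall()
print(rows)
```

Each product resolves to exactly one non-NULL branch of the CASE, so the whole set sorts on the computed location string in the database rather than in Ruby.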
How do I sort a โ€œforeignโ€ column using datatable/will_paginate as in Railscast 340?
[ "", "sql", "ruby-on-rails", "postgresql", "sorting", "activerecord", "" ]
I have two tables, `sites` and `callback_votes` where sites holds the available sites to vote on, and `callback_votes` holds the votes that has been done on each site. callback\_votes is structured like this: [![img](https://i.stack.imgur.com/04pEz.png)](https://i.stack.imgur.com/04pEz.png) (source: [gyazo.com](http://gyazo.com/7aa672895bc2fb76cdf54539558ec221.png)) Each time someone votes, the callback adds a new `callback_votes` row with the site id he voted on into site\_id and the username, ip address he voted from. Now what I want to do is, check if user has voted on all sites in the table sites by going through ***all*** `callback_votes` rows for the given ip, and check if it's `site_id` **matches** in the `sites` table. *Here's what I tried to do:* ``` SELECT sites.* FROM sites WHERE ( SELECT COUNT(*) FROM callback_votes WHERE callback_votes.ip = '127.0.0.1') = ( SELECT COUNT(*) FROM sites); ``` But this query won't work how I want, it only checks if there are the same count of votes with that ip as the number of rows on the sites table, which is cheat-able. How can I do it the way I described? EDIT; My websites provides the sites to vote on, each site has its own id in the sites table. To vote on a site, the user clicks on the site (I load all site buttons using the sites table) and it sends him to the site's page. Once the user has voted, the site he voted on sends a callback to my site, and my site will create a new row in callback\_votes with the site id he voted on, his ip and username. Now I want to check if user has voted on all available sites by checking if the count of the votes in callback\_votes is same, and the site\_id matches to each other to prevent cheating.
Count the sites whose id does not appear among the distinct site ids this ip has voted on:

```
Select count(*)
From sites
Where site_id NOT IN (Select DISTINCT site_id
                      From callback_votes
                      Where ip='127.0.0.1')
```

This returns the number of sites not yet voted on by this ip. If the count is 0, the user has voted on every site.
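A quick way to convince yourself, including the repeat-vote cheat case, with SQLite via Python. One caveat worth knowing: if `site_id` could ever be NULL in `callback_votes`, `NOT IN` would return no rows at all, and `NOT EXISTS` would be the safer variant.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites (site_id INTEGER PRIMARY KEY);
CREATE TABLE callback_votes (site_id INTEGER, ip TEXT);
INSERT INTO sites VALUES (1), (2), (3);
-- Two votes on site 1 (a repeat vote must not count twice) and one on site 2.
INSERT INTO callback_votes VALUES
    (1, '127.0.0.1'), (1, '127.0.0.1'), (2, '127.0.0.1');
""")

def missing_votes(ip):
    return conn.execute("""
        SELECT COUNT(*) FROM sites
        WHERE site_id NOT IN (SELECT DISTINCT site_id
                              FROM callback_votes
                              WHERE ip = ?)
    """, (ip,)).fetchone()[0]

before = missing_votes('127.0.0.1')          # site 3 still unvoted
conn.execute("INSERT INTO callback_votes VALUES (3, '127.0.0.1')")
after = missing_votes('127.0.0.1')           # 0 -> voted on every site
print(before, after)
```

Because the check is per-site rather than a row count, duplicate callback rows for the same site cannot fake completion.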
Check the count is 0 from ``` select Count(*) from Sites where Site_id NOT IN (select Site_id from Votes where votes.user_ip = '127.0.0.1') ``` You can get the list of non voted sites too with the same cost by using ``` select Site_id from Sites where Site_id NOT IN (select Site_id from Votes where votes.user_ip = '127.0.0.1') ```
Selecting if all row ids exist in another table with a column check
[ "", "mysql", "sql", "" ]
I have a table like this:

```
id  candid  candname       status     date        time      location  jobcode 
1   12      hhhhhhhhhh     Introduce  2014-05-21  14:0      NewYork   10JN 
3   12      hhhhhhhhhh     Reject     2014-05-21  15:0 AM   London    10JN 
4   12      hhhhhhhhhh     Interview  2014-05-21  15:0 PM   Chicago   10JN 
5   11      Pinky Bare     Introduce  2014-05-21  65:6      India     10JN 
6   11      Pinky Bare     Interview  2014-05-21  4:56 AM             10JN 
7   13      chetan Tae     Introduce  2014-05-21  4:54 AM   Nagpur    faOl 
8   13      chetan Tae     Interview  2014-05-21  3:45      Pune      faOl 
9   14      manisha mane   Introduce  2014-05-21  3:33 PM   Pune      faOl 
10  18      ranju gondane  Introduce  2014-05-28  3:44      Nagpur    AQW-06 
12  18      ranju gondane  Interview  2014-05-28  5:45      45454     AQW-06 
13  18      ranju gondane  Reject     2014-05-28  43:43     rsds      AQW-06 
14  19      vandanna rai   Introduce  2014-05-28  7:7       yyyr      AQW-06 
```

If I use the query

```
SELECT COUNT(*) FROM [tablename] 
WHERE (jobcode='AQW-06') 
AND ([status] <> 'Interview' AND [status] <> 'Reject' AND [status] <> 'ON-Hold' AND [status] <> 'Hire') 
```

I get a count of 2 for introduced candidates. If a candidate is interviewed after being introduced, he should no longer be counted as Introduce. I want the counts of introduced, interviewed and rejected candidates for a specific jobcode. Please help me with this.
You can try

```
select status, count(*)
  from [tablename]
 where jobcode = 'AQW-06'
 group by status
```

Edit: You can try to use something like this

```
select count(x.candid) numofcandidates, x.statusnum
  from (select candid,
               max(case
                     when status = 'Reject' then 3
                     when status = 'Interview' then 2
                     when status = 'Introduce' then 1
                   end) statusnum
          from [tablename] t
         where jobcode = 'AQW-06'
         group by candid) x
 group by x.statusnum;
```

What I actually did is "translate" the status to a number, so I can pick the highest status first. All you need to do then is "translate" the statusnum back to the values in your table. In my opinion, I would store a statusnum in the table directly.
Try this:

```
;with reftable as
(select 1 'key', 'Introduce' 'val'
 union
 select 2 'key', 'Interview' 'val'
 union
 select 3 'key', 'Rejected' 'val'
 ),
cte as 
(select e.candid, e.[status],
 row_number() over (partition by e.candid order by r.[key] desc) rn
 from yourtable e
 inner join reftable r on e.[status] = r.val
 where e.[status] in ('Introduce','Interview','Rejected')
 and e.jobcode = 'AQW-06')
select [status], count([status])
from cte
where rn = 1
group by [status]
```

Basically, we assign a numeric value to your text status to allow sorting. In the `over` clause, we sort by this numeric value in descending order to get the highest status of a candidate as you describe. Then, we just count the number of occurrences of each status. Note that you can extend this to include values for status like 'Hire'. To do this, you will need to add it to the list in `reftable` with an appropriate numeric value, and also add it to the filter in `cte`.
Find count from specific table by specific filter sql server
[ "", "sql", "sql-server-2008", "" ]
I have this data: ``` | bid_id | created | auction_id | user_id | bid_credits | bid_credits_free | bid_rating | balance | bidded_price | last_for_user | bid_ip | bid_type | +--------+---------------------+------------+---------+-------------+------------------+------------+---------+--------------+---------------+--------------+----------+ | 735 | 2013-10-11 10:02:58 | 9438 | 62323 | 1 | 0 | 0.0000 | 100333 | 0.86 | Y | 72.28.166.61 | single | | 734 | 2013-10-11 10:02:56 | 9438 | 76201 | 1 | 1 | 0.0000 | 1115 | 0.85 | Y | 72.28.166.61 | single | | 733 | 2013-10-11 10:02:55 | 9438 | 62323 | 1 | 0 | 0.0000 | 100334 | 0.84 | N | 72.28.166.61 | single | | 732 | 2013-10-11 10:02:54 | 9438 | 76201 | 1 | 1 | 0.0000 | 1116 | 0.83 | N | 72.28.166.61 | single | | 731 | 2013-10-11 10:02:52 | 9438 | 62323 | 1 | 0 | 0.0000 | 100335 | 0.82 | N | 72.28.166.61 | single | ``` I'm trying to get the number of "bid\_credits" and "bid\_credits\_free" as SEPARATE VALUES... So the query should return me: ``` | user_id | count(bid_credits) | count(bid_credits_free) | +---------+--------------------+-------------------------+ | 62323 | 3 | 0 | | 76201 | 2 | 2 | ``` The query that I am using is: ``` select user_id, count(bid_credits), count(bid_credits_free) from bids_history where auction_id = 9438 and user_id in (62323,76201) group by user_id; ``` but it's not counting the bids correctly... Any ideas? Thanks
You're looking to SUM them, COUNT is just counting rows. Try this: ``` select user_id, sum(bid_credits), sum(bid_credits_free) from bids_history where auction_id = 9438 and user_id in (62323,76201) group by user_id; ```
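To see the difference concretely, here is a small reproduction (not from the original thread) using SQLite via Python's `sqlite3`, standing in for MySQL, with data shaped like the question's: `COUNT` counts rows, while `SUM` adds up the values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bids_history (user_id INTEGER, auction_id INTEGER,
                           bid_credits INTEGER, bid_credits_free INTEGER);
INSERT INTO bids_history VALUES
  (62323, 9438, 1, 0), (62323, 9438, 1, 0), (62323, 9438, 1, 0),
  (76201, 9438, 1, 1), (76201, 9438, 1, 1);
""")

rows = conn.execute("""
    SELECT user_id, SUM(bid_credits), SUM(bid_credits_free)
    FROM bids_history
    WHERE auction_id = 9438 AND user_id IN (62323, 76201)
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
# COUNT(bid_credits_free) would report 3 for user 62323 (it counts rows),
# even though the free credits actually add up to 0.
```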
Use a sum instead of count when grouping; that should work. I also reformatted the query so it's easier to read :)

```
SELECT user_id, 
       SUM(bid_credits), 
       SUM(bid_credits_free) 
FROM   bids_history 
WHERE  auction_id = 9438 
       AND user_id IN (62323,76201) 
GROUP  BY user_id; 
```

The reason you want a sum instead of a count is that count just counts the number of rows, not the values inside them. So when you group by an id like that, you need a sum to get the actual total of the contents. Hope that helps explain things a bit :)
MySQL Query select count where in group by
[ "", "mysql", "sql", "" ]
I have a site that posts articles, and certain articles are assigned a location and a profile so only people in that location and relevant profile can see it. I have a query which returns the location and the count of articles assigned to that location ``` SELECT locations.id, locations.location, COUNT( DISTINCT article.id) AS Number FROM ar.locations JOIN ar.articleLocation ON articleLocation.locationId = locations.id JOIN ar.article ON article.id=articleLocation.articleId JOIN ar.articleProfile ON article.id = articleProfile.articleId WHERE article.createDate >= '2013-11-30' AND article.startDate <= '2014-05-30' AND articleProfile.profileId IN ('1000000410','1000000408','1000000393') AND articleLocation.locationId IN ('250','194','195','204','281') GROUP BY locations.id, locations.location ORDER BY locations.location ``` This returns the results ``` id location Number 194 LocationA 1 250 LocationB 16 281 LocationC 2 ``` But in the query there are 2 other location Ids, and because there are no articles assigned to those locations, nothing is being returned for those IDS Ideally I would like ``` id location Number 194 LocationA 1 250 LocationB 16 281 LocationC 2 204 LocationD 0 195 LocationE 0 ``` I can't seem to figure out how to bring back 0 if no articles exist in that location. Any help/pointers in the right direction would be greatly appreciated. Also I'm more than open to suggestions if there is a more efficient/better way of doing what I'm currently doing.
You can use a `LEFT JOIN` for this purpose, as below. The parentheses were not located properly, which have been corrected below. ``` SELECT locations.id, locations.location, COALESCE(COUNT( DISTINCT article.id), 0) AS Number FROM ar.locations JOIN ar.articleLocation ON articleLocation.locationId = locations.id LEFT JOIN (SELECT article.* FROM ar.article JOIN ar.articleProfile ON article.id = articleProfile.articleId WHERE article.createDate >= '2013-11-30' AND article.startDate <= '2014-05-30' AND articleProfile.profileId IN ('1000000410','1000000408','1000000393') ) article ON article.id=articleLocation.articleId WHERE articleLocation.locationId IN ('250','194','195','204','281') GROUP BY locations.id, locations.location ORDER BY locations.location; ``` The COALESCE function will print 0 if there are no records (instead of returning NULL). **References**: [Using Outer Joins on TechNet](http://technet.microsoft.com/en-us/library/ms187518%28v=sql.105%29.aspx) [COALESCE (Transact-SQL) on MSDN](http://msdn.microsoft.com/en-us/library/ms190349.aspx)
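The core of the fix is that a `LEFT JOIN` keeps locations with no matching articles, and `COUNT` over the joined column then yields 0 for them. A stripped-down illustration (not from the original thread) using SQLite via Python's `sqlite3`; the profile and date filters from the real query are omitted here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (id INTEGER PRIMARY KEY, location TEXT);
CREATE TABLE articleLocation (locationId INTEGER, articleId INTEGER);
INSERT INTO locations VALUES (194, 'LocationA'), (204, 'LocationD');
INSERT INTO articleLocation VALUES (194, 1);  -- LocationD has no articles
""")

rows = conn.execute("""
    SELECT l.id, l.location, COUNT(al.articleId) AS Number
    FROM locations l
    LEFT JOIN articleLocation al ON al.locationId = l.id
    GROUP BY l.id, l.location
    ORDER BY l.id
""").fetchall()
# COUNT(al.articleId) skips the NULLs produced by the outer join,
# so LocationD comes back with 0 instead of disappearing.
```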
Use a LEFT JOIN on the Article table. That will produce NULL in the rows where you expect to see 0. To display 0 instead of NULL, wrap the count in ISNULL(..., 0).
SQL to return rows where one column is 0
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I would like to select every row from a table which includes a faulty social security number. In this simple example the social security number has to be exactly 4 numbers long, using the numbers 0-9. The table looks something like this: ``` |SSN |Name |Favorite Food| |1345|Mark |Meatballs | |1458|Connor|Tacos | |12 |Lisa |Pizza | |1487|Clark |Tomato Soup | |XQXQ|Hans |Sallad | ``` I would like to select the name/names of the person(s) which do not have a correct SSN. The query should result in **Lisa** since she only has **2** numbers in her SSN and **Hans** which has written invalid characters. Thank you for reading this, have a nice one. I don't know how important this is but just so I supply correct information, I use MySQL and the SSN is of the type "char".
Not sure which database you are using, but with the LIKE statement...

```
select * from Table
Where SSN not Like '[0-9][0-9][0-9][0-9]'
```

Check your database's documentation for the exact syntax; the `[0-9]` range inside `LIKE` is SQL Server-specific, and MySQL would use `NOT REGEXP '^[0-9]{4}$'` instead.
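As an illustration of the same idea (not from the original thread): SQLite's `GLOB` operator supports the same character classes, so the pattern can be exercised via Python's `sqlite3` with the question's sample rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (ssn TEXT, name TEXT);
INSERT INTO people VALUES
  ('1345', 'Mark'), ('1458', 'Connor'), ('12', 'Lisa'),
  ('1487', 'Clark'), ('XQXQ', 'Hans');
""")

# NOT GLOB '[0-9][0-9][0-9][0-9]' flags anything that is not exactly
# four digits: too short, too long, or containing non-digit characters.
bad = conn.execute(
    "SELECT name FROM people WHERE ssn NOT GLOB '[0-9][0-9][0-9][0-9]' ORDER BY name"
).fetchall()
```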
You can use the `LENGTH` function to get the character count:

```
select * from Table
Where LENGTH(SSN)<>4
```

For MS SQL Server, use `LEN`. (Note this only catches wrong-length values; it would not flag `XQXQ`, which is 4 characters long.)
SQL - Select everything that does not follow proper format
[ "", "mysql", "sql", "select", "" ]
I have question in SQL (MySQL environment). I have two tables: ``` Airports -------------------- id type city_id 1 2 1 2 3 1 3 4 2 City ---------- id name 1 Paris 2 Lyon ``` I want cities with airports whose type is 2 and 3. I have try: ``` SELECT * FROM city c INNER JOIN airports a ON a.city_id = c.id WHERE a.type = 1 AND a.type = 2 ``` But it does not work. Any ideas ?
If you need cities where both type 1 and type 2 airports exist, try this query:

```
SELECT * FROM CITY 
 JOIN (
   SELECT CITY_ID 
     FROM Airports 
    WHERE type in (1,2) 
 GROUP BY CITY_ID 
 HAVING COUNT(DISTINCT type) =2 ) as A 
   ON City.ID=a.City_id 
```
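The key piece is `HAVING COUNT(DISTINCT type) = 2`, which keeps only cities that have *both* airport types rather than either one. A minimal check (not from the original thread) using SQLite via Python's `sqlite3`, with types 2 and 3 as the question's text describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE city (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE airports (id INTEGER, type INTEGER, city_id INTEGER);
INSERT INTO city VALUES (1, 'Paris'), (2, 'Lyon');
-- Paris has airport types 2 and 3; Lyon only has type 4
INSERT INTO airports VALUES (1, 2, 1), (2, 3, 1), (3, 4, 2);
""")

rows = conn.execute("""
    SELECT c.id, c.name
    FROM city c
    JOIN (SELECT city_id
          FROM airports
          WHERE type IN (2, 3)
          GROUP BY city_id
          HAVING COUNT(DISTINCT type) = 2) a
      ON c.id = a.city_id
""").fetchall()
```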
If you are after the record paris which has two different types(1 and 2), try this: ``` SELECT c.* FROM city c INNER JOIN airports a ON a.city_id = c.id WHERE a.type IN (2,3) HAVING COUNT(DISTINCT a.type)>1 ``` Result: ``` ID NAME 1 Paris ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!2/57bf0/10). **To be more detailed:** ``` SELECT c.id as CID,a.Id as AID,type,city_id,name FROM city c INNER JOIN airports a ON a.city_id=c.id LEFT JOIN (SELECT c.id FROM city c INNER JOIN airports a ON a.city_id = c.id WHERE a.type IN (2,3) HAVING COUNT(DISTINCT a.type)>1) T ON T.id=c.id WHERE T.id IS NOT NULL ``` Result: ``` CID AID TYPE CITY_ID NAME 1 1 2 1 Paris 1 2 3 1 Paris ``` Fiddle [**Example**](http://www.sqlfiddle.com/#!2/57bf0/15).
SQL: Get parent Join child where child type = 1 AND child type = 2
[ "", "mysql", "sql", "join", "" ]
I am aware of what the problem is with my query, but am really struggling to find a solution here. **[SQL Fiddle](http://sqlfiddle.com/#!3/a992e/1)** I guess I'm not even really sure how to ask this. What I'm trying to achieve sum all tracking numbers for a date range grouped by branch, but (and this is the kicker) include **any** other records in the sum that have the same tracking number. I thought of doing something like this, but of course SQL Server doesn't like this because I can't have a subquery in an aggregate function. `MAX((select SUM(demo.NegotiatedRate) where #demo.Tracking = demo2.Tracking)) as NegotiatedRate` Here is the query I have so far if anyone doesnt want to click the SQL Fiddle link ``` select demo.Branch, SUM(demo.NegotiatedRate) as NegotiatedRate, SUM(demo2.BillRate) as BillRate from demo join demo2 on demo2.Tracking = demo.Tracking where demo.ShipDate = '2014-05-01' group by demo.Branch ``` **Expected Output** The output that I am trying to achieve would look something like this. The `GH6` negotiated rate and bill rate should match even though one of the `GH6` entries falls outside of desired date range. ``` Branch NegotiatedRate BillRate GH4 50 50 GH6 25 25 ```
You can pre-project the overall (non date-range bound, unfiltered) totals in a separate derived table or cte and then join back to it: ``` WITH totals AS ( SELECT demo.Tracking, SUM(demo.NegotiatedRate) as NegotiatedRate from demo group by demo.Tracking ) select demo.Branch, MIN(totals.NegotiatedRate) as NegotiatedRate, SUM(demo2.BillRate) as BillRate from demo join demo2 on demo2.Tracking = demo.Tracking join totals on totals.Tracking = demo.Tracking where demo.ShipDate = '2014-05-01' group by demo.Branch; ``` [SqlFiddle here](http://sqlfiddle.com/#!3/a992e/18) Given that there should only be one `NegotiatedRate` per tracking, you can circumvent the need to add the summed `totals.NegotiatedRate` to the outer query by applying an aggregate (I've used `MIN`), although this is just to pacify Sql.
As a bit of a simpler answer, you can do something like: ``` SELECT demo.Branch, SUM(demo.NegotiatedRate) AS NegotiatedRate, demo2.BillRate FROM demo JOIN demo2 on demo2.Tracking = demo.Tracking WHERE demo.Tracking IN ( SELECT Tracking FROM demo WHERE ShipDate = '2014-05-01' ) GROUP BY demo.Branch, demo2.BillRate ``` As I understand it, you get all the tracking numbers you want in a certain date range, then get all the information from those tracking numbers, no matter the date range. Then groups them by the Branch and BillRate, both of which should be one value for each tracking number.
T-SQL Group By and Sub Query
[ "", "sql", "sql-server", "t-sql", "group-by", "" ]
I have this PL/SQL piece of code RESTALLSEQUENCENUMBERLIST contains value '2,3,4' which i am converting this into '2','3','4' for the in clause in the third line by the second statement. But my update query is giving me invalid number. Can anybody help with this ``` restAllSequenceNumberInList varchar2(100); SELECT '''' || REPLACE( RESTALLSEQUENCENUMBERLIST, ',', ''',''' ) || '''' into restAllSequenceNumberInList FROM DUAL; UPDATE THIRD_PARTY_LOOKUP SET ADDRESS_ID = firstSequenceNumber WHERE ADDRESS_ID in (RESTALLSEQUENCENUMBERINLIST); ```
The `in` clause does not take a string argument that represents multiple values. It is as simple as that. If you pass a single string to `in` it behaves the same as `=`. You can do what you want using `like`: ``` UPDATE THIRD_PARTY_LOOKUP SET ADDRESS_ID = firstSequenceNumber WHERE ','||ADDRESS_ID||',' LIKE '%,' || RESTALLSEQUENCENUMBERLIST || '%,'; ``` However, instead of storing lists in a string, why not store them in a temporary table? After all, tables are the SQL construct designed explicitly for storing lists of things.
You cannot pass comma separated list in a variable and assume SQL to treat them as separate values, try this ``` --replace with " not ' SELECT REPLACE '''' || REPLACE( RESTALLSEQUENCENUMBERLIST, ',', '","' ) || '''' into restAllSequenceNumberInList FROM DUAL; --use xml to split them into rows UPDATE THIRD_PARTY_LOOKUP SET ADDRESS_ID = firstSequenceNumber WHERE ADDRESS_ID in (SELECT EXTRACTVALUE(COLUMN_VALUE,'text()') VALS FROM XMLTABLE(restAllSequenceNumberInList) ) ```
Variable inclause is not working for the PL/SQL
[ "", "sql", "oracle", "" ]
I'm having trouble creating a table using RODBC's sqlSave (or, more accurately, writing data to the created table). This is different than the existing sqlSave question/answers, as 1. the problems they were experiencing were different, I can create tables whereas they could not and 2. I've already unsuccesfully incorporated their solutions, such as closing and reopening the connection before running sqlSave, also 3. The error message is different, with the only exception being a post that was different in the above 2 ways I'm using MS SQL Server 2008 and 64-bit R on a Windows RDP. I have a simple data frame with only 1 column full of 3, 4, or 5-digit integers. ``` > head(df) colname 1 564 2 4336 3 24810 4 26206 5 26433 6 26553 ``` When I try to use sqlSave, no data is written to the table. Additionally, an error message makes it sound like the table can't be created though the table does in fact get created with 0 rows. Based on a suggestion I found, I've tried closing and re-opening the RODBC connection right before running sqlSave. Even though I use `append = TRUE`, I've tried dropping the table before doing this but it doesn't affect anything. ``` > sqlSave(db3, df, table = "[Jason].[dbo].[df]", append = TRUE, rownames = FALSE) Error in sqlSave(db3, df, table = "[Jason].[dbo].[df]", : 42S01 2714 [Microsoft][ODBC SQL Server Driver][SQL Server]There is already an object named 'df' in the database. [RODBC] ERROR: Could not SQLExecDirect 'CREATE TABLE [Jason].[dbo].[df] ("df" int)' ``` I've also tried using sqlUpdate() on the table once it's been created. It doesn't matter if I create it in R or SQL Server Management Studio, I get the error `table not found on channel` Finally, note that I have also tried this without append = TRUE and when creating a new table, as well as with and without the rownames option. Mr.Flick from Freenode's #R had me check if I could read in the empty table using sqlQuery and indeed, I can. 
*Update* I've gotten a bit closer with the following steps: 1. I created an ODBC connection that goes directly to my Database within the SQL Server, instead of just to the default (Master) DB then specifying the path to the table within the `table =` or `tablename =` statements 2. Created the table in SQL Server Management Studio as follows `GO` `CREATE TABLE [dbo].[testing123](` `[Person_DIMKey] [int] NULL` `) ON [PRIMARY]` `GO` 1. In R I used `sqlUpdate` with my new ODBC connection and no brackets around the tablename 2. Now sqlUpdate() sees the table, however it complains that it needs a unique column 3. Indicating that the only column in the table is the unique column with `index = colname` results in an error saying that the column does not exist 4. I dropped and recreated the table specifying a primary key, `GO` `CREATE TABLE [dbo].[jive_BNR_Person_DIMKey](` `[jive_BNR_Person_DIMKey] [int] NOT NULL PRIMARY KEY` `) ON [PRIMARY]` `GO` which generated both a Primary Key and Index (according to the GUI interface of SQL Sever Management Studio) named `PK__jive_BNR__2754EC2E30F848ED` 1. I specified this index/key as the unique column in sqlUpdate() but I get the following error: `Error in sqlUpdate(db4, jive_BNR_Person_DIMKey, tablename = "jive_BNR_Person_DIMKey", :` `index column(s) PK__jive_BNR__2754EC2E30F848ED not in database table` For the record, I was specifying the correct column name (not "colname") for index; thanks to MrFlick for requesting clarification. Also, these steps are numbered 1 through 7 in my post but StackOverflow resets the numbering of the list a few times when it gets displayed. If anyone can help me clean that aspect of this post up I'd appreciate it.
After re-reading the RODBC vignette, here's the simple solution that worked:

```
sqlDrop(db, "df", errors = FALSE) 
sqlSave(db, df) 
```

Done. After experimenting with this a lot more for several days, it seems that the problems stemmed from the use of the additional options, particularly `table =` or, equivalently, `tablename =`. Those should be valid options, but somehow they manage to cause problems with my particular combination of RStudio (Windows, 64 bit, desktop version, current build), R (Windows, 64 bit, v3), and/or MS SQL Server 2008. `sqlSave(db, df)` will also work without `sqlDrop(db, "df")` if the table has never existed, but as a best practice I'm writing `try(sqlDrop(db, "df", errors = FALSE), silent = TRUE)` before all `sqlSave` statements in my code.
After hours of working on this, I was finally able to get sqlSave to work while specifying the table name. Deep breath; where to start. Here is the list of things I did to get this to work:

* Open the 32-bit ODBC Administrator and create a User DSN configured for your specific database. In my case, I am creating a global temp table, so I linked to tempdb. Use this connection name in your `odbcConnection(Name)`. Here is my code: `myconn2 <- odbcConnect("SYSTEMDB")`.
* Then I defined my data types with the following code: `columnTypes <- list(Record = "VARCHAR(10)", Case_Number = "VARCHAR(15)", Claim_Type = "VARCHAR(15)", Block_Date = "datetime", Claim_Processed_Date = "datetime", Status ="VARCHAR(100)")`.
* I then updated my data frame class types using `as.character` and `as.Date` to match the data types listed above.
* I had already created the table (since I've been working on it for hours), so I had to drop it using `sqlDrop(myconn2, "##R_Claims_Data")`.
* I then ran: `sqlSave(myconn2, MainClmDF2, tablename = "##R_Claims_Data", verbose=TRUE, rownames= FALSE, varTypes=columnTypes)`

Then my head fell off because it worked! I really hope this helps someone going forward. Here are the links that helped me get to this point:

[Table not found](https://stackoverflow.com/questions/26130616/sqlfetch-table-not-found-error)

[sqlSave in R](https://stackoverflow.com/questions/21934564/sqlsave-in-r-to-create-and-save-a-dataframe-to-an-sql-table)

[RODBC](https://stackoverflow.com/questions/8526278/r-rodbc-sqlsave-mapping-dataframe-timestamps-to-sql-server-timestamps)
RODBC sqlSave table creation problems
[ "", "sql", "sql-server", "r", "sql-server-2008", "rodbc", "" ]
We have a lot of databases and a lot of tables within those databases. I'm searching for a specific one. I know the name of the table, but it wouldn't be easy to search through every database manually. What SQL statement could I use to find the table by name? By the way, we're using Microsoft SQL Server Management Studio. Maybe there's another way to search for tables by name within this program?
Thought I would update with the solution I use now to find a table among many dBs. After some searching around I found this query: ``` /*Finds a table across multiple dBs and returns the dB(s) in which the table was found*/ SELECT DISTINCT DB_NAME(database_id) FROM [sys].[dm_db_index_operational_stats](NULL,NULL,NULL,NULL) WHERE OBJECT_NAME(object_id,database_id) = 'table name' ``` This query finds the dB which holds the table. Then, in Microsoft SQL Server Mgmt Studio, I go to Object Explorer Window, find the dB identified by the query, expand its contents, and click on the Tables folder. Then I use the Filter tool to find the table by name. It would be nice if the filter tool worked on the Databases folder but it does not. You must select the Tables folder before filtering. This may not be the best solution, but it works for me.
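The general idea, querying the engine's catalog instead of clicking through the object tree, works on every engine; only the catalog name changes (`sys.tables` per database on SQL Server, `information_schema.tables` on MySQL and PostgreSQL). As a tiny illustration (not from the original thread), SQLite exposes the same thing through `sqlite_master`, shown here via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_orders (id INTEGER)")
conn.execute("CREATE TABLE audit_log (id INTEGER)")

# Every engine has a catalog; in SQLite it is sqlite_master.
found = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE '%orders%'"
).fetchall()
```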
You said you did a search which should've led you to this article: <http://blog.sqlauthority.com/2008/04/29/sql-server-find-table-in-every-database-of-sql-server/> If not, follow that. Basically what he creates is a stored procedure which will search for every table name you specify in every database. If you were to do this: ``` select * from sys.tables where name like '%tablename%' ``` You would need to change the database every single time and if you have a lot, well you see the problem.
SQL statement to find a table by its name
[ "", "sql", "sql-server", "database", "find", "tablename", "" ]
I have a requirement to get a start time and end time in the condition where `val = 'Y'` and `col_val < 5`. I have a table named temp\_vvv structure like ``` date_time | val | col_val ------------------------------------------------------------------------- 03/06/2014 08:58:00 Y 4 03/06/2014 08:59:00 Y 3 03/06/2014 09:00:00 Y 1 03/06/2014 09:01:00 Y 0 03/06/2014 09:01:30 Y 0 03/06/2014 09:02:00 Y 2 03/06/2014 09:02:30 Y 5 03/06/2014 09:03:00 Y 20 03/06/2014 09:03:30 Y 50 03/06/2014 09:04:00 Y 55 03/06/2014 09:04:30 Y 60 03/06/2014 09:05:00 Y 10 03/06/2014 09:05:30 Y 4 03/06/2014 09:06:00 Y 2 03/06/2014 09:06:30 Y 0 03/06/2014 09:07:00 Y 0 03/06/2014 09:07:30 Y 1 03/06/2014 09:08:00 N 5 03/06/2014 09:08:30 N 0 ``` I am expecting output like ``` start_date_time | end_date_time -------------------------------------------------- 03/06/2014 08:58:00 03/06/2014 09:02:00 03/06/2014 09:05:30 03/06/2014 09:07:30 ``` How to get this output from a select query Please help me. **EDIT** The value of col\_val <5 is start from `'03/06/2014 08:58:00'` and ends at `'03/06/2014 09:02:00'` .so that first record is `start_date_time` is `'03/06/2014 08:58:00'` and `end_date_time` is `'03/06/2014 09:02:00'` and so on . How to achieve it?
You can use `ROW_NUMBER()` analytic function to group the records based on `COL` and `COL_VAL`. Try with the below query, ``` SELECT MIN(date_time) start_date_time, MAX(date_Time) end_date_time FROM( SELECT date_time, val, col_val, CASE WHEN val = 'Y' AND col_val < 5 THEN 1 ELSE 0 END col_val_limit, ROW_NUMBER() OVER (ORDER BY DATE_TIME) - ROW_NUMBER() OVER (PARTITION BY CASE WHEN val = 'Y' AND col_val < 5 THEN 1 ELSE 0 END ORDER BY DATE_tIME) grp FROM temp_vw ) WHERE col_val_limit = 1 GROUP BY grp ORDER BY start_date_time; ```
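The accepted query is the classic gaps-and-islands trick: the difference of the two `ROW_NUMBER()` values is constant within each consecutive run of qualifying rows, so grouping by it isolates each run. The same grouping can be sketched procedurally (here in Python with `itertools.groupby` over a condensed version of the question's data) to check the expected start/end pairs:

```python
from itertools import groupby

rows = [  # (date_time, val, col_val), condensed from the question
    ("2014-06-03 08:58:00", "Y", 4), ("2014-06-03 08:59:00", "Y", 3),
    ("2014-06-03 09:00:00", "Y", 1), ("2014-06-03 09:01:00", "Y", 0),
    ("2014-06-03 09:02:00", "Y", 2), ("2014-06-03 09:02:30", "Y", 5),
    ("2014-06-03 09:05:30", "Y", 4), ("2014-06-03 09:06:00", "Y", 2),
    ("2014-06-03 09:07:30", "Y", 1), ("2014-06-03 09:08:00", "N", 5),
]

def qualifies(row):
    return row[1] == "Y" and row[2] < 5

# groupby over the boolean flag plays the role of the ROW_NUMBER difference:
# every consecutive run of qualifying rows becomes one island.
islands = [list(g) for ok, g in groupby(rows, key=qualifies) if ok]
ranges = [(g[0][0], g[-1][0]) for g in islands]
```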
``` select min(date_time), max(date_time) from ( select date_time, sum(new_group) over (order by date_time) group_ from ( select date_time, val, col_val, case when case when lag(val ) over (order by date_time) = 'Y' and lag(col_val) over (order by date_time) < 5 then 1 else 0 end != case when val = 'Y' and col_val < 5 then 1 else 0 end then 1 else 0 end new_group from tq84_t ) where val = 'Y' and col_val < 5 ) group by group_; ``` Here's the [SQL Fiddle](http://sqlfiddle.com/#!4/64545/1).
Grouping data within a time range
[ "", "sql", "oracle", "oracle11g", "oracle10g", "" ]
I have two tables, in the first table the course id is stored and in the second table the course id and different subject areas description are stored as shown below. ``` Table PA_CPNT CPNT_ID( Course ID) Course Title 06201826 AAAA 06201827 BBBB 06201828 CCCC Table PA_CPNT_SUBJ CPNT_ID SUBJ_ID 06201826 PLNT_DEV 06201826 WRKS_COUN 06201827 WRKS_COUN1 06201827 WRKS_COUN2 06201827 WRKS_COUN3 06201828 WRKS_COUN My requirement is to have an output in the below format CPNT_ID COUrse Title SUBJ_ID1 SUBJ_ID2 SUBJ_ID3 06201826 AAAA PLNT_DEV WRKS_COUN 06201827 BBBB WRKS_COUN1 WRKS_COUN2 WRKS_COUN3 06201828 CCCC WRKS_COUN ``` I have written the below code, how can I modify this code to achieve the above requirement. ``` select distinct CPNT_ID, cpnt_desc, SUBJ_ID1, SUBJ_ID2, SUBJ_ID3 from ( select a.cpnt_id, a.cpnt_desc, b.subj_id as subj_id1, c.subj_id as subj_id2, d.subj_id as subj_id3 from PA_CPNT a inner join PA_CPNT_SUBJ b on a.cpnt_id=b.cpnt_id inner join PA_CPNT_SUBJ c on a.cpnt_id=c.cpnt_id inner join PA_CPNT_SUBJ d on a.cpnt_id=d.cpnt_id ) X where subj_id1 ! = subj_id2 and subj_id2 ! = subj_id3 and subj_id3 ! = subj_id1 ``` Please help
You can use row\_number to give each subject in a course a number, then show subject #1, #2 and #3. ``` select pa_cpnt.cpnt_id, pa_cpnt.cpnt_desc, min(case when subj.rn = 1 then subj.subj_id end) as subj_id1, min(case when subj.rn = 2 then subj.subj_id end) as subj_id2, min(case when subj.rn = 3 then subj.subj_id end) as subj_id3 from pa_cpnt left outer join ( select cpnt_id, subj_id, row_number() over (partition by cpnt_id order by subj_id) as rn from pa_cpnt_subj ) subj on subj.cpnt_id = pa_cpnt.cpnt_id group by pa_cpnt.cpnt_id, pa_cpnt.cpnt_desc; ```
``` select DISTINCT a.cpnt_id, a.cpnt_desc, b.subj_id as subj_id1, c.subj_id as subj_id2, d.subj_id as subj_id3 from PA_CPNT a left join PA_CPNT_SUBJ b on a.cpnt_id=b.cpnt_id left join PA_CPNT_SUBJ c on a.cpnt_id=c.cpnt_id and b.subj_id < c.subj_id left join PA_CPNT_SUBJ d on a.cpnt_id=d.cpnt_id and c.subj_id < d.subj_id ``` Using `<` rather than `!=` prevents it from producing duplicates with all the different permutations of the subjects.
How to show spaces in columns which do not have data in oracle
[ "", "sql", "oracle", "" ]
I have a SQL Server 2008 table with data like this: ``` Contract_No Property_No Start_Dte End_Dte 12345 123 01/01/2014 01/31/2014 12345 123 01/15/2014 02/15/2014 12345 123 03/01/2014 03/31/2014 12345 124 01/01/2014 01/31/2014 ``` I cannot have the same Contract/Property # with an overlapping date range. So, the second row above would be a problem since its `Start_Dte` starts in the middle of the 1st row's date range. All other rows are ok. I'm really at a loss on how to do this with a SQL query. I know how to check for this using a language like C# or VB, but my lousy attempts at writing a query have failed. Anyone have any ideas?
The following query will show all records that have conflicting date ranges with other records ([**SQL Fiddle**](http://sqlfiddle.com/#!6/bae6e/6/0)): ``` WITH x AS ( SELECT *,ROW_NUMBER() OVER (ORDER BY Contract_No, Property_No, Start_Dte) AS r FROM MyTable ) SELECT * FROM x m1 INNER JOIN x m2 ON m2.Contract_No = m1.Contract_No AND m2.Property_No = m1.Property_No AND m1.r <> m2.r AND ( ( m2.Start_Dte >= m1.Start_Dte AND m2.Start_Dte <= m1.End_Dte ) OR ( m2.End_Dte >= m1.Start_Dte AND m2.End_Dte <= m1.End_Dte ) ) ```
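Underneath the join conditions sits the standard interval-overlap test: two ranges overlap exactly when each one starts on or before the other ends. A small Python sketch (not from the original thread) over the question's four rows confirms that only the first two conflict:

```python
from datetime import date

rows = [  # (contract_no, property_no, start, end) from the question
    (12345, 123, date(2014, 1, 1),  date(2014, 1, 31)),
    (12345, 123, date(2014, 1, 15), date(2014, 2, 15)),
    (12345, 123, date(2014, 3, 1),  date(2014, 3, 31)),
    (12345, 124, date(2014, 1, 1),  date(2014, 1, 31)),
]

def overlaps(a, b):
    # Same contract/property, and each range starts on or before
    # the other one ends.
    return a[:2] == b[:2] and a[2] <= b[3] and b[2] <= a[3]

conflicts = [
    (i, j)
    for i in range(len(rows))
    for j in range(i + 1, len(rows))
    if overlaps(rows[i], rows[j])
]
```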
(Edited) The following query will generate a list of contract/property pairs for which there are two or more overlapping periods: ``` SELECT distinct t1.Contract_No, t1.Property_No, t1.Start_Dte, t1.End_Dte from MyTable t1 inner join MyTable t2 on t2.Contract_No = t1.Contract_No and t2.Property_No = t1.Property_No and t1.Start_Dte <> t2.Start_Dte -- PK check and t1.End_Dte <> t2.End_Dte -- PK check and t2.Start_Dte < t1.End_Dte and t2.End_Dte > t1.Start_Dte ``` This worked on the sample data provided, but there may yet be fringe cases to take into account, such as... The ugly hard part is that there's no way to uniquely identify a row in the table without referencing every column... which incidentally means that if two or more rows have identical times, they won't be caught by this query, and you'll need to use one of the other solutions that use `row-number`. (Of course, not having a primary key, you'll have tons of other problems as well...) If there is a primary key available, the two `-- PKcheck` lines can be replaced with a simple primary key check. As mentioned, I never get this Aztec Math stuff right the first time. Below is my initial pre-debugging response. --- The following query will generate a list of contract/property pairs for which there are two or more overlapping periods: ``` SELECT distinct t1.Contract_No, t1.Property_No from MyTable t1 inner join MyTable t2 on t2.Contract_No = t1.Contract_No and t2.Property_No = t1.Property_No and (t2.Start_Dte > t1.End_Dte or t2.End_Dte < t1.Start_Dte) ``` Be sure to test the logic, I always find it tricky to get these temporal queries exactly right on the first go. The idea is * Either the second contract starts after the first one ends * Or the second contract ends before the first one starts * If one of the other does not apply, the periods will overlap If start/stop dates can overlap, use >= and <=. 
Be wary if you are using the datetime or smalldatetime datatype -- besides the date, you will also have a time "element" in the value. That's the start. The next step is listing out all the contract/properties: ``` SELECT tt.* -- Being lazy here, you should always specify precisely which columns to return from MyTable tt inner join (-- Make it a subquery select distinct t1.Contract_No, t1.Property_No from MyTable t1 inner join MyTable t2 on t2.Contract_No = t1.Contract_No and t2.Property_No = t1.Property_No and (t2.Start_Dte > t1.End_Dte or t2.End_Dte < t1.Start_Dte)) xx on xx.Contract_No = tt.Contract_No and xx.Property_No = tt.Property_No ``` Next up: determine which contract is wrong. That, of course, requires application of business logic, putting the ball back in your court.
Check for overlapping date ranges based on a compound key
[ "", "sql", "sql-server", "join", "" ]
I am new to SQL Server. I have two SELECT statements and I am trying to use UNION to combine the two, but I am getting the error message **A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations**. How do I assign the @NEWComments variable?

```
DECLARE @NEWComments VARCHAR(max)
SELECT @NEWComments  

select cast(Comments1 as varchar(max)),cast( Comments2 as varchar(max)), 
cast(Comments3 as varchar(max)),
from TableComments where CommentsID = 1234

UNION 

SELECT TOP 2 @NEWComments = COALESCE(@NEWComments,'') + cast(comments as varchar(max)) + ';' 
FROM [ChildTableComments] 
where  CommentsID = 1234 
```

Please help!! I am trying to retrieve records from two tables, TableComments and ChildTableComments. TableComments will have at most 1 record, but ChildTableComments will have many records for the same CommentsID. So I tried to get the top 2 records from ChildTableComments, concatenate them, and return a single column that can be combined with the TableComments row using UNION.
You can also try this (note that @NEWComments must start as an empty string; concatenating onto its default NULL would leave it NULL): ``` DECLARE @NEWComments VARCHAR(max); SET @NEWComments = ''; SELECT TOP 2 @NEWComments = @NEWComments + COALESCE(comments,'') FROM [ChildTableComments] where CommentsID = 1234; select @NEWComments AS Comments, cast(Comments1 as varchar(max)) AS com1, cast( Comments2 as varchar(max)) as com2, cast(Comments3 as varchar(max)) as com3 from TableComments where CommentsID = 1234 ```
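SQLite has no T-SQL variables, so the same "fold the top 2 child comments into one string" step can only be sketched with a swapped-in technique, `group_concat`; the rows here are invented:

```python
# SQLite has no T-SQL variables, so this swaps in group_concat to sketch the
# same "merge the top 2 child comments into one string" step; data invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ChildTableComments (CommentsID INT, comments TEXT);
INSERT INTO ChildTableComments VALUES (1234, 'first'), (1234, 'second'), (1234, 'third');
""")
(merged,) = conn.execute("""
SELECT group_concat(comments, ';')
FROM (SELECT comments FROM ChildTableComments
      WHERE CommentsID = 1234 ORDER BY rowid LIMIT 2)
""").fetchone()
print(merged)  # first;second
```

On SQL Server 2017+ the analogous built-in would be `STRING_AGG`; the variable-concatenation trick in the answer is the 2005-era equivalent.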
If you want to merge all child comments into one row, you can do it like this: ``` DECLARE @NEWComments VARCHAR(max); SET @NEWComments = ''; SELECT TOP 2 @NEWComments = @NEWComments + COALESCE(comments,'') FROM [ChildTableComments] where CommentsID = 1234; select cast(Comments1 as varchar(max)) AS com1, cast( Comments2 as varchar(max)) as com2, cast(Comments3 as varchar(max)) as com3 from TableComments where CommentsID = 1234 UNION SELECT @NEWComments AS com1, null as com2, null as com3; ```
Trying to Add two sql statements and getting error with SELECT Statements
[ "", "sql", "sql-server", "sql-server-2005", "select", "" ]
I am using the following schema: ``` CREATE TABLE person ( person_name VARCHAR PRIMARY KEY ); CREATE TABLE pet ( animal_name VARCHAR, person_name VARCHAR REFERENCES person(person_name), PRIMARY KEY (animal_name, person_name) ); ``` I wish to create a table where, for each `person_name`, I get an array with the pets of that person. I am using *PostgreSQL 9.3.4*. I have the following values in each table: **Person** ``` PERSON_NAME ----------- Alice Bob ``` **Pet** ``` ANIMAL_NAME | PERSON_NAME ------------------------- Woof | Alice Meow | Alice ``` I wish to create the following table: ``` PERSON_NAME | PETS -------------------------- Alice | {Woof, Meow} Bob | {} ``` I cannot, however, create the empty array. What I get is the following: ``` PERSON_NAME | PETS -------------------------- Alice | {Woof, Meow} Bob | {NULL} ``` This is the query I am using: ``` SELECT person.person_name, array_agg(pet.animal_name) AS pets FROM person LEFT JOIN pet ON person.person_name = pet.person_name GROUP BY person.person_name; ``` I understand why I am getting the array with the `NULL` value inside, I want to know how to get an empty array instead. Here is a [fiddle](http://sqlfiddle.com/#!15/953b7/3) with the code needed to create the schema, insert the values and with the query I am using. The result shown in the website doesn't show the `NULL` value, although it is there. **EDIT** The result will be parsed to JSON, that is why `{NULL}` is not an acceptable result, as it will be parsed to `[null]`, which is different from the `[]` I require. For the same reason, something like `{""}` is not an acceptable result either.
The most simple way of doing this, is to use the [`ARRAY` constructor's sub-query variant](http://www.postgresql.org/docs/current/static/sql-expressions.html#SQL-SYNTAX-ARRAY-CONSTRUCTORS): ``` SELECT person.person_name, ARRAY(SELECT animal_name FROM pet WHERE person.person_name = pet.person_name) AS pets FROM person; ``` [SQLFiddle](http://sqlfiddle.com/#!15/24ccc/11)
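The `[]` versus `[NULL]` distinction the question cares about can be sketched outside PostgreSQL; the `ARRAY(...)` subquery itself needs PostgreSQL to test, so here the per-person list is built in Python over the same sample rows:

```python
# Dialect-neutral sketch of the wanted shape; the ARRAY(...) subquery itself
# needs PostgreSQL, so the per-person list is built in Python instead.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_name TEXT PRIMARY KEY);
CREATE TABLE pet (animal_name TEXT, person_name TEXT,
                  PRIMARY KEY (animal_name, person_name));
INSERT INTO person VALUES ('Alice'), ('Bob');
INSERT INTO pet VALUES ('Woof', 'Alice'), ('Meow', 'Alice');
""")

result = {}
for (name,) in conn.execute("SELECT person_name FROM person ORDER BY person_name"):
    pets = [p for (p,) in conn.execute(
        "SELECT animal_name FROM pet WHERE person_name = ? ORDER BY animal_name",
        (name,))]
    result[name] = pets  # Bob gets [], not [None]

print(json.dumps(result))  # {"Alice": ["Meow", "Woof"], "Bob": []}
```

The point is that a per-person subquery that matches nothing yields an empty list, which serializes to the `[]` the question needs, whereas `array_agg` over a LEFT JOIN aggregates the unmatched NULL row into `[null]`.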
I just want to add, though this is 6 years old: you can write an empty array literal as `array[]::varchar[]`.
Create an empty array in an SQL query using PostgreSQL instead of an array with NULL inside
[ "", "sql", "arrays", "database", "postgresql", "null", "" ]
Let's say I have a model `User` that has many `Posts`. Now, for example, I select the Users who are male: ``` SELECT "users".* FROM "users" WHERE (gender = 'male') ``` How can I extend my search query to select only the male Users who have a Post tagged `sport` or `holiday`? If I only had to use `Posts` I would simply call: ``` SELECT "posts".* FROM "posts" WHERE "posts"."tag" IN ('sport','holiday') ``` So what I tried was: ``` SELECT "users".* FROM "users" INNER JOIN "posts" ON "posts"."user_id" = "users"."id" WHERE "posts"."tag" IN ('sport', 'holiday') AND gender = 'male' ``` The problem with this query is that it returns the same User several times if he has multiple Posts with the tags `['sport','holiday']`. How do I change my query so that it returns a User only once? Thanks
Try `GROUP BY users.id` or `DISTINCT`
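A minimal sketch of why the join duplicates users and how `DISTINCT` (or `GROUP BY users.id`) collapses them, using SQLite and invented rows:

```python
# Minimal SQLite sketch: a user with two matching posts appears twice after
# the join; DISTINCT collapses the duplicates. Rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, gender TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INT, tag TEXT);
INSERT INTO users VALUES (1, 'Tom', 'male'), (2, 'Ana', 'female');
INSERT INTO posts VALUES (1, 1, 'sport'), (2, 1, 'holiday'), (3, 2, 'sport');
""")
base = """
FROM users JOIN posts ON posts.user_id = users.id
WHERE posts.tag IN ('sport', 'holiday') AND gender = 'male'
"""
dup = conn.execute("SELECT users.* " + base).fetchall()
dedup = conn.execute("SELECT DISTINCT users.* " + base).fetchall()
print(len(dup), len(dedup))  # 2 1
```

In PostgreSQL, `GROUP BY users.id` also works with `SELECT "users".*` because `id` is the primary key and functionally determines the other columns.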
``` SELECT DISTINCT "users".* FROM "users" INNER JOIN "posts" ON "posts"."user_id" = "users"."id" WHERE "posts"."tag" IN ('sport', 'holiday') AND gender = 'male' ``` Will return unique rows or as the first comment said use a group by ``` SELECT "users".* FROM "users" INNER JOIN "posts" ON "posts"."user_id" = "users"."id" WHERE "posts"."tag" IN ('sport', 'holiday') AND gender = 'male' Group by users.id ```
Find all male User with specific Posts SQL
[ "", "sql", "postgresql", "" ]
I have an SQL table called messages with three columns: ``` 1. UserFrom uniqueidentifier 2. UserTo uniqueidentifier 3. Message varchar(50) ``` This table stores messages sent from one user to another; it stores the `UserId` from `aspnet_Users` instead of the `username`. Now I need to create a view that shows `UserFrom` and `UserTo` as names by getting the `Username` from the `aspnet_Users` table using the `UserId` in the messages table. Thanks in advance
You need to join aspnet\_Users table twice with different alias names: ``` SELECT U1.Username as UserFrom,U2.Username as UserTo, M.Message FROM Messages M JOIN aspnet_Users U1 ON U1.UserId=M.UserFrom JOIN aspnet_Users U2 ON U2.UserId=M.UserTo ``` **Explanation:** Here aspnet\_Users table it joined twice with different alias names U1,U2. And each username is fetched from the respective table.
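A runnable sketch of the double join with two aliases, using SQLite; the short text ids stand in for the real uniqueidentifier values and the rows are made up:

```python
# Runnable sketch of joining the user table twice under aliases U1/U2;
# ids stand in for uniqueidentifiers and the rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE aspnet_Users (UserId TEXT PRIMARY KEY, Username TEXT);
CREATE TABLE Messages (UserFrom TEXT, UserTo TEXT, Message TEXT);
INSERT INTO aspnet_Users VALUES ('u1', 'alice'), ('u2', 'bob');
INSERT INTO Messages VALUES ('u1', 'u2', 'hello');
""")
row = conn.execute("""
SELECT U1.Username AS UserFrom, U2.Username AS UserTo, M.Message
FROM Messages M
JOIN aspnet_Users U1 ON U1.UserId = M.UserFrom
JOIN aspnet_Users U2 ON U2.UserId = M.UserTo
""").fetchone()
print(row)  # ('alice', 'bob', 'hello')
```

Each alias acts as an independent copy of the table, so the sender and recipient names are resolved in one pass.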
Just join with the user table twice. ``` SELECT t2.name AS userFrom, t3.name AS userTo, t1.Message FROM messages t1 LEFT JOIN aspnet_Users t2 ON t1.UserFrom = t2.UserId LEFT JOIN aspnet_Users t3 ON t1.UserTo = t3.UserId ```
How to join two columns to the same table
[ "", "mysql", "sql", "sql-server", "" ]
I've got a stored procedure that I'm passing values to from a C# app. What I need to do is filter the data based on a selection from a dropdown in the app. So, I've got something like this: ``` ALTER procedure [dbo].[usp_AllCompletedTasks] @StDate smalldatetime, @EnDate smalldatetime, @xRole int, @xFunction int AS BEGIN declare @StrDate smalldatetime, @EndDate smalldatetime, @MyRole int, @MyFunction int set @StrDate = @StDate set @EndDate = @EnDate set @MyRole = @xRole set @MyFunction = @xFunction SELECT ... Some Fields FROM MyTable WHERE Work_Start >= DATEADD(dd,0,DATEDIFF(dd,0,@strDate)) AND Work_End <= DATEADD(dd,1,DATEDIFF(dd,0,@endDate)) AND ROLE_ID = @MyRole AND FUNCTION_ID = @MyFunction ``` Now, what the powers upstairs decided is that they want a "Select All" option in the Role and Function dropdowns. So, what I did was put a Select All option in on the C# side and set the value to 0. What I figured I'd do is change the ROLE_ID and FUNCTION_ID filters to use **LIKE** instead of **=**, and then use an If/Then/Else statement at the top to say, "If the value of @MyRole is 0, change it to %" or something. Is this even possible in SQL Server?
Since the `ROLE_ID` and `FUNCTION_ID` columns are integers you have to convert them to one of the character types in order to use the `LIKE`operator. I suggest that when the 'Select All' is selected pass the `NULL` 'value' to the desired parameter and rewrite your conditions like this: ``` AND (@MyRole IS NULL OR ROLE_ID = @MyRole) AND (@MyFunction IS NULL OR FUNCTION_ID = @MyFunction) ``` **EDIT (+note)** Do not use a specific value as the 'not specified value', you have that symbol already in SQL: the `NULL`.
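The `(@p IS NULL OR col = @p)` pattern from this answer ports directly to parameterized queries in other engines; here is a sketch with SQLite named parameters, with an invented table and rows:

```python
# Hedged sketch of the "(@p IS NULL OR col = @p)" optional-filter pattern
# using SQLite named parameters; table/column names and rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (id INTEGER PRIMARY KEY, role_id INT, function_id INT);
INSERT INTO MyTable VALUES (1, 10, 100), (2, 10, 200), (3, 20, 100);
""")

def search(role, func):
    # Pass None for "Select All": the IS NULL branch disables that filter.
    return conn.execute("""
        SELECT id FROM MyTable
        WHERE (:role IS NULL OR role_id = :role)
          AND (:func IS NULL OR function_id = :func)
        ORDER BY id
    """, {"role": role, "func": func}).fetchall()

print(search(10, None))    # rows with role 10, any function
print(search(None, None))  # everything
```

Each filter is evaluated independently, so "Select All" on one dropdown does not force "Select All" on the other.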
If I follow correctly, you can get what you want by just adding another condition to your `WHERE` clause: ``` WHERE Work_Start >= DATEADD(dd,0,DATEDIFF(dd,0,@strDate)) AND Work_End <= DATEADD(dd,1,DATEDIFF(dd,0,@endDate)) AND ((ROLE_ID = @MyRole AND FUNCTION_ID = @MyFunction) OR (@MyRole = 0 AND @MyFunction = 0)) ``` If your variables are `0` then the results are not filtered by `ROLE_ID` and `FUNCTION_ID`; if they need to be evaluated independently then you would have to add additional logic.
Using If/Then/Else in a stored procedure in SQL Server 2008
[ "", "sql", "sql-server-2008", "" ]
I have two tables with different appointment dates. Table 1 ``` id start date 1 5/1/14 2 3/2/14 3 4/5/14 4 9/6/14 5 10/7/14 ``` Table 2 ``` id start date 1 4/7/14 1 4/10/14 1 7/11/13 2 2/6/14 2 2/7/14 3 1/1/14 3 1/2/14 3 1/3/14 ``` If I had set date ranges I could count each appointment date just fine, but I need to change the date ranges. For each id in table 1 I need to count the distinct appointment dates from table 2 BUT only 6 months prior to the start date from table 1. Example: count all distinct appointment dates for id 1 (in table 2) with appointment dates between 12/1/13 and 5/1/14 (6 months prior). So the result is 2...4/7/14 and 4/10/14 are within and 7/11/13 is outside of 6 months. So my issue is that the range changes for each record and I cannot seem to figure out how to code this. For id 2 the date range will be 9/1/13-3/2/14, and so on. Thanks everyone in advance!
Try this out: ``` SELECT id, ( SELECT COUNT(*) FROM table2 WHERE id = table1.id AND table2.start_date >= DATEADD(MM,-6,table1.start_date) ) AS table2records FROM table1 ``` The DATEADD subtracts 6 months from the date in table1 and the subquery returns the count of related records.
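A sketch of the correlated subquery in SQLite: `date(x, '-6 months')` stands in for `DATEADD`, and `COUNT(DISTINCT ...)` plus the upper bound on the window are additions beyond the answer's `COUNT(*)`. It uses the question's id 1 sample, rewritten as ISO dates:

```python
# SQLite sketch of the correlated subquery with a per-row 6-month window.
# date(x, '-6 months') replaces DATEADD; COUNT(DISTINCT ...) and the upper
# bound are additions beyond the answer's COUNT(*). Uses id 1's sample data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INT, start_date TEXT);
CREATE TABLE table2 (id INT, start_date TEXT);
INSERT INTO table1 VALUES (1, '2014-05-01');
INSERT INTO table2 VALUES (1, '2014-04-07'), (1, '2014-04-10'), (1, '2013-07-11');
""")
rows = conn.execute("""
SELECT t1.id,
       (SELECT COUNT(DISTINCT t2.start_date)
        FROM table2 t2
        WHERE t2.id = t1.id
          AND t2.start_date >= date(t1.start_date, '-6 months')
          AND t2.start_date <= t1.start_date) AS appt_count
FROM table1 t1
""").fetchall()
print(rows)  # [(1, 2)] -- 2013-07-11 falls outside the 6-month window
```

Because the subquery references `t1.start_date`, the window shifts per row, which is exactly the "range changes for each record" requirement from the question.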
I think what you want is a type of join. ``` select t1.id, count(t2.id) as numt2dates from table1 t1 left outer join table2 t2 on t1.id = t2.id and t2.startdate between dateadd(month, -6, t1.startdate) and t1.startdate group by t1.id; ``` The exact syntax for the date arithmetic depends on the database.
sql count statement with multiple date ranges
[ "", "sql", "sql-server", "" ]